Benjamin Bestgen: Neurolaw – mental privacy

Benjamin Bestgen

Benjamin Bestgen considers ‘mind-reading’ technology and the law in his latest jurisprudential primer. Read the last one here.

Imagine your annual review comes up and your supervisor presents you with a chart depicting data collected by a little electroencephalogram (EEG) device built into the headrest of your chair. It’s also part of the fashionable new headbands your firm makes you all wear when you are working from home. The equity partners even have a reader implanted directly in their skulls as a condition of partnership, but that’s only a rumour.

The data shows the days and times your brainwave readings indicate that you were wakeful, concentrating, attentive – and when you were not. All the daydreams, tiredness, joys, upsets, distractions, stresses that form part of your working days and sometimes prevent you from getting through all your tasks or achieving your absolute best: they are captured as “times when your EEG readings indicate that you were distracted, fatigued, not concentrating”.

Your salary review, promotion, whether you should remain employed: your brain activity readings will influence those questions.

Bad science fiction?!

No. Law professor and philosopher Nita Farahany notes that portable EEG devices are already being tested on train and truck drivers in China and Chile to measure attention, distraction and cognitive load (how “busy” your mind is). EEG cannot detect thoughts or memories – different technology is being developed for that.

Companies like NeuroSky or Emotiv manufacture consumer-grade EEG devices. They can be used for health purposes, like sleep monitoring or managing epilepsy. They can help disabled people control virtual or real objects like keyboards, lights or wheelchairs by thought. They measure attention levels for marketing research, education, gaming or health and safety purposes. They can indicate stress levels in people for managing employee productivity and wellbeing.
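
To make the measurement concrete: most of these devices work on band power, i.e. how much of the signal’s energy sits in frequency ranges conventionally associated with drowsiness (theta) or focus (beta). The Python sketch below shows one such heuristic, a beta/theta ratio computed on a synthetic signal. It is a minimal illustration of the general technique, not NeuroSky’s or Emotiv’s actual algorithm, and the thresholds any real product applies would be its own.

```python
# Illustrative only: a toy "attention" metric from one EEG channel.
# Real consumer devices use proprietary, more elaborate pipelines.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, low, high):
    """Integrate power spectral density over a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def attention_index(eeg, fs=256):
    """Heuristic score: beta power (13-30 Hz, associated with focus)
    relative to theta power (4-8 Hz, associated with drowsiness)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return band_power(freqs, psd, 13, 30) / band_power(freqs, psd, 4, 8)

# Synthetic 10-second signal standing in for a real recording.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)
eeg = (np.sin(2 * np.pi * 6 * t)             # theta component
       + 0.5 * np.sin(2 * np.pi * 20 * t)    # beta component
       + 0.1 * rng.standard_normal(t.size))  # noise
print(f"attention index: {attention_index(eeg):.2f}")
```

Even this crude ratio shows why the legal stakes are high: a single number, computed continuously and cheaply, can be presented as evidence of how “attentive” an employee supposedly was.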

Who wants this data?

Besides the obvious uses of brain data for research or marketing and sales, insurance companies may want it to develop your individual risk profile and calculate premiums, e.g. for your car or health insurance. Employers may wish to monitor your stress, “productivity” and how focussed you are at work. States could incorporate this data into personal identification and surveillance measures for crime and anti-terrorism programmes, usefully combining it with CCTV and facial recognition technologies.

Doctors, judges, pilots, engineers, soldiers, factory workers, managers: in any job where concentration, wakefulness and levels of stress and distraction really make a difference, EEG data has a role to play.

A right to mental privacy

Today’s data protection regulation gold standard, the EU’s GDPR, is an important piece of legislation to rein in some of the privacy violations committed by corporations, governments and media. But arguably it came years too late, as technology innovates much faster than legal systems adapt.

Bioethicists Marcello Ienca and Roberto Andorno (2017) flag important questions around our brain data that our laws aren’t fully ready for:

  • For what purposes and under what conditions can brain information be collected and used?
  • What brain information could legitimately be disclosed to others or made accessible?
  • Who is entitled to access such data?
  • What should the limits for consent be in this area?

Existing legal frameworks provide some guidance. Article 9 GDPR protects people’s genetic, biometric or health data, though numerous exceptions apply. Article 12 of the Universal Declaration of Human Rights and Article 8 of the European Convention on Human Rights also expressly state that privacy should be protected from “arbitrary interference” and be “respected”, vague as these formulations are. And to date there is still no generally agreed legal definition of “privacy”.

It’s therefore unclear whether existing regulations afford sufficient protection to data contained in and generated by our own minds. Ienca and Andorno note further that current privacy and data protection laws in Europe and the US are mainly concerned with personal data clearly separate and distinguishable from the individual. But mental data cannot readily be distinguished from the source that produced it, i.e. the individual’s neural processes. Our neural activity is the very basis of all other data we produce. Where source and data are indistinguishable, privacy laws should be adjusted to expressly protect the source, not only the data.

Protection from what?

Consent and data use

With portable, non-invasive and discreet EEG technology, certain brain data could be recorded with relative ease and without individuals’ awareness. Think of an EEG reader built into the headrest of a new car, into the hat or helmet of your uniform, into a new bed, or placed near your head somewhere at your workstation. Questions arise regarding consent and how this data is used, by whom and what for.

Surveillance and security

Brain data like EEG readings can also be used as unique identifiers of individuals, similar to fingerprints, voice profiles or irises. EEG-based technology for identification, authentication and recognition of individual persons already exists. Besides its use as a surveillance tool, loss or misuse of such data could pose a considerable privacy and security risk.
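
How an EEG reading can serve as a biometric is easy to sketch: enrol a person by averaging feature vectors (say, per-band power) from several recordings, then verify a fresh sample by similarity against the stored template. The sketch below is a minimal illustration under those assumptions; the feature values, names and threshold are invented, and real systems use far richer features and classifiers.

```python
# Toy sketch of EEG biometric verification via feature-vector similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def enrol(samples):
    """Store the mean feature vector from several enrolment recordings."""
    return np.mean(samples, axis=0)

def verify(template, sample, threshold=0.95):
    """Accept the claimed identity if the fresh sample matches closely."""
    return cosine_similarity(template, sample) >= threshold

# Hypothetical per-band power features (delta, theta, alpha, beta, gamma).
rng = np.random.default_rng(1)
alice = np.array([2.1, 1.4, 3.0, 0.9, 0.3])
template = enrol([alice + 0.05 * rng.standard_normal(5) for _ in range(5)])

fresh = alice + 0.05 * rng.standard_normal(5)   # Alice again
impostor = np.array([1.0, 2.5, 0.8, 1.7, 0.6])  # someone else
print(verify(template, fresh))     # True: readings match the template
print(verify(template, impostor))  # False: a different "brain signature"
```

Unlike a password, a template like this cannot be reset once leaked, which is what makes the loss or theft of such data so serious.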

Brain data can also be accessed without directly recording an individual. Researchers hold databases full of brain recordings, and developers of neurotechnology and related apps allow such data to be stored in the cloud. Data leaks, hacks and theft are risks here, as with any other stored data.
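
The mitigations for the storage risk are the standard ones for any sensitive data, with encryption at rest as the obvious first step. A minimal sketch, assuming the widely used Python cryptography library and an invented stand-in payload:

```python
# Minimal sketch: encrypt an EEG recording before it is stored or uploaded.
# The hard part is key management: who holds the key, under what legal duty?
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: kept in a key vault
cipher = Fernet(key)

recording = b"subject=anon-042; eeg-samples=..."  # stand-in for real data
token = cipher.encrypt(recording)    # safe to store or transmit
assert cipher.decrypt(token) == recording  # readable only with the key
```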

Informed consent

Privacy, Ienca and Andorno consider, is both a right and an ability. We already make daily decisions about data we disclose or keep private (e.g. on social media or when completing application forms). But how we could “filter” our brain data is a complex technical question as well as one of knowledge: how many people really understand their own brain processes (conscious and unconscious) and could competently select which data remains private and which is appropriate for disclosure?

Innocence and self-incrimination

Farahany (2012) further points out that emerging “mind-reading” technologies could endanger the presumption of innocence and erode the right not to incriminate oneself. Jurisprudence from the European Court of Human Rights, such as Funke v France (1993) or John Murray v United Kingdom (1996), indicates that the Convention contains no express right not to incriminate oneself, but that such a right is implied in Article 6 ECHR, the right to a fair trial. US law has the Fifth Amendment to the Constitution as well as the Miranda rights against self-incrimination (Miranda v Arizona (1966)).

No accused can be compelled or coerced to say anything against himself, and legal privilege rules afford further protections. But the jurisprudence also discusses various exceptions: for example, “real evidence”, which has an existence independent of the free will of the accused, can often be compelled through legal warrants, such as the disclosure of documents or the taking of biological samples like blood, hair or saliva.

Thoughts and memories are brain activities, internal operations in the mind. Recording them is less painful than drawing blood and, with developing technology, may become quicker, more accurate and more convenient. Merely recording mental activity arguably does not require the free will of the accused and is closer to “real evidence”. No answers to questions are compelled; the accused is not forced to say a word or do a single thing against himself.

Farahany notes that in time this could dilute the right to protection against self-incrimination: an accused would remain protected against giving oral evidence against himself, but not against the source of such evidence, his own thoughts and memories. Additionally, an argument could gain traction to “just scan his mind and we’ll see whodunnit!”, which risks dismissing the presumption of innocence.

What to do?

Given the untrustworthiness of many governments and corporations and the potential for manipulation and oppression, some people argue for an absolute, unconditional ban on any compulsory mind-reading technologies.

Others are concerned with human dignity: just because we can potentially read a person’s thoughts, memories or feelings does not mean we should cross that line, as it would be the ultimate invasion of a person’s innermost being. It is debatable whether anybody could even meaningfully and knowledgeably consent to such a thing.

But others assert that privacy is not an absolute right. Most jurisdictions acknowledge that reasonable expectations of individual privacy have to be balanced with public interest considerations like protection of public health, crime detection or protecting the rights and freedoms of others (e.g. Article 8(2) ECHR). Many jurisdictions already permit, under special circumstances, with various safeguards and under court warrant, the compulsory taking of private data, e.g. for genetic testing or to capture dangerous criminals. Why would the use of brain data, painlessly collected, be any worse than the use of blood, tissue or the surrender of personal documents like diaries, medical or criminal records?

There is no present answer to these questions, but public debate and education about them should be encouraged now, not only once the technology is already widespread.

In the next article, I’ll introduce the proposed rights to mental integrity and psychological continuity.

Benjamin Bestgen is a solicitor and notary public (qualified in Scotland). He also holds a Master of Arts degree in philosophy and tutored in practical philosophy and jurisprudence at the Goethe Universität Frankfurt am Main and the University of Edinburgh.
