Published by New Law Journal. By Harry Lambert.
Examines the burgeoning neurotechnology field, and considers in turn the three primary legal causes of action that are relevant to privacy and neurotechnology: breach of confidence, misuse of private information, and breach of the General Data Protection Regulation.
In contemporary society, individuals already relinquish substantial amounts of personal privacy to corporations in exchange for negligible benefits. As neurotechnology develops, the stakes will be higher. The benefits will be greater (for example, writing a text or controlling a computer game with your thoughts), but so too will be the risks. If we are not careful, the pact society makes with Big Tech is going to become increasingly Faustian. To quote Nita Farahany, author of The Battle for Your Brain (2023), neurotechnology is now encroaching upon the ‘last fortress’ of our freedom.
This article addresses the interplay between neurotechnology and privacy, considering how existing legal frameworks might respond to emerging challenges.
Normative underpinnings
The right to mental privacy is paramount and regarded as a ‘secular deity’ in modern society (Blitz et al, The Law and Ethics of Freedom of Thought (2021)). In Ashcroft v Free Speech Coalition 535 US 234 (2002), the US Supreme Court recognised that ‘the right to think... is the beginning of freedom’.
The emerging discourse surrounding neurotechnology gave rise to the recognition of ‘four ethical priorities’ (Yuste et al, ‘Four ethical priorities for neurotechnologies and AI’, Nature 551 (2017)), one of which was ‘privacy and consent’.
Yuste et al’s paper paved the way for other scholars to put further flesh on the bones, such as Marcello Ienca and the Institute of Neurotechnology & Law’s (INL’s) Roberto Andorno, who coined the term ‘neurorights’—a framework of rights intended to protect individuals from the invasive capabilities of neurotechnological advancements.
One such neuroright is mental privacy, which they define as the protection of ‘private or sensitive information in a person’s mind from unauthorized collection, storage, use or even deletion’ (Ienca & Andorno, ‘Towards new human rights in the age of neuroscience and neurotechnology’, Life Sciences, Society and Policy (Dec 2017)). That neuroright is our focus in Part 3 of this series. This article therefore examines whether, and to what extent, it is already protected, and how well placed the domestic law of privacy is for the brave new world to come.
In England and Wales, three primary legal causes of action are relevant: breach of confidence (BOC), misuse of private information (MOPI), and breach of the General Data Protection Regulation (GDPR).
Breach of confidence
BOC itself is inapposite as a vehicle to protect neurorights. To establish liability, the claimant must establish that the information: (i) had the necessary quality of confidence; (ii) had been imparted in circumstances imposing a duty of confidence; and (iii) was used to the detriment of the confider (although there is some doubt based on recent authorities as to whether detriment is a constituent element of the cause of action).
This is an awkward factual fit: it is not clear, for example, whether consumers using wearable neurotech are ‘imparting’ their brain data, nor what ‘detriment’ looks like where that data is never made public and causes no pecuniary loss.
It is also a poor conceptual fit. In Bloomberg LP v ZXC [2022] UKSC 5, [2022] AC 1158, [2022] 3 All ER 1 at [45], the Supreme Court referred to the fact that misuse of private information and breach of confidence protect ‘different interests’. That is because breach of confidence is primarily underpinned by confidentiality in relationships, whereas misuse of private information is premised on the autonomy and dignity of the person to whom the information relates. Yet many of the most concerning neurotech scenarios involve tortfeasors with no prior relationship to the victim, rendering MOPI the more appropriate vehicle for any claim.
MOPI is also more suitable because ‘the concept of “private life” is much broader than what is confidential in equity’ (The Duke of Sussex and others v MGN Ltd [2023] EWHC 3217 (Ch), [2023] All ER (D) 94 (Dec) at [1359]). The example Fancourt J gave for this proposition is the decision in Von Hannover v Germany [2004] EMLR 21. In that case, the European Court of Human Rights held that the right to privacy under Art 8 protected aspects of each person’s personality such as their ‘name’, ‘picture’ and ‘physical and psychological integrity’ (at [50]). He considered that the protection afforded by BOC, unlike MOPI, did not necessarily extend this far.
Likewise, BOC does not protect trivial or useless information. The equivalent limiting factor in respect of the tort of misuse of private information is that a breach must attain a threshold level of ‘seriousness’. Although these notions are related, their application may differ on the facts of specific cases. For example, the editors of Duncan and Neill on Defamation and Other Media and Communications Claims (fifth edition, 2020) observe at [28.10] that, despite the seriousness threshold, in some contexts ‘courts have been willing to protect relatively trivial information’ through MOPI. This is in stark contrast to BOC, where ‘a duty of confidence will not be imposed so as to protect useless information’ and there is a ‘limiting principle… that the duty of confidence applies neither to useless information, nor to trivia’ (AG v Observer [1990] 1 AC 109, [1988] 3 All ER 545, at 149C (Scott J) and 282D (Lord Goff)).
Misuse of private information
At the heart of MOPI is a balancing exercise between the right to privacy under Art 8 of the ECHR and the competing right to disclosure or freedom of expression under Art 10.