What if your thoughts were no longer private?
This week, I’m joined by Nita Farahany, Duke professor, TEDx speaker, global policy advisor and author of ‘The Battle for Your Brain’. We dive into the urgent reality of neurotechnology, where brain data can be tracked, emotions decoded and decision-making influenced by AI. Nita shares why mental privacy and cognitive liberty could become the next fundamental human rights.
If you care about freedom, ethics and the future of technology, this episode will change the way you think. Tune in now: the battle for your brain has already begun.
So why should you be listening in?
You can hear Rob and Nita discussing:
– The Numerous Benefits and Risks of Neurotechnology
– Ethical Concerns Regarding Mental Privacy
– Cognitive Liberty as a Crucial Human Right
– Global Policies and Frameworks for Safeguarding Mental Privacy
– The Future of Technology Redefining What it Means to be Human
Connect with Nita here – https://www.linkedin.com/in/nitafarahany
Transcript
Nita Farahany 0:00
Neurotechnology could really give us the insights into our brain that we haven’t had until now. We’re talking about the exciting prospect of being able to look into the brain and see things that we’ve never been able to see before, but it’s also the seat of everything for us. We have access to it. So do the companies who are providing the products, and as we already know, that means that governments also have a window into it in ways that they wouldn’t have otherwise. This is what’s still in our brain, and that introduces a threat of surveillance and misuse of that data in ways that really fundamentally get at these concepts of a right to mental privacy, a right to freedom of thought and a right to self-determination.
Robert Hanna 0:40
On today’s Legally Speaking Podcast, I’m delighted to be joined by Nita Farahany. Nita is one of the world’s foremost experts on the ethical, legal and societal implications of emerging technologies, a professor at Duke University and the founding director of the Duke Initiative for Science & Society. She has shaped global discussions on research and serves in a number of high-profile advisory and leadership roles. Nita is also the proud author of ‘The Battle for Your Brain’ and appears regularly in international media. Her TEDx talks have more than 3 million views, and she’s spoken at the World Economic Forum. So a very big, warm welcome to the show, Nita. Thank you for having me. Oh, it’s an absolute pleasure to have you on the show. Really delighted for today’s discussion. I’ve had the privilege of actually watching you live at one of your talks. But before we get into all the amazing things you’ve been doing, we do have a customary icebreaker question here on the Legally Speaking Podcast, which is: on a scale of one to 10, 10 being very real, what would you rate the hit series Suits in terms of its reality of the law, if you’ve seen it?
Nita Farahany 1:46
So it’s got to be a not applicable, because I’ve never seen it.
Robert Hanna 1:50
With that, we can give it a solid zero and move swiftly on to talk all about you. So to begin with, Nita, would you mind telling our listeners a bit about your background and career journey?
Nita Farahany 2:00
Sure. So if you look at my background, it looks almost as if it was intentional to get to where I am, to do the things that I’m doing. Academically, my undergraduate career focused on science: I was a genetics major and really passionate about science, but I was also already a policy debater at the time, and a government minor, so not your typical science-going-to-medical-school kind of person. I sort of felt my way through, which was to then do more science, focusing on neuroscience, and eventually I found myself doing a JD and a PhD in philosophy. And all of this was because what I was really interested in, and just didn’t know where it would take me career-wise, was the intersection between science, technology, philosophy and law. I thought maybe I would practise as a lawyer for a little while, and maybe do IP law or something like that. That’s not what I was interested in, though. What I was interested in were these really thorny ethical and legal questions around emerging technologies and emerging scientific discoveries. So that eventually landed me in an academic job, which I was thrilled about. It sort of found me, and it found me at just the perfect time, because people were really interested in the intersection between neuroscience and law and philosophy, and there didn’t happen to be that many people who were trained in those different areas at the time. And so I started as an academic at Vanderbilt University, did a quick stint for a year at Stanford University, and then landed here at Duke University, where I focus on emerging tech, philosophy and law, and have really loved it.
Robert Hanna 3:54
And it’s an inspiring journey, it has to be said. But you mentioned, obviously, the intersection of law, neuroscience and ethics; was there a particular moment for you that kind of sparked that real interest?
Nita Farahany 4:10
There really was. So, you know, I knew I was interested in these hard questions of policy and science, but I didn’t know what to do with it. I was getting a master’s degree at Harvard in their extension programme, and I was taking a class on behavioural genetics, and there was a chapter in that book which looked at a number of tests that had been done on criminals in prison. It found that there was an increased incidence of XYY, a particular genetic syndrome, which at the time they thought maybe helped to explain some of the increase in criminality, in that the people who had it seemed to exhibit more criminal behaviour. And I had this moment where I was just like, yeah, I need to study that. There needs to be something at the intersection of law and behavioural sciences and philosophy that figures out: is that person more or less responsible if they have a genetic predisposition to being how they are? Should we be holding them criminally culpable, or is there some different way of thinking about it? And so I ended up applying for JD/MA and JD/PhD programmes, and I ended up doing my dissertation on the impact of behavioural sciences on criminal law. And it was literally that chapter. I keep that very outdated textbook in my office now, just because sometimes you find inspiration, the aha moment, in small places. It could be a journal article you read, it could be some idea, but it’s something that lights you on fire. It makes you know: that’s it. That’s the thing that really drives me, that I feel like I need to get to the bottom of.
Robert Hanna 5:53
I love that story. Thank you for sharing that. Before we move on again to your academic journey, I believe you also clerked for a judge on the US Court of Appeals. So what did you learn about the law during your time there?
Nita Farahany 6:08
Yeah, that was an amazing experience. So I clerked for Judge Judith Rogers on the DC Circuit. And it was great, because I went from law school, while I was still working on my dissertation, and clerked for her for the year. One of the things that’s really neat about the DC Circuit is that all of the judges are in the same building, and so you don’t get to know just your judge. You get to know the other judges on the DC Circuit, and you get to know all of the other law clerks as well. There are all kinds of special events for meeting the other judges, and you work closely with the other law clerks, and the result is that you’re not just working on the cases that are in your judge’s chambers. You’re working on a number of different cases and hearing a lot about the law. So, seeing how judges made decisions, seeing how collaborative it was behind the scenes, seeing the exchange of opinions and the drafting of opinions, and the care with which specific sentences or specific concepts were written: it makes you appreciate a judicial opinion in an entirely different way, to be behind the scenes seeing it being created. I became the FERC expert in chambers, handling the FERC cases, where you have to figure out things like the natural gas pipelines in the US. It’s a special expertise. Generally, whichever clerk ends up with the first FERC case at the beginning of the term is going to end up with all of the FERC cases, because there’s too steep a learning curve in figuring out the natural gas pipelines and how all of that works. I totally geeked out on it; I thought the FERC cases were fantastic, maybe because I’m a science and tech geek. I thought they were really cool and fascinating. I’ve done nothing with FERC since the DC Circuit days, but I really developed a strong passion for it.
You know, it’s interesting, because the DC Circuit has a much more admin law focus than some of the other circuits do. And so it was also neat just to be steeped in admin law and really understand it, which, in the kind of current moment that we’re in in the US, is interesting too: to see the attempts to, in many ways, disassemble the administrative state, and to know just how much law there is, and how complex it is, in the administrative state. From that perspective, I’ve probably been watching even more closely than many people as each of the agencies has been changed under the Trump administration.
Robert Hanna 8:47
Yeah, no, absolutely. And it sounds like you had some incredible experiences there and learnt some new things along the way as well. So let’s fast forward then to the world of neurotechnology, because you are a renowned expert when it comes to all things ethical, legal and societal implications of emerging tech. But for our listeners who may be less familiar, would you mind explaining what neurotechnology is?
Nita Farahany 9:10
So you know, I mentioned that I have this passion for behavioural sciences, and part of that led me to really looking at a lot of the technologies which are aimed at decoding brain activity. Earlier on, there was a bridge for me with criminal law, in that there were some early attempts to use neurotechnology, that is, any technology that helps peer into the brain, plus software that helps to decode what’s happening there, to try to figure out if a criminal was guilty or innocent or had a brain abnormality, and to explain it that way. Increasingly, today, my area of focus has shifted away from that criminal law focus to a different way that neurotechnology is developing. Some of your listeners will be familiar with or have heard of Elon Musk, and they might also be familiar that he has a neurotechnology company called Neuralink. Neuralink is a company that focuses on implanted neurotechnology. This is taking small electrodes, tiny, hair-width electrodes, and putting them into the brain. It involves drilling a little hole in the skull and placing them deep into the brain. And why would anybody do this? Generally, the people who are having that done are people who have lost some capacity to communicate with the outside world. Being able to get their brain signals to directly communicate with technology, for example, can enable them to type again or to move equipment around that is responsive to their brain signals. So think of it as sensors in the brain, then complex AI that is trained to understand what those signals mean and to help decode them into what you would intend to do in the world, like type or swipe or say yes or no or move an object. That’s the implanted world, and Neuralink is the one that’s, I think, most famous for people, but it’s not the one that’s furthest along.
There are companies like Synchron and others who are in later phases of clinical trials doing that, which could affect hundreds of millions of people who have some mobility or other disability issue that’s limiting for them, and help them regain self-determination. On the other side, there’s the neurotechnology for the rest of us, where we’re not going to have a little hole drilled in our skulls. Instead, what we’re really talking about is very similar to the wearable sensors that people are more familiar with, like a heart rate sensor in their watch, or maybe they’re wearing an Oura Ring or something that’s picking up some of their bodily signals. Up until now, there have been very few opportunities to have brain signals picked up, but brain signals, just like heart rate and other signals from the body, can be picked up by sensors. These are either standalone, like headbands, or something that can be worn in a baseball cap that picks up the electrical activity in the brain. There are other ways of doing it, but that’s the biggest modality right now. Or it could be something like earbuds that have EEG (electroencephalography) sensors that pick up electrical activity in the brain. Or headphones: one of the big companies that has launched in this space has really high-quality headphones that you can use to listen to music or a podcast or whatever else, all while the cups around your ears are loaded with sensors that pick up your brain activity. That’s all neurotechnology too. It’s just neurotechnology for the rest of us, where it can start to do a lot of the same things I was talking about with Neuralink, you know, use it to swipe around your screen or do things like that. But for the most part, the applications are a little bit more limited to doing things like tracking your attention or focus or otherwise.
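For readers curious what "tracking your attention" from EEG can look like under the hood, here is a deliberately simplified, illustrative sketch of the classic textbook approach: comparing signal power in different frequency bands. The band boundaries and the beta/alpha ratio are common conventions only; real consumer devices use proprietary, trained models, and nothing here reflects any particular vendor's algorithm.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` within [low, high] Hz, via an FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def attention_score(eeg, fs=256):
    """Crude focus proxy: beta-band (13-30 Hz) power as a share of
    beta plus alpha (8-12 Hz) power. Higher beta relative to alpha is
    commonly read as higher engagement. Returns a value in (0, 1)."""
    beta = band_power(eeg, fs, 13.0, 30.0)
    alpha = band_power(eeg, fs, 8.0, 12.0)
    return beta / (beta + alpha)

# Two synthetic one-second "EEG" traces: alpha-dominant vs beta-dominant.
fs = 256
t = np.arange(fs) / fs
relaxed = np.sin(2 * np.pi * 10 * t)   # pure 10 Hz tone (alpha band)
focused = np.sin(2 * np.pi * 20 * t)   # pure 20 Hz tone (beta band)
print(attention_score(relaxed, fs) < attention_score(focused, fs))  # prints True
```

The point of the sketch is only that raw brain signals carry far more information than the single number the user asked for, which is exactly the data-governance concern discussed later in the episode.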
Robert Hanna 13:22
Yeah, and again, thank you for giving such a good overview and some great examples there. You know, we’re a very positive Tech for Good show; we try to look at the benefits of technology. And you gave some examples there, but I guess, in your own words, perhaps for people who might be like, whoa, this is going too far: what do you see as the very obvious and helpful benefits of neurotechnology for people who may be less convinced?
Nita Farahany 13:48
Yeah, I mean, the easiest is for people who have health constraints, right? Those, I think, are the simplest ones. Within that world, there are people, for example, who suffer from epileptic seizures, and already they’re starting to integrate wearable neurotechnology, because it can detect, thanks to advances in AI, an epileptic seizure minutes or even up to an hour before it occurs, sending potentially life-saving alerts to a mobile device or to a loved one, which can be really transformational. For people who suffer from depression, there is neurotechnology that stimulates the brain in particular ways; there’s a company called Flow Neuroscience, and that can really be helpful. There’s also a focus on femtech these days, so fem-neurotech, where there are things like a headband that people can wear to try to treat premenstrual syndrome, and they’re even doing research on menopause to see if it can ease the symptoms, and it seems to be quite effective. Or if a person suffers from a tremor in their hand, they can wear an EMG (electromyography) neurotechnology device on the wrist that picks up the brain activity as it goes from the brain down the arm to the wrist, and then sends an inhibitory response that’s precise to their pattern of tremor back to their brain and stops the tremor, without them having to take drugs. So that’s pretty incredible. I’ve personally used neurofeedback quite a bit in order to ease my migraines, which has been great. And I’ve also used other kinds, like TENS units that provide stimulation to decrease pain and decrease the likelihood of needing to take drugs. And then for the rest of us, if it’s not a health condition, there are stress levels that can be moderated by using it for meditation.
And, you know, learning more effective meditation and neurofeedback. Or the new headphones from Neurable help people to track their attention levels, see when they’re most focused or when they’re more likely to have periods of mind-wandering and inability to focus, and then train to have longer periods of focus over time, which can be really helpful in our increasingly distracted world. So I could keep going, but there are a tonne of benefits. The best way to think about it is that most people are very familiar with doing things like exercise to tone the body or to improve their heart rate and bring it down over time, or know that they need to get sleep in order to adequately enable their body to rest and recover. But we have almost no insight into probably the part of our body that we most associate with our sense of self, and that we’re probably the most worried about losing, right? And so neurotechnology could really give us the insights into our brain that we haven’t had until now, in which case I think the kinds of applications and opportunities for improving our brain health and wellness will only increase over time, as the ability to pick up those signals and to decode those signals becomes more powerful.
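As a purely illustrative aside, the seizure-alert pattern mentioned above, a model scoring successive signal windows and pushing a notification when predicted risk crosses a threshold, reduces to a simple monitoring loop. Everything below (the toy risk model, the threshold, the alert shape) is a hypothetical stand-in, not any real device's logic:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    window_index: int
    message: str

def monitor(windows: List[List[float]],
            risk_model: Callable[[List[float]], float],
            threshold: float = 0.8) -> List[Alert]:
    """Scan successive signal windows and raise an alert whenever the
    model's predicted risk crosses `threshold`. In a real device the
    alert would be pushed to a phone or caregiver; here we just collect it."""
    alerts = []
    for i, window in enumerate(windows):
        risk = risk_model(window)
        if risk >= threshold:
            alerts.append(Alert(i, f"elevated risk {risk:.2f}: notify caregiver"))
    return alerts

# Toy stand-in model: "risk" is just mean absolute amplitude, capped at 1.
def toy_model(window):
    return min(1.0, sum(abs(x) for x in window) / len(window))

quiet = [0.1, -0.1, 0.2]   # low-amplitude window
spiky = [1.5, -1.8, 2.0]   # high-amplitude window
alerts = monitor([quiet, spiky], toy_model)
print(len(alerts))  # prints 1
```

The hard part in practice is of course the trained risk model, not the loop; the sketch only shows where such a model would slot in.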
Robert Hanna 17:10
Yeah, and I completely agree. And obviously we’re on a legal show, and I have a legal recruiting business, and I know a number of people who struggle when it comes to workplace wellness and mental health and things like that, so I think this is Tech for Good in the right way. I want to now talk about your book, which I do have a copy of in my office today and which I’m a big fan of: The Battle for Your Brain. And again, I mentioned to our listeners that I heard you speak live at ClioCon last year in Austin, and was just blown away, really, by the insights that you share. In your book, you explore how neurotechnology is advancing at an incredible pace. So what is the most exciting development you have seen recently that you’d like to share with our audience?
Nita Farahany 17:57
Hmm, I think probably the biggest is that Meta has launched their surface EMG device together with their Orion glasses. Now, I say that’s exciting in the sense that it’s most likely to be one of the earliest widespread devices, the kind of thing that makes it more likely that this becomes a phenomenon across society and part of our everyday lives. I wouldn’t put it in the category of being excited that Meta is the first, or that I have great faith or trust in how they will handle the data, or that they have the appropriate safeguards for individuals. But what motivated me to write The Battle for Your Brain was that device: knowing about it and seeing it before Meta acquired it in 2018. Two of the things that had really kept neurotechnology from going mainstream were, first, the form factor. Most of the devices were forehead bands that were kind of silly-looking. Most people weren’t going to spend their everyday lives wearing these, and so you would have to use neurotechnology in really limited use cases. And the second was not just the form factor, but the applications. The applications were quite limited: you would use it to meditate for 20 minutes, and then you’d be done. It wasn’t something that had the killer app that would make it go widespread. And in 2018, when I saw the device that eventually became this surface EMG device at Meta, the person who was showcasing it was one of the early company employees and founders, and he was saying: why are we humans such clumsy output devices? We’re really good at taking in information, but we’re really bad at getting it out of our bodies. What if we could operate octopus-like tentacles with our brains instead?
And so he was showing what was neurotech in the form of a watch, something that could be embedded into an everyday device we already use, and people are already quite familiar with smartwatches. And what he was talking about wasn’t just solving the form factor by putting it into an everyday device; he was talking about a neural interface. That is, using neurotechnology to interface with the rest of our technology, eventually replacing other peripheral devices: your mouse and your keyboard, or the joystick you use to interact with your glasses or your VR devices. It’s awkward that we have something in between, right, rather than actually just thinking about it and moving more naturally between them. And so that’s what he was talking about. And I was like, oh wow, this is the killer app. Once it actually becomes brain-to-the-rest-of-our-technology communication, it’s fundamentally different. I really thought Apple was going to acquire that company, because they already had the most popular watch, and so I was really surprised when, a year later, Meta acquired them. And now Meta has launched their surface EMG device together with Orion, their augmented reality glasses, which you can interface with using this surface EMG (electromyography) device that picks up your brain signals as they go from your brain down your arm to your wrist. So that’s incredible to me. And at the time that my book came out, I predicted what was going to happen: you’re going to have headphones, and you’re going to have earbuds, and all of that has launched now, right? Neurable’s headphones have launched, Emotiv has their second generation of earbuds, and there’s Apple’s patent, which has been released, which shows that they’re coming forward with AirPods that are packed with EEG.
And so it’s gratifying to see I got it right, exciting that that’s the direction I saw it going and it really is going in that direction, which means all of the good side, and all of the rest that I’m worried about, are now realities that we have to grapple with.
Robert Hanna 22:18
Today’s episode is brought to you by Clio, host of the biggest conference in legal, ClioCon. This year’s ClioCon, set for Boston, Massachusetts on October the 16th and 17th, is taking shape. The latest exciting news: the full agenda is now live. You don’t want to miss the chance to see what’s happening in the legal industry, connect with your peers and level up how you use legal technology. Plus, Clio has made it easier than ever to make your conference experience your own. They’ve sorted keynotes, workshops and sessions into different tracks, meaning you can choose the track that best suits your needs, or mix and match from multiple tracks to build a customised experience. Further curate your ClioCon experience when you book your hotel with exclusive pricing, and secure tickets to the iconic evening wrap party. It’s no surprise, then, that ClioCon is already over 50% sold, so don’t miss out. Pass prices will never be this low again, so act now. Visit cliocon.com to secure your pass today. Now back to the show. You hold a number of roles; I think you’re also the co-chair of the UNESCO expert group on the ethics of neurotechnology. What are some of the global policies you think are in need of change, and why?
Nita Farahany 23:31
It’s a great question. So what’s nice is that the world is waking up to the issues. And as you mentioned, UNESCO launched this process that is currently underway to introduce a global standard around the ethics of neurotechnology, which includes a number of recommendations with respect to legal measures that countries should take to safeguard people against the misuse of neurotechnology. And so we should probably define some of those categories of misuse, which I go through in a lot of depth in the book, to show how the misuse is already occurring and that it will likely happen on a much larger scale. So, you know, we’re talking about the exciting prospect of being able to look into the brain and see things that we’ve never been able to see before. But it’s also the seat of everything for us, right? It’s our thinking, it is our emotions, it is our fatigue levels, it’s our neurological disease; it’s all of those things. And if we have access to it, so too do the companies who are providing the products. And as we already know, that means that governments also have a window into it in ways that they wouldn’t have otherwise, and the risks of that are really quite profound, because it’s not like other kinds of data, where, at the very least, we’re intentionally expressing our communication with the rest of the world. Whether it’s watching a video or hitting a like button or typing a message, at least it is something that has come out of our brain and into the world. This is what’s still in our brain. And, you know, it introduces a kind of threat of surveillance and misuse of that data in ways that really, fundamentally, I think, get at these concepts of a right to mental privacy and a right to freedom of thought and a right to self-determination: to be able to choose how we use our brains, whether we have access to them, who has access to them, all of those questions.
And so those really track to fundamental human rights. The UNESCO process tracks that as well and frames it around human rights. And should it be adopted in November 2025 by the 194 member states, it would at least serve as a really powerful soft law mechanism. But if you’re listening to this podcast, you are familiar with law, and you understand that soft law only goes so far. And so the question is what kind of hard law mechanisms might be put into place that could actually protect people. In the US, there have been a couple of laws passed to try to protect neural data. I think they’re misguided, because they’re very narrow. One of them is in Colorado. It protects neural data as sensitive data when it’s used for identification; it’s almost never used for identification, so that’s not that helpful, and there was a significant lobbying effort by TechNet to narrow the definition, to make it so that it didn’t really apply. And then in California, they passed a broader law that protects neural data when it’s derived from neurotechnology, and it narrowly defines it again, whereas there are lots of ways to get neural data. You can get neural data without literally using neurotechnology; you can get cognitive data from things like even heart rate, when you’re applying AI in order to make inferences about brain and mental states. Or eye tracking, which is incredibly powerful for making inferences about mental states. And so it intentionally excludes all of the rest of that data as non-neural data, and therefore not sensitive. Places like Chile have passed mental integrity laws, and some other countries have moved toward doing this. I have an article I published, called ‘Beyond Neural Data: Cognitive Biometrics and Privacy’, where we lay out in a table all of the different kinds of laws that are happening in the space. What I think we need is to move toward recognising that, on the one hand, we do need these stronger human rights.
On the other hand, we need context-specific rights. In the US, we had the Genetic Information Nondiscrimination Act, which safeguarded against the misuse of genetic information in employment or its use for health insurance. And I think trying to figure out the contexts in which we think there is misuse of brain data and cognitive data, like in educational settings or in employment settings, or for government surveillance purposes, and creating protections against that, would allow people to enjoy the upside of the technology and the upside of access, while also being able to share with, say, medical researchers, or places where it could really lead to great insight. So we have to find the right balance between sharing with people who aren’t going to misuse it, and safeguards against misuse in the specific contexts that we’re concerned about.
Robert Hanna 28:25
Yeah, no, really fascinating insights. And I want to pick up on one of the things you mentioned there, around cognitive liberty, because I think that’s been proposed as a new human right, correct me if I’m wrong. So what is cognitive liberty, for people who might be less familiar, what does it mean in practice, and how can we go about protecting it?
Nita Farahany 28:46
It’s a really good question. So, you know, I define cognitive liberty as the right to self-determination over our brain and mental experiences, which is both the right to access and use your brain if you choose to do so, or to enhance it, for example, if you choose to do so, and a right from interference with your mental privacy and freedom of thought. Now, you could think of that as a new human right, and that would be useful from a guiding-principle perspective, to say this helps guide us forward and understand the different rights that have to be updated. Or you can understand it as a guiding-principle right, the fundamental concept of liberty that directs us to update existing human rights. With my philosophy hat on, I’d like us to recognise it as the fundamental human right that underlies all of these other human rights. From my practical, what-can-we-get-passed-in-the-real-world perspective, I’d say, if you understand it as a guiding-principle right, the principle that underlies existing human rights, what it really directs you to do is to say: there is an existing human right to self-determination, there’s an existing human right to privacy and there’s an existing human right to freedom of thought, but the interpretation of each of those is quite narrow right now. And if you understand that the modern threat to those rights, from the digital era and from neurotechnologies, is a threat to mental privacy, is a threat to freedom of thought that goes well beyond religion to, broadly, rights against interception and manipulation of and punishment for your thoughts, then you would use cognitive liberty as your north star to say these are the rights that have to be updated in terms of our understanding and interpretation of them.
Robert Hanna 30:23
Yeah, fascinating once again. And, you know, I really like the way that you’re able to articulate quite complex new terminology to people in a very easy-to-understand way. You’ve touched on this, but I want to talk a little bit more around fairness, and how it links to AI and decision-making, because obviously AI is becoming more and more part of decision-making: for example, judging job applicants in my world as a legal recruiter, sentencing criminals, which you’ve touched on, diagnosing patients and so on. So how do you ensure fairness and accountability when it comes to this?
Nita Farahany 31:00
So, you know, I think fairness is tough, right? What do we mean by fairness, and what do we mean by accountability? I think it’s context-specific. For some people, fairness means access, you know, equitable access to technology; that’s a principle I think some people would associate with fairness. For others, accountability means responsibility of companies to the end user, and being accountable and transparent in their practices. I’ve been working on developing some sort of trust index that we could look at for how we would think about accountability and transparency for neurotech companies. And one of the efforts internationally that is underway is that the World Economic Forum has launched a new Global Future Council on neurotechnology, and I’m delighted to get to co-chair that and to focus on what some of the gaps in international governance are, and particularly this question of accountability and fairness. How do we think about industry standards or otherwise that could incentivize behaviour? Because law is the stick, often, right? But it also can be the incentive, if we put into place the right laws that actually drive incentives for behaviour. And sometimes the complement of the two really helps you to imagine change. One thing that we’ve already started to see is that there’s significant variability between the companies that are already out there with neurotech devices: what are they doing with the brain data that they’re collecting? What is the nature of the brain data that they’re collecting? Because if you collect, for example, very high-resolution or broad-spectrum EEG data, it can be mined to learn a whole lot of additional things, right? So the thing that you are agreeing to with the company is that you want to see your attention levels, but it turns out that they’re recording full-spectrum raw EEG data.
What’s happening to that data? Why are they collecting raw, full spectrum EEG data, as opposed to just the inference? What’s leaving the device and what’s staying on the device? What’s being processed on what we call the edge, like local storage, versus where does the rest of it go? Are they selling the data to third parties? You know, all of these questions are transparency practices which are necessary for accountability, but nobody’s really systematically looking at this, realising that that’s where one of the major risks is. So what if we developed something that actually looked at every company’s policy and then had a kind of transparency index that said, across these dimensions, with respect to the collection, the use, the nature of the data: Is it high resolution, medium resolution, low resolution? Is it inferences only that leave the device, or is it, you know, the full data that leaves the device and goes to third parties? And then just publish that, right, and say, here’s a regular index. See where the company that you’re considering buying a product from actually falls, and regulators, see the companies that you, you know, should be having oversight of, and see where they fall with respect to that. Transparency is really critical to accountability, and so we need mechanisms that actually increase transparency so that we can hold companies accountable for their practices. And fairness then, I think, ties into that, which is: how do you know if a company is behaving in a way that is fair toward the consumers or fair towards society, unless you really understand what the practices are that underlie their behaviour? And, you know, in AI, there are, like, the first generation of laws that have been passed. I think you can think of a lot of them as transparency laws. Like, I taught AI law and policy in the fall semester at Duke, and we looked closely at each of the laws.
And many of them don’t have much by way of teeth for enforcement, but what they have that’s interesting is a lot of transparency-forcing mechanisms. And I think the first step in any emerging technology is to gain greater transparency into the practices that could present legal risks, that could present, you know, concerns for the legal community, and we need the same when it comes to neurotechnology in order to lead to fair practices.
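[Editor’s note: purely as an illustration of the kind of transparency index Nita describes, here is a minimal sketch. Every dimension, label, and scoring level below is hypothetical, invented for this example; no actual published scoring scheme or company policy is represented.]

```python
# Hypothetical transparency index for neurotech data practices.
# Dimensions and risk levels are illustrative assumptions only.
DIMENSIONS = {
    # What is collected: just an inference, or raw broad-spectrum EEG?
    "data_resolution": {"inference_only": 0, "low": 1, "medium": 2, "high_raw": 3},
    # What leaves the device vs. is processed/overwritten on the edge?
    "leaves_device": {"nothing": 0, "inferences_only": 1, "full_raw_data": 3},
    # Is data sold or shared with third parties?
    "third_party_sharing": {"none": 0, "aggregated_only": 1, "raw_sale": 3},
    # How openly are these practices disclosed?
    "policy_disclosure": {"full": 0, "partial": 1, "none": 3},
}

def transparency_score(policy: dict) -> float:
    """Score stated data practices from 0.0 (most transparent,
    least risky) to 1.0 (least transparent, most risky)."""
    total = sum(levels[policy[dim]] for dim, levels in DIMENSIONS.items())
    worst = sum(max(levels.values()) for levels in DIMENSIONS.values())
    return total / worst

# Example: a company that keeps data on device and reports only inferences.
example = {
    "data_resolution": "inference_only",
    "leaves_device": "inferences_only",
    "third_party_sharing": "none",
    "policy_disclosure": "full",
}
print(round(transparency_score(example), 2))  # → 0.08
```

Publishing such scores per company, as Nita suggests, would let consumers compare products and regulators target oversight; the real work would be agreeing on the dimensions and auditing the underlying claims.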
Robert Hanna 35:35
Again, some really fascinating insights. And I love that sort of transparency for accountability. It’s almost got a ring to it, but it’s so true, isn’t it? I think the more of it there is, the more it can lead to that. Okay, I want to now talk about the Presidential Commission, because you were appointed by President Obama at the time to serve on the Presidential Commission for the Study of Bioethical Issues. So can you tell us more about your experience, and generally, a bit more about your time there?
Nita Farahany 36:05
It was probably one of the best experiences of my life, and really quite an honour. So in 2010, pretty early in my academic career, you know, five years in, I was appointed to the Presidential Commission for the Study of Bioethical Issues. There were 13 members. I was by far the youngest, most junior member. And so, you know, when I first got there, it was a steep learning curve, because we had, you know, the presidents of two universities, the University of Pennsylvania and Emory University, who were the chair and the vice chair, and people who were very seasoned in bioethics and, you know, the various fields that they had come from, including, you know, science. We had a, you know, colonel from the Army who was the chief medical officer. We had somebody from Homeland Security. And, you know, it was maybe one of the first or second meetings, and one of the first topics we were asked by the President to take on was that Craig Venter had announced the first synthetic organism, one that had been created, you know, entirely with a computer, he said, as its parent. And, you know, we were asked to evaluate the risks and benefits of it. We were at one of the early meetings. And, you know, we had these open hearings that were regularly held. And the chair at the time, Amy Gutmann, turns to me, and she says, you know, Nita, what do you think? And I was like, well, that’s a really interesting question. You know, on the one hand this, and on the other hand that. Like, I was a true academic at that point, yeah? And she’s like, yeah, we all know that. That’s why we’re here. So what do you think? And, you know, the part that was, I think, the most valuable for me was moving very early in my career from that more, like, philosophical, just balanced perspective to having to take a stand, figure out exactly where I came out on issues, be able to explain why I came out on issues in a particular way, and then drive that toward practical solutions.
Both of the people who were JDs on the commission, me and Anita Allen from Penn, were JD-PhDs. And so it was, you know, this kind of mix of the two approaches to bring to something. But, you know, I learned to not only come up with very pragmatic and practical, you know, solutions to problems, but to direct them and target them at particular actors. So if you go through our reports, you know, there’s all of the justification to kind of understand and analyse the issue, but then it, you know, results in a concrete recommendation: this actor, right, should do this thing in the following timeframe in order to achieve the following results. And that was incredible. We took on some amazing issues over the course of the time we were on the commission, for seven years, and, you know, some really hard and complex issues. And I gained a lot of confidence in my ability, I think, to be able to take on very hard issues, think through them, get the information and the process that I need in order to address them, be comfortable with taking a stand about exactly where I come out on it, and then say, and this is how we should address it.
Robert Hanna 39:16
Love that, yeah, and it sounds like an absolutely brilliant experience. But as they always say, you know, the comfort zone is great, but nothing grows there, right? And it sounds like it enabled you to grow and, you know, go on to all the other amazing things that you’ve done. I’ve referenced them briefly as well. We connected at ClioCon in 2024; obviously Clio sponsor our show. You were one of the keynote speakers at the flagship event in Austin, Texas, and you discussed the ethical implications of neurotechnology and the importance of mental privacy. We’ve touched on that, but can you tell us a little bit more about the importance of mental privacy from your own perspective?
Nita Farahany 39:53
Yeah. So what I really try to do in that talk, and in talks like it, is to both help people understand, like, what is neurotechnology? Why should you care right now? Because it is here and it is coming. What are the upsides, right? So the answer to this is, don’t try to ban it, right? That’s not the right solution. And what are the risks? And those risks are risks that are being realised today. And so if we start from thinking about the workplace, where this really touches everyone, there is a huge asymmetry, right, between employer and employee in terms of information access, and there is a growing use of surveillance tools in the workplace. During the COVID pandemic, that went from some surveillance tools to more than 85% of companies saying that they now use surveillance tools, and that even includes things like cameras that they were turning on during the COVID work-from-home period to check and see what a person’s doing, or the kind of bossware software that’s on computers that can look and see what people are doing. Now, just imagine companies having access to, literally, your brain activity while you are working. And so you get work-issued headphones and work-issued earbuds, just like your other work-issued technology, and that gives the company access to, you know, your attention levels, your fatigue levels, potentially your cognitive decline over time. Now this might sound like, oh, that’s never gonna happen at my company, but, you know, I talk about the fact that already, for more than a decade, there’s a company called SmartCap that has been selling enterprise technology that enables companies, you know, trucking companies and mining companies and others, to track their employees’ fatigue levels. They do it better than, you know, most, in that they only provide a score of one to five for fatigue levels, keep the data on device, and overwrite it on device.
But there are other companies that don’t do that, and in China, you know, the employees who work at state-based factories or, you know, drive the high-speed train between Beijing and Shanghai have their brain activity monitored throughout their workday. And that data has already been used to do things like put up communist messaging in front of them and see how they react to it, punishing them based on what that data reveals, or, you know, sending them home based on, kind of, the emotional levels of their brain, believing that they could be disruptive. It’s not hard to see how that leads to a pretty dangerous result in autocratic regimes, where even your brain activity could reveal your political leanings. You know, I’d say people in the US are afraid right now that we’re kind of entering into more of a McCarthy-like era, and that your loyalties are, you know, the kind of loyalties that might be tested. Your brain is much more likely to reveal that, even if you try not to respond in any other way; how you react to political messaging that’s put in front of you is very revealing from brain activity. Likewise, in educational settings, there are educational institutions worldwide that have required students to wear EEG headsets in the classroom, and used that data for the teacher, for the parents, and in China even the state, to make choices about students, to punish them if they’re not paying attention adequately. And, you know, there are already law enforcement agencies worldwide that have used neurotechnology to interrogate the pre-conscious signals in the brain, to see how a person reacts to different crime scene details that they shouldn’t know about, and to see if their brain pre-consciously recognises something that even your conscious, you know, kind of, awareness isn’t able to screen out. The examples abound in the book.
Like, I go through countless real-world examples, and I was very intentional in the book to make it grounded in the here and now, because there are so few people who understand it’s already happening now; the question is just scale from here. So, you know, then the question is, like, why do we need mental privacy? Those are just a few of the reasons, right? People know that we’re the product on things like social media, but when those same companies are selling to you, you know, EMG devices or EEG devices, and then tracking how you respond to videos or advertisements, not just by how much time you spend on them, but by your unconscious brain-based reactions to them, the capacity for neuromarketing, but also for manipulating brain behaviour and nudging it in ways that you’re never aware of, becomes really disturbing. And so the right to cognitive liberty, this right to self-determination over your brain and mental experiences, isn’t just to protect mental privacy. It’s to protect against the misuse of data to manipulate and to punish people based on what their thoughts reveal.
Robert Hanna 45:12
The ship has left the harbour, hasn’t it? So with all of this, it’s only going in that direction. That’s why we were super keen to have this conversation, to educate our listeners, because I came away from that keynote just, you know, mind-blown. And I’m someone who’s very involved in, you know, tech and Web3 and, you know, always looking forwards, but this stuff, we all need to kind of pay attention to. So I’m super grateful for everything you’ve shared thus far. And I guess, you know, you’re well recognised; you were named to the 2024 Vox Future Perfect 50, a list recognising the world’s most impactful thinkers and innovators shaping the future. So, you know, building on what you just said, how will you continue to champion cognitive liberty and, indeed, mental privacy?
Nita Farahany 45:54
Yeah, I really focus, you know, as much as I can on impact, right? I’m really interested in working with existing institutions to help leverage their convening power and their engagement with the world to bring about the change that we need in order to make this an accountable, transparent technology that is governed to align with human values and human flourishing. So, you know, in the legal arena, for example, the Uniform Law Commission has recently launched a study committee on mental privacy, and I’m chairing that for them. And I’m excited about that, because one of the things that I think would be incredibly beneficial is to have model laws to help us think about, from a uniform perspective across the United States, what would be a sensible way to reach the right balance between the benefits of the technology as well as the downside risks, to safeguard against them without a piecemeal state-by-state approach that really could frustrate innovation and also leave consumers differentially protected in ways that would be problematic. The American Law Institute and the European Law Institute have recently launched a joint project on the principles of biometrics, obviously a slightly different focus than neurotechnology, but I think a nice related effort, which looks broadly across biometrics, including for identification, but also the growing use of things like facial emotion recognition software and, you know, other technologies that are being directed to try to make inferences about brain and mental states. And to think about, like, how does that interrelate with things like mental privacy and with freedom of thought, and, more generally, what does that mean for, you know, what technologies might be included within a broader conception of what biometrics are.
So those are the kinds of things that I’m engaged in deeply, and I continue both to give talks, to educate the public more generally and to bring in the stakeholders that we need, and to work with these kinds of organisations that are moving ahead, so that we can do it in a really thoughtful, deliberate way, to try to reach this point where technology better serves our interests.
Robert Hanna 48:14
Yeah, and thank you for the work that you do, because, as I said before, you know, we are a Tech for Good show, and I think if we have the right people advising, looking into this, educating, then hopefully we can have a better society. And my next question isn’t one I’m going to hold you to, but just as someone who is a thought leader in this: in the next 20 years, and I think it’s almost impossible to predict anything 20 years out with the world we’re in, what do you think might be the biggest ethical debate surrounding technology?
Nita Farahany 48:40
So I think what it means to be human is fundamentally changing. And, you know, as you look at the co-evolution of humanity with technology, I think what we will come to understand is: what is human, what is not? What are other intelligences that are part of, and deserve, the same rights that we do, or similar rights to what we do? You know, we are in a rapidly evolving era of shifts in what it means to be human, and of new entrants to the space of intelligence. And I think 20 years from now, we’re going to be in a different place than we are right now. I think we’re going to have other intelligences at the table, and it’s really going to be: how are we addressing them, and how are we engaged with them? And is it really us or them, or is it some blend that we see?
Robert Hanna 49:55
Yeah, and it’s a really, sort of, you know, thought-out view, and, you know, just as a father speaking, you know, you think about what the future might look like for that generation; it’ll be wildly different. I know we’ve spoken over the years of my great-grandparents, grandparents, everything else, but, you know, we’ve had people come onto the show and talk generally about, you know, AI being not just a once-in-a-tech revolution, but a once-in-a-species, you know, revolution, and it’s going to be interesting to see what happens. Nita, this has been fantastic.