
posted by janrinok on Wednesday November 19, @07:36PM
from the Altman-Bezos-Gates-and-Musk-again dept.

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html
https://archive.ph/mgZRE

As neural implant technology and A.I. advance at breakneck speeds, do we need a new set of rights to protect our most intimate data — our minds?

On a recent afternoon in the minimalist headquarters of the M.I.T. Media Lab, the research scientist Nataliya Kosmyna handed me a pair of thick gray eyeglasses to try on. They looked almost ordinary aside from the three silver strips on their interior, each one outfitted with an array of electrical sensors. She placed a small robotic soccer ball on the table before us and suggested that I do some "basic mental calculus." I started running through multiples of 17 in my head. After a few seconds, the soccer ball lit up and spun around. I seemed to have made it move with the sheer force of my mind, though I had not willed it in any sense. My brain activity was connected to a foreign object. "Focus, focus," Kosmyna said. The ball swirled around again. "Nice," she said. "You will get better."

Kosmyna, who is also a visiting research scientist at Google, designed the glasses herself. They are, in fact, a simple brain-computer interface, or B.C.I., a conduit between mind and machine. As my mind went from 17 to 34 to 51, electroencephalography (EEG) and electrooculography (EOG) sensors picked up heightened electrical activity in my eyes and brain. The ball had been programmed to light up and rotate whenever my level of neural "effort" reached a certain threshold. When my attention waned, the soccer ball stood still.
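The trigger logic described above (move the ball only while a measure of neural "effort" stays above a threshold) can be sketched in a few lines of Python. This is a hypothetical illustration, not Kosmyna's implementation: the `band_power` proxy, the threshold value and the sample windows are all assumptions, and a real pipeline would band-pass filter the EEG into a frequency band of interest before estimating power.

```python
def band_power(samples):
    """Mean squared amplitude: a crude proxy for signal power.

    Real B.C.I. pipelines band-pass filter the EEG first and
    estimate power within a specific band (beta, for focus).
    """
    return sum(s * s for s in samples) / len(samples)

def effort_exceeds(samples, threshold):
    # Fire the actuator (light up and spin the ball) only while
    # the estimated "effort" is at or above the threshold.
    return band_power(samples) >= threshold

# Simulated one-window EEG amplitudes (arbitrary units):
resting = [0.1, -0.2, 0.15, -0.1, 0.05]   # attention waned
focused = [0.8, -0.9, 1.1, -0.7, 0.95]    # mental arithmetic

THRESHOLD = 0.25  # assumed per-user calibration value

print(effort_exceeds(resting, THRESHOLD))  # False: ball stands still
print(effort_exceeds(focused, THRESHOLD))  # True: ball lights up and spins
```

In practice the threshold would be calibrated per user and per session, which is roughly what the "Focus, focus ... you will get better" coaching in the demonstration is compensating for.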

For now, the glasses are solely for research purposes. At M.I.T., Kosmyna has used them to help patients with A.L.S. (Amyotrophic Lateral Sclerosis) communicate with caregivers — but she said she receives multiple purchase requests a week. So far she has declined them. She's too aware that they could easily be misused.

Neural data can offer unparalleled insight into the workings of the human mind. B.C.I.s are already frighteningly powerful: Using artificial intelligence, scientists have used B.C.I.s to decode "imagined speech," constructing words and sentences from neural data; to recreate mental images (a process known as brain-to-image decoding); and to trace emotions and energy levels. B.C.I.s have allowed people with locked-in syndrome, who cannot move or speak, to communicate with their families and caregivers and even play video games. Scientists have experimented with using neural data from fMRI imaging and EEG signals to detect sexual orientation, political ideology and deception, to name just a few examples.

Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to "write" the brain as well, potentially altering human understanding and behavior. Optogenetic implants are already able to partially restore vision to patients with genetic eye disorders; lab experiments have shown that the same technique can be used to implant false memories in mammal brains, as well as to silence existing recollections and to recover lost ones.

Neuralink, Elon Musk's neural technology company, has so far implanted 12 people with its rechargeable devices. "You are your brain, and your experiences are these neurons firing," Musk said at a Neuralink presentation in June. "We don't know what consciousness is, but with Neuralink and the progress that the company is making, we'll begin to understand a lot more."

Musk's company aims to eventually connect the neural networks inside our brains to artificially intelligent ones on the outside, creating a two-way path between mind and machine. Neuroethicists have criticized the company for ethical violations in animal experiments, for a lack of transparency and for moving too quickly to introduce the technology to human subjects, allegations the company dismisses. "In some sense, we're really extending the fundamental substrate of the brain," a Neuralink engineer said in the presentation. "For the first time we are able to do this in a mass market product."

The neurotechnology industry already generates billions of dollars of revenue annually. It is expected to double or triple in size over the next decade. Today, B.C.I.s range from neural implants to wearable devices like headbands, caps and glasses that are freely available for purchase online, where they are marketed as tools for meditation, focus and stress relief. Sam Altman founded his own B.C.I. start-up, Merge Labs, this year, as part of his effort to bring about the day when humans will "merge" with machines. Jeff Bezos and Bill Gates are investors in Synchron, a Neuralink competitor.

Even if Kosmyna's glasses aren't for sale, similar technology is on the market. In 2023, Apple patented an AirPods prototype equipped with similar sensors, which would allow the device to monitor brain activity and other so-called biosignals. Last month, Meta unveiled a pair of new smart glasses and a "neural band," which lets users text and surf the web with small gestures alone. Overseas, China is fast-tracking development of the technology for medical and consumer use, and B.C.I.s are among the priorities of its new five-year plan for economic development.

"What's coming is A.I. and neurotechnology integrated with our everyday devices," said Nita Farahany, a professor of law and philosophy at Duke University who studies emerging technologies. "Basically, what we are looking at is brain-to-A.I. direct interactions. These things are going to be ubiquitous. It could amount to your sense of self being essentially overwritten."

To prevent this kind of mind-meddling, several nations and states have already passed neural privacy laws. In 2021, Chile amended its constitution to include explicit protections for "neurorights"; Spain adopted a nonbinding list of "digital rights" that protects individual identity, freedom and dignity from neurotechnologies. In 2023, European nations signed the León Declaration on neurotechnology, which prioritizes a "rights oriented" approach to the sector. The legislatures of Mexico, Brazil and Argentina have debated similar measures. California, Colorado, Montana and Connecticut have each passed laws to protect neural data.

The federal government has started taking an interest, too. In September, three senators introduced the Management of Individuals' Neural Data (MIND) Act, which would direct the Federal Trade Commission to examine how neural data should be defined and protected. The Uniform Law Commission, a nonprofit that authors model legislation, has convened lawyers, philosophers and scientists who are working on developing a standard law on mental privacy that states could choose to adopt.

Without regulations governing the collection of neural data and the commercialization of B.C.I.s, there is the real possibility that we might find ourselves becoming even more beholden to our devices and their creators than we all already are. In clinical trials, patients have sometimes been left in the lurch; some have had to have their B.C.I.s surgically explanted because funding for their trial ran out.

And the possibility that therapeutic neurotechnologies could one day be weaponized for political purposes looms heavily over the field. Musk, for example, has expressed a desire to "destroy the woke mind virus." As Quinn Slobodian and Ben Tarnoff argue in a forthcoming book, it does not require a great logical leap to suspect that he sees Neuralink as part of a way to do so.

In the 1920s, the German psychiatrist Hans Berger took the first EEG measurements, celebrating the fact that he could detect brain waves "from the unscathed skull." In the 1940s and '50s, scientists experimented with the use of electrodes to alleviate tremors and epilepsy. The Spanish neurophysiologist José Delgado made headlines in 1965, after he used implanted electrodes to stop a charging bull in its tracks; he bragged that he could "play" the minds of monkeys and cats like "electronic toys."

In a 1970 interview with The New York Times, Delgado prophesied that we would soon be able to alter our own "mental functions" as a result of genetic and neuroscientific advances. "The question is what sort of humans would we like, ideally, to construct?" he asked. The notion that a human being could be "constructed" had been troubling philosophers, scientists and writers since at least the late 18th century, when scientists first manipulated electric currents inside animal bodies. The language of electrification quickly seeped out of science and into politics: The historian Samantha Wesner has shown that in France, Jacobin revolutionaries spoke of "electrifying" people to recruit them to their cause and writers toyed with the possibility that political sentiment could be electrically controlled.

Two centuries later, when Delgado and his colleagues showed that it had become technically possible to use electricity to alter the workings of the animal mind, this too was accompanied by an explosion of political concern about the relation between the citizen and the state. Because the thinking subject is by definition a political subject — "the very presence of mind is a political presence," argues Danielle Carr, a historian of neuroscience who researches the political and cultural history of B.C.I.s and related technologies — the potential to alter the human brain was also understood as a threat to liberal politics itself.

In the U.S., where the Cold War fueled anxiety about potential brainwashing technologies, Delgado's work was at first approached with wonder and confusion, but it soon fell under increasing suspicion. In 1953, the director of the C.I.A., Allen Dulles, warned that the Soviet government was conducting a form of "brain warfare" to control minds. In a forthcoming book, Carr traces how the liberal doctrine of universal human rights and freedoms, including the freedom of thought, was positioned as a protective umbrella against communist mind-meddling, co-opting pre-existing struggles against psychiatric experimentation. While the United States warned of brain warfare abroad, it also worked to deploy it at home. Dulles authorized the creation of the C.I.A.'s clandestine MK-Ultra program, which for 20 years conducted psychiatric and mind-control experiments, often on unwitting and incarcerated subjects, until it was abruptly shut down in 1973.

Around this time, the University of California, Los Angeles, sought to create a Center for the Study and Reduction of Violence, leading to widespread speculation that the center would screen people in prisons and mental hospitals for indications of aggression and then subject them to brain surgery. An outcry, led in part by the Black Panthers, shut down funding for the initiative. These developments raised public awareness of neural technologies and contributed to the elevation of laws and rights as a stopgap against their worst uses. "We believe that mind control and behavior manipulation are contrary to the ideas laid down in the Bill of Rights and the American Constitution," the Republican lawmaker Steven Symms argued in a 1974 speech.

Over the next decades, the development of neurotechnology drastically slowed. By the 1990s, the end of the Cold War dispelled concerns about communist mind-meddling, and the political climate was ripe for reconsideration of the promises and perils of neurotech. In 2013, President Barack Obama created the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) program, which poured hundreds of millions of dollars into neuroscience. In 2019, the Pentagon's Defense Advanced Research Projects Agency announced that it was funding several teams working to develop nonsurgical neurotechnologies that could, for example, allow service members to control "swarms of unmanned aerial vehicles" with their minds.

As experimentation progressed, so did the medical and therapeutic uses of B.C.I.s. In 2004, Matthew Nagle, a tetraplegic, became the first human to be implanted with a sophisticated B.C.I. For individuals who cannot move or speak — those who are living with degenerative disease or paralysis, for example — advances in neural implants have been transformative.

Earlier this year, Bradford Smith, who lives with A.L.S. and was the third person to receive a Neuralink implant, used the A.I. chatbot Grok to draft his X posts. "Neuralink does not read my deepest thoughts or words I think about," Smith explains in an A.I.-generated video about his experience. "It just reads how I want to move and moves the cursor where I want." Because he received the implant as part of a clinical trial, Smith's neural data is protected by HIPAA rules governing private health information. But for mass-market consumer devices like EEG headbands and glasses — devices that can be used to enhance cognition, focus and productivity rather than simply restore brain functions that have been compromised — there are very few data protections. "The conflation of consumer and medical devices, and the lack of a consistent definition of neural data itself, adds to the confusion about what is at stake," said Leigh Hochberg, a neurointensive care physician, neuroscientist and director of BrainGate clinical trials. "It's a good societal conversation to have, to reflect on what we individually believe should remain private."

In 2017, driven by a sense of responsibility and terror about the implications of his own research, Rafael Yuste, a neuroscientist at Columbia University, convened scientists, philosophers, engineers and clinicians to create a set of ethical guidelines for the development of neurotechnologies.

One of the group's primary recommendations was that neurorights protecting individual identity, agency and privacy, as well as equal access and protection from bias, should be recognized as basic human rights and protected under the law. Yuste worked with the lawyer Jared Genser to create the Neurorights Foundation in 2021. Together, they surveyed 30 consumer neurotech companies and found that all but one had "no meaningful limitations" on retrieving or selling user neural data. There is consensus that some regulation is necessary given the risks of companies and governments having unfettered access to neural data, and that existing human rights already offer a small degree of protection. But neuroscientists, philosophers, developers and patients disagree about what kinds of regulations should be in place, and about how neurorights should be translated into written laws.

"If we keep inventing new rights, there is a risk that we won't know where one ends and the other begins," said Andrea Lavazza, an ethicist and a philosopher at Pegaso University in Italy who supports the creation of new rights. The United Nations, UNESCO and the World Economic Forum have each convened groups to investigate the implications of neurotechnologies on human rights; dozens of guidance documents on the ethics of the field have been published.

One of the fundamental purposes of law, at least in the United States, is to protect the individual from unwarranted interference. If neurotechnologies have the potential to decode or even change patterns of thought and action, advocates believe that law has the distinct capacity to try to restrain its reach into the innermost chambers of the mind. And while freedom of thought, conscience, opinion, expression and privacy are all recognized as basic human rights in international law, some philosophers and lawyers argue that these fundamental freedoms need to be updated and reinterpreted if they have any hope of protecting individuals against interference from neural devices, because they were conceived when the technology was only a distant dream. Farahany, the law and philosophy professor at Duke, argues that we need to recognize a fundamental right to "cognitive liberty," which scholars have defined as "the right and freedom to control one's own consciousness and electrochemical thought process" — to protect our minds. For Farahany, this kind of liberty "is a precondition to any other concept of liberty, in that, if the very scaffolding of thought itself is manipulated, undermined, interfered with, then any other way in which you would exercise your liberties is meaningless, because you are no longer a self-determined human at that point."

To call for the recognition of a new fundamental right, or even for the enhancement of existing human rights, is, at the moment, a countercultural move. Over the past several years, human rights and the international laws designed to protect them have been gravely weakened, while technologies that underlie surveillance capitalism have grown only more widespread. We already live in a world awash with personal data, including sensitive financial and biological information. We leave behind reams of revealing data wherever we go, in both the physical and digital worlds. Our computer cameras are powerful enough to capture our heart rates and make medical diagnoses. Adding neural data on top of this might not constitute such an immense shift. Or it might change everything, offering external actors a portal into our most intimate — and often unarticulated — thoughts and desires. The emergence of B.C.I.s during the mid-20th century was greeted and ultimately torpedoed by Cold War liberalism — Dulles, the C.I.A. director, warned that mind-control techniques could thwart the American project of "spreading the gospel of freedom." Today, we lack a corresponding language with which to push back against the data economy's expanding reach into our minds. In a world where everything can be reduced to data to be bought and sold, where privacy regulations offer only a modicum of protection and where both domestic and international law have been weakened, there are few tools to shield our innermost selves from financialization.

In this sense, the debate over neurorights is a kind of last-ditch effort to ensure that the hard-won protections of the past century carry over into this one — to try to prevent the freedom of thought, conscience and opinion, for example, from being effectively suppressed by the increasingly algorithmic experience of everyday life. How much privacy might we, as a society, be willing to trade in exchange for augmented cognition? "In three years, we will have large-scale models of neural data that companies could put on a device or stream to the cloud, to try to make predictions," said Mackenzie Mathis, a neuroscientist at the Swiss Federal Institute of Technology, Lausanne. How those kinds of data transfers should be regulated, she said, is an urgent question. "We are changing people, just like social media, or large-language models changed people." Under these conditions, the challenge of ensuring that individuals retain the ability to manage access to their innermost selves, however defined, becomes all the more acute. "Our mental life is the core of our self, and we used to be sure that no one could break this barrier," said Lavazza. The collapse of that surety could be an invitation to dread a future in which the unrestricted use of these technologies will have destroyed society as we know it. Or it could be an occasion to rethink the politics that got us here in the first place.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by bzipitidoo on Wednesday November 19, @08:10PM (5 children)

    by bzipitidoo (4388) on Wednesday November 19, @08:10PM (#1424760) Journal

Suppose a new brain enhancement enables authoritarians to be more open-minded and far less authoritarian. We may get that, but less directly. Authoritarians are notoriously poor at reasoning, susceptible to wild conspiracy theories, such as the crazy idea that there are microchips in vaccines. They also seem to have an above-average liking for violence. Merely improving cognition may be enough to change that.

    Supposing we have it, what should we do with such tech? At this time, authoritarianism would seem to be one of the greatest dangers to humanity. From what I've read about this, I have begun to wonder if most or nearly all wars ever have their roots in authoritarianism. WWII sure did. If this tech can save the world from that, maybe the best thing to do is rush ahead with it.

    • (Score: 2) by ikanreed on Wednesday November 19, @09:46PM (1 child)

      by ikanreed (3164) on Wednesday November 19, @09:46PM (#1424770) Journal

      Okay, but look at how information availability through the Internet affected that same problem.

Cognitive ability isn't the limiting factor here, either. Basic critical thinking directed towards your own ideas is the floor that needs to be raised if you want to fix authoritarian social patterns.

      And sticking wires in your brain won't do that.

      • (Score: 2, Touché) by Anonymous Coward on Wednesday November 19, @11:28PM

        by Anonymous Coward on Wednesday November 19, @11:28PM (#1424774)

        And sticking wires in your brain won't do that.

        Not with the piddling little voltages they are currently using.

    • (Score: 4, Interesting) by Thexalon on Thursday November 20, @02:52AM (2 children)

      by Thexalon (636) on Thursday November 20, @02:52AM (#1424783)

      Big tech being big tech, you can be sure that the thoughts that these chips would allow you to have would be those that benefit big tech and the thinking of the people that run those businesses. Which probably isn't so much explicit authoritarianism as much as mindless consumerism, presenting their victims with something resembling what TV advertising and later TikTok had been trying to mainline into people's brains.

      After all, if you have a population that's not thinking, they won't think about rebelling against those currently in power, so you won't need those expensive jackbooted thugs to keep them in line. Readers of Brave New World probably get the idea here, or maybe the humans in Wall-E.

      --
      "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
      • (Score: 2) by acid andy on Thursday November 20, @10:12AM (1 child)

        by acid andy (1683) on Thursday November 20, @10:12AM (#1424791) Homepage Journal

        They probably will still need some thugs, but they will be cheap robots instead of expensive humans. The robot thugs will become cheaper than offering UBI, basic food and shelter to millions of unemployed who need somewhere warm and dry to recharge their brain interface devices. Basically, Futurama was probably right about the robots.

        --
        "rancid randy has a dialogue with herself[...] Somebody help him!" -- Anonymous Coward.
        • (Score: 2) by Thexalon on Thursday November 20, @01:18PM

          by Thexalon (636) on Thursday November 20, @01:18PM (#1424794)

          Of course, knowing how competent these guys and their companies tend to really be, those robots will be much more ED-209 than something actually effective against the correct target.

          --
          "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
  • (Score: 4, Insightful) by looorg on Wednesday November 19, @09:03PM (2 children)

    by looorg (578) on Wednesday November 19, @09:03PM (#1424765)

    As with Mars and all the other fancies -- After you MOFO! If they install it first, they are the first test subject. Then we can start to test it on others. Spare the poor monkeys and rats.

I'm really looking forward to this so that they can start to stream advertisements straight into the brain. Not to mention that when they get hacked for the first time cause they had not patched the brain-0-day or their brain-firewall wasn't up to date ... If you can't even have an unprotected machine online for minutes without being scanned and probed, I wouldn't want my brain to be online unsupervised. But they can go right ahead. Have fun when you are a drooling salad of a human cause you got hacked and jacked and can't afford to pay whomever the 1BTC to unlock whatever they did to you.

    • (Score: 1) by fen on Wednesday November 19, @10:35PM

      by fen (54588) Subscriber Badge on Wednesday November 19, @10:35PM (#1424772)

Weren't we trying to dump Elon on Mars? So now we want him to wirehead?

    • (Score: 2) by jelizondo on Thursday November 20, @01:05AM

      by jelizondo (653) Subscriber Badge on Thursday November 20, @01:05AM (#1424776) Journal

You are right. Even though I despise Musk and Bezos equally, at least Bezos has gone up on one of his own rockets. Why is Elon so set on sending someone else to orbit and then Mars?

My hypothesis is that he deems himself too valuable to risk his miserable life on a malfunctioning rocket...

  • (Score: 2, Funny) by Anonymous Coward on Wednesday November 19, @09:30PM (1 child)

    by Anonymous Coward on Wednesday November 19, @09:30PM (#1424768)

    "Big Tech" can bite my shiny metal ass.

    • (Score: 2) by Ingar on Thursday November 20, @08:55AM

      by Ingar (801) on Thursday November 20, @08:55AM (#1424789) Homepage Journal

      Lightspeed briefs

      For the discriminating crotch

      --
      Love is a three-edged sword: heart, soul, and reality.
  • (Score: 4, Interesting) by SomeGuy on Wednesday November 19, @10:59PM

    by SomeGuy (5632) on Wednesday November 19, @10:59PM (#1424773)

    What brains?

    What would they even get out of this? A zombie army of fat smart phone caressing cows and tattoo covered country hicks setting up a base at the local Walmart?

    Eh, it's just another bullshit AI hype article, safe to ignore.
