posted by hubie on Tuesday December 02, @11:40AM   Printer-friendly
from the is-this-your-card? dept.

Ethicists say AI-powered advances will threaten the privacy and autonomy of people who use neurotechnology:

Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as 'Twinkle, Twinkle, Little Star', rang out.

But there was a twist. For Smith, it seemed as if the piano played itself. "It felt like the keys just automatically hit themselves without me thinking about it," she said at the time. "It just seemed like it knew the tune, and it just did it on its own."

Smith's BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.

[...] Andersen's research also illustrates the potential of BCIs that access areas outside the motor cortex. "The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas," says Andersen. "There's a wide variety of things that we can decode."

The ability of these devices to access aspects of a person's innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private. It also poses ethical questions about how neurotechnologies might shape people's thoughts and actions — especially when paired with artificial intelligence.

Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people's internal reactions to online and other content.

Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. "Whole-brain interfacing is going to be the future," says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored. Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. "It leads you to the final question: how do we make that safe?"

[...] Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design in 2023 for EEG sensors for future use in its AirPods wireless earphones.

Yet unlike BCIs aimed at the clinic, which are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. "There's a wild west when it comes to the regulatory standards," he says.

In 2018, neuroethicist Marcello Ienca and his colleagues found that most consumer BCIs don't use secure data-sharing channels or implement state-of-the-art privacy technologies. "I believe that has not changed," Ienca says. What's more, a 2024 analysis of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all had complete control over the data users provided. That means most firms can use the information as they please, including selling it.

Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status. But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person's mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person.

"The data economy, in my view, is already quite privacy-violating and cognitive-liberty-violating," Ienca says. Adding neural data, he says, "is like giving steroids to the existing data economy".

Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development, have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.
Heading to the clinic

While their development advances at pace, so far no implanted BCI has been approved for general clinical use. Synchron's device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn't require neurosurgery. It has proved safe, robust and effective in initial trials, and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.

Elon Musk's neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials.

At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures. Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology.

As for what's next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. "All of them hope to go back further in time in the brain," she says, "and to get to that subconscious precursor to thought."


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by aafcac on Tuesday December 02, @05:22PM (17 children)

    by aafcac (17646) on Tuesday December 02, @05:22PM (#1425603)

    That right there is why I've thought that developing this stuff is an extremely bad idea. The stuff that requires surgery is slightly less bad, but if a government wants in badly enough and doesn't care what the consequences are if the procedure fails, they'll force the procedure to implant the device.

    I do get that there are some people that would legitimately need this like those that are completely paralyzed and locked in without any ability to communicate in any fashion, but I'm not convinced that this deal with the devil is worth it. The effort really should be put into actually preventing the conditions that lead to that rather than providing such a tempting technology to abuse.

  • (Score: 2, Touché) by khallow on Tuesday December 02, @05:37PM (16 children)

    by khallow (3766) Subscriber Badge on Tuesday December 02, @05:37PM (#1425606) Journal
    If we don't research it, we won't understand it well enough to develop countermeasures.
    • (Score: 2) by liar on Tuesday December 02, @06:11PM (3 children)

      by liar (17039) on Tuesday December 02, @06:11PM (#1425612) Journal

PICARD: So what went wrong? Where are its creators? Where are the people of Minos?
      SALESMAN: Once unleashed, the unit is invincible. The perfect killing system.
      PICARD: Too perfect. You poor fools, your own creation destroyed you. What was that noise?
      SALESMAN: The unit has analysed its last attack and constructed a new, stronger, deadlier weapon. In a moment, it will launch that weapon against the targets on the surface.
      PICARD: Abort it!
      SALESMAN: Why would I want to do that? It can't demonstrate its abilities unless we let it leave the nest.

      --
      Noli nothis permittere te terere.
      • (Score: 1) by khallow on Tuesday December 02, @08:25PM (2 children)

        by khallow (3766) Subscriber Badge on Tuesday December 02, @08:25PM (#1425626) Journal
        The obvious rebuttal here was to use the weapon against itself and vacate the premises until the shooting stops.

        There's always a countermeasure.
        • (Score: 2) by liar on Wednesday December 03, @12:15AM (1 child)

          by liar (17039) on Wednesday December 03, @12:15AM (#1425647) Journal

          [Cavern]

          PICARD: That sound again.
          DATA: Another weapon has been launched, sir.
          PICARD: We've got to find some way to stop this system.
          DATA: I would need to see the programme schematic.
          SALESMAN: You've got it.
          PICARD: Is it possible to re-adjust the targeting sequence?
          SALESMAN: Absolutely. It wouldn't be much good without it.
          PICARD: Data, assign it a neutral target.
          DATA: The target must be specific, sir.
PICARD: Itself, then. Itself or its own power source.
          DATA: The force of that explosion would destroy this cavern and everyone on the surface.
          SALESMAN: Watch now. This is the fourth and final projectile. The Echo Papa series Six Oh Seven is about to complete this phase of its mission.

          --
          Noli nothis permittere te terere.
          • (Score: 1) by khallow on Wednesday December 03, @12:18AM

            by khallow (3766) Subscriber Badge on Wednesday December 03, @12:18AM (#1425648) Journal
            And remember my bit about not being there? There's always a countermeasure.
    • (Score: 3, Interesting) by aafcac on Tuesday December 02, @06:18PM (11 children)

      by aafcac (17646) on Tuesday December 02, @06:18PM (#1425614)

      The problem is that the only way to ensure that it never gets abused is if it's never researched in the first place. I'm not generally much of a luddite, but some technologies are fundamentally so dangerous that we need to be careful about developing in that direction. We've got atomic bombs, we can't undo that. Even if we destroy all of them, destroy all the documents everywhere and summarily execute everybody involved in the project, there is an awareness that it is possible and would in all likelihood be done again by somebody.

This is the same sort of thing: until it's done, it's just a theoretical, like nuclear fusion or space elevators. But the moment somebody manages to make one of these practical, it effectively becomes a permanent threat to humanity.

      There is no countermeasure to this needed if nobody develops it. And there's no guarantee that anybody is going to develop it. But, if everybody is convinced that somebody else is going to develop it and acts accordingly, that will guarantee that somebody creates it and we all have to suffer for it. It reminds me a bit of Nash Equilibrium or the Prisoner's Dilemma where the winning move is for everybody to refuse to play.
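The Nash Equilibrium / Prisoner's Dilemma framing above can be sketched with a standard payoff matrix (the numbers below are the conventional textbook values, not anything from this thread):

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for that player).
# Keys: (my move, other player's move); values: (my payoff, their payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # both refrain from developing
    ("cooperate", "defect"):    (0, 5),  # I refrain, they develop anyway
    ("defect",    "cooperate"): (5, 0),  # I develop, they refrain
    ("defect",    "defect"):    (1, 1),  # arms race: everyone worse off
}

def best_response(their_move):
    """Return my payoff-maximizing move given the other player's move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)][0])

# Defection is the best response to either move, so (defect, defect) is
# the unique Nash equilibrium -- even though (cooperate, cooperate)
# pays more for both players. That is exactly the trap described above:
# "everybody refuses to play" is better, but it isn't an equilibrium.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```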

      • (Score: 2, Insightful) by khallow on Tuesday December 02, @08:08PM (10 children)

        by khallow (3766) Subscriber Badge on Tuesday December 02, @08:08PM (#1425623) Journal
        You can't ensure that it never gets researched. For a powerful country like China, it's an obvious future tool of social control. That alone is likely to see this technology developed. And they obviously aren't the sole party in the world with both capability and interest.
        • (Score: 2) by aafcac on Tuesday December 02, @10:32PM (4 children)

          by aafcac (17646) on Tuesday December 02, @10:32PM (#1425638)

Of course not, but that is one of the dumbest justifications for creating something with this sort of potential for harm. Even though it won't be as quick to confirm that the technology has been successfully developed as it was with the atomic bomb, people will work out that it exists afterwards.

          This sort of short-term thinking is why the Chinese are likely to come out ahead eventually. They can afford to think 10 years out, the US mostly doesn't.

          • (Score: 1) by khallow on Wednesday December 03, @12:12AM (3 children)

            by khallow (3766) Subscriber Badge on Wednesday December 03, @12:12AM (#1425646) Journal
            Only thing dumber would be to dismiss that justification.

            This sort of short-term thinking is why the Chinese are likely to come out ahead eventually. They can afford to think 10 years out, the US mostly doesn't.

            This is great caveman logic.

            Caveman1: Me worry Sun God Thog will bake the Earth. He angry all the time.

            Caveman2: You foolish for say so. Thog be very angry and bake Earth. You fault!

Interesting how one has to stick their head in the sand to avoid a foreseeable problem! But I guess it worked for Thog-induced climate change...

            • (Score: 2, Disagree) by aafcac on Wednesday December 03, @05:52AM (2 children)

              by aafcac (17646) on Wednesday December 03, @05:52AM (#1425673)

Nonsense. Your line of reasoning is unsophisticated; the Prisoner's Dilemma wasn't even a studied thing until the mid-20th century. For most of the history of our species, the assumption would have been to do whatever it is before the other group did, which ensured that it happened in most cases. The sophisticated line of reasoning is to recognize that if nobody does it first, it won't be done at all. As dangerous as this technology is, it's not so dangerous that it is likely to end a major country, and there are strategies to address that risk which could be employed.

              • (Score: 2, Touché) by khallow on Wednesday December 03, @01:02PM

                by khallow (3766) Subscriber Badge on Wednesday December 03, @01:02PM (#1425697) Journal

                Your line of reasoning is unsophisticated,

                It's unsophisticated because it doesn't need to be sophisticated. I don't believe in sophistication for the sake of sophistication.

                The sophisticated line of reasoning is to recognize that if nobody does it first that it won't be done at all.

                The obvious rebuttal: "IF".

                Your argument is a wish fulfillment fantasy. We have a scary-dangerous technology so the obvious solution is to not research it. When pressed about what we should do when someone develops the technology anyway, you just dig the hole deeper.

                Sorry, it's time for plan B. We wouldn't be seeing this sort of story coming out if there was a comfortably large obstacle to anyone developing the technology.

              • (Score: 2, Insightful) by khallow on Wednesday December 03, @07:03PM

                by khallow (3766) Subscriber Badge on Wednesday December 03, @07:03PM (#1425725) Journal

                the Prisoners Dilemma wasn't even a studied thing until the mid-20th century. For most of the history of our species, the assumption would have been to do whatever it is before the other group did and thus ensure that it happened in most cases.

                Group identity is itself a cooperative solution to the prisoners dilemma. The prisoners dilemma is one of the oldest problems in existence. We see prisoners dilemma issues in microbe behavior, for example [phys.org]. The usual solution whether microbes, primitive societies, or modern civilization is to change the payoffs so that cooperation is advantageous.

                Here, we have not only payoff for defection, but opportunity for a social wood deception strategy. Defection is even more advantageous if you can get the majority of the world to cooperate.

                My view? If you don't want such technology to be abused then be prepared to provide a military-grade response.
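The "change the payoffs so that cooperation is advantageous" mechanism described above can be sketched by attaching a penalty to defection (e.g. sanctions, or the military-grade response just mentioned). All numbers are illustrative:

```python
# Base Prisoner's Dilemma payoffs: (my payoff, their payoff),
# indexed by (my move, their move). Illustrative textbook numbers.
BASE = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def payoff(mine, theirs, penalty):
    """Payoffs after subtracting a fixed penalty from any defector."""
    me, them = BASE[(mine, theirs)]
    if mine == "defect":
        me -= penalty
    if theirs == "defect":
        them -= penalty
    return me, them

def best_response(theirs, penalty):
    return max(("cooperate", "defect"),
               key=lambda mine: payoff(mine, theirs, penalty)[0])

# With no penalty, defection dominates; once the penalty exceeds the
# temptation gain (here, > 2), cooperating becomes the best response
# to either move and the dilemma dissolves.
assert best_response("cooperate", penalty=0) == "defect"
assert best_response("cooperate", penalty=3) == "cooperate"
assert best_response("defect", penalty=3) == "cooperate"
```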

        • (Score: 2) by liar on Wednesday December 03, @03:56AM (4 children)

          by liar (17039) on Wednesday December 03, @03:56AM (#1425667) Journal

          If you haven't seen it, a movie you might find interesting: The Creator (2023) From Wikipedia:
          Plot
          In 2055, an artificial intelligence created by the U.S. government detonates a nuclear warhead over Los Angeles, California. In response, most of the Western world pledges to eradicate AI to prevent humanity's extinction. Their efforts are resisted by New Asia, a region comprising East, South and Southeast Asia, whose people continue to embrace AI. The U.S. military aims to assassinate "Nirmata",[a] the chief architect behind New Asia's AI advancements, using the USS NOMAD (North American Orbital Mobile Aerospace Defense), a space station capable of launching destructive attacks from orbit.

          --
          Noli nothis permittere te terere.
          • (Score: 1) by khallow on Wednesday December 03, @09:16PM (3 children)

            by khallow (3766) Subscriber Badge on Wednesday December 03, @09:16PM (#1425737) Journal
            Glancing at the cribbed plot, I wonder how someone accidentally blows up a city. I know it's possible. Both the US and USSR came close during the Cold War.
            • (Score: 2) by liar on Wednesday December 03, @10:19PM

              by liar (17039) on Wednesday December 03, @10:19PM (#1425748) Journal

              I have only watched part of this movie so far, but as far as I can tell, the AI is built into multiple bodies ( a synthetic race?)... individuals, viewed as Human in Asia, and machines in the west. And, was the detonation an accident...

              --
              Noli nothis permittere te terere.
            • (Score: 0) by Anonymous Coward on Wednesday December 03, @10:22PM (1 child)

              by Anonymous Coward on Wednesday December 03, @10:22PM (#1425749)

              Glancing at the cribbed plot, I wonder how someone accidentally blows up a city.

Oh c'mon! Did you ever question how the Enterprise goes back in time?

              • (Score: 1) by khallow on Wednesday December 10, @03:12AM

                by khallow (3766) Subscriber Badge on Wednesday December 10, @03:12AM (#1426369) Journal
                Bike shed effect. Time travel is hard. Not blowing up cities is easy. Guess which one gets questioned?