Musk's Newest Startup is Venturing into a Series of Hard Problems:
Tonight [Tuesday, July 16, 2019], Elon Musk has scheduled an event where he intends to unveil his plans for Neuralink, a startup company he announced back in 2017, then went silent on. If you go to the Neuralink website now, all you'll find is a vague description of its goal to develop "ultra-high-bandwidth brain-machine interfaces to connect humans and computers." These interfaces have been under development for a while, typically under the moniker of brain-computer interfaces, or BCIs. And, while there have been some notable successes in the academic-research world, there's a notable lack of products on the market.
The slow progress comes, in part, because a successful BCI has to tackle multiple hard problems and, in part, because the regulatory and market conditions are challenging. Ahead of tonight's announcement, we'll take a look at all of these and then see how Musk and the people who advise him have decided to tackle them.
[...] An effective BCI means figuring out how to get the nervous system to communicate with digital hardware. Doing so requires solving three problems, which I'll call reading, coding, and feedback. We'll go through each of these below.
[...] The first step in a BCI is to figure out what the brain is up to, which requires reading neural activity. While there have been some successes doing this non-invasively using functional MRI, this is generally too blunt an instrument. It doesn't have the resolution to pick out what small populations of cells are doing and so can only give a very approximate reading of the brain. As a result, we're forced to go with the alternative: invasive methods, specifically implanting electrodes.
[...] Once we can listen in on nerves, we have to figure out what they're saying. Digital systems expect their data to be in an ordered series of voltage changes. Nerves don't quite work that way. Instead, they send a series of pulses; information is encoded in the frequency, intensity, and duration of these pulse trains, in an extremely analog fashion. While this might seem manageable, there's no single code for the entire brain. A series of pulses coming from the visual centers will mean something completely different from the pulses sent by the hippocampus while it's recalling a memory.
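As a toy illustration of the rate-coding idea described above, the sketch below estimates a neuron's firing rate from spike timestamps. The function name, the numbers, and the windowing scheme are all hypothetical, chosen for illustration; real BCI decoders are far more sophisticated and, as the article notes, the code differs from one brain region to the next.

```python
# Hypothetical sketch: recovering a "rate code" from a spike train.
# Information rides on the frequency of pulses over time, not on
# discrete voltage levels the way it does in digital hardware.

def firing_rate(spike_times, window_start, window_end):
    """Mean firing rate (spikes per second) within a time window."""
    count = sum(1 for t in spike_times if window_start <= t < window_end)
    return count / (window_end - window_start)

# Timestamps (in seconds) of spikes recorded from one neuron:
# an early dense burst, then sparse activity.
spikes = [0.01, 0.03, 0.05, 0.08, 0.50, 0.90]

early = firing_rate(spikes, 0.0, 0.1)   # 4 spikes / 0.1 s = 40 Hz
late = firing_rate(spikes, 0.1, 1.0)    # 2 spikes / 0.9 s, roughly 2.2 Hz
```

Even in this toy form, the decoder's output only becomes meaningful once you know what a given rate *means* for the population of cells you're recording from, which is exactly the coding problem the article describes.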
[...] One possible aid in all of this is that we don't necessarily need to get things exactly right. The brain is a remarkably flexible organ, one that can re-learn how to control muscles after having suffered damage from things like a stroke. It's possible that we only need to get the coding reasonably close, and then the brain will adapt to give the BCI the inputs it needs to accomplish a task.
Also at NYT, The Verge, Bloomberg, and TechCrunch.
(Score: 2) by Snospar on Wednesday July 17 2019, @03:16PM (1 child)
I tend to agree with you: something non-invasive like a sub-vocal pickup and either an earpiece or a HUD for instant results. I think we're getting close to good sub-vocal recognition now, and filling in the other parts of this is Alexa/Siri/Google Assistant-level tech. This would give you the "superhuman" ability to search the internet and thus appear knowledgeable (careful with the source data), and even better, you could set reminders so easily it would wow your peers (sub-vocal: "remind me I want a beer when I get to the kitchen" sure beats "Now, what the hell am I doing in the kitchen?"). Mind you, rather than praise for these new-found skills, I can imagine being told "No one likes a smart arse" instead.
(Score: 2) by ElizabethGreene on Wednesday July 17 2019, @06:14PM
One of my memory devices is to picture myself dragging a note up to the upper right corner of my vision. I have a small working memory, and being able to actually do that and then scroll through those would be a superpower for me.