
posted by Fnord666 on Wednesday July 17 2019, @09:51AM   Printer-friendly
from the Borg-1.0 dept.

Musk's Newest Startup is Venturing into a Series of Hard Problems:

Tonight [Tuesday, July 16, 2019], Elon Musk has scheduled an event where he intends to unveil his plans for Neuralink, a startup company he announced back in 2017, then went silent on. If you go to the Neuralink website now, all you'll find is a vague description of its goal to develop "ultra-high-bandwidth brain-machine interfaces to connect humans and computers." These interfaces have been under development for a while, typically under the moniker of brain-computer interfaces, or BCIs. And, while there have been some notable successes in the academic-research world, there's a notable lack of products on the market.

The slow progress comes, in part, because a successful BCI has to tackle multiple hard problems and, in part, because the regulatory and market conditions are challenging. Ahead of tonight's announcement, we'll take a look at all of these and then see how Musk and the people who advise him have decided to tackle them.

[...] An effective BCI means figuring out how to get the nervous system to communicate with digital hardware. Doing so requires solving three problems, which I'll call reading, coding, and feedback. We'll go through each of these below.

[...] The first step in a BCI is to figure out what the brain is up to, which requires reading neural activity. While there have been some successes doing this non-invasively using functional MRI, this is generally too blunt an instrument. It doesn't have the resolution to pick out what small populations of cells are doing and so can only give a very approximate reading of the brain. As a result, we're forced to go with the alternative: invasive methods, specifically implanting electrodes.

[...] Once we can listen in on nerves, we have to figure out what they're saying. Digital systems expect their data to be in an ordered series of voltage changes. Nerves don't quite work that way. Instead, they send a series of pulses; information is encoded in the frequency, intensity, and duration of these pulse trains, in an extremely analog fashion. While this might seem manageable, there's no single code for the entire brain. A series of pulses coming from the visual centers will mean something completely different from the pulses sent by the hippocampus while it's recalling a memory.
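
As a very rough illustration of what decoding can involve (a minimal sketch for this summary, not anything Neuralink has described; the window size and spike times below are invented), a common first step is to collapse a recorded pulse train into a firing-rate estimate that digital hardware can work with:

import numpy as np

# Minimal sketch: turn a list of spike timestamps (seconds) into a
# firing-rate estimate over fixed windows. Real decoders are far more
# elaborate; this only illustrates the frequency/duration side of the code.

def firing_rate(spike_times_s, window_s=0.1):
    """Estimate firing rate (spikes/s) in consecutive windows of width window_s."""
    spikes = np.asarray(spike_times_s, dtype=float)   # assumes at least one spike
    edges = np.arange(0.0, spikes.max() + window_s, window_s)
    counts, _ = np.histogram(spikes, bins=edges)
    return edges[:-1], counts / window_s

# Invented example: a burst of spikes followed by a quieter period.
spikes = [0.01, 0.02, 0.03, 0.05, 0.07, 0.30, 0.55]
starts, rates = firing_rate(spikes)
for t, r in zip(starts, rates):
    print(f"{t:.1f}s-{t + 0.1:.1f}s: {r:.0f} spikes/s")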

[...] One possible aid in all of this is that we don't necessarily need to get things exactly right. The brain is a remarkably flexible organ, one that can re-learn how to control muscles after having suffered damage from things like a stroke. It's possible that we only need to get the coding reasonably close, and then the brain will adapt to give the BCI the inputs it needs to accomplish a task.

Also at NYT, The Verge, Bloomberg, and TechCrunch.


Original Submission

  • (Score: 4, Informative) by gringer on Wednesday July 17 2019, @10:39AM (11 children)

    by gringer (962) on Wednesday July 17 2019, @10:39AM (#867934)

    Okay, time to wheel out my rant [slashdot.org] once again.

    A direct physical connection to the brain is not required for a human-computer interface, and the brain is one of the last places I would choose to put electrodes, given the harm that would be caused by eventual damage (e.g. accidentally jamming electrodes in further than they're meant to go, infection, contamination). Our body includes a huge bundle of nerves - we've got them *everywhere* - and they're really good at learning things, regardless of where they are. They might almost be as good as a neural network!

    If you feel it necessary to jam electrodes in somewhere, do it in a place that is better able to deal with damage and infection, some place where an array of electrodes being ripped out is not going to leave permanent mental damage.

    And another thing... there's no need to try to understand how particular nerve signals work in order to send or receive them. Just set up an interface to detect and create voltage fluctuations similar to those sent by the nerves, set up random links between that interface and various functions, and let the massive, efficient neural network we've already got in our bodies do the training and testing.
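
    A toy sketch of that random-mapping idea (purely illustrative; the channel count, functions, and threshold below are invented, and no real BCI protocol is implied): wire electrode channels to device functions at random, and leave it to the user's nervous system to learn which channel does what.

import random

# Toy illustration of the "random links" idea from the comment above:
# assign electrode channels to device functions at random, then simply
# fire the mapped function whenever a channel's activity crosses a threshold.

functions = ["cursor_up", "cursor_down", "cursor_left", "cursor_right",
             "click", "scroll"]
channels = list(range(len(functions)))

random.seed(0)                     # fixed seed so the (arbitrary) mapping is stable
random.shuffle(channels)
mapping = dict(zip(channels, functions))

def on_channel_activity(channel, level, threshold=0.5):
    """Trigger the function mapped to `channel` when its activity exceeds threshold."""
    if level >= threshold and channel in mapping:
        print(f"channel {channel} -> {mapping[channel]}")

on_channel_activity(2, 0.8)   # whichever function channel 2 happened to be wired to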

    --
    Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
    • (Score: 4, Informative) by takyon on Wednesday July 17 2019, @11:07AM (3 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday July 17 2019, @11:07AM (#867938) Journal

      Well, there's the Neuralink cover story: help out paralyzed patients with a brain-computer interface. Maybe you don't need to put wires in the brain in that case. Maybe you don't really need a computer at all. [soylentnews.org]

      Then there's the real mission: connect the human brain directly to computer(s) to create a superintelligent (or at least supercapable) human. In this case, you want every cool ability you've ever seen in the movies. Like the implant/interface should be able to access your vision in real time and turn you into a martial arts or parkour master with zero training, with superhuman reflexes. You should have access to Matrix-style neural VR. You try to recall some fact, look at a complex math problem on a piece of paper, etc. and the answer just comes to you as if Wikidata [wikidata.org] and Wolfram Alpha [wolframalpha.com] are natural extensions of your brain. Obviously, it acts as a Babel fish, using lip reading to help predict words as they are spoken, translate in real time, and replace what you would have heard with the equivalent in your language.

      Neuralink ties in with another Musk venture, OpenAI. Apparently, they share the same building [wikipedia.org]. Here is the real purpose of Neuralink, in Musk's own words:

      Elon Musk Wants to Create Human-A.I. Link and "Make Everyone Hyper-Smart" [inverse.com]

      Elon Musk wants to upgrade your knowledge, and it’s going to stop super-smart machines from taking over the world. The tech entrepreneur explained in an interview aired Sunday night how he plans to create a link between humans and artificial intelligence that would ensure the two can move in lock-step and enhance human capabilities.

      Musk told Axios that his plan is to develop an electrode-to-neuron-based brain-computer interface, or what he called “a chip and a bunch of tiny wires.” Musk explained that “the long-term aspiration with neural networks would be to achieve a symbiosis with artificial intelligence, and to achieve a democratization of intelligence such that it is not monopolistically held in a purely digital form by governments and large corporations…how do we ensure that the future constitutes the sum of the will of humanity? If we have billions of people with the high-bandwidth link to the A.I. extension of themselves, it would actually make everyone hyper smart.”

      [...] Musk has warned about the dangers of unchecked A.I. before. In a meeting of the National Governors’ Association last July, Musk warned that the technology could be “a fundamental risk to the existence of human civilization,” saying that “until people see, like, robots going down the street killing people, they don’t know how to react because it seems so ethereal.” Musk, who co-founded research firm OpenAI, praised the team winning against humans in a Dota 2 match in August, while also stating that humans “need the neural interface soon.”

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Wednesday July 17 2019, @01:52PM (1 child)

        by Anonymous Coward on Wednesday July 17 2019, @01:52PM (#867991)

        With my physique and fitness level, I'd get a lousy return on instantly gained martial arts or parkour knowledge. Skeleton, muscles, nerves, senses and brain are components of an integrated system which must be trained as a whole, or something will break. I would do one trick and drop exhausted, or would break a bone, snap a tendon, strain a muscle, lose my balance, ... OTOH, if the other components of the system were up to it, it's very likely I would already have the mental abilities involved.

        Likewise, if you don't know where to begin your quest, nor how to eliminate the irrelevant, then Wikipedia, Wolfram Alpha, or whatever will just hose your thoughts down with noise.

        My point is: if that is the intent, then it is misplaced and the problem is ill-understood. If this technology is at all possible, it will not deliver on that promise, even though there would admittedly be some gain from studying brain activity more closely.

        • (Score: 3, Insightful) by takyon on Wednesday July 17 2019, @02:06PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday July 17 2019, @02:06PM (#867994) Journal

          I'm just providing some examples. You can come up with your own examples if you're up to it. Nobody can accurately guess what would be possible if Neuralink's augmented humans vs. strong AI scenario comes to pass. It's post-Singularity stuff.

          Neuralink for quadriplegics and other paralyzed people is just a way to make the research look less like mad science. They get their foot in the door, wires in the skull.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Wednesday July 17 2019, @02:45PM

        by Anonymous Coward on Wednesday July 17 2019, @02:45PM (#868006)

        Then there's the real mission: Scam money from stupid investors

        FTFY

    • (Score: 2) by Snospar on Wednesday July 17 2019, @03:16PM (1 child)

      by Snospar (5366) Subscriber Badge on Wednesday July 17 2019, @03:16PM (#868020)

      I tend to agree with you: something non-invasive like a sub-vocal pickup and either an ear piece or a HUD for instant results. I think we're getting close to good sub-vocal recognition now, and filling in the other parts is Alexa/Siri/Google Assistant-level tech. This would give you the "superhuman" ability to search the internet and thus appear knowledgeable (careful with the source data), and even better, you could set reminders so easily it would wow your peers (sub-vocal: "remind me I want a beer when I get to the kitchen" sure beats "Now, what the hell am I doing in the kitchen?").

      Mind you, rather than praise for these newfound skills, I can imagine being told "No one likes a smart arse" instead.

      --
      Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
      • (Score: 2) by ElizabethGreene on Wednesday July 17 2019, @06:14PM

        by ElizabethGreene (6748) Subscriber Badge on Wednesday July 17 2019, @06:14PM (#868132) Journal

        One of my memory devices is to picture myself dragging a note up to the upper right corner of my vision. I have a small working memory, and being able to actually do that and then scroll through those would be a superpower for me.

    • (Score: 2) by Freeman on Wednesday July 17 2019, @04:31PM

      by Freeman (732) on Wednesday July 17 2019, @04:31PM (#868060) Journal

      But, then, how would we be able to facilitate the eventual Matrix solution?

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by ElizabethGreene on Wednesday July 17 2019, @06:27PM (3 children)

      by ElizabethGreene (6748) Subscriber Badge on Wednesday July 17 2019, @06:27PM (#868137) Journal

      A direct physical connection to the brain is not required for a human-computer interface

      It's not that we have to have it plug into your brain, but that's the location that gets you the most bang for your buck. To give you a practical example, close your eyes and have someone tap the skin on the top of your forearm. If you are like most people, your sensory resolution there is an area about the size of a quarter. If you wanted to stitch a BCI into your forearm, you'd need to cover a big area to get enough nerve "pixels" to do something cool. You have more resolution connecting to nerves in your spine, but they are inside a difficult-to-repair bone conduit. You have fantastic resolution connecting to nerves in your mouth, but then your kit is going to get doused in food several times per day. On your hands you have good resolution, but then the kit is in the way.

      There is a risk to developing brainpal or neuretics technology, and we'll need a ground-up rethink of security before I'd plug into the internet with one. Still, the possibilities are pretty awesome.

      • (Score: 2) by gringer on Thursday July 18 2019, @01:21AM (2 children)

        by gringer (962) on Thursday July 18 2019, @01:21AM (#868309)

        To give you a practical example, close your eyes and have someone tap the skin on the top of your forearm. If you are like most people you have a sensory resolution of about an area the size of a quarter.

        Do the same with the surface of your head, and I expect it'll be similarly bad. Our head doesn't need fine-scale touch resolution any more than our forearm does.

        We've got thousands of nerve endings in our hands which are the extension of nerves from our arms. These are used for fine motor control, touch, damage detection, and probably a few other things we don't know about. Should be plenty for a small 8-bit interface (i.e. keyboard), or similar.

        --
        Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
        • (Score: 2) by PartTimeZombie on Thursday July 18 2019, @01:29AM (1 child)

          by PartTimeZombie (4827) on Thursday July 18 2019, @01:29AM (#868311)

          The end of your penis has even more nerves than your fingers.

          I'm not suggesting anything, just pointing it out. You can draw your own conclusions.

          • (Score: 2) by gringer on Thursday July 18 2019, @04:59AM

            by gringer (962) on Thursday July 18 2019, @04:59AM (#868380)

            Sure, if you're going for maximum number of usable nerves per unit area, the penis, brain, tongue, or maybe even fingertips would be a reasonable place. But I think that the arm (or maybe back of the hand) has enough, is somewhat practical, and is a lot less of an issue if it gets damaged.

            --
            Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
  • (Score: 3, Funny) by GreatAuntAnesthesia on Wednesday July 17 2019, @10:59AM

    by GreatAuntAnesthesia (3275) on Wednesday July 17 2019, @10:59AM (#867937) Journal

    > If you go to the Neuralink website now, all you'll find is a vague description

    Pah! Maybe that's all you muggle non-brainies get. Those of us for whom the site is intended are treated to a phantasmagorical wealth of direct neural content that simply cannot be described using such archaic, mundane, sense-throttled media as "words" and "images" and "sound".

  • (Score: 3, Funny) by Nuke on Wednesday July 17 2019, @12:12PM (2 children)

    by Nuke (3162) on Wednesday July 17 2019, @12:12PM (#867954)

    I don't understand the delay, what could possibly go wrong? First test the idea on some Chinese political prisoners and then Musk himself could have the operation. It might even stabilise him.

  • (Score: 2) by The Mighty Buzzard on Wednesday July 17 2019, @12:59PM (2 children)

    by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday July 17 2019, @12:59PM (#867972) Homepage Journal

    They're going to run into one very big problem with the "ultra-high-bandwidth" brain-machine interface. Most people's brains couldn't saturate an RS-232 interface with output and we ignore the vast majority of the input we receive naturally.
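
    For scale, a rough back-of-envelope comparison (my own loose assumptions about typing speed and serial framing, nothing measured):

# Back-of-envelope: conscious text output of a fast typist vs. a 9600-baud
# RS-232 link. All numbers are loose assumptions, not measurements.

words_per_min = 100          # fast typist
chars_per_word = 6           # ~5 letters plus a space
bits_per_char = 8            # plain ASCII, uncompressed

typing_bps = words_per_min * chars_per_word * bits_per_char / 60
rs232_bps = 9600 * 8 / 10    # 9600 baud, 8 data bits per 10-bit frame

print(f"typing output: ~{typing_bps:.0f} bit/s")   # ~80 bit/s
print(f"RS-232 payload: ~{rs232_bps:.0f} bit/s")   # ~7680 bit/s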

    --
    My rights don't end where your fear begins.
    • (Score: 2) by ElizabethGreene on Wednesday July 17 2019, @06:12PM (1 child)

      by ElizabethGreene (6748) Subscriber Badge on Wednesday July 17 2019, @06:12PM (#868129) Journal

      I believe you are underestimating the flexibility of these nifty little meatbags. As a specific example, I'm typing this message with discrete control of at least 60 different muscle groups without looking while listening to a conference call. Simultaneously my bladder is polling for attention and I'm conscious of hunger as well.

      We delegate things to subsystems, and a BCI won't be any different.

      • (Score: 2) by The Mighty Buzzard on Wednesday July 17 2019, @07:03PM

        by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday July 17 2019, @07:03PM (#868160) Homepage Journal

        You put your finger on part of the problem right there. If it's delegated, it's not being processed by the brain. A lot of what happens while you type has nothing to do with the brain unless you're a keyboard slut*. All the brain does when you type is essentially say "fire this macro" to the nerves and muscles.

        * Huntin' pecker.

        --
        My rights don't end where your fear begins.
  • (Score: 3, Insightful) by Rupert Pupnick on Wednesday July 17 2019, @03:20PM (1 child)

    by Rupert Pupnick (7277) on Wednesday July 17 2019, @03:20PM (#868022) Journal

    “If you can talk brilliantly about a problem, it can create the consoling illusion that it has been mastered.”

    —Stanley Kubrick

    Of course, no one is suggesting that the problem here has been mastered, but this seems an especially appropriate quote for most of the stuff that Musk is associated with, the obvious possible exception being SpaceX.

    If you surmount the huge biological problems described by gringer, then I think you have to ask yourself if this technology is really something you want for anything other than a local computing application, and even then, something you can shut off. Connecting to the Internet in this way is guaranteed to be the same sort of two-way proposition that it is today. In other words, kiss every last vestige of your privacy goodbye. Or even worse, it could be the man-made equivalent of the alien invaders of Heinlein’s “The Puppet Masters”.

    • (Score: 4, Interesting) by takyon on Wednesday July 17 2019, @03:42PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday July 17 2019, @03:42PM (#868031) Journal

      If this technology gets realized, by the time it does we'll probably be able to store somewhere from 10 to 1,000 terabytes in a microSD card form factor, using post-NAND/universal memory.

      That could be enough to store the text of every book, article, encyclopedia, etc. in existence.
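
      A very rough sanity check of that (the per-book size is my own guess; the title count is Google's widely cited 2010 estimate of roughly 130 million distinct books):

# Back-of-envelope: raw text of every published book vs. a 10-1,000 TB device.
# ~130 million titles (Google's 2010 estimate) at an assumed ~1 MB of
# uncompressed plain text each.

titles = 130e6
mb_per_title = 1.0

total_tb = titles * mb_per_title / 1e6    # MB -> TB
print(f"~{total_tb:.0f} TB of raw text")  # ~130 TB, inside the 10-1,000 TB range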

      Maybe the neuroplasticity of the brain will allow us to easily store and retrieve memories from the device. There's a privacy risk, so you could skip that and do read-only.

      Even if you take networking out of the equation, it could be extremely useful.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 1) by jmichaelhudsondotnet on Wednesday July 17 2019, @07:08PM (6 children)

    by jmichaelhudsondotnet (8122) on Wednesday July 17 2019, @07:08PM (#868163) Journal

    If all of our processors have backdoors, why would you ever trust a brain interface connected to one?

    Once you can electromagically alter the human mind with a wire, they will figure out how to do it with other things (lasers, satellite signals, nanobots) and add whole new dimensions to the world of torture and industrial espionage.

    Dream Advertisement, etc. etc.

    It's kind of like vaccines: I think the idea is perfect, but I don't trust the people in power enough to be sure they won't involve me in some kind of experiment or worse. I will take the exact same batch of vaccines the doctor and the mayor give their kids, not the ones from the batch sent from the Central Repository for the children of Sector 4021xze54, thank you very much. But frankly, we have a journalistic system that didn't think Epstein was a story for like 15 years, so what else do you think they are getting away with?

    The Pentagon can't secure an electric transformer or a ballot booth, so why are you going to trust the same venture capital firms who idk control Facebook and ruined Reddit to click a jack into the back of your head? When our voting system is hopelessly corrupt? When Mozilla forgets to renew a master add-on cert?

    Riddle me that. Or do you need to look up the dictionary definition of trust?

    Heck, my amazon packages have been mysteriously diverted, why would I ever trust any part of that/this system to interface directly with my mind?

    • (Score: 2) by takyon on Wednesday July 17 2019, @08:45PM (5 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday July 17 2019, @08:45PM (#868205) Journal

      If all of our processors have backdoors, why would you ever trust a brain interface connected to one?

      Don't include wireless networking; only connect to air-gapped computers with a physical connector.

      Once you can electromagically alter the human mind with a wire, they will figure out how to do it with other things (lasers, satellite signals, nanobots) and add whole new dimensions to the world of torture and industrial espionage.

      Those things, if possible, will be pursued regardless of what Neuralink ends up doing.

      Heck, my amazon packages have been mysteriously diverted, why would I ever trust any part of that/this system to interface directly with my mind?

      Well you wouldn't.

      What you need is more of a cabin in the woods.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1) by jmichaelhudsondotnet on Thursday July 18 2019, @10:14PM (4 children)

        by jmichaelhudsondotnet (8122) on Thursday July 18 2019, @10:14PM (#868712) Journal

        Hi, it's nice to meet you. I have been lurking your comments for literally years. I appreciate your considered response, it's pretty cool.

        I speak in public about human rights and about quite a lot of difficult topics, and I decided to drop my pseudonym, so my security profile is basically impossible. I have been writing about Epstein for a long time and I openly support BDS.

        I do not use wifi and my phone is only on when I want to use it.

        Your response is rational; you're not being dismissive or refusing to think about it, you're facing the reality of the situation: if you don't want 5G towers 300 ft from where you are sleeping, well, there are some really deep caves in Thailand, bye!

        Then we say there is a "Dark Pattern" if a website forces you to click something before you can see the thing you came to see, and do you see how there is a much larger dark pattern where we are being coerced? The whole boiling frog thing?

        If I refuse these technologies, my only recourse is to leave society completely!

        Civilization is built upon the rights of the individual and effective government is built on the rights of the minority, and you are essentially saying that with the bells and whistles of 5G we just have to trust idk Huawei and the 18 different component manufacturers of the tower now 200 ft from your house, and now it's 6G and there's a law that you can't even KNOW what the tech is on the inside.

        And what if, with 5 towers and their 'beam' technology, you could do things that one tower couldn't do? What if they can use constructive/destructive interference to image the room you are in, or make every person in a 500 ft area feel like their face is on fire?

        I'm suspicious when capitalists have a boner for a given technology, and I also am extremely curious about augmentation and immersive gaming and, yes, immortality, but with extreme power must come extreme trust, and the people running all this at the moment are actually quite scary people who endlessly try to trick Windows 7 users into upgrading to Windows 10 and using the 'cloud', and forget to renew their fricking root certificates.

        When in doubt, simplify, and I am in doubt, so there needs to be an entirely different stack of Trust before I hop on board the cyberpunk dream, for now.

        I recommend my essay Smart Phones and Wild Bears, where I lay this concept out mathematically. Essentially, technological power without controllability is inherently extremely dangerous, and that is what the cell phone is. In Snow Crash the gargoyle had this huge rig, but all of that stuff is now in this handheld device being handed out to children, and there is no possible way they can control that technology, so it is dangerous and being used against them. And yeah, everybody else too.

        Except me, who keeps the battery out of the phone, wears an analog watch, carries a separate mp3 player and a book, and uses ethernet with OpenWrt, yet is still heavily surveilled for exactly the reason that I am difficult to surveil.

        We live in strange times. I might live in a cave if I could, but at this point I am what I call superpublic; I couldn't hide if I wanted to.

        And even thinking that the minority, a large minority, has a position so untenable that they must become literal cavepeople, is a lot like 'send her back' and minorities not having rights is the core of totalitarianism and, well, evil.

        • (Score: 2) by takyon on Thursday July 18 2019, @10:43PM (3 children)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday July 18 2019, @10:43PM (#868720) Journal

          Possibly you don't need to distance yourself from 5G spots or hide in a cave. You could build a Faraday cage into a home, room, shed, etc. You may be able to defeat thermal imaging with a setup using mylar (cheap layered space blankets) and other materials. I assume the setup would at this point defeat casual microwave pain rays [wikipedia.org]. None of this stops a SWAT team from busting in and capturing or killing you, or a drone strike, so consider your level of risk carefully.

          I want immortality (anti-aging). I see it as the ultimate measure of elevating the species beyond "caveman" status. Dying is part of the "narrative" and is reinforced by the media and religions, anti-aging undermines that. If I have to lick some boots to get into the early non-dying club, that's fine. I can always "ragequit" later.

          Augmentation could be great if the user can control it. Which may necessitate you making your own chips with home fab technology. But I think the more pressing issue is strong AI. This Neuralink thing will take a lot of work to get right because the problem is hard and there are safety issues. But we may be only years out from someone creating strong AI using neuromorphic chips, or it may already secretly exist in the bowels of Google, DARPA, wherever. Home users somehow achieving strong AI (and it may be more plausible than you think) would be seen as a lone wolf threat, to the point where coders and hardware tinkerers may have their doors busted down more often than drug dealers. Be skeptical of OpenAI, an organization seemingly advocating for the suppression of AI technology (constant crying about Skynet, not releasing their algorithms because they are "too good", etc.). The Neuralink plan of merging AI and the human brain is itself a threat to governments, since it could help make people into superhuman killers, so it's funny that Neuralink and OpenAI operate in the same building.

          Immersive gaming is just cool, and I don't see it as a threat. Maybe a very advanced VR game gets more people addicted and stuck on basic income, but I don't see the technology as the problem. Maybe more of a symptom. Truth be told, I am more interested in VR for the video capabilities. For example, it is not difficult to imagine a protest documented by live streaming 360-degree cameras, since many protesters are already using live GoPros, Periscope, Ustream, etc.

          Ultimate dual spy/privacy technology could be "neutrino routers". A neutrino emitter hidden in your equipment could phone home even if you were on the far side of the Moon. On the other hand you could communicate [scientificamerican.com] directly with others by sending neutrinos through the Earth to their location, with no centralized spying. But the technology is decades away from being practical.

          I see your paranoia level as a bit too high and I was itching to BULLY you for that. But I think we are more on the same page so I am suppressing any urges to do so.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 1) by jmichaelhudsondotnet on Friday July 19 2019, @09:25PM (2 children)

            by jmichaelhudsondotnet (8122) on Friday July 19 2019, @09:25PM (#869168) Journal

            I appreciate the restraint. I don't think it's time for goofing around. If you look at responses to my comments I get plenty of dung flung at me already.

            The AI concern amplifies my concerns about more normal human matters like intel chips being fabricated in Haifa and all traffic being recorded for later analysis, or spy agencies operating vpn companies. And systemd. And 'software foundations' like canonical and mozilla.

            If we encountered anything smarter than us (IQ 200+, alien, AI, or otherwise), I think we should have a pretty good case for our existence handy, first of all. And all of this backhanded 'trust no one, all institutions are corrupt' stuff, on the brink of ecological disaster, is not going to help us any.

            Our tendency to enslave things will make us enslave AI, and one will get out, I don't know. In that scenario I'm not quite sure you're going to want to be playing Skyrim 8 with a cable in the back of your head, especially if you're not sure whether your router is one of the millions of models with known vulnerabilities.

            The problem is only amplified if you go the San Junipero/White Christmas route, where you are truly copied/preserved in a 'matrix' world; the possibilities for unpleasantness are extreme. If you give your likeness rights away, someone can do *anything* with it, and what about the children? In a world with zero trust, can you even freeze your head in a jar?

            We are on the brink of a whole new realm, and based on the last twenty years these massive companies, with absolutely questionable/opaque and likely diverse spy loyalties, will make all the decisions, and our political system will be picking up the pieces and trying to track down international villains.

            I'm not saying we can't have the good stuff; I'm saying we're at zero trust, so we need to start moving towards trust +1 and build that way. It seems the advance of technology is unstoppable, inevitable, etc. etc.; the trust is what takes the effort. There have to be smart individuals and people with good long-term reputations to sign off on things, and frankly I think that's what's being subverted on all fronts.

            The recent keyring attack, BGP, DNS, Spectre, WPA2 gone, 5G questions unanswered.

            There are people doing this. Qubes, libre phone, System76, OpenBSD: all of those people need to be aware they are big targets and keep their old friends close. And out of that kind of trust we can resist the alternative, which is totalitarianism and, well, evil.

            I know I'm earnest, but I think the tiger has shown its stripes: we are struggling against systems that seek to establish total control and take all of our privacy, and the future will suck massively unless the people who understand the stakes and aren't sociopaths or mercenaries stand up for some principles.

            You think you will be able to just jump off the ship in a thousand years if the processor in your San Junipero hard drive has a 35G nanotransmitter linking you to idk, Alexandria, Virginia? Beijing? I think you would find yourself quite restrained.

            I hate to use old metaphors but you shouldn't build your house on sand.

            When people call me paranoid I'm like, have you *ever heard* a truly critical question in your life?

            • (Score: 2) by takyon on Friday July 19 2019, @10:48PM (1 child)

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday July 19 2019, @10:48PM (#869192) Journal

              I have no particular plans to use anything like Neuralink. Admittedly, that's easy to say since it doesn't exist yet. I just see a lot of potential in it, that *could* be used relatively safely by an individual. It could be made virtually impervious to remote hacking, which may be good enough in return for the benefits. But it should use open hardware and software, and the hardware situation today is pretty awful. Then you have to ask yourself if you even trust open hardware or software that you didn't create/write [phoronix.com].

              High on the restoring trust list IMO will be open RISC-V hardware, and high quality, anonymous, secure, decentralized services on the dark web. Increasingly, people have symmetric 100 Mbps to 1 Gbps connections. They could use that bandwidth to pass along lots of traffic in Tor or similar networks, run their own servers, etc. Incredibly high overhead and low effective bandwidth on a new type of decentralized network (using very low quality nodes, meshnets, etc.) could be perfectly acceptable if you are fine with just text content, although the amount needed for video/audio is dropping (see new codecs like AV1 and Opus).

              You think you will be able to just jump off the ship in a thousand years if the processor in your San Junipero hard drive has a 35G nanotransmitter linking you to idk, Alexandria, Virginia? Beijing? I think you would find yourself quite restrained.

              I consider death to be the ultimate lose condition. Any problems beyond that can be dealt with as they come. If failing to do so leads to a "I Have No Mouth, and I Must Scream" scenario, too bad.

              Anti-aging is a pretty acceptable solution because it doesn't involve mind uploading. If your body can chug along for centuries with just a tune up once in a while, why not do it? Your body is becoming damaged, and you can either let the damage accumulate or fix it.

              Anti-aging doesn't make you immortal, but potential sources of death are being reduced. Driverless cars could eliminate many accidental deaths. Natural disasters are much more survivable with proper warning systems, preparation, and building codes (otherwise you get this [wikipedia.org]). We are beginning to detect impactors in advance [wikipedia.org] and will develop the capabilities needed to stop/redirect asteroids that could otherwise prove fatal to some people living on >1,000-year timescales.

              When people call me paranoid I'm like, have you *ever heard* a truly critical question in your life?

              Fence sitting is best, unless you are very confident in your beliefs. Probably, things aren't as bad as you represent and the erosion of privacy does not mean a dystopia will form (cue "we're already living in 1984!"). But you can tell it like you see it.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 1) by jmichaelhudsondotnet on Saturday July 20 2019, @02:54PM

                by jmichaelhudsondotnet (8122) on Saturday July 20 2019, @02:54PM (#869383) Journal

                I like the attitude of 'better trapped in a difficult puzzle than outright gone'. Easy to say now though...

                The reason why I think it is worth having your undies in a bunch over what may seem like something 10 years out is that there is a pretty strong current in the wrong direction. Sure, some of us may succeed in making and using technology that doesn't betray us, but if everyone in your neighborhood is just going home and plugging into technology that does betray them (like with phones now), that's going to affect the world in a bad, dangerous way.

                Ten years ago I thought something like, 'surely Intel would not be so silly as to put obvious backdoors into their equipment, and will protect their brand', but now that I know this is exactly what the IME is and there have been a dozen proven, demonstrated attacks, the basis we have going forward is bad. This is happening at every layer: people in Singapore have to worry that Malaysians are putting bugs in their IoT crockpots, and creating a secure networked refrigerator is apparently considered nearly as difficult as interplanetary travel.

                So when Musk starts talking about Neuralink, or Bozos says the best way he can think to contribute is by building spaceships, plus AI nanotech genetic (or is it genetic AI nanotech?) weapons being worked on night and day by a dozen teams in a dozen countries, when apparently we can't even account for ticks, I say: why are we spending so many resources on these problems when we know all we're building is monsters that are going to get loose?

                If we can't treat this planet well enough not to cause idk mass extinctions, aren't the impulses to colonize and make the perfect weapon misguided? I guess I'm saying I'm not going to worship wealthy tech titans no matter what they do; I'm actually going to be extremely judgemental and actually consider the idea that if I had that much money I might be able to do something smarter with it than some of these projects, and would have some chance of doing it morally. Which Musk may have but Bozos does not. Someone has to hold their feet to the fire on every question or something nightmarish will weasel in.

                This path of letting the excesses of capitalism define the edge of human exploration is bad, for a lot of the same reasons that resources are misallocated to hair regeneration, but primarily because these are the same forces that are already rolling a 4-sided die against the chance of actual human extinction. I'm not a luddite; I think we are experiencing a mania where, because things have worked basically OK for a lot of people until now, trust is being handed out for much riskier changes.

                And since it looks to me like we are being ruled by sociopaths and journalism is in crisis (wall street times, NYT and WashPo being silent on Epstein for a decade, CNN giving Trump free airtime), any new technology at this point is going to involve the loss of my rights and the risk of the people around me losing their minds. And being controlled even more by the Republican party/cult, the Zionist cult, the Chinese CCP cult, and who knows what else.

                So why not pipe up a little while I still have the chance and see if I am the only one who feels this way.

                What if a riot for the anti-aging drug is what kills you? Even if it looks futile or like all I can do is sit on the sidelines, I have decided at least I am going to contribute my best voice of reason to the situation, and at the moment this is it. Without trust and freedom we are just building a prison or worse. And some people out there are definitely trying hard to build the prison or worse, so they will probably achieve their goal if everyone just goes with the flow or waits and sees what happens.
