posted by LaminatorX on Saturday April 04 2015, @05:28PM
from the Hippocratic-AI dept.

Several weeks back, Bill Gates was warning about the dark side of artificial intelligence (AI). Teradata's John Thuma believes that Gates is wrong, and explains how big data and machine intelligence could be a massive game changer if only we can get over our fear of progress:

If I can come up with a computer doctor better than your current doctor, would you as a patient consider it? Would you as a doctor use it? For example, if we do an analysis of common genes between diseases such as obesity and asthma, we can construct a virtual dictionary that defines those genes. We can then take the human genome and check it against that dictionary to see who's got those genes and use a proven data source to see who's afflicted with either of the diseases. With that information we can predict who's obese and who's asthmatic, and vice versa. If we can do that across a collection of diseases, we would have a tool for being proactive with healthcare and promoting wellness.

I'm not saying we'll ever want to get rid of doctors, but we must overcome fear that stops us from making progress. Right now, humans are on the front line of fields like healthcare and machine intelligence is in the background. In the future we'll see machines move closer to the front under the governance of doctors.
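
Thuma's "virtual dictionary" boils down to a gene-to-disease lookup that a patient's genome is checked against. A minimal sketch of that idea in Python, with invented gene and disease names (purely illustrative; not from the article or any real data source):

    # Toy sketch of the "virtual dictionary": map genes to the diseases they
    # are associated with, then flag which diseases a given genome might
    # indicate. All gene/disease names here are made-up placeholders.
    GENE_DISEASE_DICTIONARY = {
        "GENE_A": {"obesity"},
        "GENE_B": {"asthma"},
        "GENE_C": {"obesity", "asthma"},  # a gene shared by both diseases
    }

    def predict_risks(genome):
        """Count how many known risk genes for each disease appear in the genome."""
        risks = {}
        for gene in genome:
            for disease in GENE_DISEASE_DICTIONARY.get(gene, ()):
                risks[disease] = risks.get(disease, 0) + 1
        return risks

    print(predict_risks({"GENE_A", "GENE_C", "GENE_X"}))
    # e.g. {'obesity': 2, 'asthma': 1} -- more matching genes, more flags raised

A real system would weight variants against validated clinical data rather than just count matches, but the shape of the lookup is the same.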

So Soylentils, do you agree with Thuma, or do you think that we are treading a very dangerous path?

This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by JNCF on Saturday April 04 2015, @05:36PM

    by JNCF (4317) on Saturday April 04 2015, @05:36PM (#166446) Journal

    Does nuclear power have the potential to produce cheap energy and kill cancerous tumors or could it be used to destroy us all in a fiery apocalypse?

    Pick a side, Soylentils; no wishy-washy bullshit!

    • (Score: 0) by Anonymous Coward on Saturday April 04 2015, @05:53PM

      by Anonymous Coward on Saturday April 04 2015, @05:53PM (#166451)

      Send me better scoops to help avoid dud posts.

    • (Score: 3, Interesting) by Mr Big in the Pants on Saturday April 04 2015, @07:20PM

      by Mr Big in the Pants (4956) on Saturday April 04 2015, @07:20PM (#166472)

      The difference is that we have had the capacity to *actually* end the world for half a century, and that capacity has been in the hands of the aggressive sociopaths that run nuclear nations.
      AI, on the other hand, cannot even get out of bed as yet.

      And I am FAR more afraid of a sociopath with a weapon than an autobot with one....
      One is a cold, unfeeling, unsympathetic robot that will kill you by design or mistake to further the ends of its master, the other...ok so there is little difference...

      But seriously folks. You sit on the bomb, with terrorists killing hundreds at universities and the world getting hotter, and you have the time to wax lyrical about potential future threats from rogue AI?

      No wonder the world is in the state it is....

      • (Score: 0) by Anonymous Coward on Saturday April 04 2015, @08:59PM

        by Anonymous Coward on Saturday April 04 2015, @08:59PM (#166490)

        Oh, please. The terrorists are just a blip on the radar.

        • (Score: 2) by Mr Big in the Pants on Saturday April 04 2015, @11:48PM

          by Mr Big in the Pants (4956) on Saturday April 04 2015, @11:48PM (#166526)

          Nice straw man...it appears you only attacked a single straw though and forgot the rest.

          You will find my straw man has an evil AI behind it...

          Yes, I am playing with the trolls...

      • (Score: 2) by gidds on Tuesday April 07 2015, @01:04PM

        by gidds (589) on Tuesday April 07 2015, @01:04PM (#167415)

        I am FAR more afraid of a sociopath with a weapon than an autobot with one...

        Considering that one of the closest times the world has come to nuclear devastation was caused by a major malfunction of an automatic system, and prevented only by a technician [wikipedia.org] using his judgement, I think I'll continue to be frightened of the machines...

        --
        [sig redacted]
        • (Score: 2) by Mr Big in the Pants on Wednesday April 08 2015, @07:37PM

          by Mr Big in the Pants (4956) on Wednesday April 08 2015, @07:37PM (#167961)

          Actually it was the Cuban missile crisis, where two groups of sociopaths acted like schoolyard bullies fighting over a playground and refusing to back down, ironically petrified of looking "weak".

          From the article you linked:
          "There are questions about the part Petrov's decision played in preventing nuclear war, because, according to the Permanent Mission of the Russian Federation, nuclear retaliation requires that multiple sources confirm an attack"

          This sounds like more Russian propaganda/storytelling.

          My point still stands and works across the board at all levels of destruction rather than for one cherry-picked example that is VERY dubious.

  • (Score: 2) by HiThere on Saturday April 04 2015, @06:13PM

    by HiThere (866) Subscriber Badge on Saturday April 04 2015, @06:13PM (#166456) Journal

    Yes, AI is horribly dangerous. But it's probably not as dangerous as leaving humans in charge of weapons of mass destruction. We've already had several "close calls".

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 3, Interesting) by frojack on Saturday April 04 2015, @10:22PM

      by frojack (1554) on Saturday April 04 2015, @10:22PM (#166510) Journal

      You've got a better chance with a human mind at the controls than a transistor.

      Humans fail all the time, so dangerous systems managed by humans have safety built in. We've already paid the price for learning that lesson.
      With AI, we will have to learn those lessons all over again, because if the current state of technology and AI is any indication, safeguards will be the last thing implemented.

      Mankind simply can't stop itself from developing Skynet.

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 3, Insightful) by bram on Saturday April 04 2015, @06:14PM

    by bram (3770) on Saturday April 04 2015, @06:14PM (#166457)

    This reminds me of a great quote I read somewhere:
    "As long as my PC has trouble finding my Printer, I don't have to fear AI overlords."

    • (Score: 2) by pkrasimirov on Sunday April 05 2015, @07:47AM

      by pkrasimirov (3358) Subscriber Badge on Sunday April 05 2015, @07:47AM (#166580)

      You mistake the PC for AI. The PC is just a tool. The AI is a program that can use the PC, and much more besides. The AI will have no problem finding your printer.

  • (Score: 4, Insightful) by kaszz on Saturday April 04 2015, @06:32PM

    by kaszz (4211) on Saturday April 04 2015, @06:32PM (#166462) Journal

    Is it progress for our slave-driving spylords? Or for some altruistic organization that scoops up funding from thin air?

    Sure, progress is nice. You just have to ask: in whose interest?

    It can be used to mitigate and cure diseases, or to screen people out of health insurance and weed out any free-thinking genes. And considering current developments, the safest bet is to distrust it all until proven good.

    Even Stephen Hawking has warned about AI. Bill would only warn about things that would hinder Microsoft from lining its pockets.

    • (Score: 4, Interesting) by Ethanol-fueled on Saturday April 04 2015, @07:26PM

      by Ethanol-fueled (2792) on Saturday April 04 2015, @07:26PM (#166475) Homepage

      Just posted this in Non Sequor's journal:

      AI is a useful tool, but people who refuse to become one with the Borg should not be marginalized. The more people rely on technology to make their decisions for them, the more they become mere proxies for the artificial intelligence. Given what we know about the interests of governments and control, we should remain skeptical of technology's potential to control populations, often unwittingly, and compel them to make decisions against their best interests.

      You can see that people who don't embrace Facebook or Twitter or similar bullshit are in some cases already being marginalized:

      " What, you don't have a facebook, are you some kind of weirdo? What are you hiding? No, sorry, we chose not to move forward with your application because [and they may never tell you this directly] you don't have a Linkedin profile even though you just gave us a fuckin' resume. "

  • (Score: 5, Insightful) by Justin Case on Saturday April 04 2015, @07:01PM

    by Justin Case (4239) on Saturday April 04 2015, @07:01PM (#166467) Journal

    > we can predict who's obese and who's asthmatic

    and create a world where some machine will declare you forever unfit for health insurance because of something that might happen, with no way to trace back how it reached that decision, or why, or what you can do to appeal the ruling or improve your odds. The minimum-wage humans will mindlessly obey the computer and not even comprehend your point when you try to suggest there might have been a mistake.

    Don't believe me? Try going to your local supermarket and asking why they don't carry $Product any more. Nobody will know. The computer stopped sending it. Yes, but why? What do you mean, why? It just does what it does.

    > we must overcome fear that stops us from making progress

    because we must progress toward something, no matter how awful it might be. Change is always good, especially when driven by big complicated distant systems.

    • (Score: 4, Informative) by dyingtolive on Saturday April 04 2015, @07:19PM

      by dyingtolive (952) on Saturday April 04 2015, @07:19PM (#166471)

      'Five' would not be a high enough score for this comment. Thank you.

      --
      Don't blame me, I voted for moose wang!
    • (Score: 2) by Hairyfeet on Saturday April 04 2015, @07:44PM

      by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Saturday April 04 2015, @07:44PM (#166479) Journal

      No shit, the manager of the local supermarket keeps having to deal with that shit, the stupid computer keeps sending piles of cheese and supreme pizzas and NO pepperoni... the #1 topping in America for something like 10 damned years. WHY does the computer keep sending pizzas that nobody buys? He can't get a damned answer, it "just does", so he keeps having to fricking argue with the supplier while he has no supplies of what actually sells. Try dealing with paying off your mother's mortgage while dealing with half a dozen computer systems, none of which talk to each other, because the original bank has changed hands a dozen fucking times thanks to buyouts, and see how dangerous people blindly trusting the machine is. If I hadn't fought my ass off my mom could have easily lost her home because the master computer wouldn't talk to the branch computer and register that she made her fricking payments... argh!

      So until they show me they can do better than what we've seen so far? Yeah screw your AI, if you make so many messes with regular PCs I don't even want to know what some poor AI would end up like after having to deal with a dozen buyouts with competing systems that won't talk to each other, would probably end up completely batshit and suicidal!

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 3, Informative) by MostCynical on Saturday April 04 2015, @09:46PM

        by MostCynical (2589) on Saturday April 04 2015, @09:46PM (#166502) Journal

        Computers (so far) are programmed by people.
        People defer to The Computer because they have been systematically devalued/deskilled/limited in their ability to make decisions. Risk management (haha) removes risk to The Company. Not to the user, just to the possibility of doing something that causes the insurers grief.

        Find an empowered employee. See them get "counselled" for doing things that help customers.

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
      • (Score: 3, Interesting) by frojack on Saturday April 04 2015, @10:00PM

        by frojack (1554) on Saturday April 04 2015, @10:00PM (#166506) Journal

        On the other hand, we've brought in empty packaging (bar-coded) of products that we want them to stock, and, surprise!, they get it. Usually takes a couple weeks, but once they get it in stock, if it flies off the shelf, they will get more of it.

        The problem is you are talking to high-school dropouts at the registers. You have to at least track down the floor manager to find any competence, or any knowledge of their ordering system.
        The ordering guy works 9-5, and if you show up at 6pm looking for your favorite pizza, he will never know you went away pissed.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 2, Interesting) by Hairyfeet on Sunday April 05 2015, @06:55AM

          by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Sunday April 05 2015, @06:55AM (#166577) Journal

          I talk to the fricking owner, as my daddy always says "never talk to an Indian son, always talk to the Chief if you want shit done", and he was cursing so damned much the assistant had to nudge him to keep it down! Anyway he says the supplier has one hot mess of a computer system, it'll send you a ton of shit you don't want while not sending you jack of what you do. He has already made them eat the cost of a load of chips when he told them flat out DO NOT send any Wasabi and DO send Sweet Southern Heat, which he can't keep on the shelves... wanna guess what they tried bringing in on the last shipment? If you said not a single bag of SSH and a shitload of Wasabi, you'd be correct, sir! Needless to say, spending nearly a week without his #1 snack seller had him fricking POed, man.

          And as I said, I had to deal with the "fun" of people who blindly trust computers when I was paying off mom's mortgage... what a fucking clustershit nightmare THAT was! I had district managers that would just stare blankly at the screen like a fricking Mickey D's worker when it gave 'em bullshit info. It got sooo fucking bad I actually had to call the former bank president, who waaay back in the day was the one who set the original loan up when she was paying off my late sister's place in '93, out of fricking retirement to come down there and straighten their damned stupid asses out, because not only could they not even figure out who had been paying what and when, but their ancient POS system was saying the place we paid off in '93 still had a fricking mortgage that had somehow not been paid since '93, and no flags? Yeah, their shit is one big giant fucking mess and I feel sorry for anybody that doesn't know the right people to navigate that maze!

          --
          ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 2) by aristarchus on Sunday April 05 2015, @06:47AM

        by aristarchus (2645) on Sunday April 05 2015, @06:47AM (#166575) Journal

        Methinks Hairyfeet doth protest too much . . . Is it possible AI has already been achieved, by Gates himself, and that that AI is now deployed right here on Soylent News? Just think of the genius of it! Hairyfeet is an AI! Who would suspect! To quote Morpheus, "That sounds like the thinking of a machine to me."

  • (Score: 2) by Hartree on Saturday April 04 2015, @07:02PM

    by Hartree (195) on Saturday April 04 2015, @07:02PM (#166468)

    When we have AI that can equal the mind of a squirrel, then I think it's time to start worrying. Right now, we have systems that are just barely beginning to be able to handle unexpected conditions within ONLY a very narrow area.

    We don't understand our own general intelligence very well, let alone have the ability to duplicate it.

    Yes, computers have clock cycles billions of times faster than ours. But if it takes several hundred billion of them to reach a decision I can make in a couple of seconds, I'm still going to make it to the circuit breaker (or smack something vital with a wrench) before it can move to counter me.
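
    For scale, a quick back-of-the-envelope check (the ~3 GHz clock and the exact cycle count are assumed, illustrative figures, not numbers from the comment):

        # Rough check of the "wrench beats the AI to the breaker" arithmetic.
        clock_hz = 3e9               # assume a ~3 GHz machine
        cycles_per_decision = 3e11   # "several hundred billion" cycles
        print(cycles_per_decision / clock_hz)  # 100.0 seconds, vs. ~2 s for the human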

    • (Score: 2) by mhajicek on Saturday April 04 2015, @07:36PM

      by mhajicek (51) on Saturday April 04 2015, @07:36PM (#166478)

      Exponential change is exponential:

      http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [waitbutwhy.com]

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2) by Hartree on Saturday April 04 2015, @08:57PM

        by Hartree (195) on Saturday April 04 2015, @08:57PM (#166488)

        Because if during the 80s and 90s we'd based our internet and computer regulations on the most dystopian and alarmist science fiction, the worries of the moralists alarmed about porn and violent video games, and the fears of the security establishment (Clipper chip anyone?) we never would have had free communication and strong crypto without key escrow available to argue about now. It would have already been a completed argument that the FOSS/crypto community would have lost.

        There is a point where you don't know enough to make reasonable regulations and a point that's still before this "AI apocalypse" where you know enough to do something sensible. I don't think we've reached the latter point yet WRT artificial intelligence.

    • (Score: 3, Funny) by wonkey_monkey on Saturday April 04 2015, @08:01PM

      by wonkey_monkey (279) on Saturday April 04 2015, @08:01PM (#166480) Homepage

      When we have AI that can equal the mind of a squirrel, then I think it's time to start worrying.

      Screw AI, who's watching the squirrels?!

      --
      systemd is Roko's Basilisk
      • (Score: 2) by Hartree on Saturday April 04 2015, @08:44PM

        by Hartree (195) on Saturday April 04 2015, @08:44PM (#166484)

        "Screw AI, who's watching the squirrels?!"

        We've decided the squirrels on the main quad at the university I work at are a joint experiment of the psych department and genetic engineers in the bio department that went horribly wrong.

  • (Score: 5, Insightful) by Dunbal on Saturday April 04 2015, @08:34PM

    by Dunbal (3515) on Saturday April 04 2015, @08:34PM (#166482)

    If I can come up with a computer doctor better than your current doctor, would you as a patient consider it? Would you as a doctor use it? For example, if we do an analysis of common genes between diseases such as obesity and asthma, we can construct a virtual dictionary that defines those genes. We can then

    Get your medical degree and then we'll talk. There is a lot more to medicine than a) genetics, b) algorithms, and c) memorizing signs, symptoms, and treatments. While computer assistance can be very useful, it will always be a TOOL of the physician and will not replace the physician. Why? To date no computer has come up with something on par with Shakespeare or a brilliant symphony or painting. Medicine is a science. The correct application of medicine to a patient is ART. There are a lot of brilliant doctors out there who have zero bedside manner and zero empathy and do very, very poorly with patients. And there are plenty of doctors who might not have been at the top of their class every year who do extremely well with their loyal patients. How is a machine going to make you FEEL better? Give you hope, and courage? Give you sympathy? Put a hand on your shoulder when you need it? Medicine is much more than just diagnosis and prescription.

    • (Score: 1, Touché) by Anonymous Coward on Saturday April 04 2015, @09:06PM

      by Anonymous Coward on Saturday April 04 2015, @09:06PM (#166494)

      Why? To date no computer has come up with something on par with Shakespeare or a brilliant symphony or painting.

      To date.

    • (Score: 3, Interesting) by Justin Case on Saturday April 04 2015, @11:30PM

      by Justin Case (4239) on Saturday April 04 2015, @11:30PM (#166519) Journal

      > How is a machine going to make you FEEL better?

      I don't want to FEEL better, I want to BE better.

      Too many people make stupid decisions while they're busy chasing their almighty feelings.

      • (Score: 2) by Dunbal on Sunday April 05 2015, @03:46PM

        by Dunbal (3515) on Sunday April 05 2015, @03:46PM (#166681)

        I don't want to FEEL better, I want to BE better.

        Call me when you get really sick. I mean REALLY sick. I'll be there.

  • (Score: 1, Insightful) by Anonymous Coward on Saturday April 04 2015, @09:23PM

    by Anonymous Coward on Saturday April 04 2015, @09:23PM (#166498)

    Talking about AI right now is a waste of time. We do not know what it will be like, at all.

    We do not even have a simple version. We have some interesting pattern matching. We have brute-force depth of search. But AI? Not by a long shot. We do not have computers that can even play something like chess. We have computers that apply particular strategies with a scoring system. They 'win' because they can try 3 billion boards while you scratch your ass, and then they pick the best score. But that same program could not even begin to say how to wash a car just from you demonstrating it.

    What we have currently are 'expert' systems. But get outside of the 'expert' area and you end up with nothing, or a very poor guess as the output. Most of them are weighted graphs, and it has been that way for a long time (unless something has changed recently).

    So either these very rich dudes know something and have seen a working prototype (I doubt it). Or someone is going around trying to stir up controversy (likely).
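
    What the AC above describes is essentially depth-limited search plus a hand-written scoring function. A toy negamax sketch of that approach (the board interface and the evaluation are hypothetical, not any real engine's):

        # Depth-limited negamax: try lots of positions, keep the best score.
        # `score`, `legal_moves`, and `apply_move` are hypothetical callables
        # supplied by the game; the "intelligence" lives entirely in `score`
        # and in how many positions the machine can afford to try.
        def negamax(board, depth, score, legal_moves, apply_move):
            """Best achievable score for the side to move, to the given depth."""
            moves = legal_moves(board)
            if depth == 0 or not moves:
                return score(board)  # static evaluation, not understanding
            best = float("-inf")
            for move in moves:
                best = max(best, -negamax(apply_move(board, move),
                                          depth - 1, score, legal_moves, apply_move))
            return best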

    • (Score: 0) by Anonymous Coward on Saturday April 04 2015, @11:41PM

      by Anonymous Coward on Saturday April 04 2015, @11:41PM (#166525)

      "So either these very rich dudes know something and have seen a working prototype (I doubt it). Or someone is going around trying to stir up controversy (likely)."

      Or create more business.

  • (Score: 1) by pen-helm on Saturday April 04 2015, @09:30PM

    by pen-helm (837) on Saturday April 04 2015, @09:30PM (#166499) Homepage

    There is already a medical expert system:
    http://www.diagnose-me.com/ta/q.php [diagnose-me.com]

  • (Score: 1, Insightful) by Anonymous Coward on Saturday April 04 2015, @11:35PM

    by Anonymous Coward on Saturday April 04 2015, @11:35PM (#166522)

    It seems that "$(powerful_person) says AI is dangerous" is becoming a trend. It's all just sleight of hand; they don't want you to think that it's them who are dangerous.

    They want you to think it's the machines they develop to predict individual/group behaviour that are the danger, not the people who control such machines and take action based on what they say.

    • (Score: 3, Interesting) by pkrasimirov on Sunday April 05 2015, @07:57AM

      by pkrasimirov (3358) Subscriber Badge on Sunday April 05 2015, @07:57AM (#166582)

      Two factions (A and B) battle; one of them (A) creates an AI and instructs it to kill everything that is not A. Then B kills A. Then the AI kills B, and mankind is gone.

  • (Score: 1) by inertnet on Sunday April 05 2015, @01:16AM

    by inertnet (4071) on Sunday April 05 2015, @01:16AM (#166538) Journal

    Humans are mostly hormone-driven, and therefore emotion-driven. I don't know how true AI will compare to that, but I hope and expect it will at least be generally less unpredictable and more reliable.

    An important biological drive is survival. It can get tricky if AI gets the same.