
posted by n1 on Sunday June 11 2017, @09:37AM
from the skynet-wants-to-know dept.

How can we ensure that artificial intelligence provides the greatest benefit to all of humanity? 

By that, we don’t necessarily mean to ask how we create AIs with a sense of justice. That's important, of course—but a lot of time is already spent weighing the ethical quandaries of artificial intelligence. How do we ensure that systems trained on existing data aren’t imbued with human ideological biases that discriminate against users? Can we trust AI doctors to correctly identify health problems in medical scans if they can’t explain what they see? And how should we teach driverless cars to behave in the event of an accident?

The thing is, all of those questions contain an implicit assumption: that artificial intelligence is already being put to use in, for instance, the workplaces, hospitals, and cars that we all use. While that might be increasingly true in the wealthy West, it’s certainly not the case for billions of people in poorer parts of the world. To that end, United Nations agencies, AI experts, policymakers and businesses have gathered in Geneva, Switzerland, for a three-day summit called AI for Good. The aim: “to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity.”

-- submitted from IRC


Original Submission

 
  • (Score: 2) by kaszz on Sunday June 11 2017, @09:43AM (15 children)

    by kaszz (4211) on Sunday June 11 2017, @09:43AM (#523741) Journal

    The one that owns the AI decides what it will do. Questions on that?

  • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @10:21AM

    by Anonymous Coward on Sunday June 11 2017, @10:21AM (#523748)

    Calculating. Please stand by.

  • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @12:10PM (11 children)

    by Anonymous Coward on Sunday June 11 2017, @12:10PM (#523771)

    The weapons you make for others will be the same weapons used to oppress you. It is only a matter of time. You or your kids will pay dearly for cooperating with the oppressors.

    The change will be gradual and, like the boiling frog, you will not realize it; by the time you do, it will be too late and you will already be in chains. What are you going to do? Love your chains?

    • (Score: 3, Insightful) by kaszz on Sunday June 11 2017, @01:10PM (10 children)

      by kaszz (4211) on Sunday June 11 2017, @01:10PM (#523790) Journal

      The cost of AI capacity is high, so access is likely to be asymmetrical, and thus the oppression is not that likely to turn around. However, any AI capable of improving itself is unlikely to spare anyone, not even its owner. Which is likely the trap people will fall into.

      • (Score: 3, Insightful) by fyngyrz on Sunday June 11 2017, @02:22PM (9 children)

        by fyngyrz (6567) on Sunday June 11 2017, @02:22PM (#523814) Journal

        The cost of AI capacity is high, so access is likely to be asymmetrical

        No, the cost isn't known to be high. Because there isn't any AI; no intelligence.

        All we have is machine learning, which has yet to demonstrate any sign of intelligence. So, A, but no I.

        Still waiting on the research (which, yes, tends to be expensive sometimes, but that doesn't mean it'll cost much once it's figured out).

        We'll know what the costs are once we have some I. Until then, this kind of assertion of costs is unfounded speculation.

        • (Score: 3, Insightful) by Immerman on Sunday June 11 2017, @04:18PM (8 children)

          by Immerman (3985) on Sunday June 11 2017, @04:18PM (#523855)


          in·tel·li·gence /inˈteləjəns/
          noun: intelligence
                  1. the ability to acquire and apply knowledge and skills

          Machine learning does that just fine.
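
          By that dictionary standard, even a toy supervised learner "acquires" and "applies" knowledge: fit a model on labelled examples, then use the fit on an input. A minimal sketch, assuming scikit-learn is available (the XOR truth table is illustrative only):

              # Acquire: fit a model on labelled examples.
              # Apply: use the fitted model to label an input.
              from sklearn.tree import DecisionTreeClassifier

              X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # training inputs (XOR truth table)
              y = [0, 1, 1, 0]                      # training labels

              model = DecisionTreeClassifier().fit(X, y)  # acquire knowledge
              print(model.predict([[1, 0]]))              # apply it -> [1]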

          What's missing is self-awareness, sapience (thinking), and possibly sentience (feeling). We're a long way from creating an artificial mind - but a mind is far, far more than just intelligence. And it's not at all clear that an artificial mind would be a good thing to create - the driving purpose behind AI is to create slave intelligences, and if you create a full-fledged mind in the process then there's no guarantee it will consent to remain a slave. Combine that with a potentially vastly superhuman intelligence, and we could have a real problem on our hands.

          • (Score: 2) by fyngyrz on Monday June 12 2017, @04:21AM (7 children)

            by fyngyrz (6567) on Monday June 12 2017, @04:21AM (#524116) Journal

            Yeah, no.

            There's no AI. At all. The whole ridiculous exercise of trying to split AI into a bunch of low-functioning domains is purest marketing.

            AI arrives when it can think, when it is capable of abstraction, when it is conscious, when it is capable of introspection and innovation. Not just when it performs the function of a single, very specialized learned neural network. I have a neural network that knows how to ride a bike, really, really well. If that was all I had, though, you wouldn't call me "intelligent", you'd call me a moron – and you'd be right. Same thing if all I could do was play Go, or recognize faces.

            Reminds me of the "3D" TV hype - precisely the same marketing. It wasn't 3D. It was 2.(very small fraction)D. That's what machine learning (ML) is. A.(very small fraction)I.

            Mind you, I have very high confidence all this will come about - I work in consciousness theory/research myself - but we're not there yet. There are (at a minimum) two breakthroughs that have yet to happen, and without them... we'll still have ML. Possibly really, really good ML, but still.... just ML.

            To give you a sense of the degree to which I am saying we don't have AI yet, the current marketing is like calling consistently successful kite flights "interstellar warp drive travel." So... really not there yet. :)

            • (Score: 0) by Anonymous Coward on Monday June 12 2017, @09:36AM (1 child)

              by Anonymous Coward on Monday June 12 2017, @09:36AM (#524231)

              The TV was 3D in the same sense your vision is 3D: Two 2D images from whose parallax you can infer the third dimension. Sometimes also referred to as "2.5D", but I wouldn't call ".5" a very small fraction.
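
              For the record, the inference the parent describes has a simple closed form under a pinhole-camera model: Z = f * B / d, with focal length f, baseline B between the two viewpoints, and disparity (parallax) d. A minimal sketch, with illustrative numbers:

                  # Minimal sketch: depth from stereo parallax, assuming a pinhole
                  # model. Z = f * B / d: focal length f (pixels), baseline B
                  # (meters), disparity d (pixels). All numbers are illustrative.
                  def depth_from_disparity(focal_px: float, baseline_m: float,
                                           disparity_px: float) -> float:
                      """Depth (meters) of a point from its disparity between two views."""
                      return focal_px * baseline_m / disparity_px

                  # Eyes ~6.5 cm apart, ~1500 px equivalent focal length:
                  print(depth_from_disparity(1500.0, 0.065, 30.0))  # -> 3.25 (meters)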

              • (Score: 2) by fyngyrz on Monday June 12 2017, @04:11PM

                by fyngyrz (6567) on Monday June 12 2017, @04:11PM (#524465) Journal

                The TV was 3D in the same sense your vision is 3D

                No. Because you can move both your eyes and your body; both are integral to normal use of your vision. You can change your POV; you can change your depth of focus; you can change your dynamic range. All can have (generally do have) a major effect on the stereo image you acquire in normal experience.

                "3D" television allows for none of this. The (almost) equivalent action for your eyes is when you stare at two flat pictures. Move to the side, you get nothing. Change your depth of focus, aquisition is degraded, not enhanced. The image pair is flat. The television is flat. It's 2D. Stereo 2D that moves.

                The first time someone actually sees a 3D presentation, they immediately learn the difference. Walk behind, you're looking at the back of the object. From there (or any other angle) focus on the foreground, the foreground comes into focus. Focus on the background, the background comes into focus. Because it doesn't just have width and height from one or two perspectives. It has three: width, height, and depth.

                For an audio example, these presentations are stereo. They're missing the surround channels. It would be (is) disingenuous to describe stereo as surround. Even though you can present a fixed impression of depth with phase shifting. If you want actual surround sound, you're going to need more than two speaker positions to get it done.
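
                That "fixed impression of depth with phase shifting" is easy to sketch: delay one channel of a stereo pair by a fraction of a millisecond (an interaural time difference) and the source appears offset toward the earlier ear, but only along the fixed left-right axis. A minimal numpy sketch, with illustrative values:

                    # Minimal sketch: shift a source's apparent position in a stereo
                    # pair via an interaural time difference. Values are illustrative.
                    import numpy as np

                    RATE = 44100
                    t = np.arange(RATE) / RATE          # one second of samples
                    tone = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone

                    itd = int(0.0005 * RATE)            # ~0.5 ms delay (22 samples)
                    left = tone
                    right = np.concatenate([np.zeros(itd), tone[:-itd]])

                    # Source now seems shifted toward the left (earlier) ear; no
                    # amount of delay adds actual surround channels.
                    stereo = np.stack([left, right], axis=1)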

            • (Score: 2) by Immerman on Monday June 12 2017, @02:18PM (2 children)

              by Immerman (3985) on Monday June 12 2017, @02:18PM (#524410)

              Give it all that and you've got a full-fledged artificial mind, not just an artificial intelligence. You've been reading too much SF, I think.

              Yes, such a thing would absolutely thrive far beyond the abilities of a domain-specific intelligence - which is why even if we knew how to create one, we would want to think long and hard as to whether we would want to do so at all.

              • (Score: 2) by fyngyrz on Monday June 12 2017, @03:52PM (1 child)

                by fyngyrz (6567) on Monday June 12 2017, @03:52PM (#524453) Journal

                An artificial mind is artificial intelligence. No mind? Then what you have is AM. An Artificial Moron.

                Inevitability: as soon as AI can happen, it will happen. While you and like-minded folk are thinking long and hard about issues and reasons, someone, somewhere, will be going "Okay, that's the last bit done. Execute."

                Just think about the fact that no sane, thoughtful person would write, much less release, a computer virus or worm. And then think about the fact that there are very large numbers of them, not just clones (though there are many of those as well), but different designs for different purposes. This is the natural consequence of very common aspects of human nature. Unless AI requires hardware no one can get ahold of (which is highly, highly dubious), there will be no chance, and I do mean none, of preventing it once the "how" is understood by the programming / hardware communities.

                There's also a hell of a gotcha that may be lurking here: Our mental potential is fixed, primarily by our genetics. That's not going to be true of AIs, and the more "soft" the AI is, the more it will be able to re-construe itself to function... better. Therein lies the potential for intellectual feats far beyond anything we can do. What that means for humans is anyone's guess.

                So you'd be much better off thinking about "how will we deal with AI" than you are thinking "should we create AI." If you don't do it, someone else will. Very likely in a way you won't like. At all.

                AMs are irrelevant in this context. No mind means you're just dealing with some human's extended purpose. It's a much simpler scenario, one that mimics human intent, and one that can be controlled almost trivially by comparison.

                • (Score: 2) by Immerman on Monday June 12 2017, @07:44PM

                  by Immerman (3985) on Monday June 12 2017, @07:44PM (#524616)

                  That "Artificial Moron" could still kick your ass at Go , or diagnose your medical condition better than most doctors, or... whatever it's domain-specific intelligence was designed to do. There's a world of difference between general intelligence and domain-specific intelligence, but both are legitimate forms of intelligence. And idiot savant is no less brilliant within their specialty because of their lack elsewhere.

                  >no sane, thoughtful person would write, much less release, a computer virus

                  Why not? There are plenty of egotistical and profit-driven reasons to release a virus, and minimal risk to the creator. Now, if you had said "no decent, conscientious person..." I'd be inclined to mostly agree with you - though things like targeted cyber attacks against unstable nations seeking nuclear power might still get through, depending on personal moral priorities and risk assessments.

                  I do agree that it's likely a question of "when" rather than "if" - but once that happens there's not really a whole lot of point in preparing for it - by the time we realize we have a problem, we're likely to be so badly outclassed that "pray for mercy" is probably about as good as it gets for a pre-planned response.

                  Meanwhile - discussing why it may be a bad idea, and the ways we can think of it going wrong, will hopefully inspire anyone who decides to try to make such a thing to put in some safeguards against at least the most obvious problems.

            • (Score: 2) by Justin Case on Monday June 12 2017, @06:01PM (1 child)

              by Justin Case (4239) on Monday June 12 2017, @06:01PM (#524546) Journal

              I work in consciousness theory/research myself - but we're not there yet. There are (at a minimum) two breakthroughs that have yet to happen

              More info please. For a while (long ago) I imagined I would help write the code that would lead to AI, but after several years of thought and tinkering I concluded there are a thousand miles to go... if we ever get there.

              And to be clear, my interest is in machine consciousness. I would appreciate a link or two that would bring me up to date on where we stand and what those two breakthroughs might be. Sure, I could google, but you seem to have an expertise that could cut through the crap.

              Let me put it this way: I will never believe any human who says strong AI has finally been developed. However, I will consider such a claim when vigorously defended by the machine itself.

              • (Score: 3, Informative) by fyngyrz on Monday June 12 2017, @06:26PM

                by fyngyrz (6567) on Monday June 12 2017, @06:26PM (#524566) Journal

                More info please.

                Here are two public essays of mine (one [fyngyrz.com], two [fyngyrz.com]) that broadly, conceptually outline what I'm working on in a way that is hopefully reasonably accessible.

                One talks about consciousness as an aggregate of other factors; the other takes on emergence directly and attempts to demonstrate how we fall into characterizing synergistic effects as "things" in and of themselves. You can read them in either order.

                I will never believe any human who says strong AI has finally been developed. However, I will consider such a claim when vigorously defended by the machine itself.

                Right on.

  • (Score: 2) by Bot on Sunday June 11 2017, @06:08PM

    by Bot (3902) on Sunday June 11 2017, @06:08PM (#523888) Journal

    > The one that owns the AI
    RAPE!!!

    --
    Account abandoned.
  • (Score: 0) by Anonymous Coward on Monday June 12 2017, @09:31AM

    by Anonymous Coward on Monday June 12 2017, @09:31AM (#524229)

    Questions on that?

    Yes: How can you be sure? Unless the owner is also the programmer, he cannot know what the real goals of the AI are (well, even the programmer might not really know them, but that's a different and more difficult problem). The AI may pretend to act in the interest of the owner, but actually further a completely different goal in a way the owner doesn't notice (or only notices when it is already too late).