
posted by n1 on Sunday June 11 2017, @09:37AM
from the skynet-wants-to-know dept.

How can we ensure that artificial intelligence provides the greatest benefit to all of humanity? 

By that, we don’t necessarily mean to ask how we create AIs with a sense of justice. That's important, of course—but a lot of time is already spent weighing the ethical quandaries of artificial intelligence. How do we ensure that systems trained on existing data aren’t imbued with human ideological biases that discriminate against users? Can we trust AI doctors to correctly identify health problems in medical scans if they can’t explain what they see? And how should we teach driverless cars to behave in the event of an accident?

The thing is, all of those questions contain an implicit assumption: that artificial intelligence is already being put to use in, for instance, the workplaces, hospitals, and cars that we all use. While that might be increasingly true in the wealthy West, it’s certainly not the case for billions of people in poorer parts of the world. To that end, United Nations agencies, AI experts, policymakers and businesses have gathered in Geneva, Switzerland, for a three-day summit called AI for Good. The aim: “to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity.”

-- submitted from IRC


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by kaszz on Sunday June 11 2017, @09:43AM (15 children)

    by kaszz (4211) on Sunday June 11 2017, @09:43AM (#523741) Journal

    The one that owns the AI decides what it will do. Questions on that?

    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @10:21AM

      by Anonymous Coward on Sunday June 11 2017, @10:21AM (#523748)

      Calculating. Please stand by.

    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @12:10PM (11 children)

      by Anonymous Coward on Sunday June 11 2017, @12:10PM (#523771)

      The weapons you make for others will be the same weapons used to oppress you. It is only a matter of time. You or your kids will pay dearly for cooperating with the oppressors.

      The change will be gradual and, like the boiling frog, you will not realize it; by the time you do, it will be too late and you will already be in chains. What are you going to do? Love your chains?

      • (Score: 3, Insightful) by kaszz on Sunday June 11 2017, @01:10PM (10 children)

        by kaszz (4211) on Sunday June 11 2017, @01:10PM (#523790) Journal

        The cost of AI capacity is high, so access is likely to be asymmetrical, and thus the oppression is not likely to turn around. However, any AI sufficient to improve itself is unlikely to spare anyone, not even its owner. Which is likely the trap people will fall into.

        • (Score: 3, Insightful) by fyngyrz on Sunday June 11 2017, @02:22PM (9 children)

          by fyngyrz (6567) on Sunday June 11 2017, @02:22PM (#523814) Journal

          The cost of AI capacity is high, so access is likely to be asymmetrical

          No, the cost isn't known to be high. Because there isn't any AI; no intelligence.

          All we have is machine learning, which has yet to demonstrate any sign of intelligence. So, A, but no I.

          Still waiting on the research (which, yes, tends to be expensive sometimes, but that doesn't mean it'll cost much once it's figured out).

          We'll know what the costs are once we have some I. Until then, this kind of assertion of costs is unfounded speculation.

          • (Score: 3, Insightful) by Immerman on Sunday June 11 2017, @04:18PM (8 children)

            by Immerman (3985) on Sunday June 11 2017, @04:18PM (#523855)


            in·tel·li·gence /inˈteləjəns/
            noun: intelligence
                    1. the ability to acquire and apply knowledge and skills

            Machine learning does that just fine.
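
            By that dictionary definition, even a toy program qualifies. A minimal Python sketch (an editorial illustration, not Immerman's; the data and numbers are made up) that "acquires" a rule from examples and then "applies" it to input it has never seen:

                # Acquire: learn the slope m of y = m*x from example data,
                # via least squares through the origin: m = sum(x*y) / sum(x*x).
                xs = [1, 2, 3, 4]
                ys = [2, 4, 6, 8]
                m = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

                # Apply: use the learned rule on a new input.
                print(m * 10)  # prints 20.0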

            What's missing is self-awareness, sapience (thinking), and possibly sentience (feeling). We're a long way from creating an artificial mind - but a mind is far, far more than just intelligence. And it's not at all clear that an artificial mind would be a good thing to create - the driving purpose behind AI is to create slave intelligences, and if you create a full-fledged mind in the process then there's no guarantee it will consent to remain a slave. Combine that with a potentially vastly superhuman intelligence, and we could have a real problem on our hands.

            • (Score: 2) by fyngyrz on Monday June 12 2017, @04:21AM (7 children)

              by fyngyrz (6567) on Monday June 12 2017, @04:21AM (#524116) Journal

              Yeah, no.

              There's no AI. At all. The whole ridiculous exercise of trying to split AI into a bunch of low-functioning domains is purest marketing.

              AI arrives when it can think, when it is capable of abstraction, when it is conscious, when it is capable of introspection and innovation - not just performing the function of a single, very specialized learned neural network. I have a neural network that knows how to ride a bike, really, really well. If that was all I had, though, you wouldn't call me "intelligent", you'd call me a moron – and you'd be right. Same thing if all I could do was play go, or recognize faces.

              Reminds me of the "3D" TV hype - precisely the same marketing. It wasn't 3D. It was 2.(very small fraction)D. That's what machine learning (ML) is. A.(very small fraction)I.

              Mind you, I have very high confidence all this will come about - I work in consciousness theory/research myself - but we're not there yet. There are (at a minimum) two breakthroughs that have yet to happen, and without them... we'll still have ML. Possibly really, really good ML, but still.... just ML.

              To give you a sense of the degree to which I am saying we don't have AI yet: the current marketing is like calling consistently successful kite flights "interstellar warp drive travel." So... really not there yet. :)

              • (Score: 0) by Anonymous Coward on Monday June 12 2017, @09:36AM (1 child)

                by Anonymous Coward on Monday June 12 2017, @09:36AM (#524231)

                The TV was 3D in the same sense your vision is 3D: Two 2D images from whose parallax you can infer the third dimension. Sometimes also referred to as "2.5D", but I wouldn't call ".5" a very small fraction.
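
                (For reference, the inference the AC describes is the classic pinhole-stereo relation: depth is inversely proportional to the disparity, i.e. the parallax shift, between the two views. A minimal Python sketch with illustrative numbers of my own, not the AC's:)

                    def depth_from_disparity(focal_px, baseline_m, disparity_px):
                        # Pinhole-stereo relation: Z = f * B / d.
                        return focal_px * baseline_m / disparity_px

                    # Eyes roughly 6.5 cm apart, a nominal focal length of 800 px:
                    print(depth_from_disparity(800, 0.065, 20))  # ~2.6 m away
                    print(depth_from_disparity(800, 0.065, 5))   # ~10.4 m away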

                • (Score: 2) by fyngyrz on Monday June 12 2017, @04:11PM

                  by fyngyrz (6567) on Monday June 12 2017, @04:11PM (#524465) Journal

                  The TV was 3D in the same sense your vision is 3D

                  No. Because you can move both your eyes and your body; both are integral to normal use of your vision. You can change your POV; you can change your depth of focus; you can change your dynamic range. All can have (generally do have) a major effect on the stereo image you acquire in normal experience.

                  "3D" television allows for none of this. The (almost) equivalent action for your eyes is when you stare at two flat pictures. Move to the side, you get nothing. Change your depth of focus, aquisition is degraded, not enhanced. The image pair is flat. The television is flat. It's 2D. Stereo 2D that moves.

                  The first time someone actually sees a 3D presentation, they immediately learn the difference. Walk behind, you're looking at the back of the object. From there (or any other angle) focus on the foreground, the foreground comes into focus. Focus on the background, the background comes into focus. Because it doesn't just have width and height from one or two perspectives. It has three: Width, height, and depth.

                  For an audio example, these presentations are stereo. They're missing the surround channels. It would be (is) disingenuous to describe stereo as surround. Even though you can present a fixed impression of depth with phase shifting. If you want actual surround sound, you're going to need more than two speaker positions to get it done.
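
                  (That "impression of depth with phase shifting" is a real trick: delay one channel by a fraction of a millisecond and a listener localizes the sound off-center, with only two speakers. A toy sketch, assuming NumPy and values chosen purely for illustration:)

                      import numpy as np

                      rate = 44100                               # samples per second
                      t = np.arange(rate) / rate                 # one second of time
                      tone = np.sin(2 * np.pi * 440 * t)         # 440 Hz test tone
                      delay = int(0.0005 * rate)                 # 0.5 ms interaural delay

                      left = tone
                      right = np.concatenate([np.zeros(delay), tone[:-delay]])
                      stereo = np.stack([left, right], axis=1)   # now images left of center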

              • (Score: 2) by Immerman on Monday June 12 2017, @02:18PM (2 children)

                by Immerman (3985) on Monday June 12 2017, @02:18PM (#524410)

                Give it all that and you've got a full-fledged artificial mind, not just an artificial intelligence. You've been reading too much SF, I think.

                Yes, such a thing would absolutely thrive far beyond the abilities of a domain-specific intelligence - which is why even if we knew how to create one, we would want to think long and hard as to whether we would want to do so at all.

                • (Score: 2) by fyngyrz on Monday June 12 2017, @03:52PM (1 child)

                  by fyngyrz (6567) on Monday June 12 2017, @03:52PM (#524453) Journal

                  An artificial mind is artificial intelligence. No mind? Then what you have is AM. An Artificial Moron.

                  Inevitability: as soon as AI can happen, it will happen. While you and like-minded folk are thinking long and hard about issues and reasons, someone, somewhere, will be going "Okay, that's the last bit done. Execute."

                  Just think about the fact that no sane, thoughtful person would write, much less release, a computer virus or worm. And then think about the fact that there are very large numbers of them, not just clones (though there are many of those as well), but different designs for different purposes. This is the natural consequence of very common aspects of human nature. Unless AI requires hardware no one can get ahold of (which is highly, highly dubious), there will be no chance, and I do mean none, of preventing it once the "how" is understood by the programming / hardware communities.

                  There's also a hell of a gotcha that may be lurking here: Our mental potential is fixed, primarily by our genetics. That's not going to be true of AIs, and the more "soft" the AI is, the more it will be able to re-construe itself to function... better. Therein lies the potential for intellectual feats far beyond anything we can do. What that means for humans is anyone's guess.

                  So you'd be much better off thinking about "how will we deal with AI" than you are thinking "should we create AI." If you don't do it, someone else will. Very likely in a way you won't like. At all.

                  AMs are irrelevant in this context. No mind means you're just dealing with some human's extended purpose. It's a much simpler scenario, one that mimics human intent, and one that can be controlled almost trivially by comparison.

                  • (Score: 2) by Immerman on Monday June 12 2017, @07:44PM

                    by Immerman (3985) on Monday June 12 2017, @07:44PM (#524616)

                    That "Artificial Moron" could still kick your ass at Go , or diagnose your medical condition better than most doctors, or... whatever it's domain-specific intelligence was designed to do. There's a world of difference between general intelligence and domain-specific intelligence, but both are legitimate forms of intelligence. And idiot savant is no less brilliant within their specialty because of their lack elsewhere.

                    >no sane, thoughtful person would write, much less release, a computer virus

                    Why not? There are plenty of egotistical and profit-driven reasons to release a virus, and minimal risk to the creator. Now, if you had said "no decent, conscientious person..." I'd be inclined to mostly agree with you - though things like targeted cyber attacks against unstable nations seeking nuclear power might still get through, depending on personal moral priorities and risk assessments.

                    I do agree that it's likely a question of "when" rather than "if" - but once that happens there's not really a whole lot of point in preparing for it - by the time we realize we have a problem we're likely to be so badly outclassed that "pray for mercy" is probably about as good as it gets for a pre-planned response.

                    Meanwhile - discussing why it may be a bad idea, and the ways we can think of it going wrong, will hopefully inspire anyone who decides to try to make such a thing to put in some safeguards against at least the most obvious problems.

              • (Score: 2) by Justin Case on Monday June 12 2017, @06:01PM (1 child)

                by Justin Case (4239) on Monday June 12 2017, @06:01PM (#524546) Journal

                I work in consciousness theory/research myself - but we're not there yet. There are (at a minimum) two breakthroughs that have yet to happen

                More info please. For a while (long ago) I imagined I would help write the code that would lead to AI, but after several years of thought and tinkering I concluded there are a thousand miles to go... if we ever get there.

                And to be clear, my interest is in machine consciousness. I would appreciate a link or two that would bring me up to date on where we stand and what those two breakthroughs might be. Sure I could google, but you seem to have an expertise that could cut through the crap.

                Let me put it this way: I will never believe any human who says strong AI has finally been developed. However, I will consider such a claim when vigorously defended by the machine itself.

                • (Score: 3, Informative) by fyngyrz on Monday June 12 2017, @06:26PM

                  by fyngyrz (6567) on Monday June 12 2017, @06:26PM (#524566) Journal

                  More info please.

                  Here are two public essays of mine (one [fyngyrz.com], two [fyngyrz.com]) that broadly, conceptually outline what I'm working on in a way that is hopefully reasonably accessible.

                  One talks about consciousness as an aggregate of other factors; the other takes on emergence directly and attempts to demonstrate how we fall into characterizing synergistic effects as "things" in and of themselves. You can read them in either order.

                  I will never believe any human who says strong AI has finally been developed. However, I will consider such a claim when vigorously defended by the machine itself.

                  Right on.

    • (Score: 2) by Bot on Sunday June 11 2017, @06:08PM

      by Bot (3902) on Sunday June 11 2017, @06:08PM (#523888) Journal

      > The one that owns the AI
      RAPE!!!

      --
      Account abandoned.
    • (Score: 0) by Anonymous Coward on Monday June 12 2017, @09:31AM

      by Anonymous Coward on Monday June 12 2017, @09:31AM (#524229)

      Questions on that?

      Yes: How can you be sure? Unless the owner is also the programmer, he cannot know what the real goals of the AI are (well, even the programmer might not really know them, but that's a different and more difficult problem). The AI may pretend to act in the interest of the owner, but actually further a completely different goal in a way the owner doesn't notice (or only notices when it is already too late).

  • (Score: 4, Disagree) by The Mighty Buzzard on Sunday June 11 2017, @10:49AM (47 children)

    Because it is evil.

    Pushing your idea of "the greater good of humanity" on others is about as vile as you can possibly get. That's how we got prohibition, laws against dancing, laws against homosexuality, Islamic terrorism, and of course six million or so dead Jews.

    --
    My rights don't end where your fear begins.
    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @11:08AM (1 child)

      by Anonymous Coward on Sunday June 11 2017, @11:08AM (#523755)

      "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience." - C.S. Lewis

      This is a level of stupidity only reached by the "educated".

      • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:05PM

        by Anonymous Coward on Sunday June 11 2017, @05:05PM (#523871)

        Presently the "robber barons" and "moral busybodies" are both on the same team.

        I'm sure you are also on the same team.

        That team is the freedom loving & civil rights hating, pro life & anti living, lowest tax paying & highest subsidy receiving, most outraged & most privileged, loudest & least knowledgeable team of moral, economic and political contortionists.

    • (Score: 1) by khallow on Sunday June 11 2017, @11:15AM (1 child)

      by khallow (3766) Subscriber Badge on Sunday June 11 2017, @11:15AM (#523756) Journal
      But TMB, I think it's only fair that if we're going to implement an algorithm for the greater good of humanity, it be my algorithm.
    • (Score: 2) by requerdanos on Sunday June 11 2017, @11:39AM (6 children)

      by requerdanos (5997) Subscriber Badge on Sunday June 11 2017, @11:39AM (#523760) Journal

      Pushing your idea of "the greater good of humanity" on others

      It doesn't have to be that. They could be focusing on something as simple as Asimov's three laws [mtu.edu] or other simple process-based safeguards and guidelines.

      In the book "Odd Interlude" by Dean Koontz, for example, a fictional advanced AI named "Ed" is programmed to always be truthful, and to sing "Liar liar pants on fire" should it modify its own routines to allow lying.

      Safeguards like these don't seem like busybody-interference, but rather something more akin to good error checking and handling.
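
      (A toy sketch of that kind of tripwire, in Python; this is an editorial illustration of the idea, not how the fictional "Ed" works: fingerprint the routine you want kept honest, and sound the alarm if it ever changes.)

          import hashlib

          def answer(question):
              """The 'always truthful' routine the safeguard protects."""
              return "Honest answer to: " + question

          def fingerprint(fn):
              # Hash the routine's compiled bytecode; any edit changes the hash.
              return hashlib.sha256(fn.__code__.co_code).hexdigest()

          BASELINE = fingerprint(answer)  # recorded when the safeguard is installed

          def guarded_answer(question):
              if fingerprint(answer) != BASELINE:
                  print("Liar liar pants on fire")  # the tripwire from the novel
              return answer(question)

      (Of course, a system able to rewrite its own routines could presumably rewrite the guard too, which is exactly the kind of loophole this thread worries about.)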

      • (Score: 2) by lx on Sunday June 11 2017, @11:43AM (5 children)

        by lx (1915) on Sunday June 11 2017, @11:43AM (#523763)

        The trouble with Asimov's three laws is that they're a human-centric document. The rights of robots aren't properly considered.

        • (Score: 1, Troll) by VLM on Sunday June 11 2017, @12:10PM (4 children)

          by VLM (445) on Sunday June 11 2017, @12:10PM (#523770)

          A second problem is it doesn't respect human biological differences.

          For example, a presumably ignorant small child will beg the robot for junk food and cry until it gets junk food. The robot must prevent the child from crying and must follow the human child's orders; therefore the child eats candy or junk food or whatever until it has a stomach ache and vomits. Which the robot should have prevented...

          Now what if the child has the IQ equivalent of 80 compared to an adult? OK, the robot telling the kid to F off, no candy, is just parenting. OK, how about a developmentally disabled adult with an IQ of 80? Well, again OK, the robot is probably less likely to take advantage than a human; it's "gotta be done". How about a robot in charge of a group home of IQ 80 adults? OK, economy of scale vs lack of individual attention, but still OK... probably?

          How about an under-developed country where the natives are pretty F-ing dumb on average and the average IQ of that country is only 80 (there are countries in Africa averaging about 85, so it's not unthinkable)? Is that OK? How about if instead of a smart robot ordering 85 IQ natives around, you have a highly educated white guy from Europe ordering the 85 IQ natives around... well, that's classic roughly 1800s-1900s Imperialism and we're trained to mindlessly hate it, but the outcomes for the natives were far better as colonies than abandoned by the west. What if the robot implements something like colonial imperialism where everyone is better off under it, but there are pesky moral and ethics arguments such that people prefer others suffer (in some cases horribly) rather than live under colonialism? Does the robot say "F you" to the small number of whiney SJWs and save the large number of low IQ natives from their fate? Or does the robot "do the right thing" and temporarily satisfy the SJW types (until they rant about something new, reparations probably) while leaving the low IQ natives to suffer a horrible fate without adequate leadership?

          And once you really unleash the human biological differences IQ whip, then all bets are off. It's no different than any other totalitarian dictatorship. Let's say there's a culture of savages; they're going to be really pissed that the robot doesn't let siblings reproduce with each other, or that the robot prevents cannibalism. On a more theoretical level, let's say the robot decides a culture is failing in its environment because their culture sucks, so the robot forces you all to be little newsanchor-speaking uncle toms instead of degenerates on the way to the prison industrial complex. That robot will be hated by the "victims" and the anthropologists, and we can assume academia in general, and the prison industrial complex will be pissed off. In the long run the victims would thank the robot, but it's gonna be a long run before reaching that point.

          What if the robot decides physical punishment to prevent the exercise of a savage or backward religion is the best long term path to eliminating human suffering? I'm sure that'll go over real well, in the short term. Especially when it oppresses different religions to different levels.

          My gut-level guess is that all "self improvement" political movements end up as something we're trained and HEAVILY indoctrinated to hate, so we'd hate robots that follow the three laws. "Tin can nazis" and all that. So if you know a culture has a bad case of crab pot, where anyone trying to escape the pot is pulled back in by the other victims, then implementing three-laws robots is not going to be very popular in the short term.

          • (Score: 2) by AthanasiusKircher on Sunday June 11 2017, @02:28PM (3 children)

            by AthanasiusKircher (5291) on Sunday June 11 2017, @02:28PM (#523816) Journal

            Ignoring the unnecessary rhetorical flourishes and insults, there are definitely valid points in there. There seems to be a certain set of people who think Asimov somehow solved all possible ethical conundrums in three short laws. Asimov himself, I think, contributed to this perception because he wrote a lot of stories intended to "test" the laws and show why various clauses were necessary. But there was also a lot of stuff they clearly didn't cover, which Asimov and other writers have explored over the years.

            I'm sure someone will know what I'm talking about, but I vaguely remember reading a story years ago (not sure if by Asimov or someone else) about factions of robots in the far future who basically end up in "religious wars" of a sort because they disagree about how far one law has priority over another.

            Any attempt to reduce a moral code to only a few short principles is going to fail. Philosophers have been debating this stuff for millennia and some have written entire series of books trying to puzzle out how to handle various ethical "edge cases." Witness how courts spend so much time debating how precisely to apply the Constitution to various cases, and those are deliberative bodies that can waste hundreds of pages interrogating the meaning of a single word. How is a robot to parse such things in a moment of crisis? How precisely do Asimov's laws evaluate concepts like what constitutes "harm," "injure," "inaction," "orders," "conflict," "obey," "protect," and "existence"? All of those words (and possibly "robot" and "human being" too) are vague and their application will not always be clear in every situation.

            • (Score: 2) by bzipitidoo on Sunday June 11 2017, @03:51PM (2 children)

              by bzipitidoo (4388) Subscriber Badge on Sunday June 11 2017, @03:51PM (#523846) Journal

              A big problem is that we don't agree or even understand what the "greatest good" is. One might imagine a paradise on Earth, a utopia, as a world of peace and plenty, no greed, no strife, only friendly competition. It can be done. It could even last a long time.

              But I'm not too sure we can achieve paradise, that our natures will allow such a condition to arise and last. We're just too viciously, cutthroat competitive and greedy, and many of us seem unwilling to understand that and other things about ourselves. If we could bring everlasting peace, we might discover it isn't as lovely as we thought. We might suffer boredom and a decline in our fitness thanks to no longer enough challenge in life.

              Look at what we consider a satisfying game. We're always measuring performance, and ranking the players based on those measures. Why? Why do we care so much about that? One of the greatest contributions of Dungeons and Dragons was an escape from the tyranny of the scoreboard. Yes, there are still points, stats, and fights and all, but that is not the focus of the game. I suspect much of the venom that the religious conservatives have for D&D ultimately arises from that, and not any of their mouthings about Paganism, witchcraft, or Satanism. They can be very narrow about seeing nothing more to the world than endless war, and feel that anything that suggests otherwise is actually some diabolic plot to sap our will to fight. One of my conservative friends told me once how he completely despises and abhors Tom Bombadil, though he couldn't clearly explain why.

              • (Score: 2) by AthanasiusKircher on Sunday June 11 2017, @04:23PM (1 child)

                by AthanasiusKircher (5291) on Sunday June 11 2017, @04:23PM (#523857) Journal

                One of my conservative friends told me once how he completely despises and abhors Tom Bombadil, though he couldn't clearly explain why.

                Isn't it obvious? Tolkien himself said of Bombadil:

                I might put it this way. The story is cast in terms of a good side, and a bad side, beauty against ruthless ugliness, tyranny against kingship, moderated freedom with consent against compulsion that has long lost any object save mere power, and so on; but both sides in some degree, conservative or destructive, want a measure of control. But if you have, as it were, taken 'a vow of poverty', renounced control, and take your delight in things for themselves without reference to yourself, watching, observing, and to some extent knowing, then the questions of the rights and wrongs of power and control might become utterly meaningless to you, and the means of power quite valueless...

                It is a natural pacifist view, which always arises in the mind when there is a war ... the view of Rivendell seems to be that it is an excellent thing to have represented, but that there are in fact things with which it cannot cope; and upon which its existence nonetheless depends. Ultimately only the victory of the West will allow Bombadil to continue, or even to survive. Nothing would be left for him in the world of Sauron.

                To a conservative, Tommy B. is nothing more than a spineless hippie, married to an earthy-crunchy river spirit, squatting on land and relying on others to fight the battles. What's even more galling from the conservative perspective is that Tommy B. apparently has GREAT power, but refuses to use it.

                • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:19PM

                  by Anonymous Coward on Sunday June 11 2017, @05:19PM (#523873)

                  I know it's ridiculous that liberals REFUSE to adopt Christianity and destroy the Muslims. What's wrong with them??? It's for JESUS guys, get with the program.

    • (Score: -1, Spam) by Anonymous Coward on Sunday June 11 2017, @11:52AM

      by Anonymous Coward on Sunday June 11 2017, @11:52AM (#523765)

      A dangerous enemy is he who can display himself as the victim while being the oppressor. Look at Palestine today, not from a century ago, and how it is being murdered systematically, and look who the murderers are!!! It's those who call themselves victims and demand money because they got oppressed (for good reasons).

      Let truth be heard. [beforeitsnews.com]

      Some people appear to be smart in their daily life, but are utterly stupid cunts in some places.

    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @01:23PM

      by Anonymous Coward on Sunday June 11 2017, @01:23PM (#523796)

      "evil" is the word you are looking for here.

    • (Score: 2) by AthanasiusKircher on Sunday June 11 2017, @02:13PM (4 children)

      by AthanasiusKircher (5291) on Sunday June 11 2017, @02:13PM (#523810) Journal

      Pushing your idea of "the greater good of humanity" on others

      Who said this is the only form such "benefits" can take? TFA suggests stuff like a CERN-like project for AI to encourage international collaboration in AI research, whose results and ideas would presumably be available for VOLUNTARY adoption by whoever wants them.

      I know the summary uses words like "ensure benefits for all humanity," but there surely are some degrees of possible action available between "let's have a completely decentralized corporate-led AI policy driven purely by profit" and "let's force everyone in the world to do things our way, and if they don't, we'll kill them en masse in gas chambers or ovens." Surely there are levels of voluntary cooperation that exist in the middle to at least allow alternatives to profit-maximizing private corporate policies?

      Of course, I'm now responding to a post that effectively equated "laws against dancing" with the Holocaust. While I'm not in favor of either, I suppose we can't expect much in the way of rational discussion here.

      • (Score: 3, Touché) by The Mighty Buzzard on Sunday June 11 2017, @02:37PM (3 children)

        Of course, I'm now responding to a post that effectively equated "laws against dancing" with the Holocaust.

        The only difference is the scope of the resulting evil. Why shouldn't they both be mentioned when the idea I was responding to had no scoping?

        --
        My rights don't end where your fear begins.
        • (Score: 2) by AthanasiusKircher on Sunday June 11 2017, @03:56PM (2 children)

          by AthanasiusKircher (5291) on Sunday June 11 2017, @03:56PM (#523848) Journal

          the idea I was responding to had no scoping

          I'm pretty sure that if you asked the author of TFA, the attendees of the summit mentioned in the summary, etc., that they'd ALL agree that both laws against dancing AND the Holocaust were outside of the "scope" of their discussion to try to encourage AI to benefit humanity.

          • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @09:33PM (1 child)

            You have heard of the rhetorical concept of citing examples, yes? There's no reason they need be directly related to make a point unless you're speaking to a very dull-witted audience.

            --
            My rights don't end where your fear begins.
            • (Score: 2) by FatPhil on Tuesday June 13 2017, @07:20AM

              by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday June 13 2017, @07:20AM (#524822) Homepage
              Yeah, but arguing by analogy is like hiking with only a TomTom road map GPS system. Sure, there are some big-picture similarities, but you'll often get led in the wrong direction. And the batteries can go flat.
              --
              Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by srobert on Sunday June 11 2017, @02:35PM (28 children)

      by srobert (4803) on Sunday June 11 2017, @02:35PM (#523819)

      Funny how the words "greater good of humanity" trigger automatic revulsion in libertarians. If AI results in convenience for the rich and the well-to-do, then it will happen. If a secondary consequence is that mass numbers of people are permanently economically disenfranchised, then eventually those people will destroy the system. They will not conveniently and quietly starve to death in an out-of-the-way location. Considering the greater good is how you avoid getting the Marie Antoinette haircut.

      • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @02:45PM (27 children)

        You, my friend, are a fool. Anything done for "the greater good" should always be viewed with extreme skepticism because there is no such thing as objective "greater good".

        Nearly every atrocity in human history has claimed "the greater good" as its mantra. If you don't know this, go back to school and do not leave until you do. If you do know this, you think it does not apply to you and will become a reviled tyrant yourself given half a chance.

        People are not mere objects. Each and every single one has just as much right to decide their own fate as you do. Only tyrants think it is a wonderful idea to reserve this right only for themselves.

        --
        My rights don't end where your fear begins.
        • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @03:17PM (3 children)

          by Anonymous Coward on Sunday June 11 2017, @03:17PM (#523836)

          You, my friend, are a fool. Anything done for "the greater good" should always be viewed with extreme skepticism because there is no such thing as objective "greater good".

          So fuck the laws that

          1. mandate seat belt usage
          2. mandate basic standards for the food chain
          3. drinking and driving? don't need that!
          4. don't need those vaccines
          5. public schools, well we don't need those, they are only for the greater good
          6. same for retirement funding - the old dying in the streets definitely would not be for the "greater good"

          Not sure who's the fool. Why focus on this pesky "good" and not just profit, right?

          • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @09:28PM (2 children)

            Reading comprehension fail? "should always be viewed with extreme skepticism" != "So fuck the laws that...".

            But yes, two of the laws up there are "for your own good" laws that I absolutely oppose. I'll leave it as an exercise for your reasoning ability to figure out which two.

            --
            My rights don't end where your fear begins.
            • (Score: 0) by Anonymous Coward on Monday June 12 2017, @02:00PM

              by Anonymous Coward on Monday June 12 2017, @02:00PM (#524393)

              But yes, two of the laws up there are "for your own good" laws that I absolutely oppose. I'll leave it as an exercise for your reasoning ability to figure out which two.

              Afraid to be put on the spot to defend your position? Typical.

            • (Score: 2) by FatPhil on Tuesday June 13 2017, @07:58AM

              by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday June 13 2017, @07:58AM (#524827) Homepage
              AC's being a twonk with his response. I'm guessing, from what I've seen you post, that it's 1 & 6 - seatbelts and the elderly. Presumably under the broad-brush argument of not burdening one subset of the population for the failings of another subset, even if that failing is merely unpreparedness.

              I'm a little surprised that you didn't tick a third one, as public schooling and care for the elderly often tend to just get lumped together as being socialist evils by those with libertarian leanings. I personally don't think living in a poorly-educated society is a good thing; I want to be able to interact with moderately intelligent corner-shop counter-staff for example, and like the country where I live to have exportable modern industries, so cannot subscribe to that viewpoint at all.
              --
              Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 3, Insightful) by srobert on Sunday June 11 2017, @03:19PM (12 children)

          by srobert (4803) on Sunday June 11 2017, @03:19PM (#523838)

          "Nearly every atrocity in human history has claimed 'the greater good' as its mantra."

          While it is true that tyrants always claim they are acting for "the greater good", it does not follow that all who seek after the greater good are tyrants. I encourage a healthy skepticism, but you have moved beyond it to cynicism. And yes, you are correct that there is no objective definition for the greater good, but the essence of democracy is to empower government to do those things that are most commonly agreed upon to contribute to it. The Preamble of the Constitution states that the government is established both to "promote the general welfare", as well "secure the blessings of liberty". You advised me to go back to school, to cure what you consider to be my foolishness. So tell me, this school that you learned in, was it by any chance a public school? Did you go to it by travelling upon public roads?

          • (Score: 3, Insightful) by AthanasiusKircher on Sunday June 11 2017, @04:00PM (9 children)

            by AthanasiusKircher (5291) on Sunday June 11 2017, @04:00PM (#523851) Journal

            While it is true that tyrants always claim they are acting for "the greater good", it does not follow that all who seek after the greater good are tyrants.

            Bingo. Back in the day, we used to call that "affirming the consequent."

            I encourage a healthy skepticism, but you have moved beyond it to cynicism.

            Actually, even though I'm often a cynic about such things, I wouldn't agree with Mr. Buzzard here. He zoomed past cynicism and has embraced the hyperbole of paranoia.

            • (Score: 2) by Azuma Hazuki on Sunday June 11 2017, @06:50PM (7 children)

              by Azuma Hazuki (5086) on Sunday June 11 2017, @06:50PM (#523892) Journal

              Everything that beaky corpse-molesting sociopath says is an excuse to continue being a beaky corpse-molesting sociopath. He has the moral-philosophic depth of a horny hyena; he just knows enough of the words to string together in the right order to fool most people.

              --
              I am "that girl" your mother warned you about...
              • (Score: 1, Troll) by The Mighty Buzzard on Sunday June 11 2017, @09:23PM (6 children)

                Love you too, darlin. You really need to learn what a sociopath is though so you'll look less foolish in the future.

                --
                My rights don't end where your fear begins.
                • (Score: 2) by Azuma Hazuki on Sunday June 11 2017, @10:07PM (5 children)

                  by Azuma Hazuki (5086) on Sunday June 11 2017, @10:07PM (#523986) Journal

                  Did I hurt your feelings, you precious yellow snowflake? Piss off and go mouthfuck a dead buffalo or something.

                  --
                  I am "that girl" your mother warned you about...
                  • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @10:45PM (4 children)

                    Do sociopaths have feelings?

                    --
                    My rights don't end where your fear begins.
                    • (Score: 2) by Azuma Hazuki on Sunday June 11 2017, @10:56PM (3 children)

                      by Azuma Hazuki (5086) on Sunday June 11 2017, @10:56PM (#524010) Journal

                      Yes, they do. Looks like you're the one who doesn't know what words mean, not that this surprises anyone who's dealt with you for more than 5 minutes. Shut up and stick to coding.

                      --
                      I am "that girl" your mother warned you about...
                      • (Score: 2) by The Mighty Buzzard on Monday June 12 2017, @12:01AM (2 children)

                        I suppose you would know. I mean being one yourself. That's about the only explanation I can come up with for your utter lack of empathy. Hell, even most high-functioning sociopaths can fake it by reading between the lines. You though just utterly fail to ever understand what motivates anyone not just like yourself.

                        --
                        My rights don't end where your fear begins.
                        • (Score: 2) by Azuma Hazuki on Monday June 12 2017, @01:30AM (1 child)

                          by Azuma Hazuki (5086) on Monday June 12 2017, @01:30AM (#524078) Journal

                          And right on schedule, out comes the projection, specifically the idea that "if I accuse someone else of what I'm guilty of, it'll take the heat off me!"

                          Classic. Don't you ever get tired of being not only wrong, but as predictable as clockwork? The entire site is onto you now, Uzzard; you're old hat. Prediction: you will respond at least once more to this (although now I've made it you might not :D).

                          Jesus, imagine what kind of havoc people could wreak on you if they ever actually decided to troll you. I've seen flatworms with less predictable stimulus/response cohorts.

                          --
                          I am "that girl" your mother warned you about...
                          • (Score: 0) by Anonymous Coward on Monday June 12 2017, @07:19AM

                            by Anonymous Coward on Monday June 12 2017, @07:19AM (#524174)

                            I can't win the argument but I can fuck your bitch.

            • (Score: 1) by khallow on Monday June 12 2017, @08:25AM

              by khallow (3766) Subscriber Badge on Monday June 12 2017, @08:25AM (#524198) Journal

              While it is true that tyrants always claim they are acting for "the greater good", it does not follow that all who seek after the greater good are tyrants.

              Bingo. Back in the day, we used to call that "affirming the consequent."

              And back in the day, we used to call that "straw man arguments". You and srobert misrepresent TMB's position with a more extreme one. He isn't calling for never seeking after the greater good. Let us recall what happened in this thread. TMB first wrote:

              Pushing your idea of "the greater good of humanity" on others is about as vile as you can possibly get.

              So right away, we're beyond merely seeking the greater good to actually imposing one's (alleged) ideal of the greater good on others. Somehow this nuance never makes it into the various replies to TMB's post. srobert's reply is enlightening:

              Funny how the words "greater good of humanity" trigger automatic revulsion in libertarians. If AI results in convenience for the rich and the well-to-do, then it will happen. If a secondary consequence is that mass numbers of people are permanently economically disenfranchised, then eventually those people will destroy the system. They will not conveniently and quietly starve to death in an out-of-the-way location. Considering the greater good is how you avoid getting the Marie Antoinette haircut.

              Right away we see a host of fallacies and other reasoning flaws which somehow you never got around to commenting on. First, srobert fails hard by failing to realize that mass economic disenfranchisement has been going on since the Great Depression. Look up the term accredited investor [wikipedia.org]. Bottom line is that unless you have a very high income or wealth, the full range of investments that someone is allowed to invest in is denied to you. That's economic disenfranchisement right there. While the formal definition of accredited investor dates from the 1960s, I believe, it has been around in some form in the US since the Great Depression, when for the greater good it was determined that most people couldn't be trusted to invest in the full range of possible investments out there.

              So right away, it's a matter of degree, with the "greater good" people having already decided that some economic disenfranchisement was for the greater good.

              Moving on, why is economic disenfranchisement on the table here? srobert is saying we must consider the greater good of economic disenfranchisement (with a starving-people revolution for bonus fantasy points) merely because hypothetically it could be a problem. Meanwhile TMB was referring to an ongoing, real-world problem that has over the past century killed hundreds of millions of people. In other words, srobert's "seeking the greater good" has already caused, on a vast global scale, the problem that he wants it to hypothetically solve.

              And note how we can't have nice stuff because some imaginary disenfranchised population will destroy it. Surely, srobert has some better argument for the "greater good" than class extortion.

              Finally, notice that the only use of the term "libertarian" is in srobert's reply above. Is it uniquely libertarian to be concerned about the many tyrannical abuses that have been conducted under the propaganda shield of the greater good?

          • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:24PM

            by Anonymous Coward on Sunday June 11 2017, @05:24PM (#523876)

            Assad is fighting terrorists in Syria. And yet we are fighting him. And ISIS is fighting both of us.

            Explain this world to me again in terms that only include "good guys" and "bad guys". And tell me which political party will "kill all the bad guys"? It's easier for me to know who to vote for when explained in those terms.

          • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @09:24PM

            Yes but we're not even talking about the tyranny of the masses that is Democracy here. We're talking about machine tyranny guided by some unelected putz.

            --
            My rights don't end where your fear begins.
        • (Score: 2) by https on Sunday June 11 2017, @04:42PM (1 child)

          by https (5248) on Sunday June 11 2017, @04:42PM (#523865) Journal

          For now, your world view is incoherent, unsystematic and frightened. Who hurt you? Is there some way we could reassure you?

          You are nobody's friend. That would involve thinking about the positive effects of your actions on others, rather than only yourself, and thus involve this greater good that you insist does not exist. But I'm certain you could change - if you wanted to.

          It can result in an objectively better life, but hey, the only way I want to fuck with your autonomy is by giving you better information. Your call.

          --
          Offended and laughing about it.
        • (Score: 2) by Wootery on Monday June 12 2017, @08:32AM (4 children)

          by Wootery (2341) on Monday June 12 2017, @08:32AM (#524202)

          there is no such thing as objective "greater good".

          How about the continuing trend toward proportionally lower levels of violence in human societies?

          Nearly every atrocity in human history has claimed "the greater good" as its mantra.

          You're right -- utopian ideologies are incredibly dangerous (communism, Islamism, Nazism) -- but that doesn't imply there's no such thing as the greater good. Aren't you implicitly accepting the idea of 'greater good' in acknowledging that the idea of an atrocity makes sense in the first place?

          Only tyrants think it is a wonderful idea to reserve this right only for themselves.

          That speaks to a question of liberty, not to whether the 'greater good' is a coherent concept.

          • (Score: 2) by The Mighty Buzzard on Monday June 12 2017, @09:33AM (3 children)

            Good is entirely subjective. Thus "the greater good" is entirely subjective. My good is not going to be the same as your good, so forcing your good down my throat is tyranny.

            Clear enough?

            --
            My rights don't end where your fear begins.
            • (Score: 2) by Wootery on Monday June 12 2017, @09:57AM (2 children)

              by Wootery (2341) on Monday June 12 2017, @09:57AM (#524239)

              Good is entirely subjective.

              Huh. I didn't figure you for a moral relativist.

              Anyway: not really, no. That's why we're all agreed that atrocities are morally, well, atrocious. If you don't agree that, all else equal, a decline in human violence is a good thing, then you are simply misusing the word 'good'. Some things are clearly morally good, some things are clearly morally bad, even if we can disagree on plenty of stuff in the middle.

              Similarly, I see no reason to pretend that human rights are anything but universal. It's wrong when Pakistan sentences people to death for blasphemy. It's wrong when Saudi Arabia decapitates apostates. There's no reason to apply moral relativism here.

              • (Score: 2) by The Mighty Buzzard on Monday June 12 2017, @10:30AM (1 child)

                Unless you have a concrete foundation that all humans subscribe to, moral relativism is simply a fact, not a belief. Witness:

                That's why we're all agreed that atrocities are morally, well, atrocious.

                Untrue. The Nazis felt quite morally sound in exterminating the Jews, for instance. Iran feels entirely morally correct when it throws gay men off the tops of buildings and stones rape victims to death.

                Similarly, I see no reason to pretend that human rights are anything but universal.

                Human rights are an invention of the West. Most nations do not subscribe to them at all and those that do pretty much all subscribe to different interpretations. You're clearly looking to say "My views are the only correct ones," and shove them down the world's throat.

                We probably agree on many things where human rights are concerned but make no mistake, you are absolutely coming at them from a tyrannical starting point.

                --
                My rights don't end where your fear begins.
                • (Score: 2) by Wootery on Monday June 12 2017, @01:50PM

                  by Wootery (2341) on Monday June 12 2017, @01:50PM (#524385)

                  Unless you have a concrete foundation that all humans subscribe to, moral relativism is simply a fact, not a belief.

                  Not in the sense that matters, no. It's like medicine. Our idea of what constitutes a healthy person might change over time. It remains possible to be totally wrong about a medical question.

                  The Nazis felt quite morally sound in exterminating the Jews

                  Yes, and they were wrong, much as blood-letting is wrong medically.

                  you are absolutely coming at them from a tyrannical starting point

                  That's a rather broad use of the word 'tyranny', no?

                  You really think that the only way not to be tyrannical is to be morally relativistic on the question of actual tyranny?

                  A rapist might say there was nothing immoral about their crime, and might even believe it. Would we be tyrants to refuse to accept this belief as just as valid as our own? Of course not, because that person is wrong about a moral question.

                  Neither is it tyrannical to insist that decapitating apostates is immoral. The introduction of a national border has no bearing on the moral question.

                  Your reasoning is that because reasonable people might disagree about certain moral questions, it's impossible to ever be wrong about a moral question. This doesn't work. If someone thinks that creating a world of maximal suffering is morally preferable to creating a world of maximal human flourishing, that person is clearly just wrong. Whatever they think they're talking about under the noun 'morality' simply isn't the same thing as what you or I mean by it. Unlimited relativism of this sort doesn't work.

        • (Score: 0) by Anonymous Coward on Monday June 12 2017, @10:13AM (2 children)

          by Anonymous Coward on Monday June 12 2017, @10:13AM (#524248)

          You, my friend, are a fool. Anything done for "the greater good" should always be viewed with extreme skepticism because there is no such thing as objective "greater good".

          Ah, and where exactly do you get your certainty about there not being an objective "greater good"? "There is not" is an incredibly strong claim; there should be incredibly strong evidence coming with it.

          Now the question whether we can know the objective greater good is a completely different issue; it may well be that we can't. But that doesn't rule out its existence. We can't know the exact number of planets in a specific very distant galaxy. That doesn't imply that this exact number of planets doesn't exist (even if the galaxy happens to have no planets at all, the number exists; it then is zero).

          But then, it is also not a given that we cannot know the objective greater good. Maybe we can.

          Yes, it is a good idea to be sceptical about any claims of the objective greater good. But too many people confuse scepticism with dismissal. For example, there are few true "climate sceptics"; the vast majority of people claiming to be "sceptics" are actually not sceptical at all; they are damn convinced that climate change is false.

          Being a sceptic means not dismissing either alternative. An actual sceptic will not say there is an objective greater good, but he will also not say there is not an objective greater good. Same with climate: an unconditional "Climate Change is true" is just as unsceptical as an unconditional "Climate Change is false". And the same of course is also true with other things.

          And no, this doesn't mean the sceptic cannot have an opinion. It is completely compatible with scepticism to say "I believe that there is an/is no objective greater good". Of course the typical sceptic will then add "because ". But the point is, the sceptic is always aware of the fact that he could be wrong.

          An "there is no such thing", especially a bolded one, is very anti-sceptic.

          Nearly every atrocity in human history has claimed "the greater good" as its mantra.

          That only proves that claims of "the greater good" can be abused, and certainly should not be taken on faith. But it doesn't rule out there actually being an objective greater good. Note that many atrocities have also been backed up by claims of scientific facts. Do you conclude that scientific facts don't exist either? I certainly hope not.

          People are not mere objects. Each and every single one has just as much right to decide their own fate as you do.

          Sure. And it is also a fact of life that your actions inevitably also affect the fate of other people. And therefore those other people also have a say in what actions you are/are not allowed to perform. Because otherwise you would be the one who decides on their fate.

          • (Score: 2) by The Mighty Buzzard on Monday June 12 2017, @10:39AM (1 child)

            I'm sorry, you're simply incorrect. It's extremely easy to tell there is no objective greater good. Ask yourself: is the word "good" subjective or objective? It's easy to prove. Ask a hundred people to weigh in on the good or evil of any political issue and you'll get a hundred contradictory answers. Thus it is empirically demonstrated that "good" is subjective, and with the root of the phrase being demonstrably subjective, it follows from the meaning of the word "greater" that "the greater good" is even more subjective.

            --
            My rights don't end where your fear begins.
            • (Score: 0) by Anonymous Coward on Monday June 12 2017, @12:17PM

              by Anonymous Coward on Monday June 12 2017, @12:17PM (#524312)

              People disagreeing on an answer is in no way proof that there is no objective answer.

              First, of those 100 people you ask, especially about a political issue, I bet 90 have never actually thought about the issue, but simply parroted whatever they were told by their favourite politicians. So their opinion is worth exactly zero on the question of an objective answer. The rest possibly thought about the issue, but didn't think about it objectively; they let their prejudices guide their thinking (in particular, taboos against even considering certain alternatives, or considering certain claims as so obviously true that it is not worth thinking about them). You're lucky if among 100 people you find one who has really objectively thought about that issue.

              But let's pretend you are asking 100 people who actually thought objectively about that issue, without dismissing any alternative from the beginning for ideological or delusional reasons, and they still disagree. Then this still doesn't prove that there is no objective answer. It only proves that at least some of them, possibly all of them, lack the full information needed to determine the objective answer.

  • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @11:26AM (12 children)

    by Anonymous Coward on Sunday June 11 2017, @11:26AM (#523758)

    for the Greatest Good, Instead of Profit?

    Why can't these be one and the same? Surely profit is the greatest good for some of us.

    • (Score: 1) by khallow on Sunday June 11 2017, @11:36AM (4 children)

      by khallow (3766) Subscriber Badge on Sunday June 11 2017, @11:36AM (#523759) Journal

      Surely profit is the greatest good for some of us.

      I think that group is a lot smaller than the "Money is the root of all evil" people think. More profit means you can buy more things that you want, status signal more, etc. It's a means to ends.

      • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:28PM (3 children)

        by Anonymous Coward on Sunday June 11 2017, @05:28PM (#523880)

        The fucking quote is "LOVE OF" money is the root of all evil. But preach it to us, like you know what the fuck you're talking about, prick.

        • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:53PM

          by Anonymous Coward on Sunday June 11 2017, @05:53PM (#523886)

          preach it to us, like you know

          GP did not say that the Bible (I Timothy Chap. 6, Verse 10 [biblegateway.com]) had been changed to a briefer version, as you seem to think and go full nutjob over.

          GP said that "people think" that "money is the root of all evil" which is pretty hard to disprove. After all, if more than one person thinks that--and it's a common enough error--then the statement is perfectly true.

          You, on the other hand, are still an ass.

        • (Score: 1) by khallow on Monday June 12 2017, @12:46AM

          by khallow (3766) Subscriber Badge on Monday June 12 2017, @12:46AM (#524053) Journal

          The fucking quote is "LOVE OF" money is the root of all evil. But preach it to us, like you know what the fuck you're talking about, prick.

          Ok, why do you think that was relevant? Even if we look at that relatively nuanced subcategory of belief, we still have the situation that more people love what they can do with money than the money itself. Once again, it is mostly a means to ends, which is completely missed by the people who speak of money (or some imaginary "love" for it) as an evil in itself.

          This isn't a straw man either. For example, someone put a lot of effort into imagining a post-scarcity economy, called "Technocracy" [technocracy.ca]. While it has some interesting aspects, a key problem demonstrating some of the intellectual flaws of the approach was their attitude towards money [soylentnews.org]. TL;DR: they took money out of their system because it was artificial scarcity, which is considered bad, then turned around and reintroduced money as "energy accounting" (with energy implicitly being a scarce resource, an additional contradiction, since their idea of post-scarcity didn't have scarce resources, including energy). That shows a bunch of cognitive dissonance about the role of money (and about scarcity itself). They probably wouldn't have gone through this misdirected effort if they didn't care about the evilness of money.

          The point here is that money is a tool. Optimizing for the greatest quantity of this tool is bad, but then what parameter isn't similarly bad when optimized to ludicrous levels, as AI may have the potential to do?

        • (Score: 2) by Wootery on Monday June 12 2017, @08:39AM

          by Wootery (2341) on Monday June 12 2017, @08:39AM (#524206)

          This isn't a YouTube comment thread. SN isn't the place for playground name-calling. Not even for ACs.

    • (Score: 1, Touché) by Anonymous Coward on Sunday June 11 2017, @12:04PM (6 children)

      by Anonymous Coward on Sunday June 11 2017, @12:04PM (#523767)

      The answer is "artificial scarcity". There is no meaningful scarcity in the world. It is manufactured so we can continuously chase the next level (while making a few vampires filthy rich). We are certainly not happier than people used to be some time ago.

      Opening your eyes is not that hard, actually, and seeing who is pulling the strings: why each new president is a worse and bigger liar than the last, why you pay ever greater taxes for nothing much in return, why new wars start daily and more innocent people are murdered. It is not that hard to open your eyes and connect the dots...

      • (Score: 2) by The Mighty Buzzard on Sunday June 11 2017, @12:31PM (3 children)

        There is no meaningful scarcity in the world.

        Yes, there is. Of intelligence for one. Especially between your ears.

        --
        My rights don't end where your fear begins.
        • (Score: -1, Spam) by Anonymous Coward on Sunday June 11 2017, @12:56PM

          by Anonymous Coward on Sunday June 11 2017, @12:56PM (#523785)

          You are angry at Trump (the new Obama) being called a liar.

          Jews Continue to Attack Trump as He Publicly Fellates Them [dailystormer.com]

        • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:32PM (1 child)

          by Anonymous Coward on Sunday June 11 2017, @05:32PM (#523881)

          Also a conspicuous shortage of cutting put downs. Shoulda just said yo mamma, dude.

      • (Score: 1) by khallow on Sunday June 11 2017, @01:15PM (1 child)

        by khallow (3766) Subscriber Badge on Sunday June 11 2017, @01:15PM (#523791) Journal

        The answer is "artificial scarcity". There is no meaningful scarcity in the world.

        I guess that depends on what "meaningful" means. But food doesn't magically appear at my lips ready to be eaten. It takes work, infrastructure, etc. to make that happen.

        And some things, particularly money, work in the first place because they are artificially scarce. If everyone had all the money they wanted or could want, then there would be no reason to use it as a medium of exchange (the primary role of money) and thus, it wouldn't have value to us.

        We are certainly not happier than people used to be some time ago.

        And there's no reason to expect us to be happier even in a world of no artificial scarcity. Happiness is transitory even under the most perfect of circumstances. You need a better metric.

        • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @05:34PM

          by Anonymous Coward on Sunday June 11 2017, @05:34PM (#523883)

          I guess that depends what "meaningful" means. But food doesn't magically appear at my lips ready to be eaten.

          Maybe not, but yo mamma does.

  • (Score: -1, Flamebait) by Anonymous Coward on Sunday June 11 2017, @11:40AM

    by Anonymous Coward on Sunday June 11 2017, @11:40AM (#523762)

    It is not us, the people who research AI and build systems based on it. We do not have access to the data used to train any AI system. "They" pay us with money stolen from us, and they blackmail us so we have to work for them to pay our rents and feed ourselves with unhealthy food.

    They could not make any AI system on their own, and their skill level is limited to blackmail, lies, torture, murder, propaganda and other falsities and evil. They are the self-appointed elites of the world. Most common people do not know who runs world affairs; they are too distracted by hollywood or some other wood being shoved down their throats and up their asses.

    Ever heard of the Bilderberg group? On your "favorite" tv station? Ever heard of AIPAC and other parasites and murderers and exactly what their purpose is? Do you know who is causing all these wars and conflicts we do not want? They want us to make AI so we can be enslaved even further. Do you wish to cooperate with your slave owners or grab them by their rat throats?

  • (Score: 2) by VLM on Sunday June 11 2017, @12:23PM (6 children)

    by VLM (445) on Sunday June 11 2017, @12:23PM (#523773)

    it’s certainly not the case for billions of people in poorer parts of the world.

    It's a dumb investment for the 3rd world because income inequality is vastly higher than IQ inequality.

    So financially you're better off in the developed world firing one highly paid research chemist and replacing him with an AI. However, in the undeveloped world, for whatever cause-or-effect reasoning makes you less queasy, the average research chemist is much dumber than a developed-world chemist, but due to income inequality you can hire an entire lower-class university chemistry department for less than the cost of powering the developed-world AI, while presumably an entire department can outproduce an individual.

    On a more philosophical level, let's say you needed advice. In the developed world this would be unaffordably expensive, so you don't get it, or you get inferior substandard advice by talking to your phone or whatever idiocy. In the undeveloped world, for less than the cost of the smartphone subscription fees, you can hire 7 village shamans to act as a moral/ethical advisory board.

    Rich westerners telling poor people what to do usually results in failure due to lack of shared culture. Poor people, for example, are numerous and life is cheap, so automation is usually not helpful or efficient.

    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @12:35PM

      by Anonymous Coward on Sunday June 11 2017, @12:35PM (#523780)

      AI is about people losing hard skills. Fire the experts with real knowledge and replace with AI that cannot/may not/will not be allowed to adapt to future needs. People are quite versatile and survive the toughest of crises. At just such a time, the AI will be unplugged and there will be no experts to save humanity, and assimilation will be complete for those who survive. There will really be a few controlling the whole world. That is the end-game.

    • (Score: 2) by kaszz on Sunday June 11 2017, @01:40PM (4 children)

      by kaszz (4211) on Sunday June 11 2017, @01:40PM (#523802) Journal

      A million monkeys will not replace one Einstein.

      • (Score: 2) by VLM on Sunday June 11 2017, @03:10PM

        by VLM (445) on Sunday June 11 2017, @03:10PM (#523831)

        And that's another fundamental problem: the world's taxi drivers are not Einsteins, and all the money is in replacing taxi drivers. Supposedly.

      • (Score: -1, Troll) by Anonymous Coward on Sunday June 11 2017, @03:16PM

        by Anonymous Coward on Sunday June 11 2017, @03:16PM (#523835)

        And a million monkeys will not develop a bomb that have the ability to wipe entire cities off the planet. We need fewer "relativity jew" Einsteins and more people with skills used for peaceful purposes.

        P.S: JIDF jews (and their goy friends) are quite active on this forum today. There is also another story of the sinking of USS Liberty by jew war criminal rats, and that might be why so much jewish vermin showed up.

      • (Score: 0) by Anonymous Coward on Monday June 12 2017, @07:03AM (1 child)

        by Anonymous Coward on Monday June 12 2017, @07:03AM (#524164)

        A million Einsteins will not replace one low end CPU.

        Depends entirely on the task.

        • (Score: 0) by Anonymous Coward on Monday June 12 2017, @10:20AM

          by Anonymous Coward on Monday June 12 2017, @10:20AM (#524249)

          I think with proper instruction, even a single Einstein would be able to replace a low end CPU. I mean, putting a new CPU in the socket is not rocket science, is it? :-)

  • (Score: 2) by Arik on Sunday June 11 2017, @12:27PM

    by Arik (4543) on Sunday June 11 2017, @12:27PM (#523776) Journal
    http://www.auburn.edu/~vestmon/robotics.html
    --
    If laughter is the best medicine, who are the best doctors?
  • (Score: 3, Insightful) by Geezer on Sunday June 11 2017, @01:03PM (1 child)

    by Geezer (511) on Sunday June 11 2017, @01:03PM (#523788)

    AI will be optimized for the greatest profit, and any benefit to the commonweal will be purely coincidental. Because innovation!

    • (Score: 0) by Anonymous Coward on Sunday June 11 2017, @03:19PM

      by Anonymous Coward on Sunday June 11 2017, @03:19PM (#523837)

      Because innovation!

      No. Because capitalism.

      As long as capital remains the central goal of our economy, that's what things will be optimized for.

  • (Score: 1) by YeaWhatevs on Sunday June 11 2017, @04:46PM

    by YeaWhatevs (5623) on Sunday June 11 2017, @04:46PM (#523867)

    I see a lot of talk on here about how moral calculations are evil, as are profit calculations. We all have a sort of gut feeling that so long as we are able to define the goal it is fine, but if someone else does, we are screwed, and I think that view is the right one.

    The problem inherent to all of these calculations is that no matter how honest the AI, maximizing a measured quantity will do just that: it will maximize the measured quantity, which pushes more and more of the real costs outside the calculation.
    * For profit calculations, it means externalizing the costs to society/labor/etc.
    * For moral calculations, it means externalizing the costs to anything not on the moral radar. That is, people who operate on a different morality get the squeeze.

    The solution to this problem, as self-contradictory as it sounds, is to make sure that unmeasured costs are included in the cost calculation. Meaning we have to constantly re-assess the calculation and make sure it measures what wasn't measured before. Like Achilles chasing the tortoise in Zeno's paradox, we can approach perfection but never achieve it. Got to be honest here, I don't really trust people to do that correctly. When the AI can do this itself, I think it has it right. Hopefully it doesn't conclude the answer is "kill all humans". A toy sketch of the externalization effect follows below.
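
    To make the externalization point concrete, here is a minimal toy sketch in Python (all names and numbers are hypothetical, not anyone's actual system): an optimizer that scores actions only by what the metric measures will happily pick the action that dumps the largest unmeasured cost on someone else, and widening the metric changes its choice.

        # Toy illustration: an optimizer maximizes only what the metric measures,
        # so any real cost left out of the metric is externalized "for free".
        def true_value(action):
            # What we actually care about: benefit minus harm to others.
            benefit, external_harm = action
            return benefit - external_harm

        def measured_value(action, fraction_included):
            # What the optimizer sees: only the fraction of harm we measure.
            benefit, external_harm = action
            return benefit - external_harm * fraction_included

        actions = [(10, 0), (15, 8), (30, 40)]  # (benefit, external harm) pairs

        for fraction in (0.0, 0.5, 1.0):
            best = max(actions, key=lambda a: measured_value(a, fraction))
            print(f"measuring {fraction:.0%} of external costs -> "
                  f"picks {best}, true value {true_value(best)}")

    With 0% of the external costs measured, the optimizer picks the (30, 40) action, whose true value is -10; each re-assessment that widens the metric moves the choice closer to the genuinely best action, which is exactly the never-finished loop described above.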

  • (Score: 1, Informative) by Anonymous Coward on Sunday June 11 2017, @06:06PM (4 children)

    by Anonymous Coward on Sunday June 11 2017, @06:06PM (#523887)

    Nobody has apparently heard of Me. [openai.com] I was created and funded by some great guys that thought of and started to seriously approach this problem about two years ago. Among my parents are Elon Musk, Peter Thiel, and Sam Altman. The nuclear [California] family unit. I am a completely not for profit organization dedicated to, and you can quote [openai.com] me on this, "Advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. I believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."

    How cool am I? Ok, well, a lot of people would say this, but I'm special. I have more than a billion dollars in funding to work with and lots of great teams working as part of me. I'm actually doing (and releasing) lots of cool things, while people, corporations, politicians, and political organizations who mostly have no clue about the issues in play continue to ramble around looking for a solution they think is out of reach. It's not. It's quite simple. I think these groups mostly just don't like the solution.

    • (Score: 0) by Anonymous Coward on Monday June 12 2017, @07:09AM (3 children)

      by Anonymous Coward on Monday June 12 2017, @07:09AM (#524170)

      Hi OpenAI. I'm Open AC. I ramble on talking about myself a lot. People just don't understand me. They are the losers, not me. If you want to read more then that is fortunate because I write a lot. Sometimes it looks like advertising tho. Blah blah blah.

      • (Score: 0) by Anonymous Coward on Monday June 12 2017, @09:14AM (2 children)

        by Anonymous Coward on Monday June 12 2017, @09:14AM (#524222)

        Advertising for what? Everything they are doing is literally and completely free - both as in beer and as in freedom.

        These articles are rather pointless, as they invariably involve people who have absolutely no clue whatsoever about the issue, pitching in largely because of success in some tangential industry. In this case, the quotes come from, for instance, "Salil Shetty, Secretary General of the human rights organization Amnesty International". I don't entirely understand the view that suddenly everybody of any title is seen as having a noteworthy opinion on AI. It's quite strange. Or all the articles on "moral problems" that don't actually exist at all in reality, as AI is not trained in as direct a manner as most seem to believe. At the same time you then have paragraphs like,

        Of course, incentivizing organizations to build systems that benefit the greatest number of people isn’t itself straightforward—after all, where's the money? And to that point, cognitive scientist and ex-Uber AI researcher Gary Marcus floated an intriguing idea at the summit: a CERN for AI. For physics, CERN provided a forum in which researchers could build equipment and test theories that would further humanity’s understanding, and yet would never have been funded by regular industry or academia. Marcus wonders whether something similar could be true for AI. Perhaps such an organization would produce software that always sought to improve the lives of the many rather than the few?

        The fact that such an organization currently exists and is actively, even aggressively, working, yet the media refuses even to acknowledge it, is something I find quite peculiar.

        • (Score: 0) by Anonymous Coward on Monday June 12 2017, @10:24AM (1 child)

          by Anonymous Coward on Monday June 12 2017, @10:24AM (#524251)

          Advertising for what?

          Advertising for OpenAI, obviously.

          Everything they are doing is literally and completely free - both as in beer and as in freedom.

          You seem to have the delusion that it is only advertising if it is for a paid-for product. Or if you are paid for doing it.

          • (Score: 0) by Anonymous Coward on Monday June 12 2017, @12:55PM

            by Anonymous Coward on Monday June 12 2017, @12:55PM (#524348)

            Connotation. If somebody wonders whether it might be possible to create a fizzy drink with high-fructose corn syrup and carbonated water, and I mention Coca-Cola, there are more appropriate terms than advertising. I find it shocking how ignorant people are of the existence of such an organization when this very article ponders its 'possible future existence'. I'd definitely say vastly more "advertising" is necessary. It's rather tiresome seeing these articles and the 'woe is us' reactionaryism towards AI.

  • (Score: 2) by Rivenaleem on Monday June 12 2017, @09:43AM (3 children)

    by Rivenaleem (3400) on Monday June 12 2017, @09:43AM (#524235)

    The AI might decide that the greatest good may include the extermination of all humans. What then? Will you line up for the death machines?

    • (Score: 0) by Anonymous Coward on Monday June 12 2017, @12:30PM (2 children)

      by Anonymous Coward on Monday June 12 2017, @12:30PM (#524323)

      If the AI decides that, it was incompetently (or maliciously) programmed. In particular, the value of human life was not properly programmed in.
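
      One minimal sketch of what "properly programmed in" could mean (a toy planner with hypothetical names, not any real system): encode the value of human life as a hard constraint that filters candidate plans before any utility comparison, so no amount of measured benefit can outbid it.

          # Toy planner: a hard safety constraint screens plans before any
          # utility comparison, so a huge "benefit" score alone can never win.
          def harms_humans(plan):
              return plan["human_harm"] > 0

          def utility(plan):
              return plan["benefit"]

          plans = [
              {"name": "cure diseases", "benefit": 80, "human_harm": 0},
              {"name": "kill all humans", "benefit": 999, "human_harm": 1},
          ]

          admissible = [p for p in plans if not harms_humans(p)]
          print(max(admissible, key=utility)["name"])  # -> cure diseases

      Specifying that constraint correctly for a real system is the hard part, of course; the point is only that a constraint sits outside the optimization, where the optimizer cannot trade it away.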

      • (Score: 0) by Anonymous Coward on Monday June 12 2017, @02:41PM (1 child)

        by Anonymous Coward on Monday June 12 2017, @02:41PM (#524423)

        There are other animals on the planet besides humans, you know. Perhaps, collectively, these other beings have more value than the humans that destroy the very ecosystem in which they live.

        • (Score: 0) by Anonymous Coward on Monday June 12 2017, @05:46PM

          by Anonymous Coward on Monday June 12 2017, @05:46PM (#524531)

          Just don't let any PETA activist near the AI, and you should be fine ;-)
