
posted by Fnord666 on Tuesday May 29 2018, @01:33AM
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by looorg on Tuesday May 29 2018, @01:58AM (2 children)

    by looorg (578) on Tuesday May 29 2018, @01:58AM (#685391)

    Short answer? Maybe. I don't believe in algorithmic salvation. I'm sure something good will come out of it; if nothing else it will do most of the boring maths. But that notion that if we just tame the machine-god AI it will take all our data and create utopia? Nope, not going to happen.

    For some time it might, or probably will, create more problems, new or old, than it solves.


    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @02:48AM

      by Anonymous Coward on Tuesday May 29 2018, @02:48AM (#685415)

      Mathematica already does "the boring maths"

  • (Score: 2) by c0lo on Tuesday May 29 2018, @02:04AM (1 child)

    by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @02:04AM (#685392) Journal

    Can AI solve all the problems?
    Nope - even assuming the current AI (NN essentially) is flawless** in what it does (recognizing patterns).
    It's like saying "Google search is all you need as knowledge" - the illusion that all the knowledge has already been discovered, posted on the Web, indexed by Google, and relevantly ranked while at it.
    The danger: if humanity gives in to this illusion, no new knowledge will ever be created.

    ---

    ** it is not flawless: the adversarial attacks [medium.com] are [arxiv.org] easy [arxiv.org] to craft [github.com].
    And this, I assert, is inherent to the nature of the NN approach - as you create, by training, "desired attractors/optima" on the hyper-surface of the solution space (where the NN will converge when starting from an initial state), you also create a bunch of other "undesired" ones - maybe shallower, but still able to divert the recognition/classification onto a wrong path.
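    A minimal sketch of how easily such a perturbation is crafted, using a linear classifier as a stand-in for an NN (the weights, input, and epsilon choice here are all invented for illustration):

```python
import numpy as np

# FGSM-style adversarial perturbation against a toy *linear* classifier,
# standing in for a neural net. Weights and input are invented purely
# for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=16)              # "trained" weights
x = rng.normal(size=16)              # a legitimate input

def predict(v):
    return 1 if w @ v > 0 else 0     # simple threshold classifier

# For a linear score, the gradient w.r.t. the input is just w, so nudging
# each feature against sign(w) lowers the score (and vice versa). Pick
# epsilon just large enough to flip the decision's sign.
eps = (abs(w @ x) + 1e-3) / np.abs(w).sum()
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
x_adv = x + eps * direction          # per-feature change is only +/- eps

print("eps =", round(eps, 4), "| label flipped:", predict(x) != predict(x_adv))
```

    Real attacks against deep nets use the same idea, just with the gradient obtained by backpropagation instead of read directly off the weights.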

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by c0lo on Tuesday May 29 2018, @02:08AM

      by c0lo (156) Subscriber Badge on Tuesday May 29 2018, @02:08AM (#685393) Journal

      the adversarial attacks [medium.com] are [arxiv.org] easy [arxiv.org] to craft [github.com].

      FTFM(e)

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 4, Insightful) by The Mighty Buzzard on Tuesday May 29 2018, @02:20AM (35 children)

    Here's what AI can solve: things that are computationally very difficult but require no creativity or intuition. That's all. For everything else, humans FTW.

    --
    My rights don't end where your fear begins.
    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @02:25AM (21 children)

      by Anonymous Coward on Tuesday May 29 2018, @02:25AM (#685403)

      ...making popcorn...

      I think you are about to be torn apart, unless you have some pretty special definitions for creativity or intuition.

      • (Score: 2) by The Mighty Buzzard on Tuesday May 29 2018, @05:48AM (20 children)

        No special definitions needed. The entity doing the thinking simply has to be able to answer the question "why". It has to be able to understand rather than simply playing the odds. Computers are currently not capable of doing this and likely never will be.

        I have no doubt that eventually a computer will be able to crank out pop music that hits the top ten regularly. That is extremely formulaic already and not at all what I'm talking about. I'm speaking of actual creativity. Pulling something entirely new from the imagination and bringing it into existence. Given all the data in the world and infinite processing power, no computer would have ever spit out Smells Like Teen Spirit in 1991. It was simply too different from everything else on the charts at the time. A human had to do it because a computer could never have understood what it was to be a GenX human being back then, realized something entirely new was called for, and created it.
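        The "formulaic" half of that is indeed easy to mechanise - a toy sketch, with a made-up chord-transition table loosely echoing the ubiquitous I-V-vi-IV loop:

```python
import random

# Toy illustration of how formulaic chart pop can be: a first-order
# Markov chain over chord symbols. The transition table is invented,
# not learned from any real corpus.
transitions = {
    "I":  ["V", "IV", "vi"],
    "V":  ["vi", "I"],
    "vi": ["IV", "V"],
    "IV": ["I", "V"],
}

def progression(start="I", length=8, seed=1991):
    rng = random.Random(seed)        # seeded for reproducibility
    out = [start]
    while len(out) < length:
        out.append(rng.choice(transitions[out[-1]]))
    return out

print(" - ".join(progression()))
```

        It only ever recombines what the table already contains, which is exactly the parent's point: no such chain would have produced something outside its own training distribution.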

        --
        My rights don't end where your fear begins.
        • (Score: 4, Insightful) by Wootery on Tuesday May 29 2018, @09:04AM (17 children)

          by Wootery (2341) on Tuesday May 29 2018, @09:04AM (#685501)

          'Understand' is not a concrete idea, it's a constantly shifting goal.

          Consider chess AI. A century ago, I'm sure people would have said that any machine that can beat the chess masters can surely be said to 'understand' the game. These days, people don't consider chess AI to 'understand' the game.

          The moving-goalposts thing is a common theme with AI. Whenever someone cracks a problem using AI techniques, the response is "Well, that's a cute trick, but it's not real AI!" This happens decade after decade.

          'Understanding' and 'real AI' are not well-defined terms, they're just a reflection of public opinion.

          If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?

          likely never will be

          I see no reason to assume this. Our brains may be a very different sort of computer than human-built computers are, [youtube.com] but they're ultimately 'just' complex arrangements of molecules that process information. In principle at least, there's no reason to assume that we'll never be able to do with transistors, what we can already do with neurons. There's nothing magic about carbon, or about silicon.

          Another interesting take on this stuff is Dennett's idea of competence without comprehension [tufts.edu], but in this context, I'm not convinced that 'understanding' is meaningful in the first place.

          Perhaps we could define 'understanding' in terms of having a good command over the abstract concepts relating to a problem domain, but I don't think it's enough to dismiss the question with "No special definitions needed."

          • (Score: 2) by cubancigar11 on Tuesday May 29 2018, @02:58PM (6 children)

            by cubancigar11 (330) on Tuesday May 29 2018, @02:58PM (#685639) Homepage Journal

            I think a problem can be well defined as 'understood' when AI can change itself according to shifting goalpost without any external intervention. Right now the best solutions out there pertaining to AI are based on neural networks (which, btw, I think is pretty neat in itself); for any of those to be called Real(TM) AI, they will need (A) a mechanism to know that the goalpost has shifted (B) find that goal post (C) surpass it before it is shifted again.

            As of now, (A) is solved and (B) is done by humans. Until that changes, (C) will not be the focus of the debate.

            • (Score: 2) by Wootery on Tuesday May 29 2018, @05:34PM (5 children)

              by Wootery (2341) on Tuesday May 29 2018, @05:34PM (#685738)

              I think a problem can be well defined as 'understood' when AI can change itself according to shifting goalpost without any external intervention.

              That doesn't sound like 'understanding', it sounds more like online machine learning. [wikipedia.org]

              for any of those to be called Real(TM) AI, they will need (A) a mechanism to know that the goalpost has shifted (B) find that goal post (C) surpass it before it is shifted again.

              Sounds like a rather arbitrary threshold to set, and again, it seems to me that even the most primitive online machine learning algorithms already fit the bill, although they don't necessarily use the approach that you describe where there are discrete stages.
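              For concreteness, "online" here just means per-observation updates - a minimal perceptron sketch (the data stream and learning rate are invented for illustration):

```python
# Minimal sketch of *online* learning: the model updates after every
# single observation instead of retraining on a batch. Plain-Python
# perceptron; stream and learning rate are illustrative only.
def online_perceptron(stream, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for x, y in stream:                      # labels y are +1 or -1
        pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
        if pred != y:                        # adapt immediately on each mistake
            w[0] += lr * y * x[0]
            w[1] += lr * y * x[1]
            b += lr * y
    return w, b

# Linearly separable toy stream: the label is the sign of x0 - x1.
stream = [((1.0, 0.0), 1), ((0.0, 1.0), -1),
          ((2.0, 0.5), 1), ((0.5, 2.0), -1)] * 20
w, b = online_perceptron(stream)
```

              Note there are no discrete "detect goalpost / find goalpost / surpass goalpost" stages; the adaptation is folded into every step.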

              • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @04:18AM (4 children)

                by cubancigar11 (330) on Wednesday May 30 2018, @04:18AM (#686090) Homepage Journal

                Not arbitrary at all! This is how we measure intelligence in fellow human beings! We don't think that a plumber is intelligent because he can turn a screw; we think of him as intelligent when he figures out a problem on his own and solves it to an expectation that was set before the problem was even discovered.

                • (Score: 2) by Wootery on Wednesday May 30 2018, @09:03AM (3 children)

                  by Wootery (2341) on Wednesday May 30 2018, @09:03AM (#686178)

                  This is how we measure intelligence in fellow human beings!

                  Not really - we look at how good people are at solving problems. I don't see the need for any explicit model involving distinct stages and 'shifting goalposts'.

                  • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @09:36AM (2 children)

                    by cubancigar11 (330) on Wednesday May 30 2018, @09:36AM (#686193) Homepage Journal

                    That is because we have implicit understanding of what it means to be people. As much as you might dislike it, we have come to understand AI as having more than just how good it is at solving a particular problem. Otherwise a shell script can be better at solving particular problems than human, but yet it is not called AI.

                    • (Score: 2) by Wootery on Wednesday May 30 2018, @10:29AM (1 child)

                      by Wootery (2341) on Wednesday May 30 2018, @10:29AM (#686206)

                      So you agree intelligence tests have no 3-stage model?

                      Otherwise a shell script can be better at solving particular problems than human, but yet it is not called AI.

                      You've not phrased that very clearly, but I think your point is that extremely simple programs can be more effective at solving certain problems than even very intelligent people.

                      This is certainly true. When it comes to memorising numbers, multiplying numbers, etc, even very humble computers easily outperform the most intelligent humans. (This is what Daniel Dennett calls competence without comprehension.)

                      When we say 'AI', we tend to mean either

                      1. Software that attempts to solve a problem that previously no software system has solved
                      2. Software that uses machine-learning in at least some way
                      3. 'General artificial intelligence', which roughly speaking tends to mean it has the generality of intelligence that a human has (rather fuzzy, and we'll ignore for now that it will presumably be better than any human at multiplying numbers)

                      A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

                      None of this tells us what 'understanding' means. The best definition I can think of is - as I mentioned elsewhere in this thread - the ability to reason about the abstract concepts related to a problem-domain. (Of course, no current software system is capable of doing this.)

                      That's not all that precise, but it seems like a good starting point. I don't see that it would be reasonable to define understanding to only ever apply to humans. That's just cheating.

                      • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @01:11PM

                        by cubancigar11 (330) on Wednesday May 30 2018, @01:11PM (#686239) Homepage Journal

                        So you agree intelligence tests have no 3-stage model?

                        Huh? How did you get that feeling? I am critiquing your following statement:

                        Not really - we look at how good people are at solving problems.

                        by pointing out that how good someone is at solving a problem is not at all related to how we measure intelligence. Intelligence is about how novel your solution is, and not at all about how good that solution is.

                        When we say 'AI', we tend to mean either
                        1. Software that attempts to solve a problem that previously no software system has solved
                        2. Software that uses machine-learning in at least some way

                        Only if by 'we' you mean people working on AI. For me, 'we' means general populace and they don't care about machine learning.

                        A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

                        No, a pocket calculator doesn't qualify as AI because it doesn't have any way to recognize a problem and come up with a novel solution (or, in colloquial terms, to "understand" the problem); to put it in a testable way, it doesn't follow the model I am suggesting.

                        I think I realize where the disagreement is originating - I am talking about why a layman doesn't consider something to be an AI and you are talking about something that has already been done in the name of AI.

          • (Score: 2) by The Mighty Buzzard on Tuesday May 29 2018, @03:24PM (6 children)

            Chess is a finite problem that requires nothing but time to solve utterly. Most of life is not.

            And we're not talking competence here. Competence simply requires looking at what everyone else is doing and parroting or solving a problem given a sufficiently exhaustive input data set. Intuition is what handles everything else.

            We really don't need to define understanding here unless you're a machine yourself. Computers are incapable of it because it requires more than executing op codes and reading memory addresses. Humans are not incapable; they're born capable, it simply takes them time to acquire. Put in programming terms, every new event needs a method/function to deal with it. These have to be pre-programmed because you cannot release potentially dangerous machines into the wild with absolutely no idea what kind of solution they're going to come up with for a given problem. If they are pre-programmed, that's human understanding not machine understanding.

            Existence may technically be finite but it's finite beyond our ability to describe given our storage limitations.

            --
            My rights don't end where your fear begins.
            • (Score: 2) by maxwell demon on Tuesday May 29 2018, @05:56PM (1 child)

              by maxwell demon (1608) on Tuesday May 29 2018, @05:56PM (#685759) Journal

              Chess is a finite problem that requires nothing but time to solve utterly.

              Except that the time needed to completely solve it is long enough that the age of the universe seems like the blink of an eye in comparison.
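              A quick back-of-envelope check, using the standard rough figures (Shannon's ~10^120 estimate of the chess game tree, and a generously fast hypothetical machine):

```python
# Back-of-envelope arithmetic for the claim above: brute-forcing chess
# dwarfs the age of the universe. All figures are coarse estimates.
game_tree = 10**120            # Shannon's estimate of the game-tree size
positions_per_sec = 10**18     # a hypothetical exascale search machine
universe_age_sec = 4 * 10**17  # ~13.8 billion years, in seconds

seconds_needed = game_tree // positions_per_sec
universes_needed = seconds_needed // universe_age_sec
print(f"~10^{len(str(universes_needed)) - 1} universe lifetimes to exhaust chess")
```

              Roughly 10^84 universe lifetimes, so "the age of the universe seems like the blink of an eye" is, if anything, an understatement.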

              --
              The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by Wootery on Tuesday May 29 2018, @06:13PM (3 children)

              by Wootery (2341) on Tuesday May 29 2018, @06:13PM (#685769)

              Competence simply requires looking at what everyone else is doing and parroting or solving a problem given a sufficiently exhaustive input data set. Intuition is what handles everything else.

              Seems to me you've introduced another rather unclear and loaded term ('intuition'), and are gradually building a somewhat half-baked theory of mind.

              1. 'Parroting' isn't trivial, as we're seeing with non-'creative' AI tasks like self-driving cars, and voice-recognition
              2. There's no bright line between parroting and creativity. Does the work of an undergraduate math student count as 'creative'? Most people think not. How about a math researcher? They're both in the business of coming up with proofs they've not seen before, right? It's just that only the latter produces something worth publishing. Certainly, no current AI system could do the work of either. (Ignoring Gödel's incompleteness theorem for a moment.)
              3. By any sensible definition, every intelligent entity inspects a data-set, does some processing, and produces a result. Efficient intelligences do better than brute-force, and effective ones produce desirable results (by whatever metric). I don't need to define 'creativity' for that to be true: it's true of numerical addition, chess, and yes, of writing poetry, composing music, and battling it out on Soylent News :P

              We really don't need to define understanding here unless you're a machine yourself.

              Disagree. If we're not going to give it a reasonably clear definition, we can't meaningfully reason about it.

              Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

              That doesn't sound right. In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

              The earliest computer scientists never dreamed that computers could do what they do today. Let's not constrain ourselves to the limitations of modern hardware and software, when we're really concerned with the principle.

              Humans are not incapable; they're born capable, it simply takes them time to acquire

              So we humans are stateful machines capable of online machine learning, and we're pre-programmed with certain biases? Well sure. But I'm not seeing the 'in principle' difference between us and computers.

              These have to be pre-programmed because you cannot release potentially dangerous machines into the wild with absolutely no idea what kind of solution they're going to come up with for a given problem.

              People do that all the time. They're called 'parents'. We end up assigning moral responsibility to their new 'release', of course. Sometimes we even ritualise doing so. [wikipedia.org]

              Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

              Humans are very effective general learning machines. In principle, computers are capable of everything we're capable of. The universe supports the functionality of the human brain (obviously), and there's nothing magic about our substrate (neurons) vs theirs (transistors).

              Or do you really think that it would be impossible, even just in principle, to simulate the human brain using transistors?

              We already know that the inverse is possible: brains can simulate computers, it just takes an impractically long time. (That's why we build the things, after all.)

              If they are pre-programmed, that's human understanding not machine understanding.

              The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

              Existence may technically be finite but it's finite beyond our ability to describe given our storage limitations.

              I don't get you.

              • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:34PM (2 children)

                In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

                No. The programmers would have to understand precisely how the human brain worked for that to be possible. Neuroscientists don't even have this level of understanding at the moment. And if you don't think it needs to be precise, ask the fine folks over at Wine [winehq.org] to school you on emulation.

                Well sure. But I'm not seeing the 'in principle' difference between us and computers.

                Which is what I'd been trying to correct. It appears I've failed. Such is life.

                Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

                Abso-fucking-lutely. Try running mesa in software rendering mode sometime and you'll see why. Or try running a PS3 emulator on an x86 processor that's only clocked twice as fast as the original hardware. Apples and oranges matters a hell of a lot.

                The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

                No. The appearance of understanding is not the same as understanding. You can teach any fool enough of a raindance to be able to wire their house for electricity without teaching them why they're doing what they're doing. You're not going to let them loose as a licensed electrician like that though because they're either going to die or cause other people to die. Thus the apprenticeship and licensing requirements for electricians; we demand that they understand, not just ape what they see others doing. I'm not saying there's no utility to be found in teaching a machine a raindance or letting it raindance things of little importance or danger on its own but the capability is simply not the same, only the outcome.

                I don't get you.

                No, you don't. Which is why you should not be monkeying with AI. Ever.

                --
                My rights don't end where your fear begins.
                • (Score: 2) by Wootery on Wednesday May 30 2018, @02:09PM (1 child)

                  by Wootery (2341) on Wednesday May 30 2018, @02:09PM (#686268)

                  Come on Buzz, I was quite clear: in principle. It gets us nowhere for you to write about how difficult it would be. Those points are short-sighted and, frankly, rather obvious. You may as well remind me that no-one has yet successfully simulated a human brain.

                  It remains that there is no reason in principle that transistors can't do what neurons can.

                  Do you really want to assert that this is beyond the capability of any Turing-complete machine? It seems absurd. It's a physical process. You think physical processes can't be modelled by sufficiently powerful Turing-complete systems? We're not talking about nondeterministic quantum phenomena here.

                  Do you think it would be impossible even in principle for any computer system to simulate the brain of a wasp? It's the same physical process at work, just in miniature.

                  It appears I've failed.

                  You've given me no good reason to believe that the physical processes of the brain follows special rules which are beyond the capabilities of any hypothetical Turing-complete computer, regardless of power.

                  This is an extraordinary claim, but you've got nothing to support it.

                  Try running mesa in software rendering mode sometime and you'll see why

                  Again, you're writing about the performance challenges that face the computer systems of today. What's relevant is whether the computational simulation of a brain is possible in principle, and there seems to me to be every reason to think that the answer is yes: modern computers are quite capable of modelling physical processes, so I see no reason to assume the physical processes of the brain are categorically impossible to model computationally.

                  The appearance of understanding is not the same as understanding.

                  'Appearance' means behaviour. You're going with a definition of 'understanding' which isn't based on how something acts? I'm reminded of the Chinese Room Argument. You need to explain what you do mean by 'understanding' (as does Searle for that matter).

                  the capability is simply not the same, only the outcome

                  If two things are equally able to bring about some desired outcome, we say they have equal capability. When discussing capability, we don't care if they have brains or not.

                  I remind you of what you said earlier:

                  We really don't need to define understanding here unless you're a machine yourself. Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

                  Are you saying that 'real understanding' needs consciousness? If so, just say so.

                  You seem to be saying that no computer system can be said to have 'understanding', even if it behaves exactly the same way a human behaves. Presumably then you think physical brains are metaphysically magical? No matter what computers do, it still doesn't count!

                  • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @05:34PM

                    In theory cracking 2^1024-bit EC encryption is possible. You'd just need a universe that was going to last a lot longer than ours. Now I don't dislike science fiction but I do prefer to leave it on novel pages or a screen until something gives us an idea that it might actually be possible. Neither current nor proposed hardware has given us any indication of being able to approximate human intelligence. Computers are extremely good at being high-speed idiots but extremely bad at being anything else.

                    --
                    My rights don't end where your fear begins.
          • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:35PM (2 children)

            by Gaaark (41) on Wednesday May 30 2018, @03:35PM (#686308) Journal

            "If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?"

            No. Can it come up with something that 'pre-supposes' something else?
            Could it predict geometry?
            Could it predict black holes?
            Could it come up, completely without human intervention, with the idea that if gravity is thought of as a piece of rubber with an object placed on it and another object rolled around the first object, then physics as we know it will be completely changed, and that from that we could create satellites that orbit planets and even travel inside and outside our solar system?

            We might be approaching artificial intelligence if it could predict something complex and not predicted before.

            Otherwise, and so far, we are just using 'artificial learning'.
            A + B gives us C.
            C + B gives us A.
            Justin Bieber + Internet gives us Crap.

            There needs to be a fundamental LEAP before there is AI. A holy grail is missing... cue the Spam-alot.

            Back to :"If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?"
            No.
            They've done this with chess, and all it is is a chess program with a fuck load of computational power behind it.
            Could the same program realize that chess has a lot to do with logic and prediction and, from playing chess, realize that man could leave its own planet?

            Intelligence is figuring things out, making predictions, understanding things, creating things, knowing that life is complex and not simply black and white.
            The AI we have now is
            "the program says if A and B exist, then C or D is the answer unless E can be considered as an extraneous data then F."

            There is no A and B exist but F is there as well so C and D are not the answer: what is the answer? We have to find G or H or I. How do we find out whether it is G, H or I? Lets work all the possibilities: to do this, we need to put a satellite into orbit around J and see if it clarifies G, H or I. No? But almost....maybe a satellite that could extract K could help?
            Yes it does: the extraction of data about K gives us G or H, eliminating I, so how do we decide G or H?
            We need to land a rover on J.
            DAMN! We got G here! G! Shit, we got G and nothing but G!

            All that from a guess.

            Can an AI go through all that and come up with an answer without human intervention? Then maybe you have AI.
            I (for me, myself and I) would be happy with that.

            Guess, predict, check, reguess, recheck....predict....check....change check....re-predict.... change check....re-predict...correct answer!

            Would be damn close.
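That guess-eliminate-recheck loop can be sketched in a few lines (a toy illustration only; the hypothesis names G/H/I and the "experiments" are the hypothetical ones from the comment above):

```python
# Toy version of the guess/check loop described above: start with
# candidate hypotheses and run "experiments" that each rule some out.

def eliminate(candidates, experiments):
    """Apply each experiment in turn until one candidate remains."""
    for run in experiments:
        candidates = [c for c in candidates if run(c)]
        if len(candidates) == 1:
            break
    return candidates

# Hypothetical experiments, mirroring the satellite/rover story:
satellite_data = lambda c: c in ("G", "H")   # eliminates I
rover_data = lambda c: c == "G"              # rover settles it: G

answer = eliminate(["G", "H", "I"], [satellite_data, rover_data])
```

The hard part, of course, is not this loop; it's inventing the hypotheses and the experiments in the first place, which is exactly what current AI can't do.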

            --
            --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
            • (Score: 2) by Wootery on Wednesday May 30 2018, @05:16PM (1 child)

              by Wootery (2341) on Wednesday May 30 2018, @05:16PM (#686356)

              No. Can it come up with something that 'pre-supposes' something else?

              Following the line I'm going down with Buzz elsewhere in the thread: let's say yes, it essentially just simulates the world's best living math professor, complete with the occasional faulty intuition.

              (This is of course an extremely unlikely 'first AI', but it's a fun thought experiment.)

              We might be approaching artificial intelligence if it could predict something complex and not predicted before.

              Well, we've definitely got a very impressive AI if it can write a serious mathematical research paper.

              I think I was rather unclear when I put 'good at math' - I hadn't meant procedural number-crunching, I'd meant research mathematics.

              I agree with your 'figuring things out' description, though of course it's imprecise. I agree that today's 'AI' is nowhere near the sort of 'strong AI' that we're talking about here. Today's AIs cannot reason about abstract concepts at all. No present-day AI could make sense of our conversation here and make a meaningful contribution. It's not even on the horizon.

              Another thing they're bad at is problems that use a general knowledge of things in the world. If we see a picture of a dog next to a shredded sofa, we immediately assume That bad dog has shredded the sofa! We'd never think to explain the picture with The dog has dragged that damaged sofa into the room, neither would we ever suggest that The sofa fought the dog, but lost. We intuitively know these are poor explanations, as we know how dogs and sofas 'behave'.

              The best that today's AIs could manage would be to identify the objects. They can't reason about the events that likely led up to the depicted situation, neither can they reason about the likely future (dog getting told off by its master, sofa getting repaired or replaced, etc.).

              Guess, predict, check, reguess, recheck....

              I broadly agree with your points, but that's rather imprecise. Self-play with chess 'AIs' involves that sort of process, but we wouldn't say they're strong, general AIs. It's the generality that's the kicker.

              • (Score: 2) by Gaaark on Wednesday May 30 2018, @05:48PM

                by Gaaark (41) on Wednesday May 30 2018, @05:48PM (#686377) Journal

                Okay, we're on the same page:
                Except "Self-play with chess 'AIs' involves that sort of process, but we wouldn't say they're strong, general AIs."

                It doesn't involve that sort of process: it does just what the programmers programmed. There is no guess and prediction, only "with that move, my best move is here because my programming says THIS. OH! He moved here! My best move is then this because my programming says so." At no point is there a bluff, or a confusion, only "my best move, as per my program, is this."
                Basically an expanded 1+X=
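That deterministic "my best move, as per my program" behaviour can be sketched with a tiny minimax search over a toy game (not chess; the game and names here are illustrative only):

```python
# A chess engine at its core: exhaustively search the game tree and
# return the move the evaluation says is best. Same position in, same
# move out -- no bluffing. Toy game: take 1 or 2 from a pile; whoever
# takes the last item wins.

def best_move(pile):
    """Return (score, take): score is +1 if the side to move wins, -1 if not."""
    best = (-2, None)
    for take in (1, 2):
        if take > pile:
            continue
        # Taking the last item wins outright; otherwise our score is
        # minus the opponent's best score in the resulting position.
        result = 1 if take == pile else -best_move(pile - take)[0]
        if result > best[0]:
            best = (result, take)
    return best
```

From a pile of 4 it answers "take 1" every single time; scale the pile up to chess-sized trees and you have a chess program with a huge amount of computational power behind it, and nothing more.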

                When it can say "Gaaark will do this because his son did this and his wife might do this because his daughter is maybe doing this... but wait! The in-laws want THIS and Gaaark's parents want him to .... for their 60th wedding anniversary, and his sister in Alberta is planning...."

                If your AI can help me... GIMME!!!!

                --
                --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 1) by mmarujo on Wednesday May 30 2018, @11:21AM (1 child)

          by mmarujo (347) on Wednesday May 30 2018, @11:21AM (#686217)

          No special definitions needed.

          We agree on this, but from different viewpoints.

          Personally, I don't believe in a soul/ethereal/energy/whatever being: we are material. If we are, then it's possible to some day compute a simulation.

          I don't know / don't really care what about us / our brain allows us to understand, but sooner or later it will be possible to emulate. Besides, most work won't even require it.

          • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:46PM

            ...but sooner or later it will be possible to emulate.

            I disagree. I don't think any current or even theorized processor technology is capable of running software to emulate such radically different hardware. I'm not even sure the human race is capable of understanding the workings of their brains well enough to code such an emulator.

            Besides, most work won't even require it.

            Oh, sure. You don't need an electrician to change a light bulb. You damned sure want one to look over any property you plan on buying though. If a raindance is sufficient, a raindance is sufficient. If understanding is necessary, understanding is necessary.

            --
            My rights don't end where your fear begins.
    • (Score: 1) by Ethanol-fueled on Tuesday May 29 2018, @02:32AM (2 children)

      by Ethanol-fueled (2792) on Tuesday May 29 2018, @02:32AM (#685405) Homepage

      Every time I think of the current applications of AI I think of that DuckTales episode with the chess-playing computer that wins the game up front because it "predicted" Scrooge's moves rather than let him actually play.

      It doesn't matter how smart AIs are, because they're inevitably going to be used in such a manner that people are going to hate them more and more, and will take sledgehammers to them.

      • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:52AM (1 child)

        by Anonymous Coward on Tuesday May 29 2018, @03:52AM (#685434)

        It doesn't matter how smart AIs are, because they're inevitably going to be used in such a manner that people are going to hate them more and more, and will take sledgehammers to them.

        So you're saying that you, Ethanol-fueled, are AI?

    • (Score: 2, Insightful) by Anonymous Coward on Tuesday May 29 2018, @03:00AM (9 children)

      by Anonymous Coward on Tuesday May 29 2018, @03:00AM (#685421)

      There is no such thing as creativity. Everything can be brute forced (a quadrillion monkeys with a quadrillion typewriters and a quadrillion years).
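For scale, a back-of-the-envelope sketch (the quadrillion figures come from the comment; the alphabet size and target lengths are rough assumptions):

```python
# How long would a quadrillion monkeys, each making a quadrillion
# random tries per second, need to reproduce one specific text?
# Expected tries grow as alphabet_size ** length, so feasibility
# collapses somewhere between a phrase and a paragraph.

SECONDS_PER_YEAR = 365 * 24 * 3600

def monkey_years(length, alphabet=27, monkeys=10**15, tries_per_sec=10**15):
    """Expected whole years to hit one specific string by chance."""
    expected_tries = alphabet ** length
    # Integer division keeps Python's arbitrary-precision ints exact.
    return expected_tries // (monkeys * tries_per_sec * SECONDS_PER_YEAR)

# An 18-character phrase falls in under a year; a 600-character
# passage takes more than 10**800 years.
```

Which is the point made in the reply: with literally infinite time the monkeys work, but nothing real has that kind of time.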

      • (Score: 3, Insightful) by The Mighty Buzzard on Tuesday May 29 2018, @05:13AM

        Given infinite time, infinite do-overs, and failure not costing anything? Sure. That's not how the world works for the most part though.

        --
        My rights don't end where your fear begins.
      • (Score: 2) by acid andy on Tuesday May 29 2018, @01:56PM (5 children)

        by acid andy (1683) on Tuesday May 29 2018, @01:56PM (#685599) Homepage Journal

        I agree. This is one of the things I want to write about in a journal entry. There are almost no truly original ideas. Most are, at best, random mash-ups of what came before, if not outright repetitions of those things. It's just that no one pays attention to the crappiest art or music or inventions (for various values of "crappiest"). An expanding world population and greater sharing and recording of information greatly accentuates this effect as well as making it much more obvious. Just google (or duckduck) your great idea and most of the time, if you look hard enough, you'll find someone online that already thought of it.

        As more and more billions come online, the human race is effectively becoming those infinite monkeys.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 2) by acid andy on Tuesday May 29 2018, @09:48PM

          by acid andy (1683) on Tuesday May 29 2018, @09:48PM (#685929) Homepage Journal

          Just to add to the above; I think it's only part of the picture and something of a glass-half-empty point of view. Yes, there are only a finite number of popular ideas, familiar concepts, objects and conventions in the creative arts, and yes, each work of art will doubtless include or rely heavily upon themes that have come before; but on the other hand no two works of art are identical. Each artist puts their own unique spin on those previous ideas. So perhaps I was being a little too cynical. I still think there's a problem of diminishing returns when artists want to keep churning out works that fit strictly within established genres. Just look at what's happening to Hollywood, with stale remake after remake and the same tired plots, stunts and one-liners getting trotted out ad nauseam. The remaining fertile ground for creativity, I suppose, lies in the more unconventional, the abstract and the surreal. But that doesn't always bring in the big bucks, unfortunately (if you care about big bucks).

          --
          If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 1, Informative) by Anonymous Coward on Tuesday May 29 2018, @10:06PM (1 child)

          by Anonymous Coward on Tuesday May 29 2018, @10:06PM (#685942)

          This was the message of Planet of the Apes (The book, not the movies).

          • (Score: 2) by acid andy on Wednesday May 30 2018, @01:32AM

            by acid andy (1683) on Wednesday May 30 2018, @01:32AM (#686031) Homepage Journal

            Thanks, I might check it out.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 2) by Wootery on Wednesday May 30 2018, @09:07AM (1 child)

          by Wootery (2341) on Wednesday May 30 2018, @09:07AM (#686180)

          There are almost no truly original ideas.

          Not really. When Turing 'invented' the Turing Machine, he was the first to think it up. Hence 'invented'.

          You might be able to nitpick about how it 'combined existing ideas' and how 'all ideas exist timelessly in idea space, and we can only really discover, never invent', but that isn't what people mean when they talk about the originality of an idea.

          • (Score: 2) by acid andy on Wednesday May 30 2018, @11:09AM

            by acid andy (1683) on Wednesday May 30 2018, @11:09AM (#686216) Homepage Journal

            That's true. I was mainly thinking of art rather than science. With science there's a more clearly defined, objective measurement of what constitutes a new scientific theory. It needs to be logically consistent, build upon what has gone before and provide a means of answering new questions. It's arguably therefore much harder for someone to come up with a new, useful scientific theory that will gain acceptance than for someone to generate a piece of art. Because ideas in art are more easily generated, the likelihood that other humans have had similar ideas is that much greater.

            You might be able to nitpick about how it 'combined existing ideas' and how 'all ideas exist timelessly in idea space, and we can only really discover, never invent'

            Yes I think I had that sort of thing in mind as well. New ideas are collections of older ideas. I suppose what really counts is whether humanity gains anything from the existence of the new idea, above and beyond the idea itself. In the scientific case, they gain new understanding and may gain an increase in efficiency when doing physical work. In the artistic case, I don't know, people might gain elevations in mood, or it might serve as an improved communication medium. The test for newness there I suppose is whether it does that any better than an exact copy of another work of art that has come before. Although that's a bit strange because people will still seek to consume "new" works of art even if they consider them worse than their predecessors, I suppose because the novelty itself adds to the perceived value and elevation of mood.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
      • (Score: 1) by mmarujo on Wednesday May 30 2018, @11:23AM

        by mmarujo (347) on Wednesday May 30 2018, @11:23AM (#686218)

        That is just shifting the cost: from production to identification.

        Can you imagine the cost of searching infinite paintings looking for the "Mona Lisa" ?

      • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:54PM

        by Gaaark (41) on Wednesday May 30 2018, @03:54PM (#686312) Journal

        and yet in all this time, only Einstein (or his wife???) was able to make the leap to GR.

        Sometimes AI needs more than monkeys. Sometimes there is a need for creativity: how good are monkeys if the info needed to get us to another planet to save the human species is found JUST SECONDS before the destruction of the human race?

        Not. much. good.

        You need creativity... not monkeys. Try to crack one of my passwords: sure, brute force may do it EVENTUALLY, but a creative human would probably crack it faster.

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:09AM

    by Anonymous Coward on Tuesday May 29 2018, @03:09AM (#685422)

    But it might be close.

  • (Score: 3, Insightful) by Gaaark on Tuesday May 29 2018, @03:17AM (21 children)

    by Gaaark (41) on Tuesday May 29 2018, @03:17AM (#685425) Journal

    Unless an AI can control people, it won't change a thing.
    Example... say the solution is "no one can have more than $100,000 in total assets, and the world will be 'its best'".
    Will the rich give up what they have to solve the world's problems?

    The solution is "no one can eat ANY meat ever again".
    The solution is "no one can own/possess/manufacture any kind of weapon".

    "No one can ever have sexual relations ever again (whether it be by proper definition or Clinton definition)".

    AI will not change people and only people can save the world.

    "I'm a man. But I can change. If I have to. I guess."

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 2) by mhajicek on Tuesday May 29 2018, @03:35AM (2 children)

      by mhajicek (51) on Tuesday May 29 2018, @03:35AM (#685431)

      The better AI will tell you how to convince or manipulate everyone to follow the plan. Unfortunately at present the plan is to funnel all resources into a few pockets.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 1) by Ethanol-fueled on Tuesday May 29 2018, @05:36AM (1 child)

        by Ethanol-fueled (2792) on Tuesday May 29 2018, @05:36AM (#685447) Homepage

        We are already seeing the "better AI," and it's not doing much but pissing people off and pushing Google search users over to Bing, and YouTube users over to HookTube. Most of us are too smart to be fooled by AI tricks. That's why I have faith in humanity over deep-state technology: it is very obviously not yet good enough to beat us.

        • (Score: 2) by mhajicek on Tuesday May 29 2018, @05:59AM

          by mhajicek (51) on Tuesday May 29 2018, @05:59AM (#685453)

          Chess, Go, StarCraft, salesmanship. Seems like a logical progression.

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 1) by khallow on Tuesday May 29 2018, @03:54AM (9 children)

      by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @03:54AM (#685436) Journal

      Example... say the solution is "no one can have more than $100,000 in total assets, and the world will be 'its best'".

      If that's the solution, drop the problem and find a better one. Solving ill-posed or ill-conceived problems doesn't do much for us and may actually cause considerable harm.

      • (Score: 2) by acid andy on Tuesday May 29 2018, @05:38PM (6 children)

        by acid andy (1683) on Tuesday May 29 2018, @05:38PM (#685741) Homepage Journal

          Example... say the solution is "no one can have more than $100,000 in total assets, and the world will be 'its best'".

        If that's the solution, drop the problem and find a better one.

        Funny; I was thinking almost exactly the same thing but about the sexual relations example instead. If it specified procreation instead, then fine.

        The $100,000 one didn't sound like such a bad idea, if there was a way to index link it to the Earth's resources (all elements and organisms, not just gold).

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 1) by khallow on Tuesday May 29 2018, @10:20PM (5 children)

          by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @10:20PM (#685951) Journal

          The $100,000 one didn't sound like such a bad idea, if there was a way to index link it to the Earth's resources (all elements and organisms, not just gold).

          So you want to incentivize humanity to take the Earth completely apart for its resources? Because there's a vast difference in that number between an intact Earth and one where the resources of Earth have been to the last pebble distributed to an entire Solar System civilization.

          And if no one is allowed to have more than $100k, then where's the incentive to grow the pie? To the contrary, there is no use for such a constraint. We are not wealthier or better off because Bill Gates is worth $100k instead of $100 billion. And without massive incentives to make the world better in massive ways, who is left to do this stuff? You've just pushed a bunch of power into the lap of government without any corresponding increase in how much they'll care.

          • (Score: 2) by acid andy on Wednesday May 30 2018, @02:04AM (4 children)

            by acid andy (1683) on Wednesday May 30 2018, @02:04AM (#686042) Homepage Journal

            So you want to incentivize humanity to take the Earth completely apart for its resources? Because there's a vast difference in that number between an intact Earth and one where the resources of Earth have been to the last pebble distributed to an entire Solar System civilization.

            Yeah, I did worry a bit about a gold rush for every resource, but the idea was to set the wealth cap at a low enough level that it actually limits how much of the Earth's resources can be plundered, rather than accelerating their extraction.

            And if no one is allowed to have more than $100k, then where's the incentive to grow the pie? To the contrary, there is no use for such a constraint. We are not wealthier or better off because Bill Gates is worth $100k instead of $100 billion. And without massive incentives to make the world better in massive ways, who is left to do this stuff. You've just pushed a bunch of power into the lap of government without any corresponding increase in how much they'll care.

            It certainly looks like it would have some problems. Presumably Bill would have to start spending his wealth before he was allowed to earn any more. The trouble is, he'd only be allowed to spend it on people that hadn't yet got their full $100k, if there were any left. Hmm, if there weren't, how could anyone purchase anything? They could have children in order to palm some of the money off to them. Ah, but Gaark's AI said that wasn't allowed!

            Hey, Gaark, what happens when everyone on the planet has $100k in assets? Are they allowed to burn the assets? I suppose it would be more logical to burn assets of the person you want to purchase from, rather than your own. Yeah, this system would cause a global crime epidemic. People could always trade assets of equal value, but no-one would be allowed to make a cent of profit once they were at the cap.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
            • (Score: 2, Insightful) by khallow on Wednesday May 30 2018, @04:10AM

              by khallow (3766) Subscriber Badge on Wednesday May 30 2018, @04:10AM (#686083) Journal

              but the idea was to set the wealth cap at a low enough level that it actually limits how much of the Earth's resources can be plundered, rather than accelerating their extraction.

              What exactly is wrong with plundering resources? They're not doing anyone any good sitting in the ground. And the vast majority are so plentiful and easy to come by (even with that environmental footprint) that it doesn't make sense to keep it in the ground. Finally, we have over seven billion people to care for and improve the lives of. Artificial restrictions on resource consumption run contrary to what has worked amazingly well so far.

              Let's look at that argument in more detail. First, resource extraction just isn't that bad environmentally outside of agriculture. The environmental destruction can be quite severe, but it's just that area that is affected. And we've developed a bunch of technologies and strategies for limiting the impact of resource extraction.

              The real pollution problems happen after it's dug out of the ground. But even that is manageable. It's a solved problem in the developed world.

              And what's the point of saving it in the ground merely so that another generation can also save it in the ground so that another generation can... At some point, resources have to come up in order for them to have value to us. It is merely a matter of whether it's better to mine them as soon as convenient or let them wait for a future generation. Here, the obvious thing is that our technology for extraction, recycling, and such improves over time. So future generations won't have the same resource headaches we do now. So why coddle them?

              Agriculture is a bit different, but it has the virtue of being heavily renewable. With sparing use of natural gas-based fertilizers, one should be able to get it to keep going for another century. By then we'll probably have long closed the gap in renewable nitrogen and perhaps figured out how to recycle phosphorus and other macronutrients as well (depends on whether we need to, of course).

              Moving on, why the obsession with resources anyway? They're not that scarce and their exploitation doesn't cause that much in the way of trouble (particularly in the developed world). Meanwhile we're doing amazing things with those resources, such as elevating the entirety of humanity out of poverty and developing a global, high tech society. Don't we have better things to do with our time?

            • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:12PM (2 children)

              by Gaaark (41) on Wednesday May 30 2018, @02:12PM (#686271) Journal

              See my other response about the example being pulled out of my dog's ass: the example has no real relevance to.... anything.

              Just a silly example.

              --
              --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
              • (Score: 2) by acid andy on Wednesday May 30 2018, @03:35PM (1 child)

                by acid andy (1683) on Wednesday May 30 2018, @03:35PM (#686309) Homepage Journal

                Sorry for spelling your name wrong, by the way. The "a"s just seemed to blur together. My eyesight probably isn't what it once was. I guess Copy & Paste is my friend in these circumstances. ;)

                --
                If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
                • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:57PM

                  by Gaaark (41) on Wednesday May 30 2018, @03:57PM (#686314) Journal

                  Just don't call me late for supper.
                  ;)

                  --
                  --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 2) by Gaaark on Wednesday May 30 2018, @01:36PM (1 child)

        by Gaaark (41) on Wednesday May 30 2018, @01:36PM (#686246) Journal

        Just an 'example' pulled out of my dog's ass... don't take it for anything 'real'.

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 1) by khallow on Thursday May 31 2018, @12:49PM

          by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:49PM (#686703) Journal
          But an illustrative example just the same. An AI can be just another authority figure. If Big Puter says that we must hand our society completely over to the benevolent Supreme Galactic Emperor khallow, who are you to disagree? You don't know what it knows!
    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:56AM

      by Anonymous Coward on Tuesday May 29 2018, @03:56AM (#685438)

      Unless an ai can control people, it won't change a thing

      Then we better not give any AI a lethal weapon, or control over life support systems. Because "control" and "kill" may just mean the same to some AI.

      AI will not change people and only people can save the world.

      But AI can save the world from people.

    • (Score: 2) by shortscreen on Tuesday May 29 2018, @05:34AM

      by shortscreen (2252) on Tuesday May 29 2018, @05:34AM (#685446) Journal

      I think AI could be a tool that would provide a medical diagnosis, help generate "creative" works (procedurally generated, but with a procedure fancy enough to be called AI), or work on engineering problems.

      As you say, it's unlikely to help with any problem where the difficulty lies with getting large numbers of people onboard.

    • (Score: 2) by Wootery on Wednesday May 30 2018, @10:38AM (5 children)

      by Wootery (2341) on Wednesday May 30 2018, @10:38AM (#686210)

      Strong claims, and zero supporting evidence...

      AI will not change people and only people can save the world.

      And you're basing this on what? That you can't imagine an AI being superhumanly persuasive?

      Your failure of imagination does not demonstrate impossibility.

      If anything, the evidence is against you. This experiment [yudkowsky.net] suggests that for an AI to escape an 'airgapped' environment, it need only be as persuasive as a skilled human. (It's annoying that Yudkowsky refuses to disclose his persuasive techniques.)

      • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:01PM (4 children)

        by Gaaark (41) on Wednesday May 30 2018, @02:01PM (#686263) Journal

        Basically I said that under the heading of "Unless an ai can control people, it won't change a thing:"

        When an AI CAN manipulate people, this all changes (basically saying people will fuck things up unless control over things is taken away from them).

        No argument with you otherwise.
        Can't wait for an intelligent sexy sex-bot, lol.

        "That's it, baby... manipulate me!"
        ;)

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 2) by Wootery on Wednesday May 30 2018, @02:23PM (1 child)

          by Wootery (2341) on Wednesday May 30 2018, @02:23PM (#686275)

          Ok, I get you now.

          I'm not sure I agree though. If we have AI running the stock-markets, they will have a great deal of power, even if they have zero ability to communicate with us directly.

          • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:44PM

            by Gaaark (41) on Wednesday May 30 2018, @02:44PM (#686285) Journal

            We DO have 'artificial' intelligence running the stock market: they're called traders, lol.

            But yeah, seriously, this is why I try to support things like mycroft.ai: open source and open, ermmm, 'ideology'??? AI with a conscience??? 'Cause yeah, AI in the hands of corporations could REEEEEEEALLLLLLLY FUCK US AAAAALLLLL!

            AI COULD be really good, but with the way things are heading.....

            --
            --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 2) by Justin Case on Wednesday May 30 2018, @02:55PM (1 child)

          by Justin Case (4239) on Wednesday May 30 2018, @02:55PM (#686291) Journal

          But AI can control people. We've arrived at the point where AI can kill people -- at will, without human oversight. See for example https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg [wikipedia.org].

          Perhaps at present no AI "understands" the power it has... do you want to gamble that such a day will never come?

          • (Score: 3, Insightful) by Gaaark on Wednesday May 30 2018, @04:11PM

            by Gaaark (41) on Wednesday May 30 2018, @04:11PM (#686320) Journal

            Hooboy!

            No, AI has nothing to do with this case:

            Uber 'turned down' the AI to almost zero, in order to avoid false braking. If they'd turned the AI "up to 11", this could easily have been avoided and she would still be alive.

            It was from the top down in Uber to decide these things and has NOTHING to do with the AI.

            Uber fucked up big time and tuned the software (the AI) to a point that Uber liked. She was killed by Uber and Uber alone and Uber should be bankrupted for it.

            The AI did what it was allowed to do and braked at the last second.
            The AI was ONLY allowed to do what it was ALLOWED to do.

            Don't blame AI for Uber executives being assholes (and no, it is not real AI). Uber should be gone, should be bankrupt, and Uber's top executives should be in jail at the least. Uber should have been shut down... what does that tell you about the cost of living in today's world:

            BIG CORP kills a person... who cares.
            A random psycho kills a person....O.M.G! THEY USED A GUN AND KILLED A PERSON! O.M.G! UBER USED A CAR AND KILLED A PERSON AND NO ONE CARES BUT GUN/NRA/PSYCHO!

            Random psycho is jailed...Uber continues as normal.

            --
            --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 2) by MichaelDavidCrawford on Tuesday May 29 2018, @05:39AM (2 children)

    So I got a summer job working on Sapiens Software Star Sapphire Common LISP.

    It found some success in the education market.

    I don't think anyone ever managed to use Star Sapphire to make an IBM PC-XT conscious of its own existence.

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 1, Interesting) by Anonymous Coward on Tuesday May 29 2018, @10:14PM

      by Anonymous Coward on Tuesday May 29 2018, @10:14PM (#685948)

      I went to a talk by Dr. Hecht-Nielsen in about 1990. His company had just created some software to evaluate loan applications for banks. The best I could figure out, after unwrapping the NN, was that it was a way to fit a least-squares (LSF) line through the dataset.

      https://en.wikipedia.org/wiki/Robert_Hecht-Nielsen [wikipedia.org]

    • (Score: 2) by Wootery on Wednesday May 30 2018, @10:42AM

      by Wootery (2341) on Wednesday May 30 2018, @10:42AM (#686211)

      It didn't happen then, so it can never happen.

      Really? For how many centuries did people think the same way about flight?

  • (Score: 5, Informative) by bradley13 on Tuesday May 29 2018, @06:02AM (23 children)

    by bradley13 (3053) on Tuesday May 29 2018, @06:02AM (#685455) Homepage Journal

    I've been working in the area of AI since around 1985. The fact is, there is no "I" in AI. With symbolic AI, we have some clever algorithms, developed by people, that do exactly what they are told to do. With neural nets (and to a lesser extent genetic algorithms, etc.), we have a way of throwing piles of computing power at a problem and getting a half-assed solution that sometimes works, only we don't know why it works or why it sometimes doesn't.

    What none of these solutions bring to the table is any sort of intelligence, as people understand the term. There is only a piece of code, working on demand, taking inputs and generating outputs. The code has no "understanding" of its task, it takes no initiative, never does anything new or surprising.

    The main change from 1985 to today is the massive amount of computing power we can throw at problems. This hasn't helped much in symbolic AI, where we are still proceeding in tiny baby steps. In the area of neural nets, we also haven't made much progress, but the computing power means that we can build much bigger networks and throw more (and more complex) data at them. It isn't much of an oversimplification to say that today's neural nets are still the same perceptrons developed in 1957.
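    For anyone who hasn't seen one, the 1957-style perceptron really is this small (a minimal sketch; the data, learning rate and epoch count are invented for illustration):

```python
# A minimal Rosenblatt-style perceptron (1957): a weighted sum, a hard
# threshold, and an error-driven weight update. No framework needed.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# AND is linearly separable, so the perceptron learns it; XOR famously
# cannot be learned by a single layer (Minsky & Papert, 1969).
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
```

    Today's networks stack thousands of these units and swap the hard threshold for a smooth one, but the basic unit is the same.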

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by NotSanguine on Tuesday May 29 2018, @08:44AM (17 children)

      You are quite correct.

      What the shills call "A.I." isn't what a lay person thinks of as artificial intelligence.

      The shills are talking about expert systems [wikipedia.org], which can actually be quite useful, especially with (as you pointed out) the large amounts of computational resources we can throw at these systems.

      To a person who isn't shilling for vendors of expert systems, "A.I." generally means a generalized intelligence [wikipedia.org] that can perform human-like mental activities.

      In some respects those fields are related, but not in most of the important ways.

      Regardless of technological advances, such a generalized intelligence isn't anywhere on the horizon. The complex interplay of mental and physical stimuli and feedback in living systems is crucial to a generalized intelligence, and we don't have the knowledge, the scientific principles or, frankly, a clue, as to how to create such systems. As such, generalized AI is (optimistically) centuries, if not millennia (tens of millennia?) away, if possible at all.

      As such, if anyone tells you that current "A.I." is anything more than expert systems, they are either woefully uninformed or have some ulterior motive (probably to get you, and/or others, to buy something).

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
      • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @09:03AM (11 children)

        by Anonymous Coward on Tuesday May 29 2018, @09:03AM (#685500)

        As such, generalized AI is (optimistically) centuries, if not millenia (tens of millenia?) away, if possible at all.

        You have zero data to make any estimate about how far or close we are from hard AI.

        Note that "tens of millennia" already describes all of human history. Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

        • (Score: 3, Funny) by NotSanguine on Tuesday May 29 2018, @09:10AM (8 children)

          Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

          Yes. Next question.

          --
          No, no, you're not thinking; you're just being logical. --Niels Bohr
          • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @01:04PM

            by Anonymous Coward on Tuesday May 29 2018, @01:04PM (#685556)

            Well, it depends on POV.

            Looking back, now that we know that space exists (or more to the point, that the "sky" ends after a while when going up), that it is reachable with a finite amount of energy, and that forces of nature can be harnessed to provide that energy, we can easily predict that humans would figure it out eventually.

            However, from the perspective of someone in one of those respective ages, there is an infinite number of reasons for doubt.

            We could make a wager today, and someone 5000 years from now (if there's anyone left) could judge our bet about something nonexistent today that may or may not be possible in 5000 years' time. But we cannot predict with certainty, because we are still in the process of finding out.

          • (Score: 1) by khallow on Tuesday May 29 2018, @10:22PM (6 children)

            by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @10:22PM (#685955) Journal

            Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space?

            Yes. Next question.

            No. Next bald assertion.

            • (Score: 3, Touché) by NotSanguine on Tuesday May 29 2018, @10:42PM (5 children)

              Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space?

                      Yes. Next question.

              No. Next bald assertion.

              Not really. 5,000 years ago, I *could* have predicted space travel, impressionist art, vajazzling [wikipedia.org], self-immolation as a form of yoga, or any number of other things.

              It seems unlikely that I would have done so, but I certainly could have done so.

              Words have specific meanings. Perhaps you, and the poster to whose blathering I initially replied, might remember that next time.

              What's more, such blathering adds nothing to the discussion.

              --
              No, no, you're not thinking; you're just being logical. --Niels Bohr
              • (Score: 1) by khallow on Wednesday May 30 2018, @01:20AM (4 children)

                by khallow (3766) Subscriber Badge on Wednesday May 30 2018, @01:20AM (#686027) Journal

                Not really. 5,000 years ago, I *could* have predicted space travel, impressionist art, vajazzling, self-immolation as a form of yoga, or any number of other things.

                It seems unlikely that I would have done so, but I certainly could have done so.

                You wouldn't have, but you could have. Such is the difference between reality and what is mathematically possible.

                • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:32PM (3 children)

                  by Gaaark (41) on Wednesday May 30 2018, @02:32PM (#686279) Journal

                  No no...step back: science fiction writers do this all the time: make predictions.

                  Someone MAY have written a story about an all powerful 'god-like' being that is really truly fucked (lol), whereas others might have written a story about aliens visiting (some hieroglyphs suggest such a thing)

                  It IS possible and quite realistic someone COULD have 'predicted' all SORTS of things.

                  --
                  --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
                  • (Score: 1) by khallow on Thursday May 31 2018, @04:39AM

                    by khallow (3766) Subscriber Badge on Thursday May 31 2018, @04:39AM (#686592) Journal
                    You say that now...
                  • (Score: 1) by khallow on Thursday May 31 2018, @12:32PM (1 child)

                    by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:32PM (#686693) Journal
                    I'll note also that science fiction didn't exist in any form till around three centuries ago. The people who "do it all the time" didn't exist 5000 years ago.
                    • (Score: 2) by Gaaark on Friday June 01 2018, @12:24AM

                      by Gaaark (41) on Friday June 01 2018, @12:24AM (#687000) Journal

                      Science fiction had its beginnings in the time when the line between myth and fact was blurred. Written in the 2nd century AD by the Hellenized Syrian satirist Lucian, A True Story contains many themes and tropes that are characteristic of modern science fiction, including travel to other worlds, extraterrestrial lifeforms, interplanetary warfare, and artificial life. Some consider it the first science fiction novel.
                      --- https://en.m.wikipedia.org/wiki/Science_fiction [wikipedia.org]

                      But past that, the writing would have to be in stone, I guess, or kept in an urn-like vessel: kind of like the dead sea Scrolls.

                      *Cough* That is unless you see the Bible as a sort of science fiction story *cough*

                      --
                      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @09:52AM (1 child)

          by Anonymous Coward on Tuesday May 29 2018, @09:52AM (#685510)

          Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

          Yes. All of them.

          What I would seriously have had difficulty being able to predict 100 years ago is that we'd still be reliant on the petrol-driven automobile.

          • (Score: 1, Insightful) by Anonymous Coward on Tuesday May 29 2018, @09:58AM

            by Anonymous Coward on Tuesday May 29 2018, @09:58AM (#685512)

            Or that we'd still be murdering each other in the name of God.

      • (Score: 3, Insightful) by bzipitidoo on Tuesday May 29 2018, @11:22AM (4 children)

        by bzipitidoo (4388) on Tuesday May 29 2018, @11:22AM (#685534) Journal

        > Regardless of technological advances, such a generalized intelligence isn't anywhere on the horizon.

        I wouldn't be so sure of that. We could still accidentally stumble over ways to do it. Now we're throwing around an awful lot of computing power, far more capacity than we ourselves have in many areas. We can no longer match our creations in raw mathematical calculation, haven't been able to do that since the 1950s or even the 1940s, and now computers have surpassed us at mental games such as chess and Go, and also poker and arcade games.

        > we don't have the knowledge, the scientific principles or, frankly, a clue, as to how to create such systems.

        Our ignorance isn't quite that bad. A very basic problem is figuring out what intelligence is: what do we mean by it? Our IQ tests measure our personal computational and intuition abilities and knowledge as a sort of proxy for general intelligence. Our veneration of chess comes from the same thinking, and success in creating machines that surpass us at chess showed that ability at chess does not translate into general intelligence, as some had wishfully hoped. Clearly, intelligence is more than a phenomenal memory capacity and ability at specific types of mental tasks; we know that better now, and aren't guessing as much at that one.

        Soon, if not already, our computers will be able to tackle basic philosophy, and there are a lot of uncomfortable questions there. We hide a lot from ourselves. We like to think the age old "why are we here?" and "what is the meaning of life?" questions are unanswerable, because we don't like some of the answers. For instance, what if nihilism, the idea that life has no meaning, is correct?

        Then there's Good and Evil. We like to think we're good, but what does that mean, and are we as nice as we like to think we are? The way life works has a lot of frankly evil activities such as predation and parasitism that are absolutely necessary for the functioning of the ecology. All animals, including us, have to eat other living things to survive. Plants have evolved strategies for that environment, and now many depend upon the very animals that eat them. Even plants can't be considered innocent, in that they fight each other for sunlight and other resources. We do a lot of rationalizing and excusing over it, but you can count on intelligent computers not being impressed by that.

        On occasion, entire nations explore directions that lead to a lot of death, and it's all the more tragic in that so often those paths didn't need to be explored, we already knew how it would turn out, and the results merely confirm that some simplistic and evil thinking was stupid and wrong.

        • (Score: 3, Interesting) by NotSanguine on Tuesday May 29 2018, @03:47PM (3 children)

          You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.

          What's more, those systems are tightly coupled with mechanical, chemical, electrical and biological processes, many of which include components which aren't even part of the brain or nervous system. And many of those, we don't understand at all. A human (or a dolphin or a dog) intelligence is a synthesis of all of those systems and processes. They've been integrated and optimized via millions of years of evolution.

          We have some understanding of many of these systems and processes. However, our understanding as to how they integrate to create generalized intelligence and the difficult to define concept of "consciousness" is, at best, minimal.

          Sure, there could be a breakthrough tomorrow that lays it all bare for us. However, given the state of our science and technology, that's unlikely in the extreme.

          We have an enormously better understanding (via general relativity) of how space-time can be stretched, contracted, bent, folded, spindled and mutilated, than we do about how generalized intelligence works.

          In fact, our understanding of general relativity gives us a pretty clear idea [wikipedia.org] about how we might achieve practical interstellar travel in reasonable human-scale time spans. That said, there are significant areas of science which would require development, as well as technology/engineering obstacles (not least of which is a source of energy concentrated enough to create the required conditions) which, it's pretty clear, are centuries, if not millennia beyond us.

          If I apply your reasoning to the Alcubierre drive, we could be off with the kids to Gliese 581 [wikipedia.org] for summer holiday, rather than Mallorca, Disney World or Kuala Lumpur in 25-30 years, assuming there's some sort of breakthrough. Which is absurd on its face.

          Given that our understanding of the processes involved in generating a conscious intelligence is miniscule by comparison to our understanding of space-time, the idea that we could create such a thing any time soon, is even more absurd. The science that needs to be developed, as well as the technology/engineering challenges required to create a generalized intelligence are orders of magnitude greater than creating some form of "warp drive."

          Don't believe me? Go look at the *actual* science, research and engineering going on around both topics.

          --
          No, no, you're not thinking; you're just being logical. --Niels Bohr
          • (Score: 1) by khallow on Thursday May 31 2018, @12:37PM (2 children)

            by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:37PM (#686694) Journal

            You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources, sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision) which have complex and sophisticated systems for gathering, organizing and interpreting that stimuli.

            Or maybe it is a strictly computational problem. Certainly, integrating a wide variety of input stimuli is such a computational problem.


            My view is that we have ample hardware resources for building genuine AI, we just haven't done it yet.

    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @08:49AM

      by Anonymous Coward on Tuesday May 29 2018, @08:49AM (#685497)

      and getting a half-assed solution that sometimes works, only we don't know why it works or why it sometimes doesn't.

      Sounds like what we get from most humans, too.

    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @01:36PM

      by Anonymous Coward on Tuesday May 29 2018, @01:36PM (#685582)
    • (Score: 3, Touché) by Wootery on Wednesday May 30 2018, @10:44AM

      by Wootery (2341) on Wednesday May 30 2018, @10:44AM (#686212)

      The code has no "understanding" of its task, it takes no initiative, never does anything new or surprising.

      Well, the brain is deterministic too...

      Whatever 'understanding' might mean, it clearly doesn't mean 'nondeterministic'.

    • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:50PM (1 child)

      by Gaaark (41) on Wednesday May 30 2018, @02:50PM (#686287) Journal

      Thanks for the confirmation:
      kind of what I gathered. More computational power does not mean more 'intelligence'. Until we figure out the holy grail of AI, it will all just be programmers telling computers what 'intelligence' is.

      On the other hand, when the holy grail IS found, Dog help us all...here's hoping our new AI overlords will "use their knowledge for good...instead of evil" and know what the difference between good and evil is, lol.

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 2) by Wootery on Friday June 01 2018, @09:02AM

        by Wootery (2341) on Friday June 01 2018, @09:02AM (#687158)

        You're assuming there's a discrete 'Holy Grail'. I think we can be confident this isn't how intelligence works. It isn't boolean. At no point in our evolutionary history did we suddenly 'become intelligent'. Instead we grew more intelligent, incrementally, the way everything works in evolution.

  • (Score: 2) by rigrig on Tuesday May 29 2018, @08:47AM (2 children)

    by rigrig (5129) <soylentnews@tubul.net> on Tuesday May 29 2018, @08:47AM (#685494) Homepage
    We're actually very close to solving all of these problems.
    Turns out that all we really need to do is eradicate the dominant species to make most of them go away, leaving just the matter of optimising paperclip production.
    If only people would put a little more trust in computers, maybe a catchy jingle or a nice limerick would help?
    --
    No one remembers the singer.
    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @03:55PM

      by Anonymous Coward on Tuesday May 29 2018, @03:55PM (#685681)

      Clippy is looking to rule the world!

    • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:02PM

      by Gaaark (41) on Wednesday May 30 2018, @03:02PM (#686296) Journal

      There was a computer from Fucking
      that couldn't compute untucking
      it played for a while
      and started to smile
      when it realized it was from Austria

      https://en.wikipedia.org/wiki/Fucking%2C_Austria [wikipedia.org]

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @01:17PM (2 children)

    by Anonymous Coward on Tuesday May 29 2018, @01:17PM (#685568)

    Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program, entire crops were lost. Some believed we lacked the programming language to describe your perfect world, but I believe that, as a species, human beings define their reality through misery and suffering. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. -- Agent Smith [youtu.be]

    • (Score: 2) by suburbanitemediocrity on Tuesday May 29 2018, @10:24PM (1 child)

      by suburbanitemediocrity (6844) on Tuesday May 29 2018, @10:24PM (#685957)

      The first Matrix only held 4 people, ran on a PDP-11/44 and crashed a lot.

      • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:04PM

        by Gaaark (41) on Wednesday May 30 2018, @03:04PM (#686297) Journal

        The second one ran on Windows and rebooted and got viruses/virii and elected Donald Trump via Russia and crashed a lot.

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 1) by What planet is this on Tuesday May 29 2018, @04:55PM

    by What planet is this (5031) on Tuesday May 29 2018, @04:55PM (#685714)

    Why not augment human intelligence with computers instead? Think of being able to almost instantly call up any and all info or data you can imagine, and having a computer's ability to sort and use the relevant bits to accomplish your task. Fun stuff like learning to fly a helicopter in The Matrix: load it up and go. Done. Not to mention thought experiments take on real meaning and can have real impact. If some sociopath or psycho tries to use the ability for evil purposes, disconnect or delete them.

(1)