posted by Fnord666 on Tuesday May 29 2018, @01:33AM
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @02:25AM (21 children)

    by Anonymous Coward on Tuesday May 29 2018, @02:25AM (#685403)

    ...making popcorn...

    I think you are about to be torn apart, unless you have some pretty special definitions for creativity or intuition.

  • (Score: 2) by The Mighty Buzzard on Tuesday May 29 2018, @05:48AM (20 children)

    No special definitions needed. The entity doing the thinking simply has to be able to answer the question "why". It has to be able to understand rather than simply playing the odds. Computers are currently not capable of doing this and likely never will be.

    I have no doubt that eventually a computer will be able to crank out pop music that hits the top ten regularly. That is extremely formulaic already and not at all what I'm talking about. I'm speaking of actual creativity. Pulling something entirely new from the imagination and bringing it into existence. Given all the data in the world and infinite processing power, no computer would have ever spit out Smells Like Teen Spirit in 1991. It was simply too different from everything else on the charts at the time. A human had to do it because a computer could never have understood what it was to be a GenX human being back then, realized something entirely new was called for, and created it.

    --
    My rights don't end where your fear begins.
    • (Score: 4, Insightful) by Wootery on Tuesday May 29 2018, @09:04AM (17 children)

      by Wootery (2341) on Tuesday May 29 2018, @09:04AM (#685501)

      'Understand' is not a concrete idea, it's a constantly shifting goal.

      Consider chess AI. A century ago, I'm sure people would have said that any machine that can beat the chess masters can surely be said to 'understand' the game. These days, people don't consider chess AI to 'understand' the game.

      The moving-goalposts thing is a common theme with AI. Whenever someone cracks a problem using AI techniques, the response is 'Well, that's a cute trick, but it's not real AI!' This happens decade after decade.

      'Understanding' and 'real AI' are not well-defined terms, they're just a reflection of public opinion.

      If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?

      likely never will be

      I see no reason to assume this. Our brains may be a very different sort of computer than human-built computers are, [youtube.com] but they're ultimately 'just' complex arrangements of molecules that process information. In principle at least, there's no reason to assume that we'll never be able to do with transistors, what we can already do with neurons. There's nothing magic about carbon, or about silicon.

      Another interesting take on this stuff is Dennett's idea of competence without comprehension [tufts.edu], but in this context, I'm not convinced that 'understanding' is meaningful in the first place.

      Perhaps we could define 'understanding' in terms of having a good command over the abstract concepts relating to a problem domain, but I don't think it's enough to dismiss the question with 'No special definitions needed'.

      • (Score: 2) by cubancigar11 on Tuesday May 29 2018, @02:58PM (6 children)

        by cubancigar11 (330) on Tuesday May 29 2018, @02:58PM (#685639) Homepage Journal

        I think a problem can be well defined as 'understood' when AI can change itself according to a shifting goalpost without any external intervention. Right now the best solutions out there pertaining to AI are based on neural networks (which, btw, I think is pretty neat in itself). For any of those to be called Real(TM) AI, they will need (A) a mechanism to know that the goalpost has shifted (B) find that goal post (C) surpass it before it is shifted again.

        As of now, (A) is solved and (B) is done by humans. Until that changes, (C) will not be the focus of the debate.

        • (Score: 2) by Wootery on Tuesday May 29 2018, @05:34PM (5 children)

          by Wootery (2341) on Tuesday May 29 2018, @05:34PM (#685738)

          I think a problem can be well defined as 'understood' when AI can change itself according to a shifting goalpost without any external intervention.

          That doesn't sound like 'understanding', it sounds more like online machine learning. [wikipedia.org]

          for any of those to be called Real(TM) AI, they will need (A) a mechanism to know that the goalpost has shifted (B) find that goal post (C) surpass it before it is shifted again.

          Sounds like a rather arbitrary threshold to set, and again, it seems to me that even the most primitive online machine learning algorithms already fit the bill, although they don't necessarily use the approach that you describe where there are discrete stages.
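
          To make that concrete, here's a toy sketch (illustrative Python only, not any real library's API): the learner is never told that the goalpost has moved; the same incremental update just keeps chasing it.

              import random

              # Toy online learner: track a drifting target using only noisy feedback.
              # There is no explicit "the goalpost has shifted" stage; the same
              # incremental update rule simply keeps chasing the target.
              target = 10.0        # the hidden "goalpost"
              estimate = 0.0       # the learner's current belief
              learning_rate = 0.1

              for step in range(1, 501):
                  if step % 100 == 0:
                      target += random.uniform(-5.0, 5.0)        # goalpost silently shifts

                  observation = target + random.gauss(0.0, 0.5)  # noisy feedback
                  estimate += learning_rate * (observation - estimate)

                  if step % 100 == 0:
                      print(f"step {step}: target={target:.2f}, estimate={estimate:.2f}")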

          • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @04:18AM (4 children)

            by cubancigar11 (330) on Wednesday May 30 2018, @04:18AM (#686090) Homepage Journal

            Not arbitrary at all! This is how we measure intelligence in fellow human beings! We don't think that a plumber is intelligent because he can turn a screw; we think of him as intelligent when he figures out a problem on his own and solves it to the expectation that was set before the problem was even discovered.

            • (Score: 2) by Wootery on Wednesday May 30 2018, @09:03AM (3 children)

              by Wootery (2341) on Wednesday May 30 2018, @09:03AM (#686178)

              This is how we measure intelligence in fellow human beings!

              Not really - we look at how good people are at solving problems. I don't see the need for any explicit model involving distinct stages and 'shifting goalposts'.

              • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @09:36AM (2 children)

                by cubancigar11 (330) on Wednesday May 30 2018, @09:36AM (#686193) Homepage Journal

                That is because we have an implicit understanding of what it means to be people. As much as you might dislike it, we have come to understand AI as having more than just how good it is at solving a particular problem. Otherwise a shell script can be better at solving particular problems than a human, yet it is not called AI.

                • (Score: 2) by Wootery on Wednesday May 30 2018, @10:29AM (1 child)

                  by Wootery (2341) on Wednesday May 30 2018, @10:29AM (#686206)

                  So you agree intelligence tests have no 3-stage model?

                  Otherwise a shell script can be better at solving particular problems than a human, yet it is not called AI.

                  You've not phrased that very clearly, but I think your point is that extremely simple programs can be more effective at solving certain problems than even very intelligent people.

                  This is certainly true. When it comes to memorising numbers, multiplying numbers, etc, even very humble computers easily outperform the most intelligent humans. (This is what Daniel Dennett calls competence without comprehension.)

                  When we say 'AI', we tend to mean either

                  1. Software that attempts to solve a problem that previously no software system has solved
                  2. Software that uses machine-learning in at least some way
                  3. 'General artificial intelligence', which roughly speaking tends to mean it has the generality of intelligence that a human has (rather fuzzy, and we'll ignore for now that it will presumably be better than any human at multiplying numbers)

                  A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

                  None of this tells us what 'understanding' means. The best definition I can think of is - as I mentioned elsewhere in this thread - the ability to reason about the abstract concepts related to a problem-domain. (Of course, no current software system is capable of doing this.)

                  That's not all that precise, but it seems like a good starting point. I don't see that it would be reasonable to define understanding to only ever apply to humans. That's just cheating.

                  • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @01:11PM

                    by cubancigar11 (330) on Wednesday May 30 2018, @01:11PM (#686239) Homepage Journal

                    So you agree intelligence tests have no 3-stage model?

                    Huh? How did you get that feeling? I am critiquing your following statement:

                    Not really - we look at how good people are at solving problems.

                    by pointing out that how good someone is at solving a problem is not at all related to how we measure intelligence. Intelligence is about how novel your solution is, not at all about how good that solution is.

                    When we say 'AI', we tend to mean either
                    1. Software that attempts to solve a problem that previously no software system has solved
                    2. Software that uses machine-learning in at least some way

                    Only if by 'we' you mean people working on AI. For me, 'we' means the general populace, and they don't care about machine learning.

                    A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

                    No, a pocket calculator doesn't qualify as AI because it doesn't have any way to recognize a problem and come up with a novel solution, or in colloquial terms, 'understand' the problem; or, to put it in a testable way, it doesn't follow the model I am suggesting.

                    I think I realize where the disagreement is originating - I am talking about why a layman doesn't consider something to be an AI and you are talking about something that has already been done in the name of AI.

      • (Score: 2) by The Mighty Buzzard on Tuesday May 29 2018, @03:24PM (6 children)

        Chess is a finite problem that requires nothing but time to solve utterly. Most of life is not.

        And we're not talking competence here. Competence simply requires looking at what everyone else is doing and parroting or solving a problem given a sufficiently exhaustive input data set. Intuition is what handles everything else.

        We really don't need to define understanding here unless you're a machine yourself. Computers are incapable of it because it requires more than executing op codes and reading memory addresses. Humans are not incapable; they're born capable, it simply takes them time to acquire it. Put in programming terms, every new event needs a method/function to deal with it. These have to be pre-programmed because you cannot release potentially dangerous machines into the wild with absolutely no idea what kind of solution they're going to come up with for a given problem. If they are pre-programmed, that's human understanding not machine understanding.

        Existence may technically be finite but it's finite beyond our ability to describe given our storage limitations.

        --
        My rights don't end where your fear begins.
        • (Score: 2) by maxwell demon on Tuesday May 29 2018, @05:56PM (1 child)

          by maxwell demon (1608) on Tuesday May 29 2018, @05:56PM (#685759) Journal

          Chess is a finite problem that requires nothing but time to solve utterly.

          Except that the time needed to completely solve it is long enough that the age of the universe seems like the blink of an eye in comparison.
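
          A back-of-the-envelope sketch (using Shannon's classic estimate of roughly 10^120 game variations, and a made-up, wildly generous evaluation rate):

              # Rough arithmetic: brute-forcing chess vs. the age of the universe.
              game_tree_size = 10**120               # Shannon's classic estimate of chess variations
              positions_per_second = 10**18          # wildly generous hypothetical machine
              universe_age_seconds = 4.35 * 10**17   # roughly 13.8 billion years

              seconds_needed = game_tree_size / positions_per_second
              print(f"universe lifetimes needed: {seconds_needed / universe_age_seconds:.2e}")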

          --
          The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by Wootery on Tuesday May 29 2018, @06:13PM (3 children)

          by Wootery (2341) on Tuesday May 29 2018, @06:13PM (#685769)

          Competence simply requires looking at what everyone else is doing and parroting or solving a problem given a sufficiently exhaustive input data set. Intuition is what handles everything else.

          Seems to me you've introduced another rather unclear and loaded term ('intuition'), and are gradually building a somewhat half-baked theory of mind.

          1. 'Parroting' isn't trivial, as we're seeing with non-'creative' AI tasks like self-driving cars, and voice-recognition
          2. There's no bright line between parroting and creativity. Does the work of an undergraduate math student count as 'creative'? Most people think not. How about a math researcher? They're both in the business of coming up with proofs they've not seen before, right? It's just that only the latter produces something worth publishing. Certainly, no current AI system could do the work of either. (Ignoring Gödel's incompleteness theorem for a moment.)
          3. By any sensible definition, every intelligent entity inspects a data-set, does some processing, and produces a result. Efficient intelligences do better than brute-force, and effective ones produce desirable results (by whatever metric). I don't need to define 'creativity' for that to be true: it's true of numerical addition, chess, and yes, of writing poetry, composing music, and battling it out on Soylent News :P

          We really don't need to define understanding here unless you're a machine yourself.

          Disagree. If we're not going to give it a reasonably clear definition, we can't meaningfully reason about it.

          Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

          That doesn't sound right. In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

          The earliest computer scientists never dreamed that computers could do what they do today. Let's not constrain ourselves to the limitations of modern hardware and software, when we're really concerned with the principle.

          Humans are not incapable; they're born capable, it simply takes them time to acquire

          So we humans are stateful machines capable of online machine learning, and we're pre-programmed with certain biases? Well sure. But I'm not seeing the 'in principle' difference between us and computers.

          These have to be pre-programmed because you cannot release potentially dangerous machines into the wild with absolutely no idea what kind of solution they're going to come up with for a given problem.

          People do that all the time. They're called 'parents'. We end up assigning moral responsibility to their new 'release', of course. Sometimes we even ritualise doing so. [wikipedia.org]

          Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

          Humans are very effective general learning machines. In principle, computers are capable of everything we're capable of. The universe supports the functionality of the human brain (obviously), and there's nothing magic about our substrate (neurons) vs theirs (transistors).

          Or do you really think that it would be impossible, even just in principle, to simulate the human brain using transistors?

          We already know that the inverse is possible: brains can simulate computers, it just takes an impractically long time. (That's why we build the things, after all.)

          If they are pre-programmed, that's human understanding not machine understanding.

          The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

          Existence may technically be finite but it's finite beyond our ability to describe given our storage limitations.

          I don't get you.

          • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:34PM (2 children)

            In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

            No. The programmers would have to understand precisely how the human brain worked for that to be possible. Neuroscientists don't even have this level of understanding at the moment. And if you don't think it needs to be precise, ask the fine folks over at Wine [winehq.org] to school you on emulation.

            Well sure. But I'm not seeing the 'in principle' difference between us and computers.

            Which is what I'd been trying to correct. It appears I've failed. Such is life.

            Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

            Abso-fucking-lutely. Try running mesa in software rendering mode sometime and you'll see why. Or try running a PS3 emulator on an x86 processor that's only clocked twice as fast as the original hardware. Apples and oranges matters a hell of a lot.

            The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

            No. The appearance of understanding is not the same as understanding. You can teach any fool enough of a raindance to be able to wire their house for electricity without teaching them why they're doing what they're doing. You're not going to let them loose as a licensed electrician like that though because they're either going to die or cause other people to die. Thus the apprenticeship and licensing requirements for electricians; we demand that they understand, not just ape what they see others doing. I'm not saying there's no utility to be found in teaching a machine a raindance or letting it raindance things of little importance or danger on its own but the capability is simply not the same, only the outcome.

            I don't get you.

            No, you don't. Which is why you should not be monkeying with AI. Ever.

            --
            My rights don't end where your fear begins.
            • (Score: 2) by Wootery on Wednesday May 30 2018, @02:09PM (1 child)

              by Wootery (2341) on Wednesday May 30 2018, @02:09PM (#686268)

              Come on Buzz, I was quite clear: in principle. It gets us nowhere for you to write about how difficult it would be. Those points are short-sighted and, frankly, rather obvious. You may as well remind me that no-one has yet successfully simulated a human brain.

              It remains that there is no reason in principle that transistors can't do what neurons can.

              Do you really want to assert that this is beyond the capability of any Turing-complete machine? It seems absurd. It's a physical process. You think physical processes can't be modelled by sufficiently powerful Turing-complete systems? We're not talking about nondeterministic quantum phenomena here.

              Do you think it would be impossible even in principle for any computer system to simulate the brain of a wasp? It's the same physical process at work, just in miniature.

              It appears I've failed.

              You've given me no good reason to believe that the physical processes of the brain follow special rules which are beyond the capabilities of any hypothetical Turing-complete computer, regardless of power.

              This is an extraordinary claim, but you've got nothing to support it.

              Try running mesa in software rendering mode sometime and you'll see why

              Again, you're writing about the performance challenges that face the computer systems of today. What's relevant is whether the computational simulation of a brain is possible in principle, and there seems to me to be every reason to think that the answer is yes: modern computers are quite capable of modelling physical processes, so I see no reason to assume the physical processes of the brain are categorically impossible to model computationally.

              The appearance of understanding is not the same as understanding.

              'Appearance' means behaviour. You're going with a definition of 'understanding' which isn't based on how something acts? I'm reminded of the Chinese Room Argument. You need to explain what you do mean by 'understanding' (as does Searle for that matter).

              the capability is simply not the same, only the outcome

              If two things are equally able to bring about some desired outcome, we say they have equal capability. When discussing capability, we don't care if they have brains or not.

              I remind you of what you said earlier:

              We really don't need to define understanding here unless you're a machine yourself. Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

              Are you saying that 'real understanding' needs consciousness? If so, just say so.

              You seem to be saying that no computer system can be said to have 'understanding', even if it behaves exactly the same way a human behaves. Presumably then you think physical brains are metaphysically magical? No matter what computers do, it still doesn't count!

              • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @05:34PM

                In theory, cracking 2^1024-bit EC encryption is possible. You'd just need a universe that was going to last a lot longer than ours. Now I don't dislike science fiction but I do prefer to leave it on novel pages or a screen until something gives us an idea that it might actually be possible. Neither current nor proposed hardware has given us any indication of being able to approximate human intelligence. Computers are extremely good at being high-speed idiots but extremely bad at being anything else.
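
                Back of the envelope (a rough sketch with a made-up, absurdly generous guess rate, ignoring anything smarter than brute force, and using a mere 256-bit keyspace just to keep the numbers small):

                    # Rough arithmetic: exhaustive key search vs. the age of the universe.
                    keyspace = 2**256                      # far smaller than the keyspace above
                    guesses_per_second = 10**18            # absurdly generous hypothetical rate
                    universe_age_seconds = 4.35 * 10**17   # roughly 13.8 billion years

                    seconds_needed = keyspace / guesses_per_second
                    print(f"universe lifetimes needed: {seconds_needed / universe_age_seconds:.2e}")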

                --
                My rights don't end where your fear begins.
      • (Score: 2) by Gaaark on Wednesday May 30 2018, @03:35PM (2 children)

        by Gaaark (41) on Wednesday May 30 2018, @03:35PM (#686308) Journal

        "If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?"

        No. Can it come up with something that 'pre-supposes' something else?
        Could it predict geometry?
        Could it predict black holes?
        Could it come up, completely without human intervention, with the idea that if gravity is thought of as a piece of rubber with an object placed on it and another object rolled around the first object, then physics as we know it would be completely changed, and that from that we could create satellites that orbit planets and we could even travel inside and outside our solar system?

        We might be approaching artificial intelligence if it could predict something complex and not predicted before.

        Otherwise, and so far, we are just using 'artificial learning'.
        A + B gives us C.
        C + B gives us A.
        Justin Bieber + Internet gives us Crap.

        There needs to be a fundamental LEAP before there is AI. A holy grail is missing... cue the Spamalot.

        Back to: "If someone wrote a program that was as good at math as the greatest living math professor, would you accept that it 'understands' mathematics?"
        No.
        They've done this with chess, and all it is is a chess program with a fuck load of computational power behind it.
        Could the same program realize that chess has a lot to do with logic and prediction and, from playing chess, realize that man could leave his own planet?

        Intelligence is figuring things out, making predictions, understanding things, creating things, knowing that life is complex and not simply black and white.
        The AI we have now is
        "the program says if A and B exist, then C or D is the answer unless E can be considered as an extraneous data then F."

        There is no A and B exist but F is there as well so C and D are not the answer: what is the answer? We have to find G or H or I. How do we find out whether it is G, H or I? Let's work through all the possibilities: to do this, we need to put a satellite into orbit around J and see if it clarifies G, H or I. No? But almost....maybe a satellite that could extract K could help?
        Yes it does: the extraction of data about K gives us G or H, eliminating I, so how do we decide G or H?
        We need to land a rover on J.
        DAMN! We got G here! G! Shit, we got G and nothing but G!

        All that from a guess.

        Can an AI go through all that and come up with an answer without human intervention? Then maybe you have AI.
        I (for me, myself and I) would be happy with that.

        Guess, predict, check, reguess, recheck....predict....check....change check....re-predict.... change check....re-predict...correct answer!

        Would be damn close.
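
        The bare skeleton of that loop is trivial to write down (a toy sketch: guessing a hidden number from nothing but 'too high'/'too low' feedback). Everything interesting is what it leaves out: inventing the guesses and the experiments in the first place.

            import random

            # Toy guess / predict / check / re-guess loop: home in on a hidden value
            # using only "too high" / "too low" feedback. The loop itself is trivial;
            # coming up with the hypotheses and the experiments is the part that isn't.
            hidden = random.randint(1, 1000)
            low, high = 1, 1000

            for attempt in range(1, 20):
                guess = (low + high) // 2          # guess / predict
                if guess == hidden:                # check
                    print(f"got it: {guess} after {attempt} attempts")
                    break
                elif guess < hidden:               # re-guess with what we learned
                    low = guess + 1
                else:
                    high = guess - 1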

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
        • (Score: 2) by Wootery on Wednesday May 30 2018, @05:16PM (1 child)

          by Wootery (2341) on Wednesday May 30 2018, @05:16PM (#686356)

          No. Can it come up with something that 'pre-supposes' something else?

          Following the line I'm going down with Buzz elsewhere in the thread: let's say yes, it essentially just simulates the world's best living math professor, complete with the occasional faulty intuition.

          (This is of course an extremely unlikely 'first AI', but it's a fun thought experiment.)

          We might be approaching artificial intelligence if it could predict something complex and not predicted before.

          Well, we've definitely got a very impressive AI if it can write a serious mathematical research paper.

          I think I was rather unclear when I put 'good at math' - I hadn't meant procedural number-crunching, I'd meant research mathematics.

          I agree with your 'figuring things out' description, though of course it's imprecise. I agree that today's 'AI' is nowhere near the sort of 'strong AI' that we're talking about here. Today's AIs cannot reason about abstract concepts at all. No present-day AI could make sense of our conversation here and make a meaningful contribution. It's not even on the horizon.

          Another thing they're bad at is problems that use a general knowledge of things in the world. If we see a picture of a dog next to a shredded sofa, we immediately assume That bad dog has shredded the sofa! We'd never think to explain the picture with The dog has dragged that damaged sofa into the room, neither would we ever suggest that The sofa fought the dog, but lost. We intuitively know these are poor explanations, as we know how dogs and sofas 'behave'.

          The best that today's AIs could manage would be to identify the objects. They can't reason about the events that likely led up to the depicted situation, neither can they reason about the likely future (dog getting told off by its master, sofa getting repaired or replaced, etc).

          Guess, predict, check, reguess, recheck....

          I broadly agree with your points, but that's rather imprecise. Self-play with chess 'AIs' involves that sort of process, but we wouldn't say they're strong, general AIs. It's the generality that's the kicker.

          • (Score: 2) by Gaaark on Wednesday May 30 2018, @05:48PM

            by Gaaark (41) on Wednesday May 30 2018, @05:48PM (#686377) Journal

            Okay, we're on the same page:
            Except "Self-play with chess 'AIs' involves that sort of process, but we wouldn't say they're strong, general AIs."

            It doesn't involve that sort of process: it is just as the programmers programmed it. There is no guess and prediction, only "with that move, my best move is here because my programming says THIS. OH! He moved here! My best move is then this because my programming says so." At no point is there a bluff, or a confusion, only "my best move, as per my program, is this."
            Basically an expanded 1+X=

            When it can say "Gaaark will do this because his son did this and his wife might do this because his daughter is maybe doing this...but wait! The in-laws want THIS and Gaaark's parents want him to .... for their 60th wedding anniversary, and his sister in Alberta is planning...."

            If your AI can help me... GIMME!!!!

            --
            --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 1) by mmarujo on Wednesday May 30 2018, @11:21AM (1 child)

      by mmarujo (347) on Wednesday May 30 2018, @11:21AM (#686217)

      No special definitions needed.

      We agree on this, but from different viewpoints.

      Personally, I don't believe in a soul/ethereal/energy/whatever being: we are material. If we are, then it's possible to some day compute a simulation.

      I don't know / don't really care what about us / our brain allows us to understand, but sooner or later it will be possible to emulate. Besides, most work won't even require it.

      • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:46PM

        ...but sooner or later it will be possible to emulate.

        I disagree. I don't think any current or even theorized processor technology is capable of running software to emulate such radically different hardware. I'm not even sure the human race is capable of understanding the workings of their brains well enough to code such an emulator.

        Besides, most work won't even require it.

        Oh, sure. You don't need an electrician to change a light bulb. You damned sure want one to look over any property you plan on buying though. If a raindance is sufficient, a raindance is sufficient. If understanding is necessary, understanding is necessary.

        --
        My rights don't end where your fear begins.