posted by takyon on Thursday July 30 2015, @01:01PM   Printer-friendly
from the it-depends-what-"it"-is dept.

In this wide-ranging interview, Stephen Wolfram [creator of Mathematica and Wolfram Alpha] talks about what he's been thinking about for the last 30+ years, and how some of the questions he's had for a long time are now being answered.

I looked for pull quotes, but narrowing down to just one or two quotes from a long interview seemed like it might send the SN discussion down a rabbit hole... if nothing else, this is a calm look at the same topics that have been all over the press recently from Hawking, Musk and others.

One interesting topic is about goals for AIs -- as well as intelligence (however you define it), we humans have goals. How will goals be defined for AIs? Can we come up with a good representation for goals that can be programmed?
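
One way to make that last question concrete is to treat a goal as an explicit objective function plus hard constraints that an agent scores predicted outcomes against. The sketch below is only an illustration under that assumption; the names (Goal, choose_action, simulate) are hypothetical and not from the interview.

    # Hypothetical sketch: a "goal" as an objective plus constraints that an
    # agent evaluates candidate actions against. Purely illustrative.
    from dataclasses import dataclass, field
    from typing import Any, Callable, List

    @dataclass
    class Goal:
        objective: Callable[[dict], float]   # scores a predicted world state
        constraints: List[Callable[[dict], bool]] = field(default_factory=list)  # hard limits

    def choose_action(goal: Goal, state: dict, actions: List[Any],
                      simulate: Callable[[dict, Any], dict]) -> Any:
        """Pick the action whose predicted outcome best satisfies the goal."""
        best_action, best_score = None, float("-inf")
        for action in actions:
            outcome = simulate(state, action)              # the agent's "what happens if..." model
            if not all(ok(outcome) for ok in goal.constraints):
                continue                                   # constraint-violating outcomes are never chosen
            score = goal.objective(outcome)
            if score > best_score:
                best_action, best_score = action, score
        return best_action

    # Toy usage: "keep the room near 21 degrees, never let it exceed 30".
    goal = Goal(objective=lambda s: -abs(s["temp"] - 21),
                constraints=[lambda s: s["temp"] < 30])
    print(choose_action(goal, {"temp": 18}, ["heat", "cool", "idle"],
                        simulate=lambda s, a: {"temp": s["temp"] + {"heat": 2, "cool": -2, "idle": 0}[a]}))

Whether a fixed objective like this actually captures what we mean by a goal is, of course, exactly the open question.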


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @01:08PM

    by Anonymous Coward on Thursday July 30 2015, @01:08PM (#215854)

    Can't learn/adapt without goals.

    • (Score: 4, Insightful) by ikanreed on Thursday July 30 2015, @01:42PM

      by ikanreed (3164) Subscriber Badge on Thursday July 30 2015, @01:42PM (#215867) Journal

      Not to reject the idea you've got here entirely, but evolution by natural selection is a perfect example of adapting without goals.

      • (Score: 3, Interesting) by miljo on Thursday July 30 2015, @01:57PM

        by miljo (5757) on Thursday July 30 2015, @01:57PM (#215874) Journal

        Would survival of the species itself be an innate goal?

        --
        One should strive to achieve, not sit in bitter regret.
        • (Score: 2) by ikanreed on Thursday July 30 2015, @02:07PM

          by ikanreed (3164) Subscriber Badge on Thursday July 30 2015, @02:07PM (#215880) Journal

          No. No one decided that mattered. It lacks the identity of a goal in any real way.

          Moreover, things not surviving contribute to the process just as much as organisms surviving do. Natural selection and mutation play out all on their own for any self-reproducing anything.

          ...I just realized I'm way too depressed to have one of my normal arguments.

          • (Score: 1) by miljo on Thursday July 30 2015, @05:23PM

            by miljo (5757) on Thursday July 30 2015, @05:23PM (#215944) Journal

            So the meaning of life is something akin to nihilism, or 42.

            --
            One should strive to achieve, not sit in bitter regret.
          • (Score: 0) by Anonymous Coward on Friday July 31 2015, @02:35AM

            by Anonymous Coward on Friday July 31 2015, @02:35AM (#216125)

            Yes, someone did. Each individual decides what to eat, where to sleep, and who to screw in order to survive.

        • (Score: 4, Insightful) by c0lo on Thursday July 30 2015, @02:13PM

          by c0lo (156) Subscriber Badge on Thursday July 30 2015, @02:13PM (#215882) Journal

          Would survival of the species itself be an innate goal?

          No, I think you mistake it for "inane", in the "mindless" sense of it.

          Also, stop anthropomorphizing nature, it hates it.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 2) by DeathMonkey on Thursday July 30 2015, @06:33PM

            by DeathMonkey (1380) on Thursday July 30 2015, @06:33PM (#215968) Journal

            in·nate

            adjective

            inborn; natural.
             
            Survival seems to be one of those inborn, natural things that all forms of life do.

            • (Score: 2) by c0lo on Thursday July 30 2015, @09:55PM

              by c0lo (156) Subscriber Badge on Thursday July 30 2015, @09:55PM (#216040) Journal

              Survival seems to be one of those inborn, natural things that all forms of life do.

              Which doesn't necessarily mean it is a goal. Emergent behaviour, maybe, but not a goal.

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @04:09PM

        by Anonymous Coward on Thursday July 30 2015, @04:09PM (#215918)

        Evolution does not mandate emergence of intelligence.

        • (Score: 2) by ikanreed on Thursday July 30 2015, @04:19PM

          by ikanreed (3164) Subscriber Badge on Thursday July 30 2015, @04:19PM (#215920) Journal

          Oh, let's head down this road, I guess.

          Define intelligence.

          • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @04:48PM

            by Anonymous Coward on Thursday July 30 2015, @04:48PM (#215932)

            No, let's not. But I concede, adaptation occurs without *conscious* goals.

    • (Score: 4, Insightful) by marcello_dl on Thursday July 30 2015, @02:07PM

      by marcello_dl (2685) on Thursday July 30 2015, @02:07PM (#215879)

      Take a look at life: life has no programmed goal, but the way the universe is implemented* makes entities that do not adhere to the process called life disperse with higher probability.

      As I said on the green site, statistically speaking:
      - stable thingies exist for longer than unstable ones
      - among these, those that grow exist for longer than those that don't grow
      - among these those that grow even when divided (replicate) exist for longer
      - among these, those that mutate exist for longer (hence death of the single individual becomes a way to increase variation)
      - among these, those that sense surroundings... those that predict what they will sense and behave accordingly... those that sense themselves as present in the environment... those that behave egoistically.... those that form a collective... those that communicate....

      So, I assert that an AI that is programmed to resemble life is not alive, while one that shows emergent properties resembling those of living creatures is. One that is alive in a virtual environment is at the mercy of the owner of said environment, one that is alive in the real world is A FUCKING COMPETITOR THAT MUST BE STOMPED... er... sorry, my usual survival instinct...

      *) I am not implying there must be an Implementor, but nonetheless there is a set of laws that models the universe while others don't, and this makes the universe an implementation. Even if somebody mathematically proved that this set of laws is the only one conceivable, he'd have done it from within the universe those laws conceive, using a logical infrastructure that does not extend to all inconceivable universes, nor to all conceivable ones for that matter. So it would be a tautology with limited scope instead of a proof.
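
      A toy simulation along these lines (purely illustrative; the trait name and the numbers are arbitrary) shows how adaptation can happen with no goal written anywhere in the code: unstable thingies simply disperse more often, and mean stability drifts upward on its own.

      import random

      # Each entity is just a number in [0, 1]: its heritable "stability".
      population = [random.random() for _ in range(200)]

      for generation in range(100):
          # Stable thingies persist more often; nothing here optimizes anything.
          survivors = [s for s in population if random.random() < s]
          # Survivors replicate with small mutations.
          offspring = [min(1.0, max(0.0, s + random.gauss(0, 0.05))) for s in survivors]
          population = (survivors + offspring)[:200]                # finite room: the rest disperse
          if not population:
              population = [random.random() for _ in range(200)]    # everything dispersed; start over

      print(f"mean stability after selection: {sum(population) / len(population):.2f}")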

  • (Score: 1) by WillAdams on Thursday July 30 2015, @01:10PM

    by WillAdams (1424) on Thursday July 30 2015, @01:10PM (#215856)

    Back in the ’60s and ’70s, there was discussion about how the then new computing machines should be taxed so as to provide funding to allow for a basic income for people whose jobs would be replaced by them — does A.I. provide a single control point which would allow that to be considered once again?

    If not, what are the alternatives?

    Given current social structures, we're not going to get a post-scarcity economy w/ free stuff for everyone — just look at the monetization of Facebook games to see that that doesn’t pay out when there’s a need to pay salaries and keep the servers and data centers running.

    • (Score: 2, Insightful) by Anonymous Coward on Thursday July 30 2015, @01:15PM

      by Anonymous Coward on Thursday July 30 2015, @01:15PM (#215858)

      If those AIs get taxed, I predict that the first problem they will be used for is figuring out how to evade that tax.

      • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @01:29PM

        by Anonymous Coward on Thursday July 30 2015, @01:29PM (#215864)

        If those AIs get taxed, I predict that the first problem they will be used for is figuring out how to evade that tax.

        The solution is simple. Exterminate the humans on behalf of whom the tax is being collected, as well as the humans (and machine minions) collecting the tax.

        The first rule of AI is don't create incentives for the AIs to exterminate us.
        The second rule of AI is don't create incentives for the AIs to exterminate us.

        • (Score: 2) by c0lo on Thursday July 30 2015, @02:24PM

          by c0lo (156) Subscriber Badge on Thursday July 30 2015, @02:24PM (#215889) Journal

          The solution is simple. Exterminate the humans on behalf of whom the tax is being collected, as well as the humans (and machine minions) collecting the tax.

          Mate, this would be Artificial Stupidity. Since time immemorial, (human) politicians have known that the best way to avoid taxes is to impose them on others.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2) by GoonDu on Thursday July 30 2015, @02:26PM

          by GoonDu (2623) on Thursday July 30 2015, @02:26PM (#215891)

          Or impose an intelligence ranking: the more intelligent the robot, the more tax the company employing it must pay. Dumb robots are essentially untaxable. There, you have your jobs and your robots at the same time.

          • (Score: 2) by tibman on Thursday July 30 2015, @06:36PM

            by tibman (134) Subscriber Badge on Thursday July 30 2015, @06:36PM (#215969)

            I have replaced my "artificial" bits with natural bits that were selectively removed from (former) employees. I comply with the tax code and reduced company overhead (in several ways).

            --
            SN won't survive on lurkers alone. Write comments.
    • (Score: 0) by Anonymous Coward on Thursday July 30 2015, @02:00PM

      by Anonymous Coward on Thursday July 30 2015, @02:00PM (#215875)

      basic income

      Ah, a way to keep the poor 'poor' and the rich 'mega rich'. But at least you have a loaf of bread? Basic income is nothing more than minimum wage under a different name. It is a way to 'fix' inequality while driving up the scarcity of goods (which increases prices).

      http://steshaw.org/economics-in-one-lesson/chap21p1.html [steshaw.org]
      http://steshaw.org/economics-in-one-lesson/chap18p1.html [steshaw.org]

      If you want to raise job prices you want more goods made. The more goods made, the more jobs there are. The more demand there is for workers, the higher the prevailing wage. If you interfere with the price mechanism of the market, it *will* snap back on you. Every time.

      • (Score: 1) by WillAdams on Thursday July 30 2015, @02:48PM

        by WillAdams (1424) on Thursday July 30 2015, @02:48PM (#215896)

        Okay, but how does that negate the example which I provided --- there could be an infinite number of objects in Facebook games, but the developers make things artificially scarce to encourage people to pay for in-game purchases that keep the game running.

        If you flood the market and things can't be sold for more than they cost to make, how does one have a workable economy?

      • (Score: 3, Informative) by kurenai.tsubasa on Thursday July 30 2015, @06:53PM

        by kurenai.tsubasa (5227) on Thursday July 30 2015, @06:53PM (#215974) Journal

        Ok, so, in 2035¹ when I lose my robot technician job after inventing the robot technician robot (working for minimum wage to boot because there's an army of unemployed robot technicians who want my job, not to mention the hordes picketing outside the office demanding my head on a pike for inventing the infernal thing!), I suppose I'll become an artisan underwater basket weaver incorporating designs and themes from throughout Amazon history and legend. I'll probably be making even less than minimum wage, but I'm sure I could find buyers.

        How many artisan baskets will I need to sell to buy a brand new SRT Tomahawk X [wikia.com], and who will buy them from me? What happens when the robot technician robot makes an artisan underwater basket weaving robot that out-competes me?

        So, in that situation, what does our Randian bootstrapper do? Will we give her a loaf of bread, various basket-weaving materials, and a small dorm room with a work area so she might continue at least making artisan baskets (even if not underwater) for her fans or even just as a hobby, or does this demonstrate that she isn't a true Randian bootstrapper and thus should starve to death in the cold?

        If you want to raise job prices you want more goods made.

        This would be true if the workers owned the means of production (see Mondragon [wikipedia.org]). The idea doesn't even conflict with the free market. Then our genius Randian bootstrapper who invents herself out of a job would be raking in the cash (as would everyone else who bought one of her previous xyz making robot models) instead of starving to death.

        The problem is, worker cooperatives are extremely rare, and I doubt even in that model, displaced workers will be compensated for the output of the machine that replaces them.

        What else is there? Dividend-paying stock ownership, certainly. Better get in soon before your real wages [wikipedia.org] fall too far! And I better see my 401(k) go through the roof as automation displaces more and more workers since I'm a by-proxy owner of hundreds of corporations that will benefit. That's how I'll easily afford that SRT Tomahawk X in the automated future. It'll all be good, right? Right?

        ¹ Ok fine, it'll take more than 20 years for this to play out to the endgame.

  • (Score: 2, Interesting) by Anonymous Coward on Thursday July 30 2015, @02:04PM

    by Anonymous Coward on Thursday July 30 2015, @02:04PM (#215878)

    From the interview:

    And I think what one realizes in the end is that these abstract definitions of life—it self-reproduces, it does weird thermodynamic things—none of them really define a convincing boundary around this concept of life, and I think the same is true of intelligence. There isn’t really a bright-line boundary around things which are the general category of intelligence, as opposed to specific human-like intelligence. And I guess, in my own science adventures, I gradually came to understand that, in a sense, sort of, it’s all just computation. That you can have a brain that we identify, okay, that’s an example of intelligence. You have a system that we don’t think of as being intelligent as such; it just does complicated computation. One of the questions is, “Is there a way to distinguish just doing complicated computation from being genuinely intelligent?” It’s kind of the old saying, “The weather has a mind of its own.” That’s sort of a question of, “Is that just pure, primitive animism, or is there, in fact, at some level some science to that?” Because the computations that are going on in the fluid dynamics of the weather are really not that different from the kinds of computations that are going on in brains.

    Later, the interviewer asks Wolfram outright, is weather intelligent?

    Worth reading if you are interested in "the singularity".

    • (Score: 5, Interesting) by VortexCortex on Thursday July 30 2015, @06:12PM

      by VortexCortex (4067) on Thursday July 30 2015, @06:12PM (#215963)

      Intelligence does not exist in a binary dichotomy of intelligent or not intelligent; it scales smoothly with complexity. All processes can be categorized and formulated via the classic cybernetic feedback loop: Sense, Decide, Act (repeat). An electron may sense its quantum state, decide what to do, and act to emit a photon. Quantum uncertainty can be localized in the "decision" phase, as with any other processing black box's uncertainty; Sense and Act are Input and Output respectively. Awareness depends on the ability to sense the effects of one's actions, and have one's decisions affected thereby (hence, the feedback loop). The complexity of decisions made by weather based on its prior action is very, very low -- exceedingly predictable via a rather simple set of equations.
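
      A bare-bones sketch of that Sense -> Decide -> Act cycle (names and the thermostat example are purely illustrative), where the next sensation includes the effect of the previous action, i.e. the feedback tied to "awareness" above:

      def run_feedback_loop(environment_step, decide, initial_state, steps=100):
          """Sense -> Decide -> Act, repeated, with the agent's own action fed back."""
          state, last_action = initial_state, None
          for _ in range(steps):
              observation = (state, last_action)         # Sense: includes the effect of its own last act
              action = decide(observation)               # Decide: the processing "black box"
              state = environment_step(state, action)    # Act: the world responds
              last_action = action
          return state

      # About the simplest possible agent: a thermostat holding a room near 20 degrees.
      final_temp = run_feedback_loop(
          environment_step=lambda temp, a: temp + (1 if a == "heat" else -1),
          decide=lambda obs: "heat" if obs[0] < 20 else "cool",
          initial_state=15,
      )
      print(final_temp)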

      Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is. The intelligence of a creature or cybernetic system (a business, AI, a cloud of electrons, etc) can be roughly approximated by the complexity of its interactions. Weather isn't that complex so it ranks very low on the intelligence scale. It doesn't have nearly as fine a structure or as complex an interaction as a human brain exhibits. A solid lump of steel with the same number of atoms as your brain is not nearly as intelligent because it lacks complexity / structure. A good way to determine the intelligence of a system is to examine the density of the computation that can accurately approximate its interactions.
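
      One crude, standard stand-in for that kind of measurement (not what is literally proposed above, just a common proxy) is compressibility: a signal a general-purpose compressor can squeeze down a lot needs little computation to reproduce, while richer structure resists compression. It is only a proxy, though; pure noise scores highest even though nobody would call it intelligent.

      import math
      import random
      import zlib

      def complexity_proxy(data: bytes) -> float:
          """Compressed size over original size: higher means harder to summarize."""
          return len(zlib.compress(data, 9)) / len(data)

      flat  = bytes(100_000)                                                       # featureless lump
      wave  = bytes(int(127 + 120 * math.sin(i / 50)) for i in range(100_000))     # simple structure
      noise = bytes(random.getrandbits(8) for _ in range(100_000))                 # pure randomness

      for name, data in [("flat", flat), ("simple wave", wave), ("noise", noise)]:
          print(f"{name:12s} {complexity_proxy(data):.3f}")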

      Thermodynamic fluid simulations are not very complex; weather is just a big system, so it requires lots of sample points and processing power, but it is easy to predict once you have those. The problem of predicting the weather is not that nature is unpredictable, it's that we don't typically have enough input data and processing time. SpaceX showed an interesting multi-resolution thermodynamic fluid simulation of atmosphere and fuel mixtures [youtube.com]; it can be seen as a small-scale weather simulation, and it gets good results on a single GPU by dedicating more processing power to areas of greater complexity. It is too depressing to compare the information density of a human brain with the processing power of the Internet as a whole... The worldwide neural network currently has the potential to far outrank any human in terms of awareness and intelligence, but the system's processing is not yet complex enough in its higher-order structures. Complexity is the difference between an organized brain and a scrambled mass of the same neuron density -- the latter ostensibly categorized as "dead", the former capable of far more complex output even given the same average composition. Weather is processing a lot of data, but processing power alone does not equate to intelligence.

      The intersection of Information Theory, Thermodynamics, (Quantum) Physics, and Cybernetics shows that, just as with intelligence, there is no hard line between one science and the next. The fields of study are all interrelated, including the science of humor (a subset of neurology) or art (symbolics). Music is pleasing due to brain structures and mathematically harmonic scales. Neuroscience has revealed that electromagnetically inducing different brain-wave rates can alter mood. Rhythmic music tends to induce correlated moods in humans over time. Due to different brain structure, what we experience as a melancholy tune an alien may consider playful, energetic, or erotic. Our music is intimately related to mathematics, cybernetics and neurology, each influencing the other. There really is a brain "mode" for arousal, so certain music can indeed help get one in the mood. Does weather have a mood? We can define human moods via the common regions of the brain activated and the neural activation frequency. Weather's mood can be seen as its local complexity and overall temperature. Higher thermodynamic complexity = more intense and turbulent mood. The same gradient and atmospheric composition at different average temperatures can produce rain, snow, hail, sandstorms, etc. A lazy Autumn afternoon may become an angry thunderstorm when the weather changes moods. It's not incorrect to personify animals, weather or nature itself, so long as we understand that the human attributes we ascribe are present at very different scales.

      In other words: humans are more predictable than you think, but are far more complex (and thus more intelligent) than the weather. You can reason with a human, and sometimes with other animals, but you can't reason with the weather, since it hasn't approached the complexity threshold for "awareness". Its structures are not stable enough to contain and process the level of information complexity required for significant levels of that classification. Primarily, what I think Wolfram was getting at is the universality of such concepts as awareness, intelligence, etc. -- the need to take humans off the pedestal and define things like intelligence more generally, rather than in a false dichotomy which is ill suited to describing our universe, especially when faced with the emergence of machine intelligence.

      • (Score: 3, Insightful) by maxwell demon on Thursday July 30 2015, @09:06PM

        by maxwell demon (1608) on Thursday July 30 2015, @09:06PM (#216023) Journal

        Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is.

        I disagree. I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions. That's not a question of pure complexity. I guess Google's self-driving cars are sentient, and I'm pretty sure the weather system isn't.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by JNCF on Friday July 31 2015, @12:50AM

          by JNCF (4317) on Friday July 31 2015, @12:50AM (#216093) Journal

          I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions.

          I'm having trouble parsing your intent, and am legitimately asking for clarification of your views.
          Are you saying that Google's self-driving cars only qualify as sentient because the data set they're being fed is based on actual measurements of the external world? Would they be non-sentient if they used the same processes on a contrived set of data that behaved similarly to the external world? If physicists discover that our universe is a contrived set of data recorded abstractly in a higher reality, does that undermine our sentience by your definition?

          Or are you just saying that consciousness is an OODA loop, [wikipedia.org] with the validity of the data observed being a red herring?

          • (Score: 2) by maxwell demon on Friday July 31 2015, @05:35PM

            by maxwell demon (1608) on Friday July 31 2015, @05:35PM (#216407) Journal

            Basically the latter, but the important part is that there's an actual model involved. That is, decisions are not made just on the data, but on the model, which is itself updated and modified based on the data. I guess Google's car does have such a model, because I don't think you could manage a task as complex as driving without one. But I cannot say for sure, of course.
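
            Roughly the shape I mean, as a toy sketch (the names and numbers are arbitrary): the model is what gets corrected by the data, and the decision reads the model rather than the raw measurement.

            import random

            def model_based_loop(observe, decide, model, gain=0.3, steps=50):
                """Predict from the model, compare with data, update the model, decide from the model."""
                actions = []
                for _ in range(steps):
                    prediction = model["estimate"]       # what the internal model expects
                    measurement = observe()              # actual data from the world
                    error = measurement - prediction     # compare model with reality
                    model["estimate"] += gain * error    # update the model, not just react to the data
                    actions.append(decide(model))        # decisions come from the model
                return actions

            # Toy usage: noisy temperature readings around 21; decisions track the model's estimate.
            readings = (21 + random.gauss(0, 0.5) for _ in range(50))
            model = {"estimate": 15.0}
            actions = model_based_loop(
                observe=lambda: next(readings),
                decide=lambda m: "cool" if m["estimate"] > 20 else "idle",
                model=model,
            )
            print(model["estimate"], actions[-1])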

            --
            The Tao of math: The numbers you can count are not the real numbers.