
posted by takyon on Thursday July 30 2015, @01:01PM
from the it-depends-what-"it"-is dept.

In this wide-ranging interview, Stephen Wolfram [creator of Mathematica and Wolfram Alpha] talks about what he's been thinking about for the last 30+ years, and how some of the questions he's had for a long time are now being answered.

I looked for pull quotes, but narrowing down to just one or two quotes from a long interview seemed like it might send the SN discussion down a rabbit hole... if nothing else, this is a calm look at the same topics that have been all over the press recently from Hawking, Musk and others.

One interesting topic is goals for AIs -- as well as intelligence (however you define it), we humans have goals. How will goals be defined for AIs? Can we come up with a good representation for goals that can be programmed?


Original Submission

 
  • (Score: 5, Interesting) by VortexCortex on Thursday July 30 2015, @06:12PM

    by VortexCortex (4067) on Thursday July 30 2015, @06:12PM (#215963)

    Intelligence does not exist in a binary dichotomy of intelligent or not intelligent; it scales smoothly with complexity. All processes can be categorized and formulated via the classic cybernetic feedback loop: Sense, Decide, Act (repeat). An electron may sense its quantum state, decide what to do, and act to emit a photon. Quantum uncertainty can be localized in the "decision" phase, as with any other processing black box's uncertainty; Sense and Act are the Input and Output, respectively. Awareness depends on the ability to sense the effects of one's actions and have one's decisions affected thereby (hence, the feedback loop). The complexity of decisions made by weather based on its prior actions is very, very low -- exceedingly predictable via a rather simple set of equations.
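
    For concreteness, here is a minimal Python sketch of that sense-decide-act feedback loop, where the agent's own action changes what it senses on the next pass. This is only an illustration: the names (Agent, sense, decide, act) and the toy numbers are made up, not taken from the comment.

        # Minimal sketch of the sense -> decide -> act feedback loop.
        class Agent:
            def __init__(self):
                self.state = 0.0                 # internal state shaped by feedback

            def sense(self, world):
                return world                     # input: observe the environment

            def decide(self, observation):
                # the "black box" where uncertainty (noise, quantum randomness) lives
                return 0.5 * (observation - self.state)

            def act(self, decision, world):
                self.state += decision           # the decision feeds back into the agent
                return world - decision          # ...and alters the world it will sense next

        world, agent = 10.0, Agent()
        for _ in range(20):
            observation = agent.sense(world)
            decision = agent.decide(observation)
            world = agent.act(decision, world)   # awareness: sensing the effect of one's own action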

    Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is. The intelligence of a creature or cybernetic system (a business, an AI, a cloud of electrons, etc.) can be roughly approximated by the complexity of its interactions. Weather isn't that complex, so it ranks very low on the intelligence scale. It doesn't have nearly as fine a structure or as complex an interaction as a human brain exhibits. A solid lump of steel with the same number of atoms as your brain is not nearly as intelligent because it lacks complexity and structure. A good way to determine the intelligence of a system is to examine the density of the computation that can accurately approximate its interactions.

    Thermodynamic fluid simulations are not very complex; weather is just a big system, so it requires lots of sample points and processing power, but given those it is easy to predict. The problem of predicting the weather is not that nature is unpredictable, it's that we don't typically have enough input data and processing time. SpaceX showed an interesting multi-resolution thermodynamic fluid simulation of atmosphere and fuel mixtures [youtube.com]; these can be seen as small-scale weather simulations, and they give good results on a single GPU by dedicating more processing power to areas of greater complexity. It is too depressing to compare the information density of a human brain with the processing power of the Internet as a whole... The worldwide neural network currently has the potential to far outrank any human in terms of awareness and intelligence, but the system's processing is not yet complex enough in its higher-order structures. Complexity is the difference between an organized brain and a scrambled mass of the same neuron density -- the latter ostensibly categorized as "dead", the former capable of far more complex output even given the same average composition. Weather is processing a lot of data, but processing power alone does not equate to intelligence.
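
    As a toy illustration of that predictability point (emphatically not SpaceX's solver, and nothing like a real weather model), a one-dimensional diffusion field can be stepped forward with a few lines of code once you have the sample points; every name and parameter below is invented for the example.

        # Toy 1-D diffusion: a "weather-like" field evolves predictably from
        # a simple equation, given enough sample points and compute.
        import numpy as np

        n = 200                                            # number of sample points
        dx, dt, alpha = 1.0, 0.1, 1.0                      # grid spacing, time step, diffusivity
        field = np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # initial temperature-like field

        for step in range(1000):
            # explicit finite-difference update (periodic boundary via np.roll)
            laplacian = np.roll(field, -1) - 2.0 * field + np.roll(field, 1)
            field = field + alpha * dt / dx**2 * laplacian

        print(round(float(field.max()), 4))                # the field smooths out predictably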

    The intersection of Information Theory, Thermodynamics, (Quantum) Physics, and Cybernetics shows that, just as with intelligence, there is no hard line between one science and the next. The fields of study are all interrelated, including the science of humor (a subset of neurology) or of art (symbolics). Music is pleasing because of brain structures and mathematically harmonic scales. Neuroscience has revealed that electromagnetically inducing different brain-wave rates can alter mood. Rhythmic music tends to induce corresponding moods in humans over time. Due to different brain structure, what we experience as a melancholy tune an alien may consider playful, energetic, or erotic. Our music is intimately related to mathematics, cybernetics, and neurology, each influencing the other. There really is a brain "mode" for arousal, so certain music can indeed help get one in the mood. Does weather have a mood? We can define human moods via the common regions of the brain activated and the neural activation frequency; weather's mood can be seen as its local complexity and overall temperature. Higher thermodynamic complexity means a more intense and turbulent mood. The same gradient and atmospheric composition at different average temperatures can produce rain, snow, hail, sandstorms, etc. A lazy autumn afternoon may become an angry thunderstorm when the weather changes moods. It's not incorrect to personify animals, weather, or nature itself, so long as we understand that the human attributes we ascribe are present at very different scales.

    In other words: humans are more predictable than you think, but far more complex (and thus more intelligent) than the weather. You can reason with a human, and sometimes with other animals, but you can't reason with the weather, since it hasn't approached the complexity threshold for "awareness"; its structures are not stable enough to contain and process the level of informational complexity that classification requires. Primarily, what I think Wolfram was getting at is the universality of concepts such as awareness and intelligence -- the need to take humans off the pedestal and define intelligence more generally, rather than in a false dichotomy that is ill-suited to describing our universe, especially when faced with the emergence of machine intelligence.

  • (Score: 3, Insightful) by maxwell demon on Thursday July 30 2015, @09:06PM

    by maxwell demon (1608) on Thursday July 30 2015, @09:06PM (#216023) Journal

    Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is.

    I disagree. I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions. That's not a question of pure complexity. I guess Google's self-driving cars are sentient, and I'm pretty sure the weather system isn't.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by JNCF on Friday July 31 2015, @12:50AM

      by JNCF (4317) on Friday July 31 2015, @12:50AM (#216093) Journal

      I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions.

      I'm having trouble parsing your intent, and am legitimately asking for clarification of your views.
      Are you saying that Google's self-driving cars only qualify as sentient because the data set they're being fed is based on actual measurements of the external world? Would they be non-sentient if they used the same processes on a contrived set of data that behaved similarly to the external world? If physicists discover that our universe is a contrived set of data recorded abstractly in a higher reality, does that undermine our sentience by your definition?

      Or are you just saying that consciousness is an OODA loop [wikipedia.org], with the validity of the data observed being a red herring?

      • (Score: 2) by maxwell demon on Friday July 31 2015, @05:35PM

        by maxwell demon (1608) on Friday July 31 2015, @05:35PM (#216407) Journal

        Basically the latter, but the important part is that there's an actual model involved. That is, decisions are not made just based on the data, but based on the model, which itself is updated and modified based on the data. I guess Google's car does have such a model, because I don't think you could do a task as complex as driving without one. But I cannot say for sure, of course.
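
        A rough sketch of that model-based loop (everything below -- the scalar "model", the gain, the brake/cruise rule -- is hypothetical, not how Google's cars actually work):

            # Decisions come from an internal model; the model, not the decision,
            # is what gets corrected against incoming data from the world.
            model_speed = 20.0     # internal model (e.g. estimated speed of the car ahead)
            gain = 0.5             # how strongly new data corrects the model

            def decide(estimate):
                return "brake" if estimate > 25.0 else "cruise"   # decision uses the model

            for measurement in [22.0, 26.0, 30.0, 31.0]:   # noisy observations of the world
                error = measurement - model_speed          # compare model with actual data
                model_speed += gain * error                # update the model from the data
                print(decide(model_speed))                 # decision based on the model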

        --
        The Tao of math: The numbers you can count are not the real numbers.