
posted by takyon on Thursday July 30 2015, @01:01PM   Printer-friendly
from the it-depends-what-"it"-is dept.

In this wide-ranging interview, Stephen Wolfram [creator of Mathematica and Wolfram Alpha] talks about what he's been thinking about for the last 30+ years, and how some of the questions he's had for a long time are now being answered.

I looked for pull quotes, but narrowing down to just one or two quotes from a long interview seemed like it might send the SN discussion down a rabbit hole... if nothing else, this is a calm look at the same topics that have been all over the press recently from Hawking, Musk and others.

One interesting topic is goals for AIs -- in addition to intelligence (however you define it), we humans have goals. How will goals be defined for AIs? Can we come up with a good representation for goals that can be programmed?


Original Submission

  • (Score: 3, Insightful) by maxwell demon on Thursday July 30 2015, @09:06PM

    by maxwell demon (1608) on Thursday July 30 2015, @09:06PM (#216023) Journal

    Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is.

    I disagree. I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions. That's not a question of pure complexity. I guess Google's self-driving cars are sentient, and I'm pretty sure the weather system isn't.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by JNCF on Friday July 31 2015, @12:50AM

    by JNCF (4317) on Friday July 31 2015, @12:50AM (#216093) Journal

    I think sentience is about having a model of the world that is constantly compared and updated with actual data from the world, and used to make decisions.

    I'm having trouble parsing your intent, and am legitimately asking for clarification of your views.
    Are you saying that Google's self-driving cars only qualify as sentient because the data set they're being fed is based on actual measurements of the external world? Would they be non-sentient if they used the same processes on a contrived set of data that behaved similarly to the external world? If physicists discover that our universe is a contrived set of data recorded abstractly in a higher reality, does that undermine our sentience by your definition?

    Or are you just saying that consciousness is an OODA loop, [wikipedia.org] with the validity of the data observed being a red herring?

    • (Score: 2) by maxwell demon on Friday July 31 2015, @05:35PM

      by maxwell demon (1608) on Friday July 31 2015, @05:35PM (#216407) Journal

      Basically the latter, but the important part is that there's an actual model involved. That is, decisions are not made based on the data alone, but based on the model, which is itself updated and modified based on the data. I guess Google's car does have such a model, because I don't think you could do a task as complex as driving without one. But I cannot say for sure, of course.
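
      To make the distinction concrete, here is a minimal, purely illustrative sketch of the loop described above: an internal model is reconciled with incoming observations, and decisions consult the model rather than the raw data. The `WorldModel` class, its learning rate, and the threshold are all invented for illustration; nothing here reflects how Google's cars actually work.

      ```python
      class WorldModel:
          """Toy internal model: tracks one estimated quantity about the world."""

          def __init__(self, estimate=0.0, learning_rate=0.5):
              self.estimate = estimate
              self.learning_rate = learning_rate

          def update(self, observation):
              # Reconcile the model with fresh data from the world.
              self.estimate += self.learning_rate * (observation - self.estimate)

          def decide(self, threshold=1.0):
              # Decisions consult the model, never the raw observation.
              return "act" if self.estimate > threshold else "wait"


      model = WorldModel()
      for obs in [0.4, 1.6, 2.0, 1.8]:
          model.update(obs)        # model updated from data
          decision = model.decide()  # decision made from model
      ```

      The point of the sketch is the indirection: a system that mapped each observation straight to an action, with no persistent state being updated, would be "just based on the data" in the sense above.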

      --
      The Tao of math: The numbers you can count are not the real numbers.