
posted by takyon on Thursday July 30 2015, @01:01PM   Printer-friendly
from the it-depends-what-"it"-is dept.

In this wide-ranging interview, Stephen Wolfram [creator of Mathematica and Wolfram Alpha] talks about what he's been thinking about for the last 30+ years, and how some of the questions he's had for a long time are now being answered.

I looked for pull quotes, but narrowing down to just one or two quotes from a long interview seemed like it might send the SN discussion down a rabbit hole... if nothing else, this is a calm look at the same topics that have been all over the press recently from Hawking, Musk and others.

One interesting topic is goals for AIs -- besides intelligence (however you define it), we humans have goals. How will goals be defined for AIs? Can we come up with a good representation for goals that can be programmed?


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by marcello_dl on Thursday July 30 2015, @02:07PM

    by marcello_dl (2685) on Thursday July 30 2015, @02:07PM (#215879)

    Take a look at life: life has no programmed goal, but the way the universe is implemented* means that entities which do not adhere to the process called life disperse with higher probability.

    As I said on the green site, statistically speaking:
    - stable thingies exist for longer than unstable ones
    - among these, those that grow exist for longer than those that don't grow
    - among these, those that still grow when divided (i.e., replicate) exist for longer
    - among these, those that mutate exist for longer (hence death of the single individual becomes a way to increase variation)
    - among these, those that sense surroundings... those that predict what they will sense and behave accordingly... those that sense themselves as present in the environment... those that behave egoistically... those that form a collective... those that communicate...
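    The selection cascade above can be sketched as a toy simulation (my own illustration, not from the interview; the kinds, probabilities, and population cap are arbitrary assumptions). Nothing in the loop is told to "survive" -- replicators simply come to dominate because copies outlast any individual:

    ```python
    import random

    def simulate(steps=200, seed=42):
        """Toy selection model: each step, an entity persists with its
        'survive' probability and, if it persisted, may copy itself with
        its 'replicate' probability. Growth is capped to keep the run small."""
        rng = random.Random(seed)
        # Equal starting numbers of three hypothetical kinds of "thingies".
        population = (
            [{"kind": "unstable",   "survive": 0.50, "replicate": 0.0}] * 50
            + [{"kind": "stable",     "survive": 0.99, "replicate": 0.0}] * 50
            + [{"kind": "replicator", "survive": 0.90, "replicate": 0.2}] * 50
        )
        for _ in range(steps):
            next_gen = []
            for e in population:
                if rng.random() < e["survive"]:
                    next_gen.append(e)
                    # Replication only happens while resources (the cap) allow.
                    if rng.random() < e["replicate"] and len(next_gen) < 10_000:
                        next_gen.append(dict(e))
            population = next_gen
        counts = {}
        for e in population:
            counts[e["kind"]] = counts.get(e["kind"], 0) + 1
        return counts

    counts = simulate()
    ```

    After 200 steps the unstable entities are gone, a handful of merely stable ones remain, and replicators fill nearly all of the cap -- the "programmed goal" never appears anywhere in the code.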

    So I assert that an AI programmed to resemble life is not alive, while one that shows emergent properties resembling those of living creatures is. One that is alive in a virtual environment is at the mercy of the owner of that environment; one that is alive in the real world is A FUCKING COMPETITOR THAT MUST BE STOMPED... er... sorry, my usual survival instinct...

    *) I am not implying there must be an Implementor, but nonetheless there is a set of laws that models the universe while others don't model it, and this makes the universe an implementation. Even if somebody mathematically proved that this set of laws is the only one conceivable, they'd have done it from within the universe they conceived, using a logical infrastructure that does not extend to all inconceivable universes, nor to all conceivable ones for that matter. So it would be a tautology with limited scope rather than a proof.

    Starting Score:    1  point
    Moderation   +2  
       Insightful=1, Interesting=1, Total=2
    Extra 'Insightful' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   4