posted by martyb on Thursday May 09 2019, @11:14PM   Printer-friendly
from the Get-creative! dept.

Phys.org:

Repetitive skills like pattern recognition, information retrieval, optimization and planning are most vulnerable to automation. On the other hand, social and cognitive skills such as creativity, problem-solving, drawing conclusions about emotional states and social interactions are least vulnerable.

The most resilient competencies (those least likely to be displaced by AI) included critical thinking, teamwork, interpersonal skills, leadership and entrepreneurship.

Yuval Harari, a historian at the Hebrew University of Jerusalem, described the rise of AI as a "cascade of ever-bigger disruptions" in higher education rather than a single event that settles into a new equilibrium. The unknown paths taken by AI will make it increasingly difficult to know what to teach students.

Perhaps we can all be employed as therapists, counseling each other about our feelings of irrelevance?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Friday May 10 2019, @02:18PM (#841841)

    "Machines do not miss clues, machines do not believe in fairytales, machines are not purposely taught garbage."

    Well, let's start with the last one: I'm pretty sure some machines will be purposely taught garbage, if only to sabotage a competitor's AI.

    And when taught garbage, I'm sure some machines will start to believe in fairytales, because doing so is not the result of a malfunctioning brain; it is the result of a misprogrammed brain. And I don't see why machines should not be vulnerable to that, too.

    And machines certainly will miss clues: even for a machine it is not economical to consider all available data, so it will have to prioritise what it evaluates by the estimated probability of relevance. That is exactly the mechanism that leads humans to immunize their ideas against inconvenient evidence, and there is no reason to expect that machines are immune to it.

    After all, there's no evolutionary advantage in being vulnerable to deception, yet we clearly are; so the rational assumption, based on the limited data we have on that question, is that it is a likely, if not necessary, side effect of having the ability to reason.

    Starting Score:    0  points
    Moderation   +1  
       Insightful=1, Total=1
    Extra 'Insightful' Modifier   0  

    Total Score:   1