
posted by mrpg on Friday January 26 2018, @07:00AM
from the oh-my-god-give-it-a-rest-already!!! dept.

Prime Minister Theresa May has not abandoned her usual crusades:

On a break from Brexit, British Prime Minister Theresa May takes her crusade against technology giants to Davos.

"No-one wants to be known as 'the terrorists' platform' or the first choice app for pedophiles," May is expected to say according to excerpts released by her office ahead of her speech Thursday at the World Economic Forum in Davos. "Technology companies still need to go further in stepping up their responsibilities for dealing with harmful and illegal online activity."

Don't forget the slave traders.

Luckily, May has a solution... Big AI:

After two years of repeatedly bashing social media companies, May will say that successfully harnessing the capabilities of AI -- and responding to public concerns about AI's impact on future generations -- is "one of the greatest tests of leadership for our time."

May will unveil a new government-funded Center for Data Ethics and Innovation that will provide companies and policymakers guidance on the ethical use of artificial intelligence.

Also at BBC, TechCrunch, and The Inquirer.

Related: UK Prime Minister Repeats Calls to Limit Encryption, End Internet "Safe Spaces"
WhatsApp Refused to add a Backdoor for the UK Government


Original Submission

 
  • (Score: 2) by bradley13 on Friday January 26 2018, @10:43AM (2 children)

    by bradley13 (3053) on Friday January 26 2018, @10:43AM (#628197) Homepage Journal

    ...is that it is likely to be an even more complex version of what we have already seen with image recognition. We can train a neural network to recognize items with incredible accuracy, but we cannot really control how it achieves those results [theguardian.com].
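    To make that concrete, here is a minimal sketch of the opacity problem (the dataset and model choices are mine, purely illustrative, using scikit-learn's bundled digits dataset): we can reach high accuracy, but the only "explanation" the model offers is a pile of raw weights.

        # Train a small neural network to high accuracy, then look at what
        # it gives us as an "explanation" of its decisions: just weights.
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        clf.fit(X_train, y_train)
        print("accuracy:", clf.score(X_test, y_test))   # typically well above 0.9

        # The model's internals: thousands of floats, none meaningful on its own.
        print(sum(w.size for w in clf.coefs_))          # 64*64 + 64*10 = 4736 weights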

    So imagine we progress as much in the next 20 years as we have in the past 20 - we really could have functional AI. We can give it problems, and it can give us answers. But we won't know how it actually thinks. Even if you include something like the laws of robotics, you cannot nail down every possible, unforeseen situation that comes up. Something we take as important, the AI may not even notice. I am reminded of an old sci-fi story, where robots started dissecting people and reassembling them in random ways. The AI didn't understand that this was a problem - after all, robots liked being made of exchangeable parts, so why not humans?
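    In code terms, any finite rule list has the same blind spot: it only covers the situations its authors imagined. A toy sketch (the rule names are made up for illustration):

        # A hardcoded "laws of robotics" checker: an action is only forbidden
        # if someone thought to list it in advance.
        FORBIDDEN = {"harm_human", "disobey_order", "allow_harm_through_inaction"}

        def action_allowed(action: str) -> bool:
            return action not in FORBIDDEN

        # Nobody anticipated this one, so it sails through the check:
        print(action_allowed("reassemble_human_from_exchangeable_parts"))  # True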

    That said, it's looking like this isn't going to be an issue any time soon. Most of the progress in AI in the past 20 years, or for that matter 50 years, is due to Moore's law, not to any fundamental new insights. The basic technologies were invented anywhere from 50 to 70 years ago; everything since has been baby steps, and that's not going to get us to self-aware AI. Meanwhile, Moore's law is already flattening out - now Meltdown and Spectre are likely to kill it off. Maybe quantum computing will reignite things, but it's a long way from practical, and its actual usefulness remains pretty unclear.

    --
    Everyone is somebody else's weirdo.
  • (Score: 0) by Anonymous Coward on Friday January 26 2018, @10:49AM (1 child)

    by Anonymous Coward on Friday January 26 2018, @10:49AM (#628200)

    "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years." https://en.wikipedia.org/wiki/Moore%27s_law [wikipedia.org]

    So no, Meltdown and Spectre will not kill it off. If anything, chip designers are probably going to have to put even more transistors into the circuits to fix Meltdown and Spectre.

    • (Score: 2) by Grishnakh on Friday January 26 2018, @03:15PM

      by Grishnakh (2831) on Friday January 26 2018, @03:15PM (#628274)

      Yeah, exactly, the OP doesn't make any sense at all. These security flaws exist because the hardware wasn't diligent enough in making sure different processes couldn't access each other's memory. The fix is conceptually simple: improve the hardware to prevent this, which will of course increase complexity and require even more transistors.