
posted by Fnord666 on Wednesday May 29 2019, @11:58AM   Printer-friendly
from the I'm-sorry-Dave dept.

Artificial intelligence is ubiquitous. Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song.

But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." More immediately, AI experts are concerned that automated systems are likely to absorb bias from their human programmers. And once bias is coded into the algorithms that power AI, it will be nearly impossible to remove.

[...] To better understand how AI might be governed, and how to prevent human bias from altering the automated systems we rely on every day, CNET spoke with Salesforce AI experts Kathy Baxter and Richard Socher in San Francisco. Regulating the technology might be challenging, and the process will require nuance, said Baxter.

The industry is working to develop "trusted AI that is responsible, that is mindful, and safeguards human rights," she said. "That we make sure [the process] does not infringe on those human rights. It also needs to be transparent. It has to be able to explain to the end user what it is doing, and give them the opportunity to make informed choices with it."

Salesforce and other tech firms, Baxter said, are developing cross-industry guidance on the criteria for data used in AI models. "We will show the factors that are used in a model like age, race, gender. And we're going to raise a flag if you're using one of those protected data categories."
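The safeguard Baxter describes amounts to checking a model's input features against a list of protected data categories before training. A minimal sketch of that idea, in Python — the category list, function name, and feature names are illustrative assumptions, not Salesforce's actual implementation:

```python
# Hypothetical check: flag model features that fall into protected
# data categories (the kind of warning Baxter describes).
PROTECTED_CATEGORIES = {"age", "race", "gender", "religion", "disability"}

def flag_protected_features(features):
    """Return the subset of features matching a protected category."""
    return sorted(f for f in features if f.lower() in PROTECTED_CATEGORIES)

# Example: two of these hypothetical features would raise a flag.
flags = flag_protected_features(["zip_code", "Age", "income", "gender"])
print(flags)  # ['Age', 'gender']
```

A production system would also need to catch proxy variables (e.g. features strongly correlated with a protected category), which a simple name match like this cannot do.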


Original Submission

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday May 29 2019, @09:16PM (#849077) Journal

    Your timeframe is way off.

    All we have to do is create a suitable neuromorphic design that can be scaled up. The chips could be built vertically or stacked since they are likely to have very low power consumption compared to CPUs. You could mimic brain volume in this way. Once the design is ready, it can be built using the latest process node technology, so it can easily have the equivalent of billions or trillions of neurons.

    It won't take 300 years, and it might not even take 10. It may be done in secret since the Musky OpenAI types will scream "SKYNET!" as soon as it is announced.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]