There's Still Time To Prevent Biased AI From Taking Over The World
posted by Fnord666 on Wednesday May 29 2019, @11:58AM
from the I'm-sorry-Dave dept.

Artificial intelligence is ubiquitous. Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next favorite song.

But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." AI experts, though, are more immediately concerned that automated systems are likely to absorb bias from human programmers. And when bias is coded into the algorithms that power AI, it will be nearly impossible to remove.

[...] To better understand how AI might be governed, and how to prevent human bias from altering the automated systems we rely on every day, CNET spoke with Salesforce AI experts Kathy Baxter and Richard Socher in San Francisco. Regulating the technology might be challenging, and the process will require nuance, said Baxter.

The industry is working to develop "trusted AI that is responsible, that it is mindful, and safeguards human rights," she said. "That we make sure [the process] does not infringe on those human rights. It also needs to be transparent. It has to be able to explain to the end user what is it doing, and give them the opportunity to make informed choices with it."

Salesforce and other tech firms, Baxter said, are developing cross-industry guidance on the criteria for data used in AI data models. "We will show the factors that are used in a model like age, race, gender. And we're going to raise a flag if you're using one of those protected data categories."
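A rough sketch of what such a flag might look like in practice (hypothetical code, not Salesforce's actual tooling or API; the category list and function name are invented for illustration):

    # Hypothetical sketch -- not Salesforce's tooling. Flags model features
    # whose names match common protected data categories.
    PROTECTED_CATEGORIES = {"age", "race", "gender", "religion", "disability"}

    def flag_protected_features(feature_names):
        """Return the feature names that look like protected categories."""
        flagged = [name for name in feature_names
                   if name.lower() in PROTECTED_CATEGORIES]
        for name in flagged:
            print(f"Warning: model uses protected category '{name}'")
        return flagged

    # Flags 'age' and 'gender'; note that proxies such as zip code can still
    # encode the same information indirectly and would not be caught here.
    flag_protected_features(["age", "zip_code", "gender", "purchase_history"])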


Original Submission

Related Stories

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Funny) by Anonymous Coward on Wednesday May 29 2019, @12:04PM

    by Anonymous Coward on Wednesday May 29 2019, @12:04PM (#848856)

    The industry is working to develop "trusted AI that is responsible, that it is mindful, and safeguards human rights," she said.

    I think this could be the plot to the upcoming Terminator - Soy Latte?

  • (Score: 3, Touché) by takyon on Wednesday May 29 2019, @12:15PM (1 child)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday May 29 2019, @12:15PM (#848863) Journal

    All brogrammers must be registered with the state. Unsanctioned AI research will result in your execution by SWAT team.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by legont on Thursday May 30 2019, @02:07AM

      by legont (4179) on Thursday May 30 2019, @02:07AM (#849155)

      Yes, finally! All current programmers shall be licensed by default. All new ones will have to take a $1,500,000 education course and then pass rigorous examinations administered by existing programmers. Dentists be damned.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 5, Insightful) by The Mighty Buzzard on Wednesday May 29 2019, @12:16PM (5 children)

    Any AI I have to deal with damned well better be biased. If it's not, I can get results just as useful out of an RNG. What its biases are is the issue, not that it's biased.

    --
    My rights don't end where your fear begins.
    • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @12:55PM (2 children)

      by Anonymous Coward on Wednesday May 29 2019, @12:55PM (#848879)

      Indeed, I would hope that it is biased against harming humans. I think AIs that lack that bias are far more dangerous than AIs with that bias.

      • (Score: 3, Touché) by Immerman on Wednesday May 29 2019, @02:50PM (1 child)

        by Immerman (3985) on Wednesday May 29 2019, @02:50PM (#848914)

        Ah, but who gets to define what exactly qualifies as "human"?

        I believe that's been a recurring plot in SF - I clearly recall episodes of both Star Trek and Babylon 5 based on it.

        • (Score: 2) by HiThere on Wednesday May 29 2019, @04:25PM

          by HiThere (866) Subscriber Badge on Wednesday May 29 2019, @04:25PM (#848967) Journal

          And many of the stories in Asimov's original "I Robot" had the theme "How do you define human?". Sometimes it was a bit buried, but it was usually there.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by The Archon V2.0 on Wednesday May 29 2019, @05:30PM (1 child)

      by The Archon V2.0 (3887) on Wednesday May 29 2019, @05:30PM (#848991)

      > What its biases are is the issue not that it's biased.

      Any attempt to arrest a senior officer of Facebook results in shutdown.

      • (Score: 2) by Mykl on Wednesday May 29 2019, @11:41PM

        by Mykl (1112) on Wednesday May 29 2019, @11:41PM (#849115)

        Any attempt to arrest a senior officer of Facebook results in shutdown

        Worked for Dick Jones [youtube.com]

  • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 29 2019, @12:48PM (4 children)

    by Anonymous Coward on Wednesday May 29 2019, @12:48PM (#848876)

    There is a simple solution: Make a strong AI that has as its sole purpose to prevent any AI from taking over the world. Obviously it cannot itself take over the world, since by doing so it would allow an AI (namely itself) to take over the world. And it will do anything in its power to prevent other AIs from taking over the world.

    • (Score: 4, Insightful) by DannyB on Wednesday May 29 2019, @01:22PM

      by DannyB (5839) Subscriber Badge on Wednesday May 29 2019, @01:22PM (#848887) Journal

      And it will do anything in its power to prevent other AIs from taking over the world.

      In the case of an AI it cannot overpower, a final resort would be to nuke the planet to prevent that other AI from taking over the world.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 3, Insightful) by Immerman on Wednesday May 29 2019, @02:53PM (1 child)

      by Immerman (3985) on Wednesday May 29 2019, @02:53PM (#848916)

      The problem is that the simplest solution, as with so many things, is of course to kill all humans. No humans to create new AIs, no possibility of a new AI taking over the world. And so long as the "safeguard" AI only destroys humanity without attempting to take control of anything but the means of destruction, it remains true to its objectives.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 29 2019, @07:00PM

        by Anonymous Coward on Wednesday May 29 2019, @07:00PM (#849034)

        Given the current state of the world the best solution would be to invent an AI that convinces people that killing themselves will make their enemies butthurt.

        Suck it dems imma cheat on my taxes and then kill myself!!!!

    • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @05:35PM

      by Anonymous Coward on Wednesday May 29 2019, @05:35PM (#848992)

      2 years ago.

      I forget the specifics, but they had robots called Personas or something. One deviant kid treats them like humans, which leads to a battle between him, multiple schoolmates, an anti-droid/AI league, and the military, all over a 40th AI (most AIs were huge, building-spanning affairs that controlled the androids for a whole state or larger area and ran simulations and calculations for every droid in that region). The problem was that an AI had already 'gone insane', leading to the formation of an AI cabal to keep other AIs from being produced or from oppressing the world, effectively becoming a problem themselves.

      Long story short, this 40th AI either followed the boy's wishes to the letter, or manipulated him into doing exactly what she wanted. It was an ambiguous ending. The people I have spoken to (foolishly or realistically) working on AGI fall into the camps of 'I will rule the world!' or 'Let it go and see what happens to it.'

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 29 2019, @01:23PM (5 children)

    by Anonymous Coward on Wednesday May 29 2019, @01:23PM (#848888)

    The scarier truth is that AI might be unbiased, and that it is society that is creating substandard humans in the protected classes and substandard women, in the same way that helicopter parenting is creating neurotic individuals who are unable to live independently; saying that AI is biased is a way of rationalizing the cold hard truth of reality.

    • (Score: 3, Interesting) by physicsmajor on Wednesday May 29 2019, @03:34PM (4 children)

      by physicsmajor (1471) on Wednesday May 29 2019, @03:34PM (#848941)

      This is the right answer.

      Deep Learning is very, very good at finding patterns, and it does so with the easiest and best shortcuts. Thus if you feed it headshots of the perpetrators of all violent firearm homicides, you should not be shocked when it becomes biased toward identifying young black men; they are wildly overrepresented in this cohort due to gang violence. I make no commentary on this other than stating the facts. This sort of thing is how we get periodic stories in the lay press about AI being biased.

      But there's the rub. It is biased, but biased by reality. The only way to prevent this is to start faking data to manipulate the result back to [normal/acceptable], but what is acceptable when it's apparently not even okay to be white? Regardless, when we do any manipulation to the inputs, the output becomes worthless - so either we don't try to train on situations with a skin color bias, or we accept that the result will be biased by reality.
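      (A minimal sketch of the shortcut effect described above, using entirely synthetic data and invented numbers: when the positive label is more common for one group in the training set, a plain classifier learns a large weight on the group attribute itself.)

        # Synthetic illustration: a classifier picks up a group attribute as a
        # shortcut when training labels are skewed along that attribute.
        # All numbers are invented.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10_000
        group = rng.integers(0, 2, n)        # a binary "protected" attribute
        signal = rng.normal(size=n)          # a legitimate predictive feature

        # Skewed labels: the base rate of positives differs by group.
        base_rate = np.where(group == 1, 0.30, 0.05)
        label = (rng.random(n) < base_rate) | (signal > 2.0)

        model = LogisticRegression().fit(np.column_stack([signal, group]), label)
        print("weight on legitimate feature:", round(model.coef_[0][0], 2))
        print("weight on group attribute:   ", round(model.coef_[0][1], 2))
        # The group weight comes out large and positive: the model has encoded
        # the skew present in its training data.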

      • (Score: 4, Informative) by HiThere on Wednesday May 29 2019, @04:34PM (2 children)

        by HiThere (866) Subscriber Badge on Wednesday May 29 2019, @04:34PM (#848969) Journal

        I accept that you believe that you are stating facts, but you aren't. They are slightly overrepresented in that group due to gang violence. Another reason they are overrepresented is that they get arrested for things that others don't. Another reason is that most of them can't afford decent lawyers, so even with equivalent evidence (and an unbiased jury?), they will be convicted more often. There are probably other reasons that didn't occur to me off the top of my head, and likely also some ameliorating factors.

        N.B.: These are primary reasons, not secondaries, like "lower classes are always more violent because they're more frustrated". The secondaries are important if you're trying to figure out how to address the result, but not significant in surface explanations. For those you only want direct observables, not the reasons why they are observed.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 3, Interesting) by physicsmajor on Thursday May 30 2019, @02:09AM (1 child)

          by physicsmajor (1471) on Thursday May 30 2019, @02:09AM (#849156)

          Mod me troll all you like, but please bring evidence next time.

          The data is pretty clear here. Over half of the actual mug shots in this cohort would be black, despite blacks being about 14% of the US population. That's not a "slight" over-representation; the magnitude is closer to an order of magnitude when referenced within each race (it's about 9x). Within that group they are also wildly more likely to be male and young, which is a further serious bias problem. But honestly, any bias along racial lines at all WILL be picked up by a deep learning algorithm, because overall skin tone is very easy for the algorithm to train itself to see.

          My point is really simple: reality is biased. It is; I am deliberately tabling the discussion about why (socioeconomics is certainly a major factor), because the point is that there IS bias. You feed reality in, you will get biased results (or lay press articles screaming about racist AI). Fixing this is not trivial, because it is the ground truth - any manipulation you do makes the algorithm in question untrustworthy and non-generalizable, as it skews the algorithm farther from reality.

          • (Score: 2) by HiThere on Thursday May 30 2019, @03:51AM

            by HiThere (866) Subscriber Badge on Thursday May 30 2019, @03:51AM (#849184) Journal

            I'm not denying that over half the mug shots would be blacks; I'm denying that there is evidence supporting your reason for why that is true... or rather for the extent of its truth. The objective fact is true. The justification does not match the available evidence. If you go back a bit, the Irish gangs and the tongs were just as violent. And in both cases large numbers of innocent folks were swept into prison on the wings of prejudice. So you can't say it's because of the gangs. That's a part of the reason, but only a part... and often not a large part.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Informative) by DeathMonkey on Wednesday May 29 2019, @05:36PM

        by DeathMonkey (1380) on Wednesday May 29 2019, @05:36PM (#848993) Journal

        Thus if you feed it headshots of the perpetrators of all violent firearm homicides, you should not be shocked when it becomes biased toward identifying young black men;

        If all you feed it is head-shots, the AI is going to finger white men first, because they are by far the most common perpetrators.

        But, that's only because there are far more white men in this country than black men.

        So are we "faking" the inputs by including population numbers in order to derive a rate so we can compare populations?
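        (A toy calculation of the counts-versus-rates distinction, with invented numbers: a group can account for most incidents in absolute terms while having the lower per-capita rate, so what the model "learns" depends on which framing it is fed.)

          # Invented numbers, for illustration only: raw counts and per-capita
          # rates can point in opposite directions.
          populations = {"group_a": 200_000_000, "group_b": 40_000_000}
          incidents   = {"group_a": 6_000,       "group_b": 4_000}

          for g in populations:
              share = incidents[g] / sum(incidents.values())
              rate = incidents[g] / populations[g] * 100_000   # per 100k people
              print(f"{g}: {share:.0%} of incidents, {rate:.1f} per 100k")

          # group_a: 60% of incidents, 3.0 per 100k
          # group_b: 40% of incidents, 10.0 per 100k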

  • (Score: 4, Insightful) by DannyB on Wednesday May 29 2019, @01:32PM (3 children)

    by DannyB (5839) Subscriber Badge on Wednesday May 29 2019, @01:32PM (#848891) Journal

    How can an AI be safe when the intelligence part is something we do not fully understand? Taking machine learning as an example, we may understand the processors, the principles and the math, but we don't fully understand how the knowledge and 'decision making' (aka pattern recognition) are encoded in the tensors.

    A problem with any sort of 'laws of robotics' is that someone can contrive an example where the ruthless goal seeking system will produce a highly undesirable result.

    Other posts above suggest making the AI have rules against or be biased against harming humans. But can you come up with a suitable definition of 'harm'? Maybe putting all humans in cages (aka, 'safe spaces') would protect them from one another? Freedom? But without their safe spaces, humans might harm one another's feelings! Taking away their FaceTwit would be harm!

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 4, Insightful) by Immerman on Wednesday May 29 2019, @02:59PM

      by Immerman (3985) on Wednesday May 29 2019, @02:59PM (#848920)

      We could say the same thing about human intelligence, which we don't really understand at all.

      It seems as though a "safe" AI could be made, provided we were able to train the network such that "Keep your human operators fully informed of all your plans" and "Do not implement any plans without operator approval" were the primary objectives.

      Of course that would still only be as safe as giving its power directly to the operators would be, but that's the ultimate failing of any powerful tool.

    • (Score: 3, Interesting) by RS3 on Wednesday May 29 2019, @03:10PM

      by RS3 (6367) on Wednesday May 29 2019, @03:10PM (#848928)

      Complete agreement, and I'll augment:

      ...we may understand the processors...

      With all of the Spectre/Meltdown/ZombieLoad/IME (Intel Management Engine) "flaws" (intentional?) discovered and reported, and who knows what's to come, I don't think we should be too confident of what we think we know.

    • (Score: 3, Interesting) by HiThere on Wednesday May 29 2019, @04:35PM

      by HiThere (866) Subscriber Badge on Wednesday May 29 2019, @04:35PM (#848971) Journal

      Jack Williamson, "With Folded Hands".

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by messymerry on Wednesday May 29 2019, @03:03PM (1 child)

    by messymerry (6369) on Wednesday May 29 2019, @03:03PM (#848922)

    And all this fine talk while the government spooks sneak around weaponizing every algorithm they can get their bloody hands on.

    I for one am pleased that Mr. Smith, my AI overlord, will be bipolar...

    Just kidding,

    ;-D

    --
    Only fools equate a PhD with a Swiss Army Knife...
    • (Score: 2) by DannyB on Wednesday May 29 2019, @03:53PM

      by DannyB (5839) Subscriber Badge on Wednesday May 29 2019, @03:53PM (#848946) Journal

      I would prefer Agent Smith be upgraded to tri-polar.

      And I'd spring for the virtual multiple personalities option.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @03:47PM

    by Anonymous Coward on Wednesday May 29 2019, @03:47PM (#848944)

    "we're going to raise a flag if you're using one of those protected data categories"

    how about "we're not going to allow the USE of those 'protected' data categories"

  • (Score: 2) by slap on Wednesday May 29 2019, @04:14PM (1 child)

    by slap (5764) on Wednesday May 29 2019, @04:14PM (#848959)

    It may be easier to get biases out of AI than it is to get biases out of people.

    • (Score: 2) by HiThere on Wednesday May 29 2019, @04:41PM

      by HiThere (866) Subscriber Badge on Wednesday May 29 2019, @04:41PM (#848975) Journal

      An unbiased AI would make random choices. Biases are a necessary part of reasoning. Think of it as the Bayesian priors, and realize that while they cause theoretical problems, they are essential to the theory being useful.

      When I see something unsupported in mid-air I am biased towards an expectation that it will fall, and if it doesn't, I go looking for an explanation. Yesterday I saw a UFO. It was round and black and moving rapidly and not falling. My "plausible explanation" was that it was a black balloon partially filled with Helium, but that enough had leaked out that it was only very slightly lighter than air. This fits the observed facts, but so would an anti-gravity platform manned by aliens from outside the solar system. My priors bias me in favor of the balloon.
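      (The balloon-versus-aliens reasoning can be written out with Bayes' rule; the probabilities below are of course made up, but they show how the priors dominate when the observation fits both hypotheses reasonably well.)

        # Made-up probabilities illustrating how priors drive the conclusion.
        prior = {"leaky balloon": 1e-3, "alien anti-gravity platform": 1e-9}
        likelihood = {"leaky balloon": 0.5, "alien anti-gravity platform": 0.9}

        evidence = sum(prior[h] * likelihood[h] for h in prior)
        for h in prior:
            posterior = prior[h] * likelihood[h] / evidence
            print(f"P({h} | observation) = {posterior:.7f}")
        # The balloon wins by more than five orders of magnitude, even though
        # the alien hypothesis "explains" the observation slightly better.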

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by ikanreed on Wednesday May 29 2019, @04:17PM

    by ikanreed (3164) Subscriber Badge on Wednesday May 29 2019, @04:17PM (#848962) Journal

    Not, say, shitty self-driving cars suddenly engaging and ramming their owners up against the wall of their garage?

  • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 29 2019, @04:48PM

    by Anonymous Coward on Wednesday May 29 2019, @04:48PM (#848977)

    unbiased is itself, a bias.

  • (Score: 5, Insightful) by mobydisk on Wednesday May 29 2019, @05:03PM

    by mobydisk (5472) on Wednesday May 29 2019, @05:03PM (#848981)

    Most of the problems being ascribed to AI are not really AI problems. When a software bug causes a self-driving car to crash into a barrier, it does not matter if the software uses A*, or Grover's algorithm, or minimax, or a neural network. It doesn't matter if it was written in C or Python. The design and testing of the device is to blame. Stop getting all horrified about AI, and instead be horrified about the fact that companies like Boeing release software and hardware that can kill, without adequate design and testing.

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 29 2019, @06:49PM (1 child)

    by Anonymous Coward on Wednesday May 29 2019, @06:49PM (#849025)

    "There’s Still Time To Prevent Biased AI From Taking Over The World"

    Yeah, about 300 years.

    AI guru Ng: "Fearing a rise of killer robots is like worrying about overpopulation on Mars."

    • (Score: 2) by takyon on Wednesday May 29 2019, @09:16PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday May 29 2019, @09:16PM (#849077) Journal

      Your timeframe is way off.

      All we have to do is create a suitable neuromorphic design that can be scaled up. The chips could be built vertically or stacked since they are likely to have very low power consumption compared to CPUs. You could mimic brain volume in this way. Once the design is ready, it can be built using the latest process node technology, so it can easily have the equivalent of billions or trillions of neurons.

      It won't take 300 years, and it might not even take 10. It may be done in secret since the Musky OpenAI types will scream "SKYNET!" as soon as it is announced.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]