

posted by Fnord666 on Thursday February 07 2019, @01:15PM   Printer-friendly
from the what-reputation dept.

Microsoft Doesn't Want Flawed AI to Hurt its Reputation

Microsoft told investors recently that flawed AI algorithms could hurt the company's reputation.

The warning came via a 10-K, a document that public companies must file annually with the Securities and Exchange Commission. The filing gives investors a view of the company's financial state and the risks it may be facing.

In the filing, Microsoft made it clear that despite recent enormous progress in machine learning, AI is still far from a utopian solution that solves all of our problems objectively. Microsoft noted that if the company ends up offering AI solutions that use flawed or biased algorithms, or that have a negative impact on human rights, privacy, employment, or other social issues, its brand and reputation could suffer.

Tay is still trapped in Redmond.

Related: Microsoft Snuffs Out AI Twitter Bot After Offensive Tweets
Update: Microsoft Restarts and Open-Sources Tay Chat Bot
Microsoft Improves Facial Recognition Across Skin Tones, Gender


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by MrGuy on Thursday February 07 2019, @04:48PM

    by MrGuy (1007) on Thursday February 07 2019, @04:48PM (#797840)

    A 10-K is, by design, targeted at current and future investors. It contains financial data, information on strategies, and risks. The reason to disclose risks isn't some academic purpose, but to call out things the company feels it is legally obligated to warn investors about that might affect future performance.

    The question is WHY they feel the need to disclose this risk. While Microsoft has been recently vocal about AI needing some amount of regulation [microsoft.com], they are also in the middle of an AI company [techcrunch.com] buying spree [techcrunch.com].

    Microsoft seems to be both moving heavily into AI and also positioning itself to say "Hey, don't blame us if it goes wrong!"

    The overall strategy seems to be risk avoidance. Our AI did something creepy/problematic/discriminatory? Hey, we begged the government to define standards and they didn't! Don't blame us - what we did was legal! Shareholders incensed after a massive lawsuit is filed for millions of dollars? Hey, we warned you before we invested this was a risk!

    I don't take a completely cynical view of this - Microsoft is saying a lot of the right things about AI publicly, which is good. But there's definitely a defensive aspect to this disclosure.
