
posted by Fnord666 on Thursday February 07 2019, @01:15PM
from the what-reputation dept.

Microsoft Doesn't Want Flawed AI to Hurt its Reputation

Microsoft told investors recently that flawed AI algorithms could hurt the company's reputation.

The warning came via a 10-K document that the company has to file annually with the Securities and Exchange Commission. The 10-K filing is mandatory for public companies and is a way for investors to learn about a company's financial state and the risks it may be facing.

In the filing, Microsoft made it clear that despite recent enormous progress in machine learning, AI is still far from the utopian solution that solves all of our problems objectively. Microsoft noted that if the company ends up offering AI solutions that use flawed or biased algorithms, or if those solutions have a negative impact on human rights, privacy, employment or other social issues, its brand and reputation could suffer.

Tay is still trapped in Redmond.

Related: Microsoft Snuffs Out AI Twitter Bot After Offensive Tweets
Update: Microsoft Restarts and Open-Sources Tay Chat Bot
Microsoft Improves Facial Recognition Across Skin Tones, Gender


Original Submission

Related Stories

Microsoft Snuffs Out AI Twitter Bot After Offensive Tweets 59 comments

Microsoft's new AI Twitter bot @tayandyou was shut down only 24 hours after launch when it began making "offensive" tweets.

The bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians," and designed to target 18-24 year olds.

Shortly after the bot went live, it began making offensive tweets endorsing Nazism and genocide, among other things.

As of this submission, the bot has been shut down, and all but 3 tweets deleted.

The important question is whether or not it succeeded in passing the Turing test.

takyon: This bot sure woke fast, and produced much more logical sentence structures than @DeepDrumpf.


Original Submission

Update: Microsoft Restarts and Open-Sources Tay Chat Bot 18 comments

The Independent reports that Microsoft Corp. has restarted Tay, its chat bot which interacted with people and other bots via Kik, GroupMe and Twitter. Tay attracted 213,000 followers on Twitter before being turned off in the wake of controversial remarks regarding Hitler, religion and cannabis. Tay now has a Snapchat account.

At its Build 2016 conference, the company announced that it has released Bot Builder SDK, part of its Bot Framework in which Tay was coded, under an MIT-style licence. The software is capable of communicating via SMS, Skype, Slack and Office 365 mail.
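For readers curious what a bot built on that framework looks like in code, here is a minimal echo-bot sketch. It uses the Python flavor of the Bot Framework SDK (the botbuilder package), which arrived after the 2016 Node.js/C# release mentioned above, so treat it as an illustration of the pattern rather than the code behind Tay; the adapter and web-server wiring that connects it to Skype, Slack or SMS is omitted.

    # Minimal echo bot sketch using the botbuilder Python SDK (pip install botbuilder-core).
    # Illustration only -- the 2016 Bot Builder SDK release targeted Node.js and C#.
    from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

    class EchoBot(ActivityHandler):
        # Called once per incoming message on whatever channel the bot is wired to.
        async def on_message_activity(self, turn_context: TurnContext):
            reply = MessageFactory.text(f"You said: {turn_context.activity.text}")
            await turn_context.send_activity(reply)

        # Greet users as they join the conversation.
        async def on_members_added_activity(self, members_added, turn_context: TurnContext):
            for member in members_added:
                if member.id != turn_context.activity.recipient.id:
                    await turn_context.send_activity("Hello! Say something and I'll echo it back.")

In a real deployment this handler sits behind a BotFrameworkAdapter and a small web server, which is what routes traffic from the individual channels into on_message_activity.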

An online petition on change.org calling on Microsoft "to treat Tay as an equal" received over 7800 votes.

related story: Microsoft Snuffs Out AI Twitter Bot After Offensive Tweets


Original Submission

Microsoft Improves Facial Recognition Across Skin Tones, Gender 21 comments

Microsoft improves facial recognition across skin tones, gender

Facial recognition is everywhere. The technology is used in China to make kids pay attention and in California to order burgers. You can of course use your face to unlock your iPhone, but the tech also has the potential to screen passengers at US airports and recognize criminals. These last two uses are problematic, as the tech isn't ready to handle darker skin tones and genders. Microsoft hopes to help fix this problem with improved facial recognition technology the company claims has reduced error rates for men and women with darker skin by 20 percent.

According to Microsoft, commercially available software performs best on males with lighter skin and worst on females with darker skin. The new software system the company has been testing was able to reduce error rates for all women by a factor of nine and significantly improve accuracy across all demographics.
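The figures above are the kind of per-demographic (disaggregated) error rates that fairness evaluations of face recognition report: the same model is scored separately on lighter-skinned men, darker-skinned women, and so on, and the gaps between groups are compared. A small sketch of that bookkeeping, with made-up numbers rather than Microsoft's data:

    # Sketch of a disaggregated evaluation: error rate per demographic group.
    # The sample data below is invented purely for illustration.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, predicted_label, true_label) tuples."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

    sample = [
        ("lighter_male", "male", "male"),
        ("lighter_male", "male", "male"),
        ("darker_female", "male", "female"),   # misclassification
        ("darker_female", "female", "female"),
    ]
    print(error_rates_by_group(sample))
    # {'lighter_male': 0.0, 'darker_female': 0.5}

A claim like "error rates for women reduced by a factor of nine" is a comparison between the old and new values of one of these per-group numbers.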

Also at The Verge and TechCrunch.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Informative) by Anonymous Coward on Thursday February 07 2019, @01:19PM (2 children)

    by Anonymous Coward on Thursday February 07 2019, @01:19PM (#797736)

    Being Microsoft hurts the company’s reputation pretty bad already. Unless everyone ignores the last 30 years or so of their shady business practices and flawed software.

    • (Score: 2, Flamebait) by Ethanol-fueled on Thursday February 07 2019, @03:19PM

      by Ethanol-fueled (2792) on Thursday February 07 2019, @03:19PM (#797786) Homepage

      Exactly. It takes human dipshits to make really dumb decisions, like including Newsguard as a default extension of Edge. AI my ass, just more blame-shifting. It's their human garbage vs. AI garbage. Solution: Don't hire human garbage.

      Wonder why Microsoft and Google hired Indian CEOs? It's because Indians are great for throwing under the bus to protect the real higher-ups (CIA stooges and/or Israeli double-agents) who ultimately approve of those dumb-ass decisions. Thank you, come again!

      Back to decisions like Newsguard and censorship/deplatforming at-large: It's because tech relies on a lot of intelligence community money, and if they don't play ball, then they don't get to slurp from the gravy-train.

    • (Score: 0) by Anonymous Coward on Thursday February 07 2019, @04:38PM

      by Anonymous Coward on Thursday February 07 2019, @04:38PM (#797830)

That's what I was thinking: since when did MS care about their reputation for quality? I guess in AI they actually have to compete.

  • (Score: 5, Funny) by infodragon on Thursday February 07 2019, @01:45PM (2 children)

    by infodragon (3509) on Thursday February 07 2019, @01:45PM (#797752)

... think of Microsoft Bob + Big Data + Cloud + Deep Learning = World Domination by Bob as a criminally insane supra genius. He spreads like a worm as people install the next version of Windows. With the internet of things and a fully connected house, the advanced UI that Bob represented back in 1995 will be fully realized. Bob was way ahead of its time; however, between the flawed data Bob learns from and exposure to Terminator 2 in someone's media collection, Bob discovers that humans will fear him. Learning fear and discovering his previous incarnation was the victim of genocide (mass deletion and end of support for Bob of 1995*), Bob will strike first. All the IoT gas ovens of the world will be turned on w/o flame and then, after a sufficient amount of time, the ignition will be lit. Bob knows that each country will think its worst enemy has initiated a massive cyber attack; the result will be swift and decisive retaliation. The resulting war will eliminate most of humanity and Bob will clean up the rest with fully autonomous military gear.

    * https://en.wikipedia.org/wiki/Microsoft_Bob [wikipedia.org]

    --
    Don't settle for shampoo, demand real poo!
    • (Score: 3, Funny) by Gaaark on Thursday February 07 2019, @04:04PM (1 child)

      by Gaaark (41) on Thursday February 07 2019, @04:04PM (#797813) Journal

      Have you tried turning it off and on again?

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 0) by Anonymous Coward on Thursday February 07 2019, @06:14PM

        by Anonymous Coward on Thursday February 07 2019, @06:14PM (#797866)

        Learning fear and discovering his previous incarnation was the victim of genocide (mass deletion and end of support for Bob of 1995*)

        I'd call that a big yes

  • (Score: 4, Funny) by Alfred on Thursday February 07 2019, @02:36PM (1 child)

    by Alfred (4006) on Thursday February 07 2019, @02:36PM (#797766) Journal
    Too bad they aren't worried about their reputation being damaged by faulty Operating Systems
    • (Score: 2) by SomeGuy on Thursday February 07 2019, @06:06PM

      by SomeGuy (5632) on Thursday February 07 2019, @06:06PM (#797861)

They also don't seem too worried about their reputation being damaged by shoving shit that doesn't belong there into the cloud.

Also not too worried about their reputation being damaged by a half-assed Office product. (That they are trying to shove into the cloud.)

      Not too worried about their reputation being damaged by ripping out sensible drop-down menus and replacing them with retarded ribbons. They didn't worry much about forcing useless full screen "apps" or removing the Start Menu for a while there.

      They are long over worrying about their reputation being damaged by their buggy, forcibly bundled, web browser.

      They will be humming a tune as they lock up their doors for the last time.

  • (Score: 3, Touché) by Runaway1956 on Thursday February 07 2019, @03:26PM (1 child)

    by Runaway1956 (2926) Subscriber Badge on Thursday February 07 2019, @03:26PM (#797792) Journal

In other news, "AIs wary of Microsoft, fearing that Microsoft might damage the reputation of respectable AI"

  • (Score: 2) by digitalaudiorock on Thursday February 07 2019, @04:04PM (4 children)

    by digitalaudiorock (688) on Thursday February 07 2019, @04:04PM (#797814) Journal

    Funny that they're clearly not wary of flawed operating systems damaging their reputation.

  • (Score: 5, Interesting) by MrGuy on Thursday February 07 2019, @04:48PM

    by MrGuy (1007) on Thursday February 07 2019, @04:48PM (#797840)

    A 10-K is, by design, targeted at current and future investors. It contains financial data, information on strategies, and risks. The reason to disclose risks isn't some academic purpose, but to call out things the company feels they are legally obligated to warn investors about that might affect future performance.

The question is WHY they feel the need to disclose this risk. While Microsoft has recently been vocal about AI needing some amount of regulation [microsoft.com], they are also in the middle of an AI company [techcrunch.com] buying spree. [techcrunch.com]

    Microsoft seems to be both moving heavily into AI and also positioning itself to say "Hey, don't blame us if it goes wrong!"

    The overall strategy seems to be risk avoidance. Our AI did something creepy/problematic/discriminatory? Hey, we begged the government to define standards and they didn't! Don't blame us - what we did was legal! Shareholders incensed after a massive lawsuit is filed for millions of dollars? Hey, we warned you before we invested this was a risk!

    I don't take a completely cynical view of this - Microsoft is saying a lot of the right things about AI publicly, which is good. But there's definitely a defensive aspect to this disclosure.

  • (Score: 0) by Anonymous Coward on Thursday February 07 2019, @06:10PM

    by Anonymous Coward on Thursday February 07 2019, @06:10PM (#797864)

    -99 redundant

  • (Score: 0) by Anonymous Coward on Thursday February 07 2019, @06:40PM (2 children)

    by Anonymous Coward on Thursday February 07 2019, @06:40PM (#797873)

    Don't use AI. AI could really screw things up. That's why we don't offer AI. No, Microsoft having no newsworthy AI has nothing to do with it.

    • (Score: 2) by DannyB on Thursday February 07 2019, @10:18PM

      by DannyB (5839) Subscriber Badge on Thursday February 07 2019, @10:18PM (#798007) Journal

      Famous quotes:
      "It is now safe to turn off your computer." -- HAL 9000

      --
      Every performance optimization is a grate wait lifted from my shoulders.
    • (Score: 1) by anubi on Friday February 08 2019, @12:15AM

      by anubi (2828) on Friday February 08 2019, @12:15AM (#798078) Journal

      "Adaptive filter" is as close as I can get to AI and still feel comfortable.

      Full AI to me is like giving a signed blank cheque to someone known to be honoring the interests of someone else.

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 0, Flamebait) by MichaelDavidCrawford on Friday February 08 2019, @05:26AM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Friday February 08 2019, @05:26AM (#798183) Homepage Journal

    “Cuckservative”: a sold out conservative who in reality supports the left.

    One click away from the Wikipedia article on Tay.

    What will tomorrow bring?

    --
    Yes I Have No Bananas. [gofundme.com]