Submitted via IRC for TheMightyBuzzard
The Internet can be an ugly place — one where the mere act of expressing an opinion can result in a barrage of name-calling, harassment and sometimes threats of violence.
Nearly half of U.S. Internet users say they have experienced such intimidation; a third say they have resisted posting something online out of fear, according to the nonprofit Data and Society Research Institute. Women, particularly young women and women of color, are disproportionately targeted.
Now Google is zeroing in on the problem. On Thursday, the company publicly released an artificial intelligence tool, called Perspective, that scans online content and rates how "toxic" it is based on ratings by thousands of people.
For example, you can feed an online comment into Perspective and see what percentage of raters considered it toxic. The toxicity score can help people decide whether they want to participate in the conversation, said Jared Cohen, president of Jigsaw, the company's think tank (previously called Google Ideas). Publishers of news sites can also use the tool to monitor their comment boards, he said.
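To make the workflow concrete, here is a minimal sketch of how a publisher might talk to a Perspective-style scoring service. The endpoint URL, field names, and the TOXICITY attribute below are assumptions based on Google's publicly documented Perspective API, not details from the article; the sample response values are illustrative only.

```python
# Hypothetical request/response handling for a Perspective-style API.
# Endpoint and field names are assumptions from Google's public docs,
# not from the article itself.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the JSON body asking the service to score one comment for toxicity."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the 0.0-1.0 summary score from an API response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative shape of a response (values made up for the example):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
```

A site would POST `build_request(...)` to the endpoint with an API key and then read the score out of the JSON reply; everything else (thresholds, moderation policy) is left to the publisher.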
[...] Google's troll-fighting efforts trail those of other tech companies and nonprofit groups. Earlier this month, Twitter — which has developed a reputation as a playground for abuse — launched new tools to cut down on trolling.
[...] Asked whether the site could result in censoring free speech, Cohen said that the software tool wasn't intended to bypass human judgment, but to flag "low-hanging fruit" that could then be passed on to human moderators.
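Cohen's "low-hanging fruit" description amounts to a triage step: the model only flags comments above some score, and humans make the final call. The function below is a hypothetical sketch of that split (the threshold value and function names are assumptions for illustration, not anything described in the article).

```python
def triage(scored_comments, threshold=0.8):
    """Split (text, toxicity_score) pairs into a human-review queue and
    a pass-through list. The model never removes anything on its own;
    it only surfaces likely-toxic comments for moderators."""
    review_queue, passed = [], []
    for text, score in scored_comments:
        if score >= threshold:
            review_queue.append(text)  # flagged for a human moderator
        else:
            passed.append(text)        # published without review
    return review_queue, passed
```

The threshold is the policy knob the censorship debate turns on: set it low and moderators drown in false positives, set it high and abuse slips through.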
Because speech should only be free if it's polite and you agree with it.
(Score: 2) by bob_super on Friday February 24 2017, @09:55PM
Apples, meet rear-axle grease...
(Score: 0) by Anonymous Coward on Friday February 24 2017, @09:59PM
Nope, that is quite the apt comparison. To continue it, Google's AI is like paying some local toughs to go tear down / destroy any pamphlets that have undesirable content in them. I can see the usefulness of this tool, but I can also see its application becoming overly broad. The DMCA takedowns should be a huge warning to all: if your content gets flagged by the AI, then you'll have to fight to get it back at the very least. Guilt by accusation — people tend to ignore cries for help when a big organization labels the person "bad" somehow.