New research has shown just how bad AI is at dealing with online trolls.
Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.
A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.
Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.
The adversarial examples successfully evaded detection; the models failed to pick up on them. These tricks wouldn’t fool humans, but machine learning models are easily blindsided. They can’t readily adapt to new information beyond what’s been spoon-fed to them during the training process.
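The perturbations the summary describes can be sketched in a few lines. This is a minimal illustration, not the paper's actual attack code; the function names and the character-substitution table are my own stand-ins.

```python
# Toy versions of the perturbations described above: leetspeak character
# swaps, inserted spaces, and appending an innocuous word.

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def leet_swap(text: str) -> str:
    """Swap selected letters for look-alike digits."""
    return "".join(LEET.get(c, c) for c in text)

def insert_spaces(text: str) -> str:
    """Break every word apart with spaces between its characters."""
    return " ".join(" ".join(word) for word in text.split())

def append_innocuous(text: str, word: str = "love") -> str:
    """Attach a benign word to nudge the classifier's score."""
    return f"{text} {word}"

example = "you are stupid"
print(leet_swap(example))       # y0u 4r3 5tup1d
print(insert_spaces(example))   # y o u a r e s t u p i d
print(append_innocuous(example))
```

A human reads all three outputs the same way; a model trained on clean text may tokenize them into entirely different features, which is why such trivial edits slip past detection.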
(Score: 4, Insightful) by Runaway1956 on Saturday September 01 2018, @08:11AM (5 children)
All software is pretty static. Software can't be updated and upgraded on an hourly basis - it's written, compiled, tested, released, and put into use. The average "hacker", for want of a better term, has the initiative. He can examine your code, poke it, prod it, kick it around, and watch what it does. When he's gained a little confidence, he can try to break your code. And, you can do nothing, other than to react to the break, days, weeks, months, or even years later.
All this AI is just software, after all. And the "hackers" are browsing Facefook, Twitter, and all the rest of the "social media" with nothing better to do than test the software.
You, the defender - the software writer - can improve your defensive fortress forever. That won't change the fact that the attackers have the initiative, and they are destined to beat you.
How many protection schemes on software, music, games, or proprietary hardware remain undefeated? Anyone know where I can get a keygen for $software?
(Score: 1, Touché) by Anonymous Coward on Saturday September 01 2018, @08:39AM (2 children)
Bullshit. All the crypto algos are public, NSA still needs to buy that $5 wrench to get to the encryption key.
Da fuck - most of this AI is in the model that one trains - i.e. data.
Do you attend the local tautology club often?
Yes, you are right, today's AI is dumb and adversarial attacks are easy to craft. But you are right for the wrong (or intellectually bland) reasons.
(Score: 0) by Anonymous Coward on Saturday September 01 2018, @09:50AM (1 child)
nice warping you attempted there before conceding his point!
"they are destined to beat you."
the archetypal war between (ordered) day and (chaotic) night requires
that we walk along the razor's edge of culture without falling into the deep on either side.
(Score: 0) by Anonymous Coward on Saturday September 01 2018, @10:10AM
His point is: "AI Sucks At Stopping Online Trolls Spewing Toxic Comments", with wrong explanations of why that is.
Where the wrong explanations are relevant.
Take a trained AI and go trial-and-error-hacker to find the cracks.
Then find adversarial attacks based on the knowledge on NN and compare the costs between the two approaches.
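The black-box, trial-and-error approach that comment describes can be shown with a toy example: keep perturbing a flagged comment until the filter passes it. The keyword filter and the dot-insertion perturbation here are hypothetical stand-ins, not any real moderation system or published attack.

```python
# Toy black-box attack: no knowledge of the model's internals, just
# query-and-retry against a stand-in keyword filter.

BLOCKLIST = {"stupid", "idiot"}

def is_flagged(text: str) -> bool:
    """Stand-in classifier: flags text containing a blocklisted word."""
    return any(w in BLOCKLIST for w in text.lower().split())

def evade(text: str) -> str:
    """Trial and error: insert a dot into each word until the filter passes."""
    words = text.split()
    for i, w in enumerate(words):
        for pos in range(1, len(w)):
            candidate = words[:i] + [w[:pos] + "." + w[pos:]] + words[i + 1:]
            attempt = " ".join(candidate)
            if not is_flagged(attempt):
                return attempt
    return text  # no evasion found

print(evade("you are stupid"))  # you are s.tupid
```

Against a keyword filter this succeeds in a handful of queries; a gradient-based (white-box) attack on a neural model needs access to the weights but finds perturbations far more efficiently, which is the cost comparison the comment proposes.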
(Score: 2) by coolgopher on Saturday September 01 2018, @08:41AM
Sure, it's right over there, it comes prepackaged with $malware for your naiveté^Wconvenience. Get the .exe file directly, it's quicker and safer than the .zip. ;)
(Score: 2) by fritsd on Saturday September 01 2018, @04:29PM
No.
The functionality of an AI program is much more defined by what kind of data it has been trained on.
Exhibit #1: Microsoft "Tay" [soylentnews.org]
I loved the BBC article, Microsoft chatbot is taught to swear on Twitter [bbc.com] because it has a snapshot
of someone's funny tweet: