Arthur T Knackerbracket has found the following story:
MIT researchers have built a system that fools natural-language-processing classifiers by swapping words for synonyms:
The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with synonyms that a human would find natural. For example, changing the sentence "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it. But the tweaks made the AI interpret the sentences completely differently.
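As a rough sketch of that two-step mechanism, here is a toy version in Python: rank the words by how much deleting each one shifts a classifier's score, then greedily swap the top-ranked words for synonyms until the prediction flips. Everything here (the keyword classifier, the synonym table, the function names) is invented for illustration; the MIT team's actual system draws candidates from counter-fitted word embeddings and checks semantic similarity against real models such as BERT.

    # Toy sentiment "model": margin of positive over negative keywords.
    POSITIVE = {"great", "natural", "charming"}
    NEGATIVE = {"contrived", "estranged", "awful"}

    def classify(words):
        """Return a sentiment margin; >= 0 is read as 'positive'."""
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    # Hand-rolled stand-in for the embedding-based candidate generation.
    SYNONYMS = {
        "contrived": ["engineered", "manufactured"],
        "totally": ["fully", "utterly"],
        "estranged": ["detached", "alienated"],
    }

    def attack(sentence):
        words = sentence.lower().replace(",", "").split()
        orig_score = classify(words)
        orig_positive = orig_score >= 0

        # How far a new score has moved toward the *opposite* label.
        def gain(score):
            return orig_score - score if orig_positive else score - orig_score

        # Step 1: word importance = score shift when the word is deleted.
        def importance(i):
            return gain(classify(words[:i] + words[i + 1:]))

        ranked = sorted(range(len(words)), key=importance, reverse=True)

        # Step 2: walk the ranking, swapping each word for the synonym that
        # pushes the score furthest the wrong way; stop once the label flips.
        for i in ranked:
            best = max(
                SYNONYMS.get(words[i], []),
                key=lambda c: gain(classify(words[:i] + [c] + words[i + 1:])),
                default=None,
            )
            if best is not None:
                trial = words[:i] + [best] + words[i + 1:]
                if gain(classify(trial)) > 0:
                    words = trial
            if (classify(words) >= 0) != orig_positive:
                break  # prediction flipped: adversarial example found
        return " ".join(words)

    if __name__ == "__main__":
        s = ("The characters, cast in impossibly contrived situations, "
             "are totally estranged from reality")
        print("before:", classify(s.lower().replace(",", "").split()))
        adv = attack(s)
        print("after: ", classify(adv.split()), "--", adv)

Run on the example sentence from the story, the toy model's verdict flips from negative to positive after just two synonym swaps, even though a human reading of the review is essentially unchanged.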
The results of this adversarial machine learning attack are impressive:
For example, Google's powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.
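As for what that factor means in practice, one plausible reading is the ratio of the model's accuracy (or error rate) before and after the attack. A hedged sketch of such a measurement, reusing the toy classify() and attack() from above and an invented two-review "dataset" in place of the Yelp data:

    def accuracy(reviews, labels, perturb=None):
        """Fraction of reviews labeled correctly, optionally after an attack."""
        hits = 0
        for text, label in zip(reviews, labels):
            if perturb is not None:
                text = perturb(text)
            pred = classify(text.lower().replace(",", "").split()) >= 0
            hits += (pred == label)
        return hits / len(reviews)

    reviews = [
        "a charming and natural story",                       # positive
        "characters estranged from reality, contrived plot",  # negative
    ]
    labels = [True, False]

    clean = accuracy(reviews, labels)
    attacked = accuracy(reviews, labels, perturb=attack)
    # A "worse by a factor of five to seven" claim could then be read as
    # clean / attacked (or as a ratio of error rates) on the real data.
    print(f"clean accuracy: {clean:.2f}, under attack: {attacked:.2f}")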
The paper: "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment," Jin et al. (arXiv:1907.11932).
-- submitted from IRC
(Score: 0) by Anonymous Coward on Wednesday April 29 2020, @02:42PM
If the weakness is in ALL neural nets.... it's not the dataset. /cluebat
And right on cue the NIH has thrown all its money into AI, done by 21-year-old Chinese grad students advised by newly minted Chinese assistant/associate professors. A perfectly diverse workforce (as per the UC mandate) of 95% Chinese + 5% Iranian males between the ages of 21 and 35.
Oh, and coronavirus: "whatever, just do something" (ditto on the Chinese component).