Arthur T Knackerbracket has processed the following story:
It seems that although the internet is increasingly drowning in fake images, we can at least take some stock in humanity’s ability to smell BS when it matters. A slew of recent research suggests that AI-generated misinformation did not have any material impact on this year’s elections around the globe because it is not very good yet.
[...] In the U.S., the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% was made using AI. On X, mentions of “deepfake” or “AI-generated” in Community Notes typically spiked around the release of new image generation models, not around the time of elections.
Interestingly, it seems that users on social media were more likely to misidentify real images as being AI-generated than the other way around, but in general, users exhibited a healthy dose of skepticism. And fake media can still be debunked through official communications channels, or through other means like Google reverse image-search.
It is hard to quantify with certainty how many people have been influenced by deepfakes, but the finding that they have been largely ineffective makes a lot of sense. AI imagery is all over the place these days, but images generated using artificial intelligence still have an off-putting quality to them, exhibiting tell-tale signs of being fake. An arm might be unusually long, or a face might not reflect properly in a mirrored surface; there are many small cues that give away that an image is synthetic. Photoshop can be used to create much more convincing forgeries, but doing so requires skill.
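For the curious, here is a minimal sketch of the idea behind the reverse image-search check mentioned above: compare a suspect image against a known original using a perceptual hash. This assumes the Pillow and imagehash Python packages are installed; the file names and threshold are made-up placeholders, not part of any real workflow described in the story.

```python
# Rough stand-in for a reverse image search: a perceptual hash lets you
# check whether a viral image is just a recompressed/resized copy of a
# known original, or something materially different.
from PIL import Image
import imagehash

def looks_like_same_photo(suspect_path: str, reference_path: str,
                          threshold: int = 8) -> bool:
    """True if the two images are perceptually similar (small Hamming distance)."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    reference_hash = imagehash.phash(Image.open(reference_path))
    return (suspect_hash - reference_hash) <= threshold

# Hypothetical usage:
# print(looks_like_same_photo("viral_post.jpg", "official_press_photo.jpg"))
```

A real reverse image search does far more than this, but the comparison step is roughly the same idea.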
Related Stories
Arthur T Knackerbracket has processed the following story:
As AI-generated content gets more ubiquitous in our everyday lives, you may be wondering, "How do I identify AI text?"
It's no surprise that these models get more difficult to detect as AI technology evolves. For now, the good news is that content such as images and video isn't that hard to parse with the human eye.
If you're a teacher or just a seasoned internet traveler, what's the secret to spotting AI-generated text? Well, it's simpler than you might think: use your eyes. There are actually ways to train the human eye to discern AI statements. Experts like MIT Technology Review's Melissa Heikkilä write that the "magic" of these machines "lies in the illusion of correctness."
No two people write in the same way, but there are common patterns. If you've ever worked a corporate job, you know how everyone uses the same generic phrasing when drafting memos to their boss. That's why AI text detectors so often flag human writing as "likely AI-generated": distinguishing between a bland human writing style and a generic AI-generated voice is nearly impossible.
So here are some tips and tricks for spotting potentially AI-generated text (a rough code sketch follows the list):
- Look for frequent use of words like "the," "it," and "its."
- Absence of typos—AI text is often too perfect.
- Conclusionary statements that neatly sum up paragraphs.
- Overly verbose or padded writing.
- False or fabricated information and sources.
- A tone more advanced than the writer's usual submissions.
- Repetitive phrasing or oddly polished grammar.
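To show how crude these signals are in practice, here is a toy scorer built around a few of them. It is only an illustration, not a real detector; the word list, token pattern, and what counts as a "high" score are arbitrary assumptions.

```python
# Toy heuristics echoing the list above: filler-word frequency, repetitive
# phrasing, and long (possibly padded) sentences. None of these prove
# anything on their own.
import re
from collections import Counter

FILLER_WORDS = {"the", "it", "its"}

def ai_text_hints(text: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total = max(len(words), 1)
    counts = Counter(words)

    filler_ratio = sum(counts[w] for w in FILLER_WORDS) / total

    # Repetitive phrasing: how dominant the single most common content word is.
    content = [w for w in words if w not in FILLER_WORDS and len(w) > 3]
    top_repeat = (Counter(content).most_common(1)[0][1] / len(content)) if content else 0.0

    # Verbosity/padding hint: average sentence length in words.
    avg_sentence_len = total / max(len(sentences), 1)

    return {
        "filler_ratio": round(filler_ratio, 3),
        "top_word_repetition": round(top_repeat, 3),
        "avg_sentence_length": round(avg_sentence_len, 1),
    }

# Example:
# print(ai_text_hints("In conclusion, it is important to note that the topic is important."))
```

High numbers here show up in plenty of human writing too, which is exactly the false-positive problem described above.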
(Score: 1) by MonkeypoxBugChaser on Tuesday December 31, @02:27AM (1 child)
So putin, trump and biden weren't standing at my door with guns?
It's much more convincing to edit some tweet or post and then say the person deleted it if challenged. The "skill" of using inspect element and print screen.
It's for top wizards only.
(Score: 2) by OrugTor on Tuesday December 31, @04:13PM
The thing about political imagery is that the target audience doesn't care if it's fake. They never cared about misinformation of any kind. If a video shows a politician doing something the politician's supporters don't like, they will assume it's fake. If the video fits their narrative, they will act as if it is real, regardless of its apparent degree of authenticity.
(Score: 5, Insightful) by Unixnut on Tuesday December 31, @02:47AM (6 children)
The "yet" is the important word there. I have noticed that the AI generated images are getting better. While AI images originally just could not do fingers and toes (making some really warped hands and feet), the newer models have drastically improved things.
Originally it was easy to tell AI/deepfake images apart from real ones. However, AI CGI has been getting so good that recently I find it hard to tell whether a video or image is real or not. Usually the only saving grace I find is in images that contain text, as AI text is always garbled (or is in a language only the AI understands).
The improvement has also been relatively fast over the last two years. If the rate of improvement continues at this pace, it will eventually become almost impossible to discern whether media is fake or real, so people will err on the side of caution and classify more media as fake, even when it is real (TFA mentions we are already reaching this point with social media users).
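That garbled-text tell can even be checked mechanically. A rough sketch, assuming the pytesseract and Pillow Python packages (plus the Tesseract binary) are available; the token pattern and any cutoff you apply are arbitrary assumptions, not a proven test.

```python
# Sketch of the "garbled text" check: OCR any text in the image and see
# what fraction of the tokens look like plausible words.
import re
from PIL import Image
import pytesseract

def garbled_text_ratio(image_path: str) -> float:
    """Return the fraction of OCR'd tokens that look garbled."""
    raw = pytesseract.image_to_string(Image.open(image_path))
    tokens = re.findall(r"\S+", raw)
    if not tokens:
        return 0.0  # no text found, nothing to judge
    # Heuristic: a "plausible" token is mostly letters and of sane length.
    plausible = [t for t in tokens if re.fullmatch(r"[A-Za-z][A-Za-z'\-]{0,19}", t)]
    return 1.0 - len(plausible) / len(tokens)

# Hypothetical usage:
# print(garbled_text_ratio("suspect_screenshot.png"))
```

A high ratio only hints that the lettering is the melted pseudo-text image generators often produce; blurry signs or handwriting in real photos will trip it too.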
(Score: 0) by Anonymous Coward on Tuesday December 31, @04:28AM
In the 70's or 80's, there was a political something (or a court case?) over unrealistic standards -- politicians (prosecutors?) swore up and down that the playmate of the year (month?) was made up of at least 5 different women, all combined into one perfect being. The actual playmate was introduced as herself, that unbelievably "perfect" model.
In the early 2000's, there was the outrage over unrealistic beauty standards, and a video showing that a photoshopper could turn a slice of pizza into a (human) supermodel. (No one even complained about He-Man, though...)
In the early 2020's there was all the rage about computers being able to generate super-realistic AI-generated photos.
In the early 2020's there was fear that computer-generated video would be utterly indistinguishable from real video.
Sigh. We're all still waiting. The thing about outrage -- for all but the politicians, for whom it's a job, it becomes wearying. It becomes exhausting to keep up the facade. It gets sloughed off as we all get back to life.
"Yet." Of course. Not yet. Still not yet. Tomorrow: probably not yet.
But hey. Fear, Uncertainty, and Doubt -- "What If". We have to be ready. We have to do all that we can. We must...!!
(Score: 2) by mhajicek on Tuesday December 31, @07:14AM (1 child)
There are models that handle text properly now.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 2) by Unixnut on Tuesday December 31, @12:17PM
It doesn't surprise me, just like there are models now that handle human digits correctly. I expect it will only get better. It will be interesting to see whether a plateau is reached or whether models keep improving to the point where what they generate is indistinguishable from reality.
(Score: 2) by JoeMerchant on Tuesday December 31, @04:01PM (2 children)
You know how you tell when AI images have really gotten good? You don't. At least if you're not the one making them, they should be 100% completely convincing.
You know how AI images have already affected popular opinion? Lots of people are convinced that real images could be AI, so they only believe what they want to believe and shrug off everything else as "fake news."
(Score: 2) by Unixnut on Tuesday December 31, @05:18PM (1 child)
They have definitely gotten good enough to muddy the waters. I admit that the energy and mental hassle required to study each video and picture to see whether it is AI or not gets higher the more convincing they become. Text (as in articles) is still relatively easy to detect (as was shown on SN recently with certain articles), but no doubt that will improve with time as well.
Imagine, an entire global network stuffed so much with bots and AI generated nonsense that it would be easier to go to your local library to read a book to get accurate information than it is to sift through all that garbage. Like coming full circle to before the internet became mainstream.
It is already reaching the point where I am withdrawing from most things online just because it's gotten to be too much hassle to separate the information from the noise. We (humanity) are at risk of effectively rendering mass communication impossible just due to the sheer volume of noise that can now be generated.
At that point, as you mentioned, people unable to discern fantasy from reality will default to their own opinions and biases, and just flat out reject whatever does not fit their preconceived notions as "fake/AI".
(Score: 2) by JoeMerchant on Tuesday December 31, @10:25PM
>Imagine, an entire global network stuffed so much with bots and AI generated nonsense that it would be easier to go to your local library to read a book to get accurate information
1980s Skynet was going to nuke us
2020s Skynet is going to SPAM us to death, possibly getting us to nuke each other in the aftermath.
>We (humanity) are at risk of effectively rendering mass communication impossible just due to the sheer volume of noise that can now be generated
There are some fringe-ish players (some of the people behind Mastodon, for one example) who are mulling over this issue. A lot comes down to "human scale" networks. People didn't evolve to cope with more than a few hundred active relationships. Throw bots into the mix and we have even less bandwidth left for relating to real people. Of course, Meta is working on doing just that: https://www.rollingstone.com/culture/culture-news/meta-ai-users-facebook-instagram-1235221430/ [rollingstone.com]
(Score: 2) by Rosco P. Coltrane on Tuesday December 31, @10:19AM (3 children)
The porn sites I patronize are absolutely flooded with fake stuff. There's a staggering amount of it. But after looking at a lot of it, I can tell the fake stuff almost immediately. There are subtle but unmistakable clues when stuff is AI-generated - particularly video content.
So the internet isn't overrun by fake stuff that's impossible to distinguish from the real thing. That's the good news. The bad news is, it IS overrun by fake stuff, and it's a massive time waster. In the age of AI, you have to sift through tons and tons of worthless garbage to find interesting content, and it's getting worse fast.
So kind of like before, but exponentially worse.
(Score: 2) by JoeMerchant on Tuesday December 31, @04:07PM (2 children)
>So the internet isn't overrun by fake stuff that's impossible to distinguish from the real thing.
The actual news is: you don't know that.
By definition, if it's impossible to distinguish, you won't know. You may see a bunch of un-convincing crap out there, but just because there's bad stuff doesn't mean there's also not good stuff...
As for "fake porn" - centerfolds have been airbrushed and creatively lit since there have been centerfolds. They tend to avoid needing to do too much of that because it's cheaper to just get good looking models, and there's a certain appeal to "real" - but they still do the quick and easy stuff because: quick and easy = cheap, and if a little investment makes more magazines sell... they're not in business for any other reasons than to make money.
My perception of modern porn is: the industry is awash in low effort content. Unsurprising that the AI stuff is low effort too.
(Score: 2) by Rosco P. Coltrane on Tuesday December 31, @09:46PM (1 child)
Porn is the mirror of the internet. The entire internet is, and pretty much has always been since it got away from academia and became for-profit, low effort mediocrity. Because as per the Pareto principle, that's the amount of effort it takes to reach 80% of credulous people with money in their pocket.
So no, I don't think there's a massive influx of good, believable AI-generated fakery that stands up to a modicum of scrutiny and critical thinking, because there doesn't need to be: the low-quality crap is sufficient to get enough people riled up, sway them into voting with their reptilian brains, and get them spending their money on shit they don't want (and saying thank you to boot), all while turning a gigantic profit. There is for sure good hand-made and machine-made fake, but it's not the norm.
(Score: 2) by JoeMerchant on Tuesday December 31, @10:36PM
>There is for sure good hand-made and machine-made fake, but it's not the norm.
Agreed, but when "the good stuff" is crafted really well, even "experts" can't tell, and that's still a powerful tool when used judiciously.
The low-effort way to exploit the perception that fake images, voices and videos are impossible to detect is, of course, to undermine trust in the media, so that manufactured fake truth carries equal weight with old-school mainstream reporting.
(Score: 2) by bzipitidoo on Wednesday January 01, @03:00AM
Although you can tell when textures don't look quite right, what really gives away the fakery is the impossibility of the situation that is being depicted.
Like, I saw a video of a jeep crossing a flooded ford, with water up to the hood. The water shoved it off course over and over, but it kept pushing and made it. It fooled hardly anyone. Lots of people know that a combustion engine cannot operate underwater. A real jeep would have stalled the moment that engine got dunked. The water doesn't even have to reach the air intake; water conducts electricity, and shorting out the electrical system is enough to disable an engine. No spark = no combustion = dead engine.
Another impossibility in that video: flood waters that high and moving that fast would have done far more than shove the jeep a foot or two downstream; the flood would have swept it away to be deposited miles downstream regardless of whether the engine worked or not.