Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun:
Here's an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It's an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.
But while it's an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today's human-run scams aren't limited by the number of people who respond to the initial email contact. They're limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that.
[...] Long-running financial scams are now known as pig butchering: fattening up the potential mark until their ultimate and sudden demise. Such scams, which require gaining trust and infiltrating a target's personal finances, take weeks or even months of personal time and repeated interactions. It's a high-stakes, low-probability game that the scammer is playing.
Here is where LLMs will make a difference. Much has been written about the unreliability of OpenAI's GPT models and those like them: They "hallucinate" frequently, making up things about the world and confidently spouting nonsense. For entertainment, this is fine, but for most practical uses it's a problem. It is, however, not a bug but a feature when it comes to scams: LLMs' ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare more people, because the pool of victims who will fall for a more subtle and flexible scammer—one that has been trained on everything ever written online—is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.
[...] A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting in pursuit of their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable the composition of AI with thousands of API-based cloud services and open source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country's riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.
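[An aside for the technically curious: the composition pattern the essay gestures at is simple enough to sketch. Below is a minimal, hypothetical tool-use loop in Python. `call_llm` and the toy `TOOLS` table are invented placeholders, not LangChain's or OpenAI's actual API, but they show the shape of the thing: a model's text output gets parsed, routed into arbitrary web services, and fed back in.]

```python
import json

# Hypothetical stand-in for a hosted LLM API; not a real client library.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model of choice here")

# Toy "tools" the model can request; real frameworks (LangChain,
# ChatGPT plugins) expose catalogs of thousands of these.
TOOLS = {
    "web_search": lambda query: f"(search results for {query!r})",
    "send_email": lambda to, body: f"(email queued to {to})",
}

def agent_loop(conversation: str, max_steps: int = 8) -> str:
    """Ask the model what to do; if it names a tool, run it and feed
    the result back in; otherwise its reply is the final answer."""
    for _ in range(max_steps):
        reply = call_llm(conversation)
        try:
            # Expected shape: {"tool": "web_search", "args": {"query": "..."}}
            action = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain text means the model answered directly
        if not isinstance(action, dict):
            return reply
        tool = TOOLS.get(action.get("tool"))
        if tool is None:
            return reply
        result = tool(**action.get("args", {}))
        conversation += f"\nTool result: {result}"
    return conversation
```

[Nothing in the sketch is exotic, which is the point: once the model can name a tool, every API the operator can reach becomes part of the scam.]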
[...] Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI's designers. ChatGPT, and then Bing Chat, and then GPT-4 were each jailbroken within minutes of release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pulls the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone to fully understand how these models work, even their designers.
Originally spotted on Schneier on Security.
(Score: 3, Interesting) by krishnoid on Thursday April 13, @01:27AM (4 children)
"A robot can tell you what temperature it is to a tenth of a degree. But it cannot tell you if it feels cold." For now at least, people have favorite [sensory] preferences, so you might as well ask them for personal info. I sometimes ask customer support (especially in chat sessions) what the weather's been like where they are or what kind of music they like, just to confirm that I'm talking to a person.
(Score: 2, Interesting) by Anonymous Coward on Thursday April 13, @09:45AM (1 child)
But in the context of the title, there's no need to figure out whether it's a human or an AI. All you need to figure out is whether it's a scam or not, and that's normally not that hard. It's just that many people are greedy and/or incompetent.
Where I am, you're not charged when you receive a call, but you are charged for making calls. So what I do is try to keep scammers on the line while doing something else.
On a related note: https://www.vice.com/en/article/d3b7na/the-story-of-lenny-the-internets-favorite-telemarketing-troll [vice.com]
e.g. https://youtu.be/vWrkDOt_IfM [youtu.be]
So in theory you could have an AI talking to an AI for such cases... 😉
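For the curious: Lenny doesn't even use an LLM. It's a set of prerecorded, deliberately vague audio clips that an Asterisk dialplan replays whenever the caller pauses. A toy sketch of the idea in Python (the lines and the silence-detection hook here are hypothetical stand-ins):

```python
import itertools
import time

# Canned, deliberately vague lines in Lenny's spirit; the real Lenny is
# a set of prerecorded audio clips played back by an Asterisk dialplan.
STALL_LINES = [
    "Hello? Sorry, could you say that again?",
    "Yes, yes... my eldest looks after all that for me.",
    "Hang on, there's someone at the door.",
    "Where were we? Oh yes, go on.",
]

def wait_for_pause() -> None:
    """Placeholder: a real setup would watch the call audio for silence
    before responding. Here we just sleep."""
    time.sleep(5)

def bait_scammer(max_turns: int = 40) -> None:
    # Cycle through the stalling lines until the caller gives up.
    for line in itertools.islice(itertools.cycle(STALL_LINES), max_turns):
        wait_for_pause()
        print(f"BOT: {line}")  # stand-in for playing the audio clip
```

The design choice is the joke: the vaguer the canned lines, the longer the scammer projects meaning onto them.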
(Score: 2, Funny) by Anonymous Coward on Thursday April 13, @12:25PM
Oh...I luv that idea...letting the telemarketer talk to an AI.
Imagine this... In Trump's or Biden's voice, the nice AI talks the telemarketer into all sorts of cool stuff. The telemarketer is pleased... he spent hours and sold lots of stuff to what he thought was a gullible customer.
The stuff is delivered to someone else, who gets to keep it, since they didn't order it. Unpaid bills. People go to court. They play their recorded phone call for the judge. The accused doesn't match the voice, but it does match Trump.
So it's Trump's fault.
(Score: 1, Informative) by Anonymous Coward on Thursday April 13, @12:51PM (1 child)
> just to confirm that I'm talking to a person.
Flip side of that, if I'm talking to a voice response system, it's really annoying if it tries to pretend it's a person. Case in point, I call the newspaper company if my paper isn't delivered. The automated script works fine, credits me when delivery fails (like during a blizzard). But it adds in "busy noises in the background" and says, "Just a minute while I check to see if your phone number is in our database". Bleah, after calling it a couple of times, any possible illusion is gone and the lipstick on this pig is just a waste of time.
(Score: 1, Interesting) by Anonymous Coward on Thursday April 13, @06:31PM
Just finished my taxes using TurboTax. About every 10 minutes it pops up a message telling me it is "checking" my returns *blinkety blink blink*, shows a 5-second graphic of a progress bar, then pops up a message telling me they have checked my returns 100%, click OK to continue. What the FUCK is this shit? Let me PUNCH THE FUCKING MONKEY NOW.
(Score: 2) by https on Thursday April 13, @02:08PM
This technology is not "advancing" but proliferating.
Called it last week [soylentnews.org].
Offended and laughing about it.
(Score: 2) by cmdrklarg on Thursday April 13, @04:57PM (3 children)
Pretty soon we will have to refer to an allow list to let email through. That won't even be enough. *sob*
Answer now is don't give in; aim for a new tomorrow.
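The allowlist half of that is trivial to wire up today; it's the "won't even be enough" half that hurts. A toy sketch (the addresses are invented; a real deployment would hang this off procmail, Sieve, or a milter):

```python
import email
import email.utils
import sys

# Toy allowlist filter: reads one RFC 822 message on stdin and exits
# 0 (deliver) or 1 (reject) based on the From address. The addresses
# below are made up for illustration.
ALLOWED = {"alice@example.com", "billing@example.org"}

def main() -> int:
    msg = email.message_from_file(sys.stdin)
    _, addr = email.utils.parseaddr(msg.get("From", ""))
    return 0 if addr.lower() in ALLOWED else 1

if __name__ == "__main__":
    sys.exit(main())
```

Of course, From headers are trivially forged, which is exactly why it won't be enough.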
(Score: 3, Insightful) by DannyB on Thursday April 13, @05:58PM (1 child)
We Gmail users can relax. Google will happily filter out all AI email scams that do not originate from one of Google's own AIs.
How often should I have my memory checked? I used to know but...
(Score: 3, Informative) by Reziac on Friday April 14, @02:48AM
I've found that to be 100% accurate.
Meanwhile, I can't send mail from my own domain to my GMail account.
And there is no Alkibiades to come back and save us from ourselves.
(Score: 1, Funny) by Anonymous Coward on Thursday April 13, @06:34PM
Or let a local GPT robot respond to all your messages; let them figure it out themselves.