
AI chatbots can infer an alarming amount of info about you from your responses

Accepted submission by Freeman at 2023-10-18 13:55:45 from the AI overlords dept.
News

The way you talk can reveal a lot about you—especially if you're talking to a chatbot. New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

The phenomenon appears to stem from the way the models' algorithms are trained on broad swathes of web content, a key part of what makes them work, which likely makes the behavior hard to prevent. "It's not even clear how you fix this problem," says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research. "This is very, very problematic."

Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.

Researchers have previously shown how large language models can sometimes leak specific personal information. The companies developing these models sometimes try to scrub personal information from training data or block models from outputting it. Vechev says the ability of LLMs to infer personal information is fundamental to how they work: they find statistical correlations, which will make the problem far more difficult to address. "This is very different," he says. "It is much worse."

Original Submission