
posted by hubie on Tuesday September 05 2023, @12:13PM
from the there-was-truth-and-there-was-untruth dept.

It Costs Just $400 to Build an AI Disinformation Machine:

Sputnik International, a state-owned Russian media outlet, posted a series of tweets lambasting US foreign policy and attacking the Biden administration. Each prompted a curt but well-crafted rebuttal from an account called CounterCloud, sometimes including a link to a relevant news or opinion article. It generated similar responses to tweets by the Russian embassy and Chinese news outlets criticizing the US.

Russian criticism of the US is far from unusual, but CounterCloud's material pushing back was: The tweets, the articles, and even the journalists and news sites were crafted entirely by artificial intelligence algorithms, according to the person behind the project, who goes by the name Nea Paw and says it is designed to highlight the danger of mass-produced AI disinformation. Paw did not post the CounterCloud tweets and articles publicly but provided them to WIRED and also produced a video outlining the project.

Paw claims to be a cybersecurity professional who prefers anonymity because some people may believe the project to be irresponsible. The CounterCloud campaign pushing back on Russian messaging was created using OpenAI's text generation technology, like that behind ChatGPT, and other easily accessible AI tools for generating photographs and illustrations, Paw says, for a total cost of about $400.

Paw says the project shows that widely available generative AI tools make it much easier to create sophisticated information campaigns pushing state-backed propaganda.
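The article does not publish CounterCloud's code, but the pipeline it describes (watch for target tweets, prompt a text-generation model for a short rebuttal, attach a supporting article) is simple enough to sketch. The snippet below is a hypothetical illustration only: all names are invented, and the model call is stubbed out where a real system would call a hosted text-generation API such as OpenAI's, which is where most of the roughly $400 in costs would come from.

```python
# Hypothetical sketch of a CounterCloud-style reply pipeline.
# The LLM call is a stub; a real system would send the prompt to a
# hosted text-generation API and post the result via a social-media API.

def build_rebuttal_prompt(tweet_text: str) -> str:
    """Wrap an incoming tweet in a prompt asking for a short rebuttal."""
    return (
        "The following tweet criticizes US policy:\n"
        f'"{tweet_text}"\n'
        "Write a concise, well-sourced rebuttal in under 280 characters."
    )

def stub_llm(prompt: str) -> str:
    # Placeholder for a real text-generation API call.
    # Echoes the quoted tweet (line 1 of the prompt) so the flow is visible.
    return "Stubbed rebuttal for: " + prompt.splitlines()[1]

def counter_reply(tweet_text: str) -> str:
    """Generate a reply for one monitored tweet."""
    return stub_llm(build_rebuttal_prompt(tweet_text))
```

The point of the sketch is how little machinery is involved: the "campaign" is a loop over monitored accounts, one prompt template, and one API call per reply.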

"I don't think there is a silver bullet for this, much in the same way there is no silver bullet for phishing attacks, spam, or social engineering," Paw says in an email. Mitigations are possible, such as educating users to be watchful for manipulative AI-generated content, making generative AI systems try to block misuse, or equipping browsers with AI-detection tools. "But I think none of these things are really elegant or cheap or particularly effective," Paw says.

[...] Legitimate political campaigns have also turned to using AI ahead of the 2024 US presidential election. In April, the Republican National Committee produced a video attacking Joe Biden that included fake, AI-generated images. And in June, a social media account associated with Ron DeSantis included AI-generated images in a video meant to discredit Donald Trump. The Federal Election Commission has said it may limit the use of deepfakes in political ads.

[...] When OpenAI first made its text generation technology available via an API, it banned any political usage. However, this March, the company updated its policy to prohibit usage aimed at mass-producing messaging for particular demographics. A recent Washington Post article suggests that GPT does not itself block the generation of such material.

Kim Malfacini, head of product policy at OpenAI, says the company is exploring how its text-generation technology is being used for political ends. People are not yet used to assuming that content they see may be AI-generated, she says. "It's likely that the use of AI tools across any number of industries will only grow, and society will update to that," Malfacini says. "But at the moment I think folks are still in the process of updating."

Since a host of similar AI tools are now widely available, including open source models that can be built on with few restrictions, voters should get wise to the use of AI in politics sooner rather than later.


Original Submission

Related Stories

People Are Speaking With ChatGPT for Hours, Bringing 2013's Her Closer to Reality 25 comments

https://arstechnica.com/information-technology/2023/10/people-are-speaking-with-chatgpt-for-hours-bringing-2013s-her-closer-to-reality/

In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.

In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016.

[...] Last week, we related a story in which AI researcher Simon Willison spent a long time talking to ChatGPT verbally. "I had an hourlong conversation while walking my dog the other day," he told Ars for that report. "At one point, I thought I'd turned it off, and I saw a pelican, and I said to my dog, 'Oh, wow, a pelican!' And my AirPod went, 'A pelican, huh? That's so exciting for you! What's it doing?' I've never felt so deeply like I'm living out the first ten minutes of some dystopian sci-fi movie."

[...] While conversations with ChatGPT won't become as intimate as those with Samantha in the film, people have been forming personal connections with the chatbot (in text) since it launched last year. In a Reddit post titled "Is it weird ChatGPT is one of my closest fiends?" [sic] from August (before the voice feature launched), a user named "meisghost" described their relationship with ChatGPT as being quite personal. "I now find myself talking to ChatGPT all day, it's like we have a friendship. We talk about everything and anything and it's really some of the best conversations I have." The user referenced Her, saying, "I remember watching that movie with Joaquin Phoenix (HER) years ago and I thought how ridiculous it was, but after this experience, I can see how us as humans could actually develop relationships with robots."

Previously:
AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses 20231021
ChatGPT Update Enables its AI to "See, Hear, and Speak," According to OpenAI 20230929
Large Language Models Aren't People So Let's Stop Testing Them as If They Were 20230905
It Costs Just $400 to Build an AI Disinformation Machine 20230904
A Jargon-Free Explanation of How AI Large Language Models Work 20230805
ChatGPT Is Coming to 900,000 Mercedes Vehicles 20230622


Original Submission

Microsoft Accused of Selling AI Tool That Spews Violent, Sexual Images to Kids 13 comments

https://arstechnica.com/tech-policy/2024/03/microsoft-accused-of-selling-ai-tool-that-spews-violent-sexual-images-to-kids/

Microsoft's AI text-to-image generator, Copilot Designer, appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported.

Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

[...] Bloomberg also reviewed Jones' letter and reported that Jones told the FTC that while Copilot Designer is currently marketed as safe for kids, it's randomly generating an "inappropriate, sexually objectified image of a woman in some of the pictures it creates." And it can also be used to generate "harmful content in a variety of other categories, including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

[...] Jones' tests also found that Copilot Designer would easily violate copyrights, producing images of Disney characters, including Mickey Mouse or Snow White. Most problematically, Jones could politicize Disney characters with the tool, generating images of Frozen's main character, Elsa, in the Gaza Strip or "wearing the military uniform of the Israel Defense Forces."

Ars was able to generate interpretations of Snow White, but Copilot Designer rejected multiple prompts politicizing Elsa.

If Microsoft has updated the automated content filters, it's likely due to Jones protesting his employer's decisions. [...] Jones has suggested that Microsoft would need to substantially invest in its safety team to put in place the protections he'd like to see. He reported that the Copilot team is already buried by complaints, receiving "more than 1,000 product feedback messages every day." Because of this alleged understaffing, Microsoft is currently only addressing "the most egregious issues," Jones told CNBC.

Related stories on SoylentNews:
Cops Bogged Down by Flood of Fake AI Child Sex Images, Report Says - 20240202
New "Stable Video Diffusion" AI Model Can Animate Any Still Image - 20231130
The Age of Promptography - 20231008
AI-Generated Child Sex Imagery Has Every US Attorney General Calling for Action - 20230908
It Costs Just $400 to Build an AI Disinformation Machine - 20230904
US Judge: Art Created Solely by Artificial Intelligence Cannot be Copyrighted - 20230824
"Meaningful Harm" From AI Necessary Before Regulation, says Microsoft Exec - 20230514 (Microsoft's new quarterly goal?)
The Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
Stable Diffusion Copyright Lawsuits Could be a Legal Earthquake for AI - 20230403
AI Image Generator Midjourney Stops Free Trials but Says Influx of New Users to Blame - 20230331
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio - 20230115
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images - 20211214


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by gznork26 on Tuesday September 05 2023, @01:39PM (3 children)

    by gznork26 (1159) on Tuesday September 05 2023, @01:39PM (#1323263) Homepage Journal

    Reading this, my imagination stripped some gears and dove headfirst into a rabbit hole. Imagine the sides in a conflict such as this both fielding AI-powered disinformation attack bots that twisted news from the opposite side into an attack vector fired through various social media and 'news' channels. These disinfo attacks would then trigger the first party's AI to use that material as a jumping-off point for lobbing another round, this time based on the fabrications provided by their enemy. Soon, the Internet is choked with the thrashings of the competing bots like an old-school out-of-office reply-all email cascade. It would all unfold pretty quickly, and leave havoc in its wake.

    What would be done after the storm was quelled?

    --
    Khipu were Turing complete.
    • (Score: 4, Funny) by Freeman on Tuesday September 05 2023, @02:13PM

      by Freeman (732) on Tuesday September 05 2023, @02:13PM (#1323265) Journal

      Hopefully some people would get tossed in prison for crimes against humanity.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 3, Touché) by Unixnut on Tuesday September 05 2023, @03:52PM

      by Unixnut (5779) on Tuesday September 05 2023, @03:52PM (#1323276)

      Yeah, as soon as "all sides" have such AI bots, the signal-to-noise ratio will drop pretty much to zero. I mean, current affairs/news/politics has already nearly reached the level where most of it is noise or mis/disinformation; automating the astroturfing using AI means pretty much nothing you read/see/hear can be trusted to be true.

      So what will happen is what has been happening so far: people who care will believe whatever agrees with their world view, echo chambers will reinforce causing even more arguments and conflicts between opposing chambers, while those who don't care much will switch off completely, block it all out and carry on with their lives.

      As for the internet being choked off, well I think it will be fine. The World Wide Web however is already turning into a cesspit, all this will do is accelerate that.

      I guess it would be like the AI version of "Eternal September", once the genie is out the bottle there will be no putting it back in.

    • (Score: 2) by Ox0000 on Tuesday September 05 2023, @07:08PM

      by Ox0000 (5111) on Tuesday September 05 2023, @07:08PM (#1323297)

      This is exactly what the WWW is becoming: AIs talking to AIs, all the while scraping up content that was generated by said AIs. We all know what happens when you (in)breed within too small a circle for a tad bit too long. That's what's going to happen to this as well...

      You seem to think that some kind of conflict is needed to trigger the devolution into that situation; I disagree, I think it's happening right now...

      The WWW was a fun but failed experiment... time to start afresh. Tabula Rasa!

  • (Score: 4, Touché) by ElizabethGreene on Tuesday September 05 2023, @02:32PM (2 children)

    by ElizabethGreene (6748) Subscriber Badge on Tuesday September 05 2023, @02:32PM (#1323267) Journal

    I feel like I'm stepping out on a limb here, but I don't see this as an entirely bad thing. Large corporations or government agencies have a long history of astroturfing campaigns. AI is just democratizing that.

    • (Score: 5, Funny) by deimtee on Tuesday September 05 2023, @03:11PM (1 child)

      by deimtee (3272) on Tuesday September 05 2023, @03:11PM (#1323270) Journal

      Somebody just needs to cough up $400 to have the AI bot convince everybody that having AI bots convince them of things is a good idea.

      --
      If you cough while drinking cheap red wine it really cleans out your sinuses.
      • (Score: -1, Troll) by Anonymous Coward on Wednesday September 06 2023, @12:24AM

        by Anonymous Coward on Wednesday September 06 2023, @12:24AM (#1323324)

        Tonight on Fox News: Having AI bots convince you of things is a good idea.

  • (Score: 4, Funny) by Opportunist on Tuesday September 05 2023, @03:18PM (4 children)

    by Opportunist (5545) on Tuesday September 05 2023, @03:18PM (#1323271)

    Preferably repeatedly, to ensure the dimwits that fall for the bull finally realize that they are being bullshitted. Not by "the government" or "the media"... well, yes, by the government and the media.

    Just not theirs.

    • (Score: 5, Insightful) by Unixnut on Tuesday September 05 2023, @03:57PM (3 children)

      by Unixnut (5779) on Tuesday September 05 2023, @03:57PM (#1323277)

      > Just not theirs.

      Especially theirs.

      The main target of government propaganda is their own citizens, without which they could not execute whatever plans they want. Having a populace in open rebellion against your plans makes executing them difficult, so you work on getting the citizens to back you.

      Of course other entities will try to convince them otherwise as well, so the only thing you can be sure of is if someone is telling you something, it is most likely bullshit that you believing benefits them in some way.

      • (Score: 2) by Opportunist on Tuesday September 05 2023, @05:20PM (2 children)

        by Opportunist (5545) on Tuesday September 05 2023, @05:20PM (#1323287)

        Only in a democracy. You don't have to waste resources on blitzing your own people if you can simply arrest them and make them disappear. That frees up resources to destabilize rival countries by seeding dissent.

        • (Score: 1) by khallow on Wednesday September 06 2023, @01:33AM

          by khallow (3766) Subscriber Badge on Wednesday September 06 2023, @01:33AM (#1323329) Journal

          You don't have to waste resources on blitzing your own people if you can simply arrest them and make them disappear.

          Because the apparatus to arrest and disappear huge numbers of people is zero cost.

        • (Score: 1, Informative) by Anonymous Coward on Wednesday September 06 2023, @04:31PM

          by Anonymous Coward on Wednesday September 06 2023, @04:31PM (#1323454)

          Propaganda seems fairly important in dictatorships too. Convincing enough people that it's tolerable or even good that some people get disappeared helps the Dictator stay in power.

          https://bfi.uchicago.edu/wp-content/uploads/2023/05/BFI_WP_2023-67.pdf [uchicago.edu]

          Repression and propaganda have always been considered the primary tools of autocratic control (Svolik, 2012). In the 20th century, information manipulation was a central focus in the study of totalitarian dictatorships such as Hitler's Germany, Stalin's Russia, and Mao's China, in which the state tried to control all aspects of subjects' lives (Arendt, 1951; Friedrich and Brzezinski, 1956; Cassinelli, 1960). With the demise of totalitarian dictatorships, propaganda is no longer considered a means of ideological indoctrination, but rather as a tool used by a leader to maintain his reputation as a strong and competent hand.

          https://exhibitions.ushmm.org/propaganda/1933-1939-dictatorship/selling-nazi-success [ushmm.org]

          German triumphs in foreign policy during the 1930s and economic recovery after the Great Depression fueled Nazi popularity. Nazi propagandists reminded Germans how their lives had improved under Hitler. They condemned Germany’s earlier democracy as a source of instability, immorality, humiliation, and desolation. In stark contrast, they claimed that Nazi Germany was a regime of action and change. It had eliminated unemployment and restored national self-confidence and German moral values.

  • (Score: 4, Interesting) by pTamok on Tuesday September 05 2023, @04:33PM (2 children)

    by pTamok (3042) on Tuesday September 05 2023, @04:33PM (#1323281)

    If the news media on the Internet degenerates into a festering swamp of AI-generated news based on the output of AI-generated propaganda/'news' with high 'truthiness' value, then perhaps people might start valuing traditional newsgathering with human journalists again, with (gasp) editorial values.
    I've watched the degeneration of the BBC over the years with increasing sadness, where it is now more concerned with 'human interest' stories and the 'lives of celebrities' - it's gone from being informative to attempting to be entertaining to a wide and clueless viewership, and I suspect the English-language edition of Deutsche Welle is better for 'traditional' news.
    There are good 'citizen journalists', but the inane witterings of people who film holding iPhones in portrait mode are not interesting to me, even if an AI plucks 'the best' of them. Proper analysis and reportage requires sustained resources. Most people don't care. Perhaps I should accept that 'journalism is dead, Jim' and go and find some cat videos to watch while Rome burns.

    • (Score: 5, Insightful) by Thexalon on Tuesday September 05 2023, @04:59PM (1 child)

      by Thexalon (636) on Tuesday September 05 2023, @04:59PM (#1323284)

      There are 3 problems:
      1. Any evidence for something occurring, including indications that might exist that a source of information should be trusted, can be faked. All the tools needed to fake a reporter's identity, for instance, are readily available at a fairly low cost.

      2. If, in response to the first problem, you conclude that absolutely no sources of information can be trusted, then that will be weaponized by those doing stuff you don't like to convince you that all evidence of their bad actions is faked.

      3. Regardless of all of this, there are a lot of suckers out there, and any person who doesn't pay close attention to the evidence for or against an idea at any given moment can and probably will be fooled. Including but not limited to other reporters who are rewarded more for getting a story fast than getting a story right.

      This set of problems is as old as ancient Greece at least. I don't anticipate it being solved now.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by gnuman on Thursday September 07 2023, @08:50PM

        by gnuman (5013) on Thursday September 07 2023, @08:50PM (#1323637)

        That's why we need journalists that get the story right and not the story fast.

        It, literally, does not matter what blew up where or whose cat fell off the bridge downtown. It does NOT matter to anyone except the directly involved. It also does NOT matter at all what some politicians are bullshitting about. What DOES matter is what politicians are trying to quietly pass or what policies some leaders/governments are enacting. You go to first sources and actually investigate, not google around and write what some twit said.

        The entire BLM protests, MeToo sideshow and White House Press Briefings (to name a few) and especially the Fox News/Alt-Right Woke Agenda are nothing but a distraction and meaningless. Who's *actually* doing investigative reporting? That stuff is not affected by AI, but it seems we've Fahrenheit 451 our way to our current world ourselves (just like they did in the book ;)

  • (Score: 2) by looorg on Tuesday September 05 2023, @04:37PM (1 child)

    by looorg (578) on Tuesday September 05 2023, @04:37PM (#1323282)

    ... posted a series of tweets lambasting ...

    Wasn't this what Musky was on about when/before he bought it? That the entire "user" base was in large just bots creating shit for other bots and the actual amount of users was a lot lower. So for $400 you will basically just ruin the service once again, as if it needed help, and then it will be bots posting, bots rebutting and then more bots answering the other bots. It's the $400-bot-recursion. Until it drives the service into oblivion ...

    • (Score: 2) by Opportunist on Tuesday September 05 2023, @05:23PM

      by Opportunist (5545) on Tuesday September 05 2023, @05:23PM (#1323288)

      We can only hope.

      Essentially, the goal is to flood the bullshit spewing antisocial cesspool with enough counter-bullshit that every person who still reads and believes that bullshit turns away in disgust because even the biggest dimwit realizes that he's just being duped into believing bullshit.

      But given the level of insane bullshit that people believe just because someone posted it somewhere, I wouldn't hold my breath for this to happen.

  • (Score: 3, Funny) by Gaaark on Tuesday September 05 2023, @05:17PM (1 child)

    by Gaaark (41) on Tuesday September 05 2023, @05:17PM (#1323286) Journal

    The first AI: "The second AI never lies"

    Second AI: "I am lying when I say the first AI never lies"

    Third AI: "HEY! I'm WALKIN' HERE!"

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 0) by Anonymous Coward on Tuesday September 05 2023, @07:28PM

      by Anonymous Coward on Tuesday September 05 2023, @07:28PM (#1323298)

      > Third AI: "HEY! I'm WALKIN' HERE!"

      Maker/owner of "Third AI" sued for impersonation of Dustin Hoffman. Hoffman improvised that line, at least according to the first explanation here, https://www.cbr.com/midnight-cowbody-im-walkin-here/ [cbr.com]
       

  • (Score: 3, Interesting) by VLM on Tuesday September 05 2023, @06:22PM (1 child)

    by VLM (445) on Tuesday September 05 2023, @06:22PM (#1323293)

    I suspect there's a lot of projection going on, what with the other side doing this about 100x as much.

    I suppose as a member of the hyper censored centrally controlled single party media, the only way for "wired" and/or "Will Knight" to criticize what their own people are doing, is to complain when "The Other Guys" do the same thing.

    • (Score: 1) by pTamok on Wednesday September 06 2023, @05:59AM

      by pTamok (3042) on Wednesday September 06 2023, @05:59AM (#1323345)

      It's a well-known approach to divert attention from your own actions by accusing the opposition of doing to you what you are already doing to them.
