Slash Boxes

SoylentNews is people

posted by janrinok on Wednesday February 07, @05:52PM   Printer-friendly
from the AI-overlords dept.

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying an undisclosed sum of money to Semafor to sponsor a breaking news feed called "Signals." The companies would not share financial details, but the amount of money is "substantial" to Semafor's business, said a person familiar with the matter.

[...] The partnerships come as media companies have become increasingly concerned over generative AI and its potential threat to their businesses. News publishers are grappling with how to use AI to improve their work and stay ahead of technology, while also fearing that they could lose traffic, and therefore revenue, to AI chatbots—which can churn out humanlike text and information in seconds.

The New York Times in December filed a lawsuit against Microsoft and OpenAI, alleging the tech companies have taken a "free ride" on millions of its articles to build their artificial intelligence chatbots, and seeking billions of dollars in damages.

[...] Semafor, which is free to read, is funded by wealthy individuals, including 3G capital founder Jorge Paulo Lemann and KKR co-founder Henry Kravis. The company made more than $10 million in revenue in 2023 and has more than 500,000 subscriptions to its free newsletters. Justin Smith said Semafor was "very close to a profit" in the fourth quarter of 2023.

Related stories on SoylentNews:
AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead - 20240112
New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement - 20231228
Microsoft Shamelessly Pumping Internet Full of Garbage AI-Generated "News" Articles - 20231104
Google, DOJ Still Blocking Public Access to Monopoly Trial Docs, NYT Says - 20231020
After ChatGPT Disruption, Stack Overflow Lays Off 28 Percent of Staff - 20231017
Security Risks Of Windows Copilot Are Unknowable - 20231011
Microsoft AI Team Accidentally Leaks 38TB of Private Company Data - 20230923
Microsoft Pulls AI-Generated Article Recommending Ottawa Food Bank to Tourists - 20230820
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
the Godfather of AI Leaves Google Amid Ethical Concerns - 20230502
The AI Doomers' Playbook - 20230418
Ads Are Coming for the Bing AI Chatbot, as They Come for All Microsoft Products - 20230404
Deepfakes, Synthetic Media: How Digital Propaganda Undermines Trust - 20230319

Original Submission

Related Stories

Deepfakes, Synthetic Media: How Digital Propaganda Undermines Trust 42 comments

Organizations must educate themselves and their users on how to detect, disrupt, and defend against the increasing volume of online disinformation:

More and more, nation-states are leveraging sophisticated cyber influence campaigns and digital propaganda to sway public opinion. Their goal? To decrease trust, increase polarization, and undermine democracies around the world.

In particular, synthetic media is becoming more commonplace thanks to an increase in tools that easily create and disseminate realistic artificial images, videos, and audio. This technology is advancing so quickly that soon anyone will be able to create a synthetic video of anyone saying or doing anything the creator wants. According to Sentinel, there was a 900% year-over-year increase in the proliferation of deepfakes in 2020.

It's up to organizations to protect against these cyber influence operations. But strategies are available for organizations to detect, disrupt, deter, and defend against online propaganda. Read on to learn more.

[...] As technology advances, tools that have traditionally been used in cyberattacks are now being applied to cyber influence operations. Nation-states have also begun collaborating to amplify each other's fake content.

These trends point to a need for greater consumer education on how to accurately identify foreign influence operations and avoid engaging with them. We believe the best way to promote this education is to increase collaboration between the federal government, the private sector, and end users in business and personal contexts.

Ads Are Coming for the Bing AI Chatbot, as They Come for All Microsoft Products 26 comments

Microsoft has spent a lot of time and energy over the last few months adding generative AI features to all its products, particularly its long-standing, long-struggling Bing search engine. And now the company is working on fusing this fast-moving, sometimes unsettling new technology with some old headaches: ads.

In a blog post earlier this week, Microsoft VP Yusuf Mehdi said the company was "exploring placing ads in the chat experience," one of several things the company is doing "to share the ad revenue with partners whose content contributed to the chat response." The company is also looking into ways to let Bing Chat show sources for its work, sort of like the ways Google, Bing, and other search engines display a source link below snippets of information they think might answer the question you asked.

Even the FBI Says You Should Use an Ad Blocker (20221227)
Microsoft Explores a Potentially Risky New Market (20220420)
Microsoft is Testing Ads in the Windows 11 File Explorer (20220314)
Sen. Ron Wyden Calls for an Investigation of the Ad-Blocking Industry (20200115)
Windows 10 App Starts Showing Ads, Microsoft Says You Can't Remove Them (20191215)
Microsoft Experiments with Ads in Windows Email (20181117)

Original Submission

The AI Doomers’ Playbook 85 comments

The AI Doomers' Playbook:

AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.

When a British tabloid headline screams, "Attack of the psycho chatbot," it's funny. When it's followed by another front-page headline, "Psycho killer chatbots are befuddled by Wordle," it's even funnier. If this type of coverage stayed in the tabloids, which are known to be sensationalized, that was fine.

But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published "If we don't master AI, it will master us" (by Harari, Harris & Raskin), and TIME magazine published "Be willing to destroy a rogue datacenter by airstrike" (by Yudkowsky).

In just a few days, we went from "governments should force a 6-month pause" (the petition from the Future of Life Institute) to "wait, it's not enough, so data centers should be bombed." Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.

[...] Sam Altman has a habit of urging us to be scared. "Although current-generation AI tools aren't very scary, I think we are potentially not that far away from potentially scary ones," he tweeted. "If you're making AI, it is potentially very good, potentially very terrible," he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizo, it was "lights out for all of us."

[...] Altman's recent post "Planning for AGI and beyond" is as bombastic as it gets: "Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history."

It is at this point that you might ask yourself, "Why would someone frame his company like that?" Well, that's a good question. The answer is that making OpenAI's products "the most important and scary – in human history" is part of its marketing strategy. "The paranoia is the marketing."

the Godfather of AI Leaves Google Amid Ethical Concerns 27 comments

The Morning After: the Godfather of AI Leaves Google Amid Ethical Concerns

The Morning After: The Godfather of AI leaves Google amid ethical concerns:

Geoffrey Hinton, nicknamed the Godfather of AI, told The New York Times he resigned as Google VP and engineering fellow in April to freely warn of the risks associated with the technology. The researcher is concerned Google is giving up its previous restraint on public AI releases to compete with ChatGPT, Bing Chat and similar models. In the near term, Hinton says he's worried that generative AI could lead to a wave of misinformation. You might "not be able to know what is true anymore," he says. He's also concerned it might not just eliminate "drudge work," but outright replace some jobs – which I think is a valid worry already turning into a reality.

A Jargon-Free Explanation of How AI Large Language Models Work 13 comments

When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn't realize how powerful they had become.

Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not very many people understand how they work.

Microsoft Pulls AI-Generated Article Recommending Ottawa Food Bank to Tourists 17 comments

As reported by The Verge


the Canadian Broadcasting Corporation

In 2020 Microsoft laid off dozens of journalists, in a move to rely on artificial intelligence. Those journalists were responsible for selecting content for Microsoft platforms, including MSN and the Edge browser. A recent tourism article now reminds us of that earlier business decision.

Published last week and titled "Headed to Ottawa? Here's what you shouldn't miss!" the article listed 15 must-see attractions for visitors to the Canadian capital. Microsoft has since removed the article that advised tourists to visit the "beautiful" Ottawa Food Bank on an empty stomach. That appears to be an out-of-context rewrite of a paragraph on the food bank's website. "Life is challenging enough," it says. "Imagine facing it on an empty stomach."

The remainder of the must-see list was rife with errors. It featured a photo of the Rideau River in an entry about the Rideau Canal, and a photo of the Rideau Canal in an entry about Parc Omega near Montebello, Quebec. It advised tourists to enjoy the pristine grass of "Parliament Hills."

The article carried the byline "Microsoft Travel." There is nothing on the page that identifies it as the product of AI. Microsoft did not immediately respond to a request for comment on how the article was generated. While now removed, it is still available via the Internet Archive.

Original Submission

Microsoft AI Team Accidentally Leaks 38TB of Private Company Data 40 comments

Microsoft AI team accidentally leaks 38TB of private company data:

AI researchers at Microsoft have made a huge mistake.

According to a new report from cloud security company Wiz, the Microsoft AI research team accidentally leaked 38TB of the company's private data.

38 terabytes. That's a lot of data.

The exposed data included full backups of two employees' computers. These backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from more than 350 Microsoft employees.

So, how did this happen? The report explains that Microsoft's AI team uploaded a bucket of training data containing open-source code and AI models for image recognition. Users who came across the Github repository were provided with a link from Azure, Microsoft's cloud storage service, in order to download the models.

One problem: The link that was provided by Microsoft's AI team gave visitors complete access to the entire Azure storage account. And not only could visitors view everything in the account, they could upload, overwrite, or delete files as well.

[martyb ed. update: My first hard disk drive was a Seagare ST-231. It could store so much data that I had to partition it into two "devices" under Microsoft DOS 3.2: 32MB and 8MB. It was so large that I thought that nobody would be able to use all that disk space! Over time, newest drives has had: 80MB, 200MB, and 1TB. My current PC has a 2TB drive... and that is relatively "small" by today's standards. Microsoft lost 38TB?!]

How large were your drives over time?

Original Submission

Security Risks Of Windows Copilot Are Unknowable 21 comments

Arthur T Knackerbracket has processed the following story:

I am still amazed how few people – even in IT – have heard of Windows Copilot. Microsoft's deep integration of Bing Chat into Windows 11 was announced with much fanfare back in May.

Microsoft hasn't been quiet about it – indeed it can’t seem to shut up about Copilot this and Copilot that – yet it seems that the real impact of this sudden Copilotization of all the things has somehow managed to fly under the radar.

[...] Microsoft has rushed to get Copilot into its operating system

[...] Windows Copilot looks just like Bing Chat – which may be why IT folks haven't given it a second look. Bing Chat has been available in Microsoft's Edge Browser for months – no biggie.

But Windows Copilot only looks like Bing Chat. While Bing Chat runs within the isolated environment of the web browser, Copilot abandons those safeties. Copilot can touch and change Windows system settings – not all of them (at least not yet) but some of them, with more being added all the time. That means Microsoft's AI chatbot has broken loose of its hermetically sealed browser, and has the run of our PCs.

[...] Every day we learn of new prompt injection attacks – weaponizing the ambiguities of human language (and, sometimes, just the right level of noise) to override the guardrails keeping AI chatbots on the straight and narrow. Consider a prompt injection attack hidden within a Word document: Submitted to Windows Copilot for an analysis and summary, the document also injects a script that silently transmits a copy of the files in the working directory to the attacker.

After ChatGPT Disruption, Stack Overflow Lays Off 28 Percent of Staff 17 comments

After ChatGPT disruption, Stack Overflow lays off 28 percent of staff:

Stack Overflow used to be every developer's favorite site for coding help, but with the rise of generative AI like ChatGPT, chatbots can offer more specific help than a 5-year-old forum post ever could.

[...] You might think of Stack Overflow as "just a forum," but the company is working on a direct answer to ChatGPT in the form of "Overflow AI," which was announced in July. Stack Overflow's profitability plan includes cutting costs, and that's the justification for the layoffs. Stack Overflow doubled its headcount in 2022 with 525 people. ChatGPT launched at the end of 2022, making for unfortunate timing.

[... ] OpenAI is working on web crawler controls for ChatGPT, which would let sites like Stack Overflow opt out of crawling. [...] Chandrasekar has argued that sites like Stack Overflow are essential for chatbots, saying they need "to be trained on something that's progressing knowledge forward. They need new knowledge to be created."

Original Submission #1Original Submission #2

Google, DOJ Still Blocking Public Access to Monopoly Trial Docs, NYT Says

Dozens of exhibits from the Google antitrust trial are still being hidden from the public, The New York Times Company alleged in a court filing today.

According to The Times, there are several issues with access to public trial exhibits on both sides. The Department of Justice has failed to post at least 68 exhibits on its website that were shared in the trial, The Times alleged, and states have not provided access to 18 records despite reporters' requests.
Currently, The Times said it is seeking to unseal redactions in two exhibits, and it remains "unclear why the exhibits have been redacted" because "they date to 2007 and relate to a version of an agreement between Apple and Google that has not been operative for more than a decade."

Perhaps most notably, The Times has also asked the court to unseal testimony from Apple exec Eddy Cue and Google vice president and general manager of ads, Jerry Dischler, in their entirety.

"The Court has upheld redactions to certain transcripts in the absence of a showing by the parties on the public record that the sealing is justified and without providing its own 'full explanation of the basis for the redactions," The Times alleged, "even though some of the redactions have been applied to material that is both of great public interest and goes to the core of the litigation."

Microsoft CEO Warns of "Nightmare" Future for AI If Google's Search Dominance Continues

Original Submission

Microsoft Shamelessly Pumping Internet Full of Garbage AI-Generated "News" Articles 34 comments

The company pumps out trash-tier AI content, then waits until it's called out publicly to quietly delete it and move onto the next trainwreck:

We've known that Microsoft's MSN news portal has been pumping out a garbled, AI-generated firehose for well over a year now.

The company has been using the website to distribute misleading and oftentimes incomprehensible garbage to hundreds of millions of readers per month.

As CNN now reports, that's likely in large part due to MSN's decision to lay off most of the human editors at MSN over the years. In the wake of that culling, the company has redirected its efforts toward AI, culminating in a $10 billion stake in ChatGPT maker OpenAI earlier this year.

And if MSN presents a vision of how the tech industry's obsession with AI is going to play out in the information ecosystem, we're in for a rough ride.

Beyond republishing stories by small, unknown publishers — like the one that infamously called former NBA player Brandon Hunter, who passed away unexpectedly at the age of 42 in September, "useless" in its headline — Microsoft using a variety of tactics to shoehorn AI into its MSN.

Sometimes it's even generating AI content itself, like when it published and then deleted a bizarre travel guide to Ottawa, Canada that recommended visiting a food bank "on an empty stomach."

"This article has been removed and we are investigating how it made it through our review process," Microsoft's senior director of communications said in the wake of the embarrassment.

New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement 51 comments

New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement

The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind popular AI chatbot ChatGPT, accusing the companies of creating a business model based on "mass copyright infringement," stating their AI systems "exploit and, in many cases, retain large portions of the copyrightable expression contained in those works:"

Microsoft both invests in and supplies OpenAI, providing it with access to the Redmond, Washington, giant's Azure cloud computing technology.

The publisher said in a filing in the U.S. District Court for the Southern District of New York that it seeks to hold Microsoft and OpenAI to account for the "billions of dollars in statutory and actual damages" it believes it is owed for the "unlawful copying and use of The Times's uniquely valuable works."

[...] The Times said in an emailed statement that it "recognizes the power and potential of GenAI for the public and for journalism," but added that journalistic material should be used for commercial gain with permission from the original source.

"These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise," the Times said.

AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead 18 comments

Media outlets are calling foul play over AI companies using their content to build chatbots. They may find friends in the Senate:

Logo text More than a decade ago, the normalization of tech companies carrying content created by news organizations without directly paying them — cannibalizing readership and ad revenue — precipitated the decline of the media industry. With the rise of generative artificial intelligence, those same firms threaten to further tilt the balance of power between Big Tech and news.

On Wednesday, lawmakers in the Senate Judiciary Committee referenced their failure to adopt legislation that would've barred the exploitation of content by Big Tech in backing proposals that would require AI companies to strike licensing deals with news organizations.

Richard Blumenthal, Democrat of Connecticut and chair of the committee, joined several other senators in supporting calls for a licensing regime and to establish a framework clarifying that intellectual property laws don't protect AI companies using copyrighted material to build their chatbots.

[...] The fight over the legality of AI firms eating content from news organizations without consent or compensation is split into two camps: Those who believe the practice is protected under the "fair use" doctrine in intellectual property law that allows creators to build upon copyrighted works, and those who argue that it constitutes copyright infringement. Courts are currently wrestling with the issue, but an answer to the question is likely years away. In the meantime, AI companies continue to use copyrighted content as training materials, endangering the financial viability of media in a landscape in which readers can bypass direct sources in favor of search results generated by AI tools.

[...] A lawsuit from The New York Times, filed last month, pulled back the curtain behind negotiations over the price and terms of licensing its content. Before suing, it said that it had been talking for months with OpenAI and Microsoft about a deal, though the talks reached no such truce. In the backdrop of AI companies crawling the internet for high-quality written content, news organizations have been backed into a corner, having to decide whether to accept lowball offers to license their content or expend the time and money to sue in a lawsuit. Some companies, like Axel Springer, took the money.

It's important to note that under intellectual property laws, facts are not protected.

Also at Courthouse News Service and Axios.


Original Submission

Why the New York Times Might Win its Copyright Lawsuit Against OpenAI 23 comments

The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times "has a near zero probability of winning" its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

"Trying to get everyone to license training data is not going to work because that's not what copyright is about," Jeffries wrote. "Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works."

[...] Courts are supposed to consider four factors in fair use cases, but two of these factors tend to be the most important. One is the nature of the use. A use is more likely to be fair if it is "transformative"—that is, if the new use has a dramatically different purpose and character from the original. Judge Rakoff dinged as non-transformative because songs were merely "being retransmitted in another medium."

In contrast, Google argued that a book search engine is highly transformative because it serves a very different function than an individual book. People read books to enjoy and learn from them. But a search engine is more like a card catalog; it helps people find books.

The other key factor is how a use impacts the market for the original work. Here, too, Google had a strong argument since a book search engine helps people find new books to buy.

[...] In 2015, the Second Circuit ruled for Google. An important theme of the court's opinion is that Google's search engine was giving users factual, uncopyrightable information rather than reproducing much creative expression from the books themselves.

[...] Recently, we visited Stability AI's website and requested an image of a "video game Italian plumber" from its image model Stable Diffusion.

[...] Clearly, these models did not just learn abstract facts about plumbers—for example, that they wear overalls and carry wrenches. They learned facts about a specific fictional Italian plumber who wears white gloves, blue overalls with yellow buttons, and a red hat with an "M" on the front.

These are not facts about the world that lie beyond the reach of copyright. Rather, the creative choices that define Mario are likely covered by copyrights held by Nintendo.

AI Story Roundup 27 comments

[We have had several complaints recently (polite ones, not a problem) regarding the number of AI stories that we are printing. I agree, but that reflects the number of submissions that we receive on the subject. So I have compiled a small selection of AI stories into one and you can read them or ignore them as you wish. If you are making a comment please make it clear exactly which story you are referring to unless your comment is generic. The submitters each receive the normal karma for a submission. JR]

Image-scraping Midjourney bans rival AI firm for scraping images

On Wednesday, Midjourney banned all employees from image synthesis rival Stability AI from its service indefinitely after it detected "botnet-like" activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney's official Discord channel.

[...] Siobhan Ball of The Mary Sue found it ironic that a company like Midjourney, which built its AI image synthesis models using training data scraped off the Internet without seeking permission, would be sensitive about having its own material scraped. "It turns out that generative AI companies don't like it when you steal, sorry, scrape, images from them. Cue the world's smallest violin."

[...] Shortly after the news of the ban emerged, Stability AI CEO Emad Mostaque said that he was looking into it and claimed that whatever happened was not intentional. He also said it would be great if Midjourney reached out to him directly. In a reply on X, Midjourney CEO David Holz wrote, "sent you some information to help with your internal investigation."

[...] When asked about Stability's relationship with Midjourney these days, Mostaque played down the rivalry. "No real overlap, we get on fine though," he told Ars and emphasized a key link in their histories. "I funded Midjourney to get [them] off the ground with a cash grant to cover [Nvidia] A100s for the beta."

Midjourney stories on SoylentNews:
Stable Diffusion (Stability AI) stories on SoylentNews:

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Informative) by Anonymous Coward on Wednesday February 07, @06:49PM (11 children)

    by Anonymous Coward on Wednesday February 07, @06:49PM (#1343537)

    They've dropped all the false pretenses about fake news. Now they're just going to get AI to make up the bullshit.

    • (Score: 4, Insightful) by VLM on Wednesday February 07, @08:27PM (1 child)

      by VLM (445) on Wednesday February 07, @08:27PM (#1343545)

      I don't entirely disagree, but consider that its more like paid propaganda at this point than "bullshit".

      If the business model is getting paid to push stuff, does it matter if journalists get a cut? Do we really need journalists if the entire business model is now basically all infomercials all the time?

      • (Score: 3, Insightful) by mcgrew on Thursday February 08, @07:21PM

        by mcgrew (701) <> on Thursday February 08, @07:21PM (#1343650) Homepage Journal

        If the business model is getting paid to push stuff, does it matter if journalists get a cut?

        There's a word [] you might want to look up.

    • (Score: 3, Touché) by DannyB on Wednesday February 07, @08:53PM (1 child)

      by DannyB (5839) Subscriber Badge on Wednesday February 07, @08:53PM (#1343546) Journal

      They've dropped all the false pretenses about fake news.

      This is not fake news.

      This is artificial news.

      Brought to you by AI.

      Now they're just going to get AI to make up the bullshit.

      Wouldn't that make the AI news equally as good as some current news sources?

      At least CNN doesn't just make stuff up and call it news.

      CNN calls it BREAKING NEWS.

      People who think Republicans wouldn't dare destroy Social Security or Medicare should ask women about Roe v Wade.
      • (Score: 0) by Anonymous Coward on Thursday February 08, @01:26PM

        by Anonymous Coward on Thursday February 08, @01:26PM (#1343622)

        This is artificial news....
        CNN calls it BREAKING NEWS.

        The final of a fantasy football championship interrupted by an AI generated Heidi [], eh?

    • (Score: 4, Interesting) by crafoo on Wednesday February 07, @08:58PM (6 children)

      by crafoo (6639) on Wednesday February 07, @08:58PM (#1343547)

      yeah. most people don't understand that our USA mainstream media is state media, given direction from the elites and the DoD. pretty much the same game plan they ran on Russia, China, Eastern Europe, and everywhere else they've used Voice of America to sow division and color revolutions. In the USA: the same. Mainstream media drives wedges between racial/ethnic/religious/political groups to keep the populace atomized and more easily controlled.

      have you ever looked into the backgrounds of the talking heads? they do their best to scrub but it's still out there. do you know who their families are and who they are connected to? Do you know who Tucker Carlson's father is?

      • (Score: 2) by Ox0000 on Wednesday February 07, @10:58PM

        by Ox0000 (5111) on Wednesday February 07, @10:58PM (#1343556)

        I love conspiracy theorists. Their optimism is so adorable.
        They have clearly never been project managers.

      • (Score: 1) by khallow on Thursday February 08, @03:25AM

        by khallow (3766) Subscriber Badge on Thursday February 08, @03:25AM (#1343576) Journal

        pretty much the same game plan they ran on Russia, China, Eastern Europe, and everywhere else they've used Voice of America to sow division and color revolutions.

        They missed a couple: Russia and China. Imagine how much less shitty the world would be with a couple VOA color revolutions in those places. I think this is telling about the quality of your conspiracy - that it'd be better for everyone, if it were actually happening.

      • (Score: 3, Touché) by captain normal on Thursday February 08, @03:50AM (2 children)

        by captain normal (2205) on Thursday February 08, @03:50AM (#1343578)

        Does Tucker Carlson know who his father is?

        "If men were angels, government would not be necessary." James Madison
        • (Score: 2) by mcgrew on Thursday February 08, @07:30PM

          by mcgrew (701) <> on Thursday February 08, @07:30PM (#1343653) Homepage Journal

          Darth Vader.


        • (Score: -1, Redundant) by Anonymous Coward on Friday February 09, @04:43AM

          by Anonymous Coward on Friday February 09, @04:43AM (#1343685)

          I've always thought he's just a bastard.

      • (Score: 3, Insightful) by mcgrew on Thursday February 08, @07:27PM

        by mcgrew (701) <> on Thursday February 08, @07:27PM (#1343652) Homepage Journal

        yeah. most people don't understand that our USA mainstream media is state media, given direction from the elites and the DoD. pretty much the same game plan they ran on Russia...

        "Elites"? You really believe that filthy damned rich people are "elite"? Because that's who runs America, the Sacklers, the Waltons, the Bezos, and a few dozen more in the .1% who fund both the media and elections.

        That's why I like to also get news from Germany's DW, Britain's BBC, and any other foreign outlet. The trouble is, the same rich assholes (e.g. the Aussie who started Fox) own the whole God damned world.

  • (Score: 2, Interesting) by Runaway1956 on Wednesday February 07, @07:49PM (5 children)

    by Runaway1956 (2926) Subscriber Badge on Wednesday February 07, @07:49PM (#1343541) Journal

    We've had eye candy news anchors breathlessly reporting non-news for decades already. Most of lame-stream media already makes mountains out of mole hills, completely ignoring real news. I can skim my news feeds, and quickly decide that only 2 or 3% of the headlines are worth clicking on. Can AI do any worse?

    Do political debates really matter? Ask Joe!
    • (Score: 3, Touché) by Anonymous Coward on Wednesday February 07, @08:02PM

      by Anonymous Coward on Wednesday February 07, @08:02PM (#1343542)
      Considering that you've shared articles from satire-sites we all know you'll share whatever AI says when you agree with it.
    • (Score: 4, Insightful) by Freeman on Wednesday February 07, @08:26PM

      by Freeman (732) on Wednesday February 07, @08:26PM (#1343544) Journal

      That 2-3% usefulness in all the fluff was until recently created by actual humans. Now, you'll get 0.5% useful stuff (which was actually created by humans) and 99.5% useless stuff, good luck in the near future AI misinformation apocalypse!

      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by mcgrew on Thursday February 08, @07:33PM (2 children)

      by mcgrew (701) <> on Thursday February 08, @07:33PM (#1343654) Homepage Journal

      It wasn't like that half a century ago. This century seems to be the century of dishonesty, where lies, slander, fraud, and theft are not only allowed but encouraged!

      • (Score: 2, Touché) by Runaway1956 on Thursday February 08, @08:18PM (1 child)

        by Runaway1956 (2926) Subscriber Badge on Thursday February 08, @08:18PM (#1343656) Journal

        Mmm-hmmm. And which of the news media rejected the CIA's bogus reports justifying the invasion of Vietnam? Which of those media did their own investigative reporting, to expose the fact that the Military Industrial Complex was using Vietnam to test out all their new toys, like the Huey and Cobra helicopters? I know it was eventually exposed, but not until about 1969 or thereabouts.

        Do political debates really matter? Ask Joe!
        • (Score: 0) by Anonymous Coward on Friday February 09, @06:40AM

          by Anonymous Coward on Friday February 09, @06:40AM (#1343687)
          The "advantage" now is they can shift some blame to the AIs when stuff goes wrong or they "get caught".

          And also do more copyright infringement laundering stuff that the AI bunch have been doing ( GPLed? No longer... ).

          Copyrighted news? The AI coincidentally produced a similar article from various different sources.

          Fake news? Was a borked algorithm..