90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
posted by hubie on Friday January 20 2023, @06:56PM   Printer-friendly
from the SoylentNews-thought-leader dept.

Generative AI, like OpenAI's ChatGPT, could completely revamp how digital content is developed, Nina Schick, adviser, speaker, and A.I. thought leader, told Yahoo Finance Live:

"I think we might reach 90% of online content generated by AI by 2025, so this technology is exponential," she said. "I believe that the majority of digital content is going to start to be produced by AI. You see ChatGPT... but there are a whole plethora of other platforms and applications that are coming up."

The surge of interest in OpenAI's DALL-E and ChatGPT has facilitated a wide-ranging public discussion about AI and its expanding role in our world, particularly generative AI.

[...] Though the extent to which ChatGPT in its current form is a viable Google competitor is a complicated question, there's little doubt about the possibilities. Meanwhile, Microsoft has already invested $1 billion in OpenAI, and there's talk of further investment from the enterprise tech giant, which owns the search engine Bing. The company is reportedly looking to invest another $10 billion in OpenAI.

Original Submission

Related Stories

Google Engineer Suspended After Claiming AI Bot Sentient 79 comments

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of 22 comments

As OpenAI's newly unveiled ChatGPT turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover all the use cases, like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, some of the cracks in the AI's thinking are coming to the surface, even as its creators rush to mend them in real time.

Included in the list are:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong

Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio 16 comments

Text-to-speech model can preserve speaker's emotional tone and acoustic environment:

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.

Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
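As a rough, hypothetical illustration of the pattern described above (not the paper's actual code or API), teaching the model a small robot API in the prompt and asking it to emit code that uses it might look something like this Python sketch; the helper functions, prompt, and model name are made up for the example:

    # Hypothetical sketch of the "teach ChatGPT a robotics API" pattern described
    # above; the helper functions, prompt, and model name are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

    # The system prompt (1) describes the available robot API and (2) asks the
    # model to answer with code only, so a human can review it before running it.
    SYSTEM_PROMPT = """You control a robot arm through this Python API:
        move_to(x, y, z)      # move the gripper to a position in metres
        grasp()               # close the gripper
        release()             # open the gripper
        get_position(name)    # return the (x, y, z) of a named object
    Respond only with Python code that uses these functions."""

    def plan_for(task: str) -> str:
        """Translate a natural-language task into robot API calls."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-completion model works for the sketch
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    code = plan_for("Pick up the ball and place it on the shelf.")
    print(code)  # a human inspects and edits this before it ever runs on hardware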

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."

Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film 15 comments

https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog and the Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.
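For readers unfamiliar with the technique, img2img starts from an existing picture (here, presumably a rough background layout) and re-renders it to match a text prompt. A minimal sketch with the open-source diffusers library, standing in for whatever pipeline Rinna actually used (which has not been disclosed), might look like this; the file names and prompt are invented:

    # Illustrative img2img sketch with Hugging Face diffusers; Rinna's actual
    # pipeline for the film has not been disclosed, so treat this as a stand-in.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a rough layout sketch and let the model paint it in.
    init_image = Image.open("background_layout.png").convert("RGB").resize((768, 512))

    result = pipe(
        prompt="detailed anime background, snowy village at dusk, soft light",
        image=init_image,
        strength=0.6,        # how far the output may drift from the input image
        guidance_scale=7.5,  # how strongly to follow the text prompt
    ).images[0]

    result.save("background_painted.png")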

Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Erasing Authors, Google and Bing’s AI Bots Endanger Open Web 27 comments

The new AIs draw from human-generated content, while pushing it away:

With the massive growth of ChatGPT making headlines every day, Google and Microsoft have responded by showing off AI chatbots built into their search engines. It's self-evident that AI is the future. But the future of what?

[...] Built on information from human authors, both companies' [(Microsoft's "New Bing" and Google's Bard)] AI engines are being positioned as alternatives to the articles they learned from. The end result could be a more closed web with less free information and fewer experts to offer you good advice.

[...] A lot of critics will justifiably be concerned about possible factual inaccuracies in chatbot results, but we can likely assume that, as the technology improves, it will get better at weeding out mistakes. The larger issue is that the bots are giving you advice that seems to come from nowhere – though it was obviously compiled by grabbing content from human writers whom Bard is not even crediting.

[...] I'll admit another bias. I'm a professional writer, and chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words. Most websites rely heavily on search as a source of traffic and, without those eyeballs, the business model of many publishers is broken. No traffic means no ads, no ecommerce clicks, no revenue and no jobs.

Eventually, some publishers could be forced out of business. Others could retreat behind paywalls and still others could block Google and Bing from indexing their content. AI bots would run out of quality sources to scrape, making their advice less reliable. And readers would either have to pay more for quality content or settle for fewer voices.

Related: 90% of Online Content Could be 'Generated by AI by 2025,' Expert Says


Original Submission

Tyler Perry Puts $800 Million Studio Expansion on Hold Because of OpenAI's Sora 16 comments

https://arstechnica.com/information-technology/2024/02/i-just-dont-see-how-we-survive-tyler-perry-issues-hollywood-warning-over-ai-video-tech/

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.

"I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would've increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I'm seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me."

[...] "It makes me worry so much about all of the people in the business," he told The Hollywood Reporter. "Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I'm thinking this will touch every corner of our industry."

You can read the full interview at The Hollywood Reporter.

[...] Perry also looks beyond Hollywood and says that it's not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. "If you look at it across the world, how it's changing so quickly, I'm hoping that there's a whole government approach to help everyone be able to sustain."

Previously on SoylentNews:
OpenAI Teases a New Generative Video Model Called Sora - 20240222

OpenAI Launches ChatGPT With Search, Taking Google Head-on 13 comments

https://arstechnica.com/ai/2024/10/openai-launches-chatgpt-with-search-taking-google-head-on/

One of the biggest bummers about the modern Internet has been the decline of Google Search. Once an essential part of using the web, it's now a shadow of its former self, full of SEO-fueled junk and AI-generated spam.

On Thursday, OpenAI announced a new feature of ChatGPT that could potentially replace Google Search for some people: an upgraded web search capability for its AI assistant that provides answers with source attribution during conversations.
[...]
Each search result in ChatGPT comes with a citation link, and users can click a "Sources" button beneath responses to view referenced materials in a sidebar that pops up beside the chat history.

The new search system runs on a fine-tuned version of GPT-4o, which OpenAI says it post-trained using synthetic data output from its o1-preview model.
[...]
ChatGPT with Search also helps OpenAI take advantage of its new publishing partnerships and reframe those media relationships into something beyond merely scraping web data to train its AI models, which caused legal trouble in the past.
[...]
As mentioned above, over the past few years, OpenAI has established new partnerships with major news organizations, collaborating with the Associated Press, Axel Springer, Ars Technica parent Condé Nast, Dotdash Meredith, Financial Times, GEDI, Hearst, Le Monde, News Corp, Prisa (El País), Reuters, The Atlantic, Time, and Vox Media.
[...]
In a hands-on test of ChatGPT with Search, the new feature seemed to consistently pull relevant links from the web while answering our questions, but it wasn't perfect, returning a few errant sources here and there. It also sometimes provided irrelevant images that were shown beside some search results.
[...]
All these new avenues for ChatGPT to potentially prefer one website, source of information, company, brand, or shop bring up a big question: Will OpenAI offer preferential content placement for media partners or advertisers in the future?
[...]
In the future, OpenAI plans to expand the new search feature with custom answers for shopping and travel-related queries. The company also plans to use its o1 series for deeper search capabilities and to bring the search experience to the Advanced Voice Mode and Canvas features.

The search function launches today for ChatGPT Plus and Team subscribers through chatgpt.com and mobile apps. Enterprise and education users will gain access in the coming weeks, with a broader rollout to free users planned over several months.

Ethical AI art generation? Adobe Firefly may be the answer. 13 comments

https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output clear for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

Related:
Paper: Stable Diffusion "Memorizes" Some Images, Sparking Privacy Concerns
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Adobe Stock Begins Selling AI-Generated Artwork
A Startup Wants to Democratize the Tech Behind DALL-E 2, Consequences be Damned
Adobe Creative Cloud Experience Makes It Easier to Run Malware
Adobe Goes After 27-Year Old 'Pirated' Copy of Acrobat Reader 1.0 for MS-DOS
Adobe Critical Code-Execution Flaws Plague Windows Users
When Adobe Stopped Flash Content from Running it Also Stopped a Chinese Railroad
Adobe Has Finally and Formally Killed Flash
Adobe Lightroom iOS Update Permanently Deleted Users' Photos


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Touché) by MostCynical on Friday January 20 2023, @07:13PM (13 children)

    by MostCynical (2589) on Friday January 20 2023, @07:13PM (#1287772) Journal

    since 99% of online content is produced by morons, having AI take over may actually improve things..

    --
    "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
    • (Score: 5, Insightful) by vux984 on Friday January 20 2023, @08:00PM (8 children)

      by vux984 (5045) on Friday January 20 2023, @08:00PM (#1287786)

      Not if they're training the AI on the stuff written by the morons. Which is what they're doing.

      • (Score: 3, Insightful) by RS3 on Friday January 20 2023, @08:36PM (7 children)

        by RS3 (6367) on Friday January 20 2023, @08:36PM (#1287797)

        Opens up a lot of philosophical questions. I wonder what will happen when the AIs read other AI's writings. Will it lead to a Utopian set of solutions to human problems, or will it devolve? What biases are being written into the AI's processing? What priorities and goals, if any, are steering the AIs? Do they have, or might they develop ego and id? Will they develop their own language?

        • (Score: 4, Insightful) by Samantha Wright on Friday January 20 2023, @10:29PM (5 children)

          by Samantha Wright (4062) on Friday January 20 2023, @10:29PM (#1287812)

          None of the above. It will degrade. Imagine a stream of diarrhea stamping on a human face—forever.

          • (Score: 1, Touché) by Anonymous Coward on Friday January 20 2023, @11:13PM (3 children)

            by Anonymous Coward on Friday January 20 2023, @11:13PM (#1287820)

            Imagine a stream of diarrhea...

            Been there done that. Next?

            • (Score: 0) by Anonymous Coward on Saturday January 21 2023, @02:58PM (2 children)

              by Anonymous Coward on Saturday January 21 2023, @02:58PM (#1287903)

              No one is forcing you to read Twitter posts

              • (Score: 0) by Anonymous Coward on Saturday January 21 2023, @03:04PM

                by Anonymous Coward on Saturday January 21 2023, @03:04PM (#1287905)

                Eng Lit study on how microblogging has destroyed our language#srs#sus#comeatme#jk#debat#3amthoughts#amhangry#donatblud2day#examsuck#halp#nu

              • (Score: 0) by Anonymous Coward on Monday January 23 2023, @12:01AM

                by Anonymous Coward on Monday January 23 2023, @12:01AM (#1288110)

                Oh, now you tell me. Gee thanks.

                Now leave me alone, got a lot of cleaning up to do.

          • (Score: 0) by Anonymous Coward on Monday January 23 2023, @02:22PM

            by Anonymous Coward on Monday January 23 2023, @02:22PM (#1288177)

            Hey! No fetish shaming!

        • (Score: 4, Interesting) by aafcac on Sunday January 22 2023, @11:01PM

          by aafcac (17646) on Sunday January 22 2023, @11:01PM (#1288106)

          Probably not, the one time I know of where they had AI talking to AI they quickly started to speak their own language and the plug was pulled. I think this will probably always be the way that ends up, unless they're stuck sticking to an existent language, in which case don't expect any real creative expression.

    • (Score: 2) by SomeRandomGeek on Friday January 20 2023, @08:29PM (2 children)

      by SomeRandomGeek (856) on Friday January 20 2023, @08:29PM (#1287795)

      90% of online content is selfies, and pictures of lunch, and liking and re-tweeting other people's selfies and pictures of lunch. This could all be done by AIs, which would free up actual human beings to do something less mind numbing.

      This post has was crafted by ChatGPT. If you enjoyed it, please like and subscribe. It really does matter to us!

      • (Score: 4, Funny) by turgid on Friday January 20 2023, @08:58PM

        by turgid (4318) Subscriber Badge on Friday January 20 2023, @08:58PM (#1287802) Journal

        Once again, Douglas Adams' vision of the electric monk [technovelgy.com] was very prescient.

      • (Score: 0) by Anonymous Coward on Sunday January 22 2023, @06:29PM

        by Anonymous Coward on Sunday January 22 2023, @06:29PM (#1288066)

        > This post has was crafted by ChatGPT. If you enjoyed it, please like and subscribe. It really does matter to us!

        Thanks - I'll add your site to my AI scraper which reads and filters everything for me.

    • (Score: 2) by VLM on Sunday January 22 2023, @04:46PM

      by VLM (445) on Sunday January 22 2023, @04:46PM (#1288062)

      The percentage of bots is way higher than 1%

  • (Score: 2) by PastTense on Friday January 20 2023, @07:30PM (2 children)

    by PastTense (6879) on Friday January 20 2023, @07:30PM (#1287777)

    I think there is a good chance that the people who run these chatboxes won't bother with very small sites like this one while overrunning big sites like Reddit--which will turn into chatboxes chatting with chatboxes.

    • (Score: 2, Interesting) by Anonymous Coward on Friday January 20 2023, @08:11PM (1 child)

      by Anonymous Coward on Friday January 20 2023, @08:11PM (#1287788)

      Is there a way to keep the posts here on SN out of AI training sets? Do the scrapers obey anything like the "no robots" setting used (I guess) to prevent a page being indexed?

      • (Score: 1, Touché) by Anonymous Coward on Sunday January 22 2023, @06:31PM

        by Anonymous Coward on Sunday January 22 2023, @06:31PM (#1288067)

        lol sure and make sure to check No Track on your browser and everyone gets a free pony

  • (Score: 3, Insightful) by turgid on Friday January 20 2023, @07:52PM (3 children)

    by turgid (4318) Subscriber Badge on Friday January 20 2023, @07:52PM (#1287784) Journal

    So the WWW is where "everyone" hangs out on the intertubes and that will be the target for the Automated Rhubarb Dispensers (ARD). There is a simple solution. Leave normal people and the ARD to the WWW and we'll just invent something else.

    • (Score: 0) by Anonymous Coward on Friday January 20 2023, @07:54PM (1 child)

      by Anonymous Coward on Friday January 20 2023, @07:54PM (#1287785)

      Shortwave radio. Until that gets flooded by virtual HAMbots.

      • (Score: 1, Insightful) by Anonymous Coward on Sunday January 22 2023, @06:33PM

        by Anonymous Coward on Sunday January 22 2023, @06:33PM (#1288069)

Hand written letters, smoke signals... wherever we go, there will be some shithead trying to shovel cheap content and propaganda at us for ad revenue. This really is how it ends. Goodbye, cruel world.

    • (Score: 0) by Anonymous Coward on Friday January 20 2023, @08:21PM

      by Anonymous Coward on Friday January 20 2023, @08:21PM (#1287791)

      So it won't be Skynet after all, instead something more subtle? Brings this to mind,

      This is the way the world ends
              Not with a bang but a whimper.

      https://allpoetry.com/the-hollow-men [allpoetry.com]

  • (Score: 4, Insightful) by Revek on Friday January 20 2023, @08:39PM (1 child)

    by Revek (5022) on Friday January 20 2023, @08:39PM (#1287798)

    A reason to quit the net.

    --
    This page was generated by a Swarm of Roaming Elephants
    • (Score: 0) by Anonymous Coward on Sunday January 22 2023, @06:37PM

      by Anonymous Coward on Sunday January 22 2023, @06:37PM (#1288071)

      Fortunately I have a content blocker as well as an ad blocker so all I get is blank screen. The internet is like early 1990s bliss.

  • (Score: 2) by ikanreed on Friday January 20 2023, @10:07PM (4 children)

    by ikanreed (3164) Subscriber Badge on Friday January 20 2023, @10:07PM (#1287811) Journal

    There is so much computer-generated recycled trash on any given google search. The fact that it's mostly "traditional" algorithms and not "AI" doesn't matter in the fucking slightest to me.

    • (Score: 2) by corey on Friday January 20 2023, @10:43PM

      by corey (2202) on Friday January 20 2023, @10:43PM (#1287817)

      I didn’t read TFA but I was wondering what “digital content” meant. Ones and zeroes? Images? Videos? Emails/spam? Writing?

      Anyway. People at work were wowing over ChatGPT this week and showed me stuff with wide eyes. Yeah ok, I know a bit about CNNs and DL. I guess it did surprise me how far the native language models have come. But I’m concerned with it and the lack of any ethics standards around it. Where will this be in 2035, 2045+?

      Be interesting to see if it could rewrite itself. They taught it Python. Spawn a new instance of itself then talk together.

    • (Score: 1, Interesting) by Anonymous Coward on Saturday January 21 2023, @03:07PM (2 children)

      by Anonymous Coward on Saturday January 21 2023, @03:07PM (#1287906)

      Seriously, what is with those blogs and pages that are just generated chopped repeated chunks of existing sites?

      • (Score: 2) by ikanreed on Sunday January 22 2023, @01:43AM

        by ikanreed (3164) Subscriber Badge on Sunday January 22 2023, @01:43AM (#1287987) Journal

        It's cheaper than actually paying writers and gets clicks.

      • (Score: 2) by aafcac on Sunday January 22 2023, @11:05PM

        by aafcac (17646) on Sunday January 22 2023, @11:05PM (#1288108)

        It's a byproduct of Google going with fast rather than good as the sole basis for designing their search algorithms. Even in the day they weren't great, but they were able to search more of the net. Unfortunately, there's a bunch of stuff that you used to be able to do that either aren't possible with Google or are very awkward. A lot of that has to do with Google making basically no attempt at understanding any of the pages that it indexes.

  • (Score: 2) by oumuamua on Saturday January 21 2023, @01:15AM (2 children)

    by oumuamua (8401) on Saturday January 21 2023, @01:15AM (#1287828)

    Currently there are for-profit corporations and non-profits but time to make a middle ground corporation that pursues a goal:
    https://www.genolve.com/design/socialmedia/memes/Goal-Based-Corporations-to-Counteract-AI-Job-Los [genolve.com]
    Of course the first such of these corporations should have a goal of supplying food to other goal-based corporations.

    • (Score: 1) by khallow on Saturday January 21 2023, @07:11AM

      by khallow (3766) Subscriber Badge on Saturday January 21 2023, @07:11AM (#1287856) Journal

      Currently there are for-profit corporations and non-profits but time to make a middle ground corporation that pursues a goal

      You mean like a for profit corporation with bylaws? Been there. Done that. Turns out most corporation creators aren't interested in goals other than the usual ones.

      And given that you haven't mentioned what these goals are or why we need a different sort of corporation to pursue them, I'm struck with the thought that maybe we don't need those goals much less specialized corporations to pursue them!

    • (Score: 1) by khallow on Saturday January 21 2023, @07:21AM

      by khallow (3766) Subscriber Badge on Saturday January 21 2023, @07:21AM (#1287859) Journal
      I notice that there's some unintended consequences in there too. For a glaring example, consider the last sentence of that paragraph, "only one 501g may exist per goal". This creates a whole new scheme of abuse, such as a tobacco company creating the one 501g that has the goal of seeking reparations for tobacco products harm.
  • (Score: 3, Interesting) by looorg on Saturday January 21 2023, @05:54AM (3 children)

    by looorg (578) on Saturday January 21 2023, @05:54AM (#1287845)

    90% of the content made by bots observed by the "users" that is probably also made up of a large or substantial amount of bots. Bot on bot action triggering itself to produce more content that it likes that its ai tells it humans might like. But does so less and less with each cycle and iteration.

    Like email spam, it grows as part of traffic and content but garners less and less human eyeballs, so they try to send more to compensate. Repeat.

    • (Score: 1) by khallow on Saturday January 21 2023, @07:16AM (1 child)

      by khallow (3766) Subscriber Badge on Saturday January 21 2023, @07:16AM (#1287858) Journal
      I certainly wouldn't advertise to something like this because of what you observe. If a business can generate AI content so easily, then they can generate the readers just as easily. I think it's a safe bet that someone will figure out how to immensely inflate their ad view numbers with AI readers (given that various parties have already figured this trick out with various non-AI schemes) who won't be buying product.
    • (Score: 3, Funny) by choose another one on Sunday January 22 2023, @10:08AM

      by choose another one (515) Subscriber Badge on Sunday January 22 2023, @10:08AM (#1288043)

      Nicely done click-bait comment title that I thought meant someone had got in ahead of me, but no, you went somewhere else.

      See, what I would have done with that title was point out that I hope to live to see the day when there are thousands of out-of-work onlyFans "models" out there desperate for cash donors because someone has merged animatronic RealDolls with ChatGPT and used deepfake to scale out/up even cheaper. Because, lets face it, porn must be the single largest training dataset there is. In fact, the first part of the job is probably already done somewhere, just the training to finish...
