posted by hubie on Sunday April 16 2023, @07:57AM

Writers and publishers face an existential threat from AI: time to embrace the true fans model:

Walled Culture has written several times about the major impact that generative AI will have on the copyright landscape. More specifically, these systems, which can quickly and cheaply create written material on any topic and in any style, are likely to threaten the publishing industry in profound ways. Exactly how is spelled out in this great post by Suw Charman-Anderson on her Word Count blog. The key point is that large language models (LLMs) are able to generate huge quantities of material. The fact that much of it is poorly written makes things worse, because it becomes harder to find the good stuff[.]

[...] One obvious approach is to try to use AI against AI. That is, to employ automated vetting systems to weed out the obvious rubbish. That will lead to an expensive arms race between competing AI software, with unsatisfactory results for publishers and creators. If anything, it will only cause LLMs to become better and to produce material even faster in an attempt to fool or simply overwhelm the vetting AIs.
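
[To make the "AI against AI" idea above concrete, here is a minimal sketch, not from the article, of what an automated vetting step might look like: a toy classifier trained on a handful of texts labelled human-written versus machine-generated, then used to score incoming submissions. The inline training data, labels, and threshold are purely illustrative assumptions; a real vetting system would be far more elaborate and, as the article argues, would invite exactly the arms race described.]

```python
# Minimal sketch of an automated vetting filter (illustrative only).
# Assumes submissions have already been labelled human-written (1) vs.
# AI-generated (0); the tiny inline corpus is a stand-in for real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labelled_texts = [
    ("The prose sang with the author's unmistakable, lived-in voice.", 1),
    ("In conclusion, the topic is very important and has many aspects.", 0),
    ("Her third novel abandons plot entirely and somehow earns it.", 1),
    ("This article will explore the various benefits of the subject.", 0),
]
texts, labels = zip(*labelled_texts)

# Bag-of-words features plus logistic regression: crude, but enough to show
# the shape of a pipeline that assigns each submission a "human-ness" score.
vetter = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
vetter.fit(texts, labels)

def keep_submission(text: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier rates the text as likely human-written."""
    prob_human = vetter.predict_proba([text])[0][1]  # column 1 = class "human"
    return prob_human >= threshold

print(keep_submission("This essay will discuss the many important aspects of the topic."))
```

Whatever surface features such a filter keys on, a generator can be tuned to reproduce them, which is why the post dismisses this route as an expensive arms race with unsatisfactory results.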

The real solution is to move to an entirely different business model, which is based on the unique connection between human creators and their fans. The true fans approach has been discussed here many times in other contexts, and once more reveals itself as resilient in the face of change brought about by rapidly-advancing digital technologies.

True fans are not interested in the flood of AI-generated material: they want authenticity from the writers they know and whose works they love. True fans don't care if LLMs can churn out pale imitations of their favourite creators for almost zero cost. They are happy to support the future work of traditional creators by paying a decent price for material. They understand that LLMs may be able to produce at an ever-cheaper cost, but that humans can't.

There's a place for publishers (and literary magazines) in this world, helping writers connect with their readers, and turning writing that fans support into publications offered in a variety of formats, both digital and physical. But for that to happen publishers must accept that they serve creators. That's unlike today, where many writers are little more than hired labourers churning out work for the larger publishing houses to exploit.

In today's new world of slick, practically cost-free LLMs, even the pittance of royalties will no longer be on offer to most creators. It's time for the latter to move on to where they are deeply appreciated, fairly paid, and really belong: among their true fans.

This first sounded like a description of Patreon, but what he's talking about is something like a people-run Patreon that has all the bells and whistles of recommendation algorithms, reviews, etc., not just a simple way to give money directly to individuals. My bet is that whoever writes the first successful one gets bought out by an Amazon-like entity . . . [Ed.]


Original Submission

Related Stories

AI Story Roundup 27 comments

[We have had several complaints recently (polite ones, not a problem) regarding the number of AI stories that we are printing. I agree, but that reflects the number of submissions that we receive on the subject. So I have compiled a small selection of AI stories into one and you can read them or ignore them as you wish. If you are making a comment please make it clear exactly which story you are referring to unless your comment is generic. The submitters each receive the normal karma for a submission. JR]

Image-scraping Midjourney bans rival AI firm for scraping images

https://arstechnica.com/information-technology/2024/03/in-ironic-twist-midjourney-bans-rival-ai-firm-employees-for-scraping-its-image-data/

On Wednesday, Midjourney banned all employees from image synthesis rival Stability AI from its service indefinitely after it detected "botnet-like" activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney's official Discord channel.

[...] Siobhan Ball of The Mary Sue found it ironic that a company like Midjourney, which built its AI image synthesis models using training data scraped off the Internet without seeking permission, would be sensitive about having its own material scraped. "It turns out that generative AI companies don't like it when you steal, sorry, scrape, images from them. Cue the world's smallest violin."

[...] Shortly after the news of the ban emerged, Stability AI CEO Emad Mostaque said that he was looking into it and claimed that whatever happened was not intentional. He also said it would be great if Midjourney reached out to him directly. In a reply on X, Midjourney CEO David Holz wrote, "sent you some information to help with your internal investigation."

[...] When asked about Stability's relationship with Midjourney these days, Mostaque played down the rivalry. "No real overlap, we get on fine though," he told Ars and emphasized a key link in their histories. "I funded Midjourney to get [them] off the ground with a cash grant to cover [Nvidia] A100s for the beta."

Midjourney stories on SoylentNews: https://soylentnews.org/search.pl?tid=&query=Midjourney&sort=2
Stable Diffusion (Stability AI) stories on SoylentNews: https://soylentnews.org/search.pl?tid=&query=Stable+Diffusion&sort=2

AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead 18 comments

Media outlets are calling foul play over AI companies using their content to build chatbots. They may find friends in the Senate:

More than a decade ago, the normalization of tech companies carrying content created by news organizations without directly paying them — cannibalizing readership and ad revenue — precipitated the decline of the media industry. With the rise of generative artificial intelligence, those same firms threaten to further tilt the balance of power between Big Tech and news.

On Wednesday, lawmakers in the Senate Judiciary Committee referenced their failure to adopt legislation that would've barred the exploitation of content by Big Tech in backing proposals that would require AI companies to strike licensing deals with news organizations.

Richard Blumenthal, Democrat of Connecticut and chair of the committee, joined several other senators in supporting calls for a licensing regime and to establish a framework clarifying that intellectual property laws don't protect AI companies using copyrighted material to build their chatbots.

[...] The fight over the legality of AI firms eating content from news organizations without consent or compensation is split into two camps: Those who believe the practice is protected under the "fair use" doctrine in intellectual property law that allows creators to build upon copyrighted works, and those who argue that it constitutes copyright infringement. Courts are currently wrestling with the issue, but an answer to the question is likely years away. In the meantime, AI companies continue to use copyrighted content as training materials, endangering the financial viability of media in a landscape in which readers can bypass direct sources in favor of search results generated by AI tools.

[...] A lawsuit from The New York Times, filed last month, pulled back the curtain behind negotiations over the price and terms of licensing its content. Before suing, it said that it had been talking for months with OpenAI and Microsoft about a deal, though the talks reached no such truce. In the backdrop of AI companies crawling the internet for high-quality written content, news organizations have been backed into a corner, having to decide whether to accept lowball offers to license their content or expend the time and money to sue in a lawsuit. Some companies, like Axel Springer, took the money.

It's important to note that under intellectual property laws, facts are not protected.

Also at Courthouse News Service and Axios.



Original Submission

Why the New York Times Might Win its Copyright Lawsuit Against OpenAI 23 comments

https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/

The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times "has a near zero probability of winning" its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

"Trying to get everyone to license training data is not going to work because that's not what copyright is about," Jeffries wrote. "Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works."

[...] Courts are supposed to consider four factors in fair use cases, but two of these factors tend to be the most important. One is the nature of the use. A use is more likely to be fair if it is "transformative"—that is, if the new use has a dramatically different purpose and character from the original. Judge Rakoff dinged MP3.com as non-transformative because songs were merely "being retransmitted in another medium."

In contrast, Google argued that a book search engine is highly transformative because it serves a very different function than an individual book. People read books to enjoy and learn from them. But a search engine is more like a card catalog; it helps people find books.

The other key factor is how a use impacts the market for the original work. Here, too, Google had a strong argument since a book search engine helps people find new books to buy.

[...] In 2015, the Second Circuit ruled for Google. An important theme of the court's opinion is that Google's search engine was giving users factual, uncopyrightable information rather than reproducing much creative expression from the books themselves.

[...] Recently, we visited Stability AI's website and requested an image of a "video game Italian plumber" from its image model Stable Diffusion.

[...] Clearly, these models did not just learn abstract facts about plumbers—for example, that they wear overalls and carry wrenches. They learned facts about a specific fictional Italian plumber who wears white gloves, blue overalls with yellow buttons, and a red hat with an "M" on the front.

These are not facts about the world that lie beyond the reach of copyright. Rather, the creative choices that define Mario are likely covered by copyrights held by Nintendo.

The Dangers of a Superintelligent AI is Fiction 19 comments

Talk of the existential threat of AI is science fiction, and bad science fiction at that, because it is not based on anything we know about science or logic, nor on anything we know about ourselves:

Despite their apparent success, LLMs are not (really) 'models of language' but are statistical models of the regularities found in linguistic communication. Models and theories should explain a phenomenon (e.g., F = ma) but LLMs are not explainable because explainability requires structured semantics and reversible compositionality that these models do not admit (see Saba, 2023 for more details). In fact, and due to the subsymbolic nature of LLMs, whatever 'knowledge' these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. In addition to the lack of explainability, LLMs will always generate biased and toxic language since they are susceptible to the biases and toxicity in their training data (Bender et. al., 2021). Moreover, and due to their statistical nature, these systems will never be trusted to decide on the "truthfulness" of the content they generate (Borji, 2023) – LLMs ingest text and they cannot decide which fragments of text are true and which are not. Note that none of these problematic issues are a function of scale but are paradigmatic issues that are a byproduct of the architecture of deep neural networks (DNNs) and their training procedures. Finally, and contrary to some misguided narrative, these LLMs do not have human-level understanding of language (for lack of space we do not discuss here the limitations of LLMs regarding their linguistic competence, but see this for some examples of problems related to intentionality and commonsense reasoning that these models will always have problems with). Our focus here is on the now popular theme of how dangerous these systems are to humanity.
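
[As a purely illustrative aside, not taken from the article: the phrase "statistical models of the regularities found in linguistic communication" can be made concrete with the simplest possible such model, a bigram counter that predicts the next word purely from co-occurrence counts. Real LLMs are vastly more sophisticated, but the toy below shows the character of the claim: the model encodes statistical regularities, not meaning or truth.]

```python
# Toy bigram "language model": pure co-occurrence statistics, no semantics.
from collections import Counter, defaultdict
import random

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation: locally fluent, but nothing here is "known",
# "understood", or checkable for truth -- only frequencies are being replayed.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```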

The article goes on to provide a statistical argument as to why we are many, many years away from AI being an existential threat, ending with:

So enjoy the news about "the potential danger of AI". But watch and read this news like you're watching a really funny sitcom. Make a nice drink (or a nice cup of tea), listen and smile. And then please, sleep well, because all is OK, no matter what some self-appointed god fathers say. They might know about LLMs, but they apparently never heard of BDIs.

The author's conclusion seems to be that although AI may pose a threat to certain professions, it doesn't endanger the existence of humanity.



Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2) by looorg on Sunday April 16 2023, @10:22AM (3 children)

    by looorg (578) on Sunday April 16 2023, @10:22AM (#1301674)

    In the AI writer-bot world, where they churn out new books in quantity every hour of every day, in whatever style they think people want clone-books of, how are the real human writers going to be seen amid the myriad of daily publications?

    True fans are not interested in the flood of AI-generated material

    What they don't know won't hurt them. In some markets there might be a niche for the AI-generated drivel. Cheap paperbacks; they might even skip the pulp printing and just sell cheap digital files for your e-reader.

    If they want an AI to detect the bad AI books, perhaps the good ones will be spotted or detected as being written by actual humans, until they tweak the algo so the AI also writes more human-like books; then that gig will be up. That AI-vs-AI war of development will just be like online ads and adblocking. A neverending shitfest.

    Then what are "true fans" anyway? I like a lot of books, authors, musicians, artists in general, etc. That doesn't mean I consume all they make. I don't read every book they ever put out, listen to every song, etc. Some of them are just not that good. Just because you made something good once or twice doesn't mean all your stuff is golden. Sounds more like "True Fans" are OnlyFans: creepy stalker fans that swallow and consume everything.

    Like Patreon? Isn't that sort of what we have already? You are just spreading out the payments for whenever a book (or art piece) gets published instead of giving them $5 a month or whatever. The lines between the various fandom levels appear to be thin and semantic.

    • (Score: 5, Interesting) by Rich on Sunday April 16 2023, @10:54AM

      by Rich (945) on Sunday April 16 2023, @10:54AM (#1301676) Journal

      Only Patreon Fans
      ...
      True fans are not interested in the flood of AI-generated material

      As of today, the guy (it's said to be just one) running pornpen has 12888 patrons at 16.50€ a month.

      cf https://www.patreon.com/pornpen [patreon.com] . Respect from me for getting that off the ground.

      On a slightly more serious note, I think the future lies more in being able to get AI to produce the desired results and curate the best of it, in addition to some input of one's own. This is indeed an art in itself, and worthy of said true-fan-ism if it's done right.

      Also, recently a thought crossed my mind: a lot of AI prompts I saw included the name of the Polish illustrator Greg Rutkowski. I was curious what art style they were trying to evoke with his name and went to his web site. I found it to be a huge propaganda piece against AI learning. Now I think that's a futile effort, because if rules are applied "here", those somewhere else (read China) will be at an advantage, and "here" will not only no longer have any compensation for artists (because everyone uses the China models), but will lose the lead in technology and end up at an overall negative.

      However, what if the guy instead built his own model on his full work catalogue, perfectly annotated and extensively trained, and then SOLD it under his name? So you'd not only get the vague idea an obscure model might have, but the full style. Every use of his name would turn into an advertisement, and I'd bet he'd be able to sell original works for an order of magnitude more than he does now.

    • (Score: 2) by jelizondo on Sunday April 16 2023, @05:25PM

      by jelizondo (653) Subscriber Badge on Sunday April 16 2023, @05:25PM (#1301692) Journal

      That doesn't mean I consume all they make. I don't read every book they ever put out, listen to every song etc. Some of them are just not that good.

      Two examples: I have read most of what Isaac Asimov [asimovonline.com] published. I like his writing style and, like him, I'm curious about almost everything. You can say I'm an Asimov True Fan.

      I have bought a couple of audiobooks from Cory Doctorow [craphound.com] which I have never listened to. I bought them because I wanted to support his work. Not a True Fan of Doctorow.

    • (Score: 2) by JoeMerchant on Sunday April 16 2023, @05:41PM

      by JoeMerchant (3937) on Sunday April 16 2023, @05:41PM (#1301694)

      It's the infinite number of monkeys, hopefully with a good filter.

      --
      🌻🌻 [google.com]
  • (Score: 4, Funny) by fliptop on Sunday April 16 2023, @10:29AM (3 children)

    by fliptop (1666) on Sunday April 16 2023, @10:29AM (#1301675) Journal

    More than journalism is being threatened. Dating sites are already troublesome, and AI will likely ruin them for good [youtube.com]. What happens when a bait bot and a response bot arrange a meeting?

    --
    Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
  • (Score: 4, Interesting) by MIRV888 on Sunday April 16 2023, @11:10AM (1 child)

    by MIRV888 (11376) on Sunday April 16 2023, @11:10AM (#1301677)

    These language models & AI are getting better every day. It's only a matter of time before they (AI) can write good fiction. I'm not saying that to be a jerk to writers, but I think AI is going to develop a lot faster than we as humans are ready to deal with. The internet certainly did.

    • (Score: 2) by krishnoid on Sunday April 16 2023, @05:21PM

      by krishnoid (1156) on Sunday April 16 2023, @05:21PM (#1301691)

      Not sure the true origin of the quote, but "Women fake orgasms, but men can fake entire relationships." Consider the competition from AI, heterosexual or otherwise, on both aspects of that statement. Writing good fiction is kind of an earlier mile-marker on *that* road.

  • (Score: 2) by HiThere on Sunday April 16 2023, @01:54PM

    by HiThere (866) Subscriber Badge on Sunday April 16 2023, @01:54PM (#1301681) Journal

    This scenario strongly reminds me of Fritz Leiber's "The Silver Eggheads".

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 3, Insightful) by acid andy on Sunday April 16 2023, @09:08PM (1 child)

    by acid andy (1683) on Sunday April 16 2023, @09:08PM (#1301724) Homepage Journal

    Hate to say it but these sounds like wishing thinking or a way of dressing up a difficult situation for writers with a snappy name, "true fans", and presenting it as a solution.

    The fact remains that this AI presents increased competition for writers. Arguably the internet already made it easier for anyone to become a writer, which means, if you were already a writer, increased competition. Now you'll have bots eroding your potential market as well. Add to that the fact that economic conditions mean your readers have less disposable income for luxuries like books to read and it spells trouble.

    Surely the true fans aren't the customers you need to worry about. They were buying your books before any of this new fangled internet stuff popped up and they'll be buying them when.... Well it won't be dead and gone unless all of civilization collapses (heh, maybe not so unlikely) but, eh, you get the point! But can your average writer still make a living only out of "true fans"?

    If these AIs get good enough, how can the true fans even be sure that their beloved author isn't using generative AI to at least pad out some of their content, or indeed whether the author is in fact a bot themselves?

    --
    Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
    • (Score: 2) by acid andy on Sunday April 16 2023, @09:10PM

      by acid andy (1683) on Sunday April 16 2023, @09:10PM (#1301725) Homepage Journal

      Hate to say it but these sounds like wishing thinking

      Sorry. Quite inebriated tonight =) I mean it sounds like wishful thinking! I can't even be bothered to proof-read the rest of it yet either so it's probably all about that bad. Where's my generative AI when I need it?

      --
      Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
  • (Score: 2) by YeaWhatevs on Monday April 17 2023, @02:41AM (1 child)

    by YeaWhatevs (5623) on Monday April 17 2023, @02:41AM (#1301753)

    This strikes me as fear-mongering and self promoting. AI is coming for your job! Use "real fans" and you are safe from AI. I won't go so far as to predict the future, but it seems unlikely this model is immune to market forces on a macro level, or that writers in this group won't try to get ahead by using the tools available to them.

    • (Score: 2) by bzipitidoo on Monday April 17 2023, @05:00AM

      by bzipitidoo (4388) on Monday April 17 2023, @05:00AM (#1301767) Journal

      Yes, that statement, that AI will "threaten the publishing industry in profound ways", is trashy melodrama. Substitute the word "change" for the word "threaten", and now we have a more accurate and less histrionic statement. Further note that the phrase "publishing industry" is not "artists". Why should we feel pity for publishers? Ideally, they're good editors and business people who relieve artists of those burdens so they can focus on their art. The reality is that they're mostly manipulative, parasitic, backwards middlemen.

      Academia gets ripped hard by their practices. It's ridiculous that a poor college student, on top of the now crushing costs of tuition, should have to shell out hundreds of dollars for print editions of textbooks when technology has provided us all with the means to distribute digital copies at costs too cheap to meter. I'd as soon the commercial bookstore, especially the campus bookstore, simply vanished. We don't need that. Free the library to provide digital editions of all the textbooks. It's such bull that you can't get the books you need the most from the library because they can't stock enough print copies for everyone who needs them. Then there's the treatment they dish out to researchers. Hand over your research to them, for free. Help them sort good research from bad research, for free. If they manage to squeeze others for money in exchange for a copy of your good research, they pass on a big fat 0% of that to you.

  • (Score: 2) by jb on Monday April 17 2023, @08:31AM

    by jb (338) on Monday April 17 2023, @08:31AM (#1301785)

    Honestly, the quality of writing these things churn out is quite dismal.

    So why are they all of a sudden considered a threat to professional authors?

    Perhaps it's because in recent times (decades, not just years), the quality of writing threshold to get a work published has dropped dramatically.

    Fix that first. Then even Blind Freddie will be able to see the difference between a work by a real author and one cobbled together by an AI.
