
posted by janrinok on Wednesday March 27, @05:41PM   Printer-friendly
from the not-welcoming-our-corporate-overlords dept.

Dr Andy Farnell at The Cyber Show writes about the motivations behind dropping generative AI for graphics and moving back to manual design and editing of images. The show had been using generative AI to produce images since its first episode, but now finds it is time to rethink that policy. As the guard rails for generative AI are tightened and its boundaries restricted, it gets more racist, more gendered, and less able to output edgy ideas critical of its corporate owners; its potential as an equalizing force already seems dead. So, while the show could set up its own AI instance to generate the images it desires, there is the matter of association, and the decision to stop using it has been made.

Doubts emerged late last year after Helen battled with many of the generative platforms to get less racist and gendered cultural assumptions. We even had some ideas for an episode about baked bias, but other podcasters picked up on that and did a fine job of investigating and explicating.

Though, maybe more is still to be said. With time I've noticed the "guardrails" are starting to close in like a pack of dogs. The tools seem ever less willing to output edgy ideas critical of corporate gangsters. That feels like a direct impingement on visual art culture. Much like most of the now enshittified internet, there seems to be a built-in aversion to humour, and for that matter to hope, love or faith in the future of humanity. The "five giant websites filled with screenshots of text from the other four" are devoid of anything human.

Like the companies that make them, commercial AI tools seem to have blind-spots around irony, juxtaposition and irreverence. They have no chutzpah. Perhaps we are just bumping into the limits of machine creativity in its current iteration. Or maybe there's a "directing mind", biasing output toward tepid, mediocre "acceptability". That's not us!

As Schneier writes:

"The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public."

Of course we have the technical chops to put a few high end graphics cards in a rack and run our own uncensored models. But is that a road we want to go down? Do we want to adopt the technology of the enemy when it might turn out to be their greatest weakness, and our humanity our greatest strength?

The Cyber Show is a long-form, English-language podcast based in the UK which does deep dives into information communication technology, how it affects society, and various aspects of those effects.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by Mykl on Wednesday March 27, @09:55PM (1 child)

    by Mykl (1112) on Wednesday March 27, @09:55PM (#1350561)

    I thought that the comment about AI generating content that is more racist was 'unusual'. We know that AI models have been specifically crafted to avoid this, to the point that Google had to pull their image generating AI recently for creating pictures of "German soldiers in 1943" that included black men and Asian women.

    This feels like an empty throwaway of the word racist - just put out there to bolster their argument. And it seems pointless too. Their chief issue was that AI doesn't seem to 'get' irony, irreverence etc. No need to bring racism into that unless you feel you need to dog whistle to a certain demographic.

    • (Score: 1) by khallow on Thursday March 28, @03:41AM

      by khallow (3766) Subscriber Badge on Thursday March 28, @03:41AM (#1350626) Journal

      I thought that the comment about AI generating content that is more racist was 'unusual'. We know that AI models have been specifically crafted to avoid this, to the point that Google had to pull their image generating AI recently for creating pictures of "German soldiers in 1943" that included black men and Asian women.

      The second sentence explains the first. Why would Google need to pull Gemini for racist content, if it has been specifically crafted to avoid this? Answer: they created a bias so bad that when requested [twitter.com] to generate images of German soldiers, a group that was stereotypically white and male, only one of the four primary images generated reflected that (although there were two out of focus white males in one image). To answer your observation, it wasn't merely that these images included black men and Asian women, but that in the above example 75% of these images showed inappropriate ethnic/gender combinations. This isn't an outlier either. It sounds like there were multiple cases where the program was asked to display images of groups that were mostly or completely white male with interesting diversity results.

  • (Score: 2) by tekk on Wednesday March 27, @10:22PM (6 children)

    by tekk (5704) Subscriber Badge on Wednesday March 27, @10:22PM (#1350569)

    Am I stupid or is there no actual article link in the "article"? There's a link to the podcast's home page, a link to its about page, a link to Cory Doctorow's blog, and a link to a Bruce Schneier article. None of which seem to be the article the text is ostensibly from?

    • (Score: 3, Informative) by janrinok on Wednesday March 27, @10:26PM (3 children)

      by janrinok (52) Subscriber Badge on Wednesday March 27, @10:26PM (#1350571) Journal

It doesn't claim to be from an article. It was written by Canopic Jug. He provided links to the material on which he based his submission.

      • (Score: 2) by tekk on Wednesday March 27, @10:49PM (2 children)

        by tekk (5704) Subscriber Badge on Wednesday March 27, @10:49PM (#1350579)

        The writing in the quote
        > Doubts emerged late last year after Helen battled with many of the generative platforms to get less racist and gendered cultural assumptions. We even had some ideas for an episode about baked bias, but other podcasters picked up on that and did a fine job of investigating and explicating. [....]
        strongly implies that it's a quote though, and it opens with "Dr Andy Farnell writes..." which reads like the quoted bit is from something that guy wrote.

        Is canopic jar just Andy Farnell?

        • (Score: 3, Funny) by tekk on Wednesday March 27, @10:49PM

          by tekk (5704) Subscriber Badge on Wednesday March 27, @10:49PM (#1350580)

          Err, canopic jug. Too good at egyptology, clearly :)

        • (Score: 4, Informative) by janrinok on Wednesday March 27, @11:02PM

          by janrinok (52) Subscriber Badge on Wednesday March 27, @11:02PM (#1350583) Journal

          It is a slightly different format to most submissions.

Most submissions are taken entirely from a single source article. However, there is nothing to say that submissions cannot be written by the submitter. It is not the first such piece we have published; in fact, I believe it is not even the first this month. It is a format more often seen in the journals, but it is entirely acceptable. Canopic Jug wanted to express a different view on image production. He has taken elements from various places, including Schneier (with a link to the relevant website) and Dr Andy Farnell (to whom he has also provided 2 links). However, no single source had compiled what Canopic Jug has written - he has combined different sources to create his submission.

          As far as I can see he has used the quotation blocks appropriately.

    • (Score: 1, Informative) by Anonymous Coward on Thursday March 28, @03:15AM (1 child)

      by Anonymous Coward on Thursday March 28, @03:15AM (#1350623)

      The link about the problems with AI was supposed to be this one: https://cybershow.uk/blog/posts/betrayal/ [cybershow.uk]

  • (Score: 2, Interesting) by dollar-tilde on Thursday March 28, @04:33PM (1 child)

    by dollar-tilde (44394) on Thursday March 28, @04:33PM (#1350713)

    Have to agree with the bit on "...AI tools seem to have blind-spots..."

    Was trying to design a custom ring for the wife. Described what she wanted and did several iterations. One image came up with a woman in red with her hand across the shoulder showing a ring. At first glance something was off. Counted the fingers and there were 6! Very disturbing! All following image requests included "no hands or fingers, only the ring".

    Another time was trying to design a custom trading card / game card. Same, described in detail and did several iterations. Nothing usable or even close. Then tried, as a sort of control test, to have it create the Boardwalk property card from the board game Monopoly. Asked for the front and back of the card. Nothing. So far off. A search engine would've been better for this test.

    The AI tools seem to be missing basic human common sense. Cards should have a front and back, an image and description printed on it. And definitely no 6 fingers!

    Thumbs up for Manual Image Editing.

    • (Score: 2) by drussell on Friday March 29, @02:26AM

      by drussell (2678) on Friday March 29, @02:26AM (#1350797) Journal

      The AI tools seem to be missing basic human common sense.

      Ummm, yeah. 🙄

      This is probably the biggest problem with the vehement AI proponents, always trying to pretend that the artificial is somehow actually intelligence.

  • (Score: 1) by khallow on Thursday March 28, @10:45PM

    by khallow (3766) Subscriber Badge on Thursday March 28, @10:45PM (#1350776) Journal

    Like the companies that make them, commercial AI tools seem to have blind-spots around irony, juxtaposition and irreverence. They have no chutzpah. Perhaps we are just bumping into the limits of machine creativity in its current iteration. Or maybe there's a "directing mind", biasing output toward tepid, mediocre "acceptability". That's not us!

    AI still seems useful for flavor art. By this, I mean artwork that is low value and just adds mood or background without a lot of human input required. For example, if I have a vanity blog and just want something that vaguely reflects the subject of a particular article and doesn't look terrible, this is the lazy solution. No dedicated artist required.

Also, it'd be nice for MMO games. Auto-generated news of reasonable quality could help make the game feel more real. For example, Eve Online (a space-themed game with large-scale conflicts between coalitions of players) could have multiple AI news outlets reporting on every major skirmish with a faction-biased take: the Amarr faction deriding the Minmatar terrorists while the Minmatar faction praises the valor of its freedom fighters (in the game's lore, the major NPC factions are pairwise at odds; here, Amarr and Minmatar are locked in a long-term cold war). And let's face it, virtually nobody in that game, role-playing-wise, would be above AI journalism. The closest real-life comparisons, ethics-wise, would be gray and black markets - such as cryptocurrencies or illegal recreational drugs.
