
from the not-welcoming-our-corporate-overlords dept.
Dr Andy Farnell at The Cyber Show writes about the motivations behind dropping generative AI for graphics and returning to manual design and editing of images. The show had used generative AI to produce images since its first episode, but now finds it is time to rethink that policy. As the guard rails for generative AI are tightened and the boundaries restricted, its output becomes more racist, more gendered, and less able to express edgy ideas critical of its corporate owners; its potential as an equalizing force seems dead already. So, while the show could set up its own AI instance to generate the images it desires, there is the matter of association, and the decision to stop using it has been made.
Doubts emerged late last year after Helen battled with many of the generative platforms to get less racist and gendered cultural assumptions. We even had some ideas for an episode about baked-in bias, but other podcasters picked up on that and did a fine job of investigating and explicating.
Though, maybe more is still to be said. With time I've noticed the "guardrails" are starting to close in like a pack of dogs. The tools seem ever less willing to output edgy ideas critical of corporate gangsters. That feels like a direct impingement on visual art culture. Much like most of the now enshittified internet, there seems to be a built-in aversion to humour, and for that matter to hope, love or faith in the future of humanity. The "five giant websites filled with screenshots of text from the other four" are devoid of anything human.
Like the companies that make them, commercial AI tools seem to have blind-spots around irony, juxtaposition and irreverence. They have no chutzpah. Perhaps we are just bumping into the limits of machine creativity in its current iteration. Or maybe there's a "directing mind", biasing output toward tepid, mediocre "acceptability". That's not us!
As Schneier writes:
"The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public."
Of course we have the technical chops to put a few high end graphics cards in a rack and run our own uncensored models. But is that a road we want to go down? Do we want to adopt the technology of the enemy when it might turn out to be their greatest weakness, and our humanity our greatest strength?
The Cyber Show is a long-form, English-language podcast based in the UK which does deep dives into information and communication technology, how it affects society, and various aspects of those effects.
Related Stories
Dr Andy Farnell at The Cyber Show writes about the effects of the "splinternet" and division in standards in general on overall computing security. He sees the Internet, as it was less than ten years ago, as an ideal, but one which has been intentionally divided and made captive. While governments talk out of one side of their mouth about cybersecurity they are rushing breathlessly to actually make systems and services less secure or outright insecure.
What I fear we are now seeing is a fault line between informed, professional computer users with access to knowledge and secure computer software - a breed educated in the 1970s who are slowly dying out - and a separate low-grade "consumer" group for whom digital mastery, security, privacy and autonomy have been completely surrendered.
The latter have no expectation of security or correctness. They've grown up in a world where the high ideals of computing that my generation held, ideals that launched the Voyager probe into deep space using 1970s technology, are gone.
They will be used as farm animals, as products by companies like Apple, Google and Microsoft. For them, warm feelings, conformance and assurances of safety and correctness, albeit false but comforting, are the only real offering, and there will be apparently "no alternatives".
These victims are becoming ever less aware of how their cybersecurity is being taken from them: through data theft, manipulation, lock-in, price fixing, lost opportunity and so on. If security were a currency, we're amidst the greatest invisible transfer of wealth to the powerful in human history.
In lieu of actual security, several whole industries have sprung up around ensuring and maintaining computer insecurity. On the technical side of things it's maybe time for more of us to (re-)read the late Ross Anderson's Security Engineering, third edition. However, as Dr Farnell reminds us, most of these problems have non-technical origins and thus non-technical solutions.
Previously:
(2024) Windows Co-Pilot "Recall" Feature Privacy Nightmare
(2024) Reasons for Manual Image Editing over Generative AI
(2019) Chapters of Security Engineering, Third Edition, Begin to Arrive Online for Review
(Score: 4, Insightful) by Mykl on Wednesday March 27 2024, @09:55PM (1 child)
I thought that the comment about AI generating content that is more racist was 'unusual'. We know that AI models have been specifically crafted to avoid this, to the point that Google had to pull their image generating AI recently for creating pictures of "German soldiers in 1943" that included black men and Asian women.
This feels like an empty throwaway of the word racist - just put out there to bolster their argument. And it seems pointless too. Their chief issue was that AI doesn't seem to 'get' irony, irreverence etc. No need to bring racism into that unless you feel you need to dog whistle to a certain demographic.
(Score: 1) by khallow on Thursday March 28 2024, @03:41AM
The second sentence explains the first. Why would Google need to pull Gemini for racist content, if it has been specifically crafted to avoid this? Answer: they created a bias so bad that when requested [twitter.com] to generate images of German soldiers, a group that was stereotypically white and male, only one of the four primary images generated reflected that (although there were two out of focus white males in one image). To answer your observation, it wasn't merely that these images included black men and Asian women, but that in the above example 75% of these images showed inappropriate ethnic/gender combinations. This isn't an outlier either. It sounds like there were multiple cases where the program was asked to display images of groups that were mostly or completely white male with interesting diversity results.
(Score: 2) by tekk on Wednesday March 27 2024, @10:22PM (6 children)
Am I stupid or is there no actual article link in the "article"? There's a link to the podcast's home page, a link to its about page, a link to Cory Doctorow's blog, and a link to a Bruce Schneier article. None of which seem to be the article the text is ostensibly from?
(Score: 3, Informative) by janrinok on Wednesday March 27 2024, @10:26PM (3 children)
It doesn't claim to be from an article. It was written by Canopic Jug. He provided links to the material that he based his submission upon.
[nostyle RIP 06 May 2025]
(Score: 2) by tekk on Wednesday March 27 2024, @10:49PM (2 children)
The writing in the quote
> Doubts emerged late last year after Helen battled with many of the generative platforms to get less racist and gendered cultural assumptions. We even had some ideas for an episode about baked bias, but other podcasters picked up on that and did a fine job of investigating and explicating. [....]
strongly implies that it's a quote though, and it opens with "Dr Andy Farnell writes..." which reads like the quoted bit is from something that guy wrote.
Is canopic jar just Andy Farnell?
(Score: 3, Funny) by tekk on Wednesday March 27 2024, @10:49PM
Err, canopic jug. Too good at egyptology, clearly :)
(Score: 4, Informative) by janrinok on Wednesday March 27 2024, @11:02PM
It is a slightly different format to most submissions.
Most submissions are taken entirely from a single source article. However, there is nothing to say that submissions cannot be written by the submitter. It is not the first that we have published; in fact, I believe it is not even the first this month. It is a format more often seen in the journals, but it is entirely acceptable. Canopic Jug wanted to express a different view on image production. He has taken elements from various places, including Schneier (with a link to the relevant website) and Dr Andy Farnell (to whom he has also provided 2 links). However, there is no one source that had compiled what Canopic Jug has written - he has combined different sources to create his submission.
As far as I can see he has used the quotation blocks appropriately.
[nostyle RIP 06 May 2025]
(Score: 1, Informative) by Anonymous Coward on Thursday March 28 2024, @03:15AM (1 child)
The link about the problems with AI was supposed to be this one: https://cybershow.uk/blog/posts/betrayal/ [cybershow.uk]
(Score: 2) by canopic jug on Thursday March 28 2024, @03:18AM
Ok, finally, the link missing from the post is "Rethinking AI images on The Cybershow [cybershow.uk]". Sorry for the previous mess-ups.
Money is not free speech. Elections should not be auctions.
(Score: 2, Interesting) by dollar-tilde on Thursday March 28 2024, @04:33PM (1 child)
Have to agree with the bit on "...AI tools seem to have blind-spots..."
Was trying to design a custom ring for the wife. Described what she wanted and did several iterations. One image came up with a woman in red with her hand across her shoulder showing a ring. At first glance something was off. Counted the fingers and there were 6! Very disturbing! All following image requests included "no hands or fingers, only the ring".
Another time was trying to design a custom trading card / game card. Same, described in detail and did several iterations. Nothing usable or even close. Then tried, as a sort of control test, to have it create the Boardwalk property card from the board game Monopoly. Asked for the front and back of the card. Nothing. So far off. A search engine would've been better for this test.
The AI tools seem to be missing basic human common sense. Cards should have a front and back, an image and description printed on it. And definitely no 6 fingers!
Thumbs up for Manual Image Editing.
(Score: 2) by drussell on Friday March 29 2024, @02:26AM
Ummm, yeah. 🙄
This is probably the biggest problem with the vehement AI proponents, always trying to pretend that the artificial is somehow actually intelligence.
(Score: 1) by khallow on Thursday March 28 2024, @10:45PM
AI still seems useful for flavor art. By this, I mean artwork that is low value and just adds mood or background without a lot of human input required. For example, if I have a vanity blog and just want something that vaguely reflects the subject of a particular article and doesn't look terrible, this is the lazy solution. No dedicated artist required.
Also, it'd be nice for MMO games. Auto-generated news of reasonable quality could help make the game feel more real. For example, Eve Online (a space-themed game with large-scale conflicts between coalitions of players) could have multiple AI news outlets reporting on every major skirmish with a faction-biased take: the Amarr faction deriding the Minmatar terrorists while the Minmatar faction praises the valor of its freedom fighters (role-playing-wise, the huge NPC factions are pair-wise at odds; here, Amarr and Minmatar are in a long-term cold war). And let's face it, virtually nobody role-playing-wise in that game would be above AI journalism. The closest real-life comparisons ethics-wise would be gray and black markets, such as cryptocurrencies or illegal recreational drugs.