Arthur T Knackerbracket has processed the following story:
In the 1968 Star Trek episode "The Ultimate Computer," Captain Kirk's ship was used to test the M5, a new computer. A copilot, if you will, for the Starship Enterprise.
Designed to perform the jobs of the human crew more efficiently, the M5 indeed did those jobs very well, yet with such a terrifying lack of understanding that it had to be disabled. But not before exacting a terrible price.
Last week, Microsoft 365 Copilot (a copilot, if you will, for the technology enterprise, sold as performing human tasks with more efficiency) increased its prices by 5 percent, the first of many finely judged increments in the old style. Unlike the M5, it isn't in the business of physically destroying the enemy; instead it delivers commercial victory with the photon torpedo of productivity and the phaser bolts of revitalized workflow.
[...] Some time back, this columnist noted the gulf between the hype of the metaverse in business and the stark, soulless hyper-corporate experience. Line-of-business virtual reality has two saving graces over corporate AI. It can't just appear on the desktop overnight and poke its fingers into everything involved in the daily IT experience. Thus it can't generate millions in licensing at the tick of a box. VR is losing its backers huge amounts of money that can't be disguised or avoided, but corporate AI is far more insidious.
As is the dystopia it is creating. Look at the key features by which Microsoft 365 Copilot is being sold.
Pop up its sidebar in Loop or Teams, and it can auto-summarize what has been said. It can suggest questions and auto-populate meeting agendas. Or you can give it key points in a prompt and it will auto-generate documents, presentations, and other content. It can create clip art to spruce up those documents, PowerPoints, and content.
How is this sold? On the promise that it will make you look more intelligent: ask Copilot to suggest a really good question during an online presentation or a Teams meeting. What's also implied but unsaid: if you're the human on the receiving end of this AI-smart question and want to look smart enough to answer it, who are you gonna call? Copilot.
The drive is always to abdicate the dull business of gathering data, thinking about it, and communicating the results. All of it can be fed as prompts to the machine, and the output presented as your own.
And so begins a science-fiction horror show of a feedback loop. Recipients of AI-generated key points will ask the AI to expand them into a document, which will itself be AI key-pointed and fed back into the human-cyborg machine a team has become. Auto-scheduled meetings will be auto-assigned, and will multiply like brainworms in the cerebellum. The number of reports, meetings, presentations, and emails will grow inexorably as they become less and less human. Is the machine working for us, or we for the machine?
Generative AI output feeding back into itself can go only one way, but Copilot in the enterprise is seemingly designed to amplify that very process. And you have to use it if you want to keep up with the perceived smartness and improved productivity of your fellow workers, and the AI-educated expectations of the corporate structure.
[...] It is taboo to say how far your heart sinks when you have to create or consume the daily diet offered up in company email, Teams, meeting agendas, and regular reports. You won't be able to say how much further it will sink when all the noise is amplified and the signal suppressed by corporate AI. Fair warning: Buy the bathysphere now.
There is an escape hatch. Refuse. Encourage refusal. When you see it going wrong, say so. A sunken heart is no platform for anything good personally, as a team, or as an organization. Listen to your humanity and use it. Oh, and seek out "The Ultimate Computer" – it's clichéd, kitsch, and cuts to the bone. The perfect antidote for vendor AI hype.
(Score: 2, Insightful) by Anonymous Coward on Sunday November 24, @05:43AM (1 child)
A brilliant way to silently inject half-truths into the wheels of enterprise, with no manipulative intent, of course.
(Score: 1, Insightful) by Anonymous Coward on Sunday November 24, @05:45AM
At best it's Taylorism distilled into even more inhuman machine purity.
(Score: 5, Interesting) by pTamok on Sunday November 24, @11:16AM (2 children)
Auto-generation of content just enables compliance with Sturgeon's law [wikipedia.org] more quickly, whilst increasing the relevant percentage to well over 95%. It's false productivity, beloved of enterprises in the Soviet Union that managed to generate products that were worth less than the raw materials that went into them.
Auto-summarization sounds really, really good in theory, except that the LLM has no understanding of the text and is likely to miss subtleties. It is farcical that people will use computing resources to generate 'content' which other people will use computing resources to summarize. A waste of power.
LLMs are an expensive Potemkin village [wikipedia.org] of AI.
(Score: 5, Interesting) by krishnoid on Sunday November 24, @01:39PM (1 child)
Funny that this is an issue with AI, because it sounds exactly like what mid-to-upper management wants from their employees [schlockmercenary.com], even for things that aren't computing systems running their warships.
(Score: 1) by pTamok on Sunday November 24, @03:57PM
Modded up, mostly because of the link to Schlock Mercenary [schlockmercenary.com], which I enjoyed very much when it was being written.
(Score: 5, Insightful) by JoeMerchant on Sunday November 24, @12:59PM (5 children)
I have consistently matured toward saying less and less in business meetings. It seems to keep my coworkers happier. I also notice that my most successful counterparts tend to say little to nothing in meetings.
Of course, the highest meeting productivity per hour is achieved in meetings which have zero length...
🌻🌻 [google.com]
(Score: 2) by krishnoid on Sunday November 24, @01:54PM (2 children)
Introducing well-crafted questions also improves the utility of those meetings. It helps if you don't visually or verbally express a bias toward either answer, though you might not get the answer you want.
(Score: 3, Insightful) by JoeMerchant on Sunday November 24, @02:16PM (1 child)
More often, I find myself obliged to explain to the question posers their misunderstandings of the fundamental facts which have led to their concerns.
In the common scenario, a question arises about how X will impact Y. The answer is usually: trivially or not at all, with some gentle explanation of how Y works and why X doesn't interact with it in the way the questioner fears. Of course this requires considerable diplomacy when the questioner is supposed to be a subject matter expert on Y and has dragged their boss into the meeting....
Then there are the meetings where everyone is searching for crystal ball answers for how other people will react to Z at some point months to years in the future. The worst possible outcome of those meetings is to be the outlier who correctly predicts the future, making the majority who overruled your advice look bad when it happens.
🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Monday November 25, @05:42PM
Is a close 2nd being the guy who has some tidbit that would cut 15 minutes off a pointless discussion, but who can't introduce it because he lacks the magic social ability to grab the floor without being rude? Then of course, after the 15 pointless minutes, when it's socially unacceptable to circle back to that topic, you are suddenly asked about something where you have no meaningful input, and instead of coming up with BS like everybody else you're honestly flat-footed. I might be on the spectrum, but whatever. It's so frustrating.
(Score: 2) by mcgrew on Sunday November 24, @09:42PM (1 child)
I always liked business meetings. They were a welcome break for my brain, which was probably overworked when I was being productive.
Our nation is in deep shit, but it's illegal to say that on TV.
(Score: 2) by JoeMerchant on Sunday November 24, @11:09PM
I prefer the work from home brain breaks that don't involve political intrigue.
🌻🌻 [google.com]
(Score: 5, Informative) by Anonymous Coward on Sunday November 24, @01:42PM (7 children)
Microsoft has given you the honor of offering up your documents and other things you type, by default, for their AI training. Should you somehow NOT wish to have such an honor bestowed upon you, you can easily opt out of this service through a simple eight-click process:
File -> Options -> Trust Center -> Trust Center Settings -> Privacy Options -> Privacy Settings -> Optional Connected Experiences -> Uncheck box: "Turn on optional connected experiences"
For Teams you need to go to Privacy -> Privacy in Teams to find it.
I would hope this is not also the default setting for enterprise installations in places governed by the GDPR.
(Score: 4, Insightful) by acid andy on Sunday November 24, @01:53PM (5 children)
Madness! What's the difference between an Option and a Setting? It looks like many Settings make one Option. But why under the File menu? Is it because kids these days don't even know what a file is? And how Orwellian does the Trust Center sound? *Shudders*
Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
(Score: 4, Insightful) by Gaaark on Sunday November 24, @03:16PM (4 children)
I can just see Microsoft giving 'Bosses' the ability to see their underlings' use of Co-pilot: "Ah, George(tte) used Co-pilot to come up with that inciteful question/what have you. Gotta keep an eye on them: they don't really know what they're doing, they're just using AI to appear to know.", etc.
I'd be afraid of using it on a corporate-given computer/lappy.
Yeah: Trust Center. And who do you trust?
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 3, Insightful) by acid andy on Sunday November 24, @07:03PM
Good point. Simple surveillance of employees, like key loggers and recording screen time, is already very popular, so managers will love an AI that constantly watches and (pretends to) evaluate employee productivity. I predict the worst mental health crisis in the history of humanity, coming soon!
Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
(Score: 1, Interesting) by Anonymous Coward on Monday November 25, @01:38AM (2 children)
Incite (v. int.) [merriam-webster.com]:
Insight (noun) [merriam-webster.com]:
(Score: 2) by Gaaark on Monday November 25, @02:23AM
Ah yes... caught by Professor Pedant again! And I would have gotten away with it too, if it hadn't been for those buoys and grrrls and their Shaggy dog. :)
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 2) by acid andy on Monday November 25, @02:15PM
I thought he meant the employee's question was inciting discontent or something.
Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
(Score: 3, Interesting) by Anonymous Coward on Sunday November 24, @03:01PM
Hmmm
Microsoft AI: User XXYYZZ has toggled the "optional" connected experiences off, better tell the NSA AI...
NSA AI: (connects via backdoor, has a look...)Boring...maybe the CIA AI might find some of it of interest...
CIA AI: (reads the NSA AI exfiltrated material) Nothing here of interest to me, but I'll bet the GCHQ AI will find some of this stuff amusing, and the IRS AI might want to have a good look at a couple of the files..
You're not so much 'opting out' as indicating that you've something to hide.
Welcome to the ePanopticon, where we'll all be under the basilisk gaze of (in the words of Mr Donald Fagen)
'A just machine to make big decisions,
Programmed by fellows with compassion and vision'
Aye, right...
(Score: 4, Insightful) by VLM on Sunday November 24, @04:55PM
This reminds me of the early desktop publishing days, when we were promised enormous productivity booms once office drones could generate long documents for each other using 17 different colors and 34 different fonts.
Generally, I would strongly encourage my competitors to invest heavily in non-productive time wasters like using Copilot.
(Score: 0) by Anonymous Coward on Sunday November 24, @05:40PM
Whorevelopers are going to Whorevelop!
(Score: 4, Insightful) by mrpg on Sunday November 24, @06:01PM
Bard, buddy, I want an answer that tries to be human-like but that can be recognized as AI.
Here's a response that aims to be human-like but with a subtle AI undertone:
"It's quite intriguing to see how AI is evolving, isn't it? From M5's overzealous efficiency to Copilot's more subtle, insidious integration, it's clear that technology is rapidly changing the way we work.
While it's tempting to rely on AI to automate tasks and generate content, we must be mindful of the potential consequences. Overreliance on AI can lead to a decline in critical thinking skills and a loss of human connection.
It's important to strike a balance. AI can be a powerful tool, but it should be used as a supplement, not a replacement, for human ingenuity and creativity. By understanding its limitations and using it responsibly, we can harness its potential without sacrificing our humanity."
---
This is getting boring.