Stories
Slash Boxes
Comments

SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 17 submissions in the queue.
posted by jelizondo on Friday January 30, @10:46AM   Printer-friendly
from the strategy.vs.reality.collide dept.

Leaders think their AI deployments are succeeding. The data tells a different story.

Apparently leaders and bosses thinks that AI is great and are improving things at their companies. Their employees are less certain. Bosses wants AI solutions. Employees not so much. As they don't produce the results that their bosses wants or thinks that it should or does.

Executives we surveyed overwhelmingly said their company has a clear AI strategy, that adoption is widespread, and that employees are encouraged to experiment and build their own solutions. The rest of the workforce disagrees.

The more experienced the staff the less confident they are in the AI solutions. The more you know the less you trust the snake oil?

Even in populations we'd expect to be ahead - tech companies and language-intensive functions - most AI use remains surface-level.

https://www.sectionai.com/ai/the-ai-proficiency-report
https://fortune.com/2026/01/21/ai-workers-toxic-relationship-trust-confidence-collapses-training-manpower-group/


Original Submission

This discussion was created by jelizondo (653) for logged-in users only. Log in and try again!
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
(1)
  • (Score: 3, Interesting) by PiMuNu on Friday January 30, @11:20AM (10 children)

    by PiMuNu (3823) on Friday January 30, @11:20AM (#1431849)

    In my workplace, I know colleagues who have used LLMs very successfully to do technical stuff e.g. "vibe coding"; but most don't touch it.

    • (Score: 5, Insightful) by turgid on Friday January 30, @11:50AM (4 children)

      by turgid (4318) Subscriber Badge on Friday January 30, @11:50AM (#1431851) Journal

      Successful vibe coding? Is that when everyone's too stupid to see the bugs?

      • (Score: 5, Funny) by ikanreed on Friday January 30, @02:15PM

        by ikanreed (3164) on Friday January 30, @02:15PM (#1431865) Journal

        "Move fast and break things" means if it doesn't break a unit test no one notices until 6 weeks later when a tenth of your customers can't even turn on their computers anymore.

      • (Score: 3, Insightful) by lars_stefan_axelsson on Saturday January 31, @09:42AM (2 children)

        by lars_stefan_axelsson (3590) on Saturday January 31, @09:42AM (#1431963)

        There was actually research published on this very topic not long ago: https://youtu.be/b9EbCb5A408 [youtu.be]

        In short, they performed experiments to see whether maintainability suffered from having used AI coding tools. So two groups were given a Java application (that wasn't too high quality) and asked to add a feature. Half used AI, and half did it the old fashioned way.

        Then a third group were given this code and asked to do a maintenance task on it, but blind, i.e. they didn't know whether AI had been used or not in the previous step.

        Result; No significant difference between results in the second step. I.e. AI didn't affect code quality (measured this way) to any discernible degree.

        And AI use in the first step saved time; from 10%-55% or so, from my faulty memory. The more experienced the programmer the more time was saved using AI. But this wasn't really a surprising result at this point in time.

        So, as always, this is a tool that makes experts better. It doesn't turn regular monkeys into experts.

        Farley does point out that in the longer term there is the risk of skills atrophy; if you don't use them regularly and at a high level, they will no longer be there when you need them. (Queue all the results from airline safety that point to jus this problem and how much hard and repetitive work it takes to counteract this effect.)

        Here's the paper the video is based on: https://arxiv.org/abs/2507.00788 [arxiv.org]

        This also mirrors my own (limited) experience. About the only programming I do these days is for myself; small(ish) hardware projects on e.g. the Arduino (still love it), Raspberry Pi Pico (using the Arduino IDE) and the like. ChatGPT 5.2 (not even the coding agent) is a real help here. I can upload a photograph of some display I've found in the drawers bought years ago and long forgotten, and get answers to "what is this?", "what are the best libraries to use?", "any known gotchas?", "give me a code example to do this" etc. etc. For the small and rather simple (structurally) programmes I write it also works very well for analysis and tips and tricks.

        Only yesterday I tried to do I2C from a Raspberry Pi Pico W to a display using an Arduino library. Didn't work. Suspected that I needed more initialisation as the library was "automatic". I just input the code and gave the symptoms and ChatGPT gave me four suggestions, one of which was setting up the pin mapping with code. Just pasted that code in and hey presto, the application worked. (It even found a clash, due to old code I had forgotten to remove.)

        Now, I was already on the right track here and could have googled it, but this was much faster by a substantial amount. Instead of digging through different sources of docs (often half missing) and forum posts I had ChatGPT do that for me.

        In my other work I've found that ChatGPT makes one real mistake about one time in ten, to one in twenty, for reasonably complex tasks.

        So, it's a tool. It's great for some tasks, marginal on other, and useless on a few. The trick is to know how to use it (prompting), and when to use it. No different from a compiler really. I'm old enough to remember when people were very sceptical of those, and insisted on writing assembly/machine code instead. Of course, now the results are in, and those of us who used the compiler as much as possible and only dropped down to assembly when necessary won by a considerable margin.

        But, to echo Farley above, I must admit that my assembly skills have atrophied considerably, and all but died on the vine. There are so few instances where its needed now, that those skills aren't used much. Compilers improved by orders of magnitude. I don't even read the compiler output much these days.

        --
        Stefan Axelsson
        • (Score: 2) by turgid on Saturday January 31, @11:25AM (1 child)

          by turgid (4318) Subscriber Badge on Saturday January 31, @11:25AM (#1431966) Journal

          Yes, I see these AIs as a kind of super search engine. I am always wary about code examples and never copy-and-paste. I've spend enough time on Stack Overflow to think very critically about these pieces of code. Even when they are good, they never fit my codebase and need to be rewritten.

          Skills atrophy is a very real problem and I am painfully aware of how mine have become rusty over the years. I have forgotten a load of terminology that I need to brush up on and I dare say I'd struggle to write any Java source from scratch nowadays.

          I'm always sceptical too of these "studies" regarding programming projects. Ideally, you would want repeatable experiments with statistically significant results. A couple of teams nominally working on the same thing is very wishy-washy. It's not even Sociology.

          • (Score: 3, Insightful) by lars_stefan_axelsson on Monday February 02, @10:50AM

            by lars_stefan_axelsson (3590) on Monday February 02, @10:50AM (#1432173)

            I don't really disagree with anything you say, apart from perhaps the "sociology" comment. This is IMHO one step up. It's not a purely observational study (what sociology most often amounts to *), but a controlled experiment. That's a higher form of research.

            Now, does that mean that this is the final word on the matter? Of course not, far from it. As you say, doing these experiments and managing to capture enough of the "real world" to be able to say something about it is surprisingly difficult. And we often fail.

            It is however an argument in the debate, and, again IMHO, a much better one than the "I don't really have much evidence nor experience but I don't think it's true and therefore I'll will posit that loudly as the final word on the matter" type of argument that these discussions often amount to.

            And in this respect current LLMs aren't really different from earlier technological breakthrough, weather they be the steam engine (it turns out that people can survive travelling faster than 30 mph, the air won't be sucked out of the carriages), or the steel pen (no, the cold hard steel will not lead to deformed hands in contrast to the soft quills we're used to). They improve some things (sometimes drastically), worsen others (also sometimes drastically), are useful in some situations, and useless in others, but in the end we find a new equilibrium and learn to live with the benefits and drawbacks.

            *) It's unclear whether Sir Ernest Rutherford ever said it, but here it is anyway: "The only possible interpretation of any research whatever in the 'social sciences' is: some do, some don't."

            --
            Stefan Axelsson
    • (Score: 4, Informative) by VLM on Friday January 30, @01:13PM (3 children)

      by VLM (445) Subscriber Badge on Friday January 30, @01:13PM (#1431863)

      I've had awesome results with being lazy. As an example, I know what a 3-d array is in C. I know what it should look like and how to use it. I can probably google an example and modify it to fit. Its 100x faster to ask for one than to type all that stuff in. So many commas and tabs ugh.

      This works with AI as its not a terribly difficult request, just tedious. I was tempted to include an example here in this comment but ya'all probably have seen a 3-d array in C before and it tripped the "Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition." alert. People whine about LISPs having a lot of parenthesis thus looking ugly but even old-fashioned C looks messy when you're doing nested data structures like a big multidimensional array. Even with IDE help it takes time to type all that stuff in and its easy to make a typo. If perfectly valid C language code looks like spam to automated detectors thats a C problem its just as bad if not worse than LISP in its own way. In summary: If you're producing something that looks like automated spam by hand, you should have an AI do it.

      • (Score: 2) by JoeMerchant on Friday January 30, @02:35PM

        by JoeMerchant (3937) on Friday January 30, @02:35PM (#1431867)

        AI today is still not for everything, there are some niche cases where it's awesomely better than the previously available tools - as you say: simple but tedious stuff, also translations, and especially things where AI agents have, and use, tools to check themselves. LLMs write code that doesn't compile properly on the first few iterations, AI agents do those iterations for you so what you get at least compiles, and passes the unit tests if you told it to write them.

        AI a year ago was much less useful. Agents hallucinated too often, forward progress was hard to achieve with the error rates.

        Will AI next year make as much progress as it did last year? I don't believe anybody who says they know the answer, I do believe the ones who say it is possible that it does - like the article due to drop in the Subs Queue in about 6 hours: "Mainpage 01/30 15:22 A Look at Potential Problems with Future AI"

        --
        🌻🌻🌻🌻 [google.com]
      • (Score: 1, Interesting) by Anonymous Coward on Friday January 30, @05:02PM (1 child)

        by Anonymous Coward on Friday January 30, @05:02PM (#1431887)

        A few years ago before AI came along I sat down and wrote myself a C library for doing arrays with arbitrary numbers of dimensions. It's all parameterized. It's also surprisingly small. I might open source it one day/

        • (Score: 0) by Anonymous Coward on Saturday January 31, @12:23AM

          by Anonymous Coward on Saturday January 31, @12:23AM (#1431936)

          How does your array tool compare with those in Matlab/Octave?

    • (Score: 2) by driverless on Saturday January 31, @07:36AM

      by driverless (4770) on Saturday January 31, @07:36AM (#1431960)

      It's almost like it's the blockchain all over again. In fact in the summary above you could replace "AI" with "blockchain", set the date back about 5-10 years, and it'd be just as topical.

  • (Score: 5, Insightful) by ledow on Friday January 30, @11:29AM (4 children)

    by ledow (5567) on Friday January 30, @11:29AM (#1431850) Homepage

    It's the AI equivalent of the Arthur C Clarke test:

    "Any sufficiently complex technology is indistinguishable from magic."

    But in this case if you're an idiot, it doesn't even need to be sufficiently complex to make you think it's magic.

    The more you know a subject, the more educated, trained and skilled you are, the more AI looks like it's churning out shite? Gosh, I wonder why that would be.

    • (Score: 3, Interesting) by tannasg on Friday January 30, @11:58AM (3 children)

      by tannasg (5446) on Friday January 30, @11:58AM (#1431852)

      The more you know a subject, the more educated, trained and skilled you are, the more AI looks like it's churning out shite?

      You don't even have to know a lot about a subject to spot that the 'AI' responses are a wee bit on the 'Polly wants a cracker' side.

      I've been delving into local history a bit, I've thrown various questions at chatgpt about it, and as a consequence I've developed an Environmental Control Robot's keen sense for the appearance of the literary equivalent of a ham actor in a risible alien outfit (sometimes even to th

      It's the chicken and egg thing, did the writer of the article first get the egregious nonsense from the AI?, did the AI filtch it from the writer's blog?, who made up the nonsense in the first place?

      • (Score: 3, Interesting) by tannasg on Friday January 30, @01:48PM (2 children)

        by tannasg (5446) on Friday January 30, @01:48PM (#1431864)

        Where the hell did the rest of the text go?

        I know I've had phone touchscreen issues recently, but those alone shouldn't have wiped out most of what I'd written, time to switch primary browsers and check to see if my usual one, Opera, has developed a sudden infestation of grues.

        The gist of what I originally wrote

        Having experienced the issues with AI parroting whatever nonsense it finds online then making outré conclusions based on it, I personally wouldn't trust it for code generation, especially code of a critical nature.

        I've seen examples of both AI generated Python code and associated circuit schematics which, if I still worked in industry and had been presented with them to prototype/produce/use I'd be reaching for the nearest LART.

        When I hear about AI replacing skilled workers, I'm reminded about a certain ECAD package and it's costly 'Intelligent' Autorouter, an IA as it were, an add-on purchased by my then PHB to speed up design times, and which caused the 'us' I worked for then a lot of issues as it would, on a whim, decide maybe to not route maybe one or two nets fully, or maybe it would route them but maybe ignore the design rules, maybe use the wrong ones...It took us a while to figure out what the bugger was doing. I stuck with manually routing my microcontroller boards.

        Now, off to deal with this fsking phone.

        • (Score: 5, Insightful) by JoeMerchant on Friday January 30, @03:45PM (1 child)

          by JoeMerchant (3937) on Friday January 30, @03:45PM (#1431872)

          >wouldn't trust it for code generation, especially code of a critical nature.

          The analogy I have put out to our team:

          When you review your colleagues' code, you trust that they know what they're doing - particularly in their area of specialty. I review my colleagues' code most thoroughly where they're "coloring outside the lines" making changes in places they don't usually work. Even when you approve a pull request, there's that assurance that the author of the change is probably going to be working with us for the next 5 years or more - we can pick their brains in the event of any surfacing submarine problems.

          When you review AI written code (and review it you MUST), you should be treating it like it was handed to you by some stranger on the street - who you will never see again. The context window that wrote that code is ephemeral, you're lucky if you can get it to explain the thinking behind a particular structure if you've still got the code authoring context window fresh, after it explains one or two things thoroughly, it has already forgotten the rest and will be looking at it de-novo, with no more authors' insight than a fresh set of eyes seeing it for the first time. Worse, if you're in a stale context, it may well have distorted old snippets of the reasoning stuck in there that send you off on merry chases of many wild geese...

          AI can write good code, but it won't do it every time. If you use it as a tool, it can help you. If you use it as a minion, it will comically screw everything up.

          --
          🌻🌻🌻🌻 [google.com]
          • (Score: 3, Funny) by krishnoid on Friday January 30, @06:57PM

            by krishnoid (1156) on Friday January 30, @06:57PM (#1431900)

            If you use it as a minion, it will comically screw everything up.

            Well, I know what i'm adding to my prompt for my next coding coding project.

  • (Score: 5, Insightful) by Thexalon on Friday January 30, @12:29PM (23 children)

    by Thexalon (636) on Friday January 30, @12:29PM (#1431853)

    All jobs look simple to people who don't have to do them, and complex to people who do. Yes, including jobs like janitorial work, fast food, garbage collection, and other "lowly" professions.

    Executives, especially executives who did not work their way up the corporate ladder but instead went into the high school->business school->management pipeline, have no clue how to do the jobs they're supposedly managing. Also, according to a wide variety of studies on the subject, they aren't particularly smart, with IQs somewhere around 100-110 versus professions like doctors and scientists who really do have to be smart. So it's easy for a not-terribly-smart AI to fool them into thinking that what they're producing is the same thing as what the expert who has been doing it for 40 years produces. And since AI, expensive as it is, is a lot cheaper than experts with 40 years of experience, they dream of firing the expert, despite the fact that the expert is right and the machine is wrong, because they can be fairly confident nobody will discover that the machine is wrong until after they've moved on to do something else and thus will experience zero consequences for the ensuing disaster.

    The only solution to this is to let the executives be wrong.

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
    • (Score: 2) by VLM on Friday January 30, @12:49PM (15 children)

      by VLM (445) Subscriber Badge on Friday January 30, @12:49PM (#1431855)

      they can be fairly confident nobody will discover that the machine is wrong

      To be slightly more precise, they don't understand their employees jobs, so they only need to fool their even less knowledgeable boss and peers.

      However, they can't fool "the market" or the experts at other companies etc.

      I think the implication is we will see more AI in non-capitalist non-competitive markets. Government, peacetime militaries, major non-competitive corporations, etc. I would predict a lot less AI in jobs where thing actually have to get done, small businesses, commodity producers, wartime militaries, etc.

      • (Score: 5, Insightful) by Thexalon on Friday January 30, @01:11PM (14 children)

        by Thexalon (636) on Friday January 30, @01:11PM (#1431862)

        However, they can't fool "the market"

        Sam Altman has definitely proven that idea wrong. And before that, people like Elizabeth Holmes and Bernie Madoff.

        "The market" is just people. People can be fooled. Ergo, the market can be fooled, and is, regularly, all the time.

        --
        "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
        • (Score: 1) by khallow on Friday January 30, @04:02PM (13 children)

          by khallow (3766) Subscriber Badge on Friday January 30, @04:02PM (#1431875) Journal

          Sam Altman has definitely proven that idea wrong. And before that, people like Elizabeth Holmes and Bernie Madoff.

          Sure, a market can be wrong for years. A government can be wrong for generations. Clean up is way quicker too.

          Consider your examples. ChatGPT has been out since 2022. We're only on year four of crazy train. Timing of the Theranos fraud is uncertain, but the non-working prototype was named in 2007 [refinery29.com] and the fraud exposed in 2015. Bernie Madoff ran a longer con than these. He supposedly turned his investment fund into a Ponzi scheme in the early 1990s though he was faking trades [reuters.com] (to do such things as obtain larger bank loans) since the 1970s. The market corrections take a few months.

          Meanwhile, let's consider the Soviet Union. It came into being in 1918, died in 1991 (73 years later), and we're still dealing with the fallout almost 35 years later (the growing Putin tyranny and the Ukrainian war). A full century of failure.

          • (Score: 3, Insightful) by JoeMerchant on Friday January 30, @04:15PM (8 children)

            by JoeMerchant (3937) on Friday January 30, @04:15PM (#1431877)

            > we're still dealing with the fallout almost 35 years later (the growing Putin tyranny and the Ukrainian war).

            Interesting pattern: VICTORY in WWI - so we penalize the losers to "teach them to never do it again" and, not long thereafter: WWII - with a significant chance of the WWI losers coming out on top.

            After WWII we helped the losers instead of punishing them - but ended up enemies with our biggest ally and fought decades of cold war - and when it was over I think we (Bush, Thatcher wasn't it?, and the rest) basically stood back, pointed and laughed at the loser, leaving their people to continue suffering, instead of helping them build an economy with major participation by the majority of the population, like we did Germany and Japan...

            --
            🌻🌻🌻🌻 [google.com]
            • (Score: 1) by khallow on Friday January 30, @04:23PM (7 children)

              by khallow (3766) Subscriber Badge on Friday January 30, @04:23PM (#1431878) Journal

              Interesting pattern: VICTORY in WWI - so we penalize the losers to "teach them to never do it again" and, not long thereafter: WWII - with a significant chance of the WWI losers coming out on top.

              Obvious market failure, amirite?

              After WWII we helped the losers instead of punishing them - but ended up enemies with our biggest ally and fought decades of cold war - and when it was over I think we (Bush, Thatcher wasn't it?, and the rest) basically stood back, pointed and laughed at the loser, leaving their people to continue suffering, instead of helping them build an economy with major participation by the majority of the population, like we did Germany and Japan...

              Clinton actually did some significant though very flawed economy building, and no "laughing" as I recall. The problem is that Russia then slid into a kleptocracy (with eager help from the West) and hasn't left that state yet.

              • (Score: 2) by JoeMerchant on Friday January 30, @04:36PM (6 children)

                by JoeMerchant (3937) on Friday January 30, @04:36PM (#1431880)

                >Clinton actually did some significant though very flawed economy building, and no "laughing" as I recall. The problem is that Russia then slid into a kleptocracy

                In broad strokes: blues don't (openly) laugh at the losers, that's the reds.

                Clinton -> kleptocracy, peas & carrots, Forrest & Jenny - corruption has no party boundaries, though the reds have moved to more open practice of it.

                I believe the west was too caught up in the .com frenzy to seriously worry about "those losers over there" and that was a major foreign aid screwup.

                --
                🌻🌻🌻🌻 [google.com]
                • (Score: 1) by khallow on Friday January 30, @04:44PM (5 children)

                  by khallow (3766) Subscriber Badge on Friday January 30, @04:44PM (#1431883) Journal

                  In broad strokes: blues don't (openly) laugh at the losers, that's the reds.

                  Indeed. No laughing as advertised.

                  I believe the west was too caught up in the .com frenzy to seriously worry about "those losers over there" and that was a major foreign aid screwup.

                  So what? There was economic assistance (and Russia wasn't as devastated as post-war Germany and Japan). At some point, we do have to recognize that Russia made its bed and now has to sleep in it.

                  • (Score: 2) by JoeMerchant on Friday January 30, @05:04PM (3 children)

                    by JoeMerchant (3937) on Friday January 30, @05:04PM (#1431888)

                    >So what? There was economic assistance (and Russia wasn't as devastated as post-war Germany and Japan).

                    As you said, it was ineffective. Just because I throw a beggar a quarter does not mean he won't be burglarizing somebody's home to get food to eat later tonight.

                    >At some point, we do have to recognize that Russia made its bed and now has to sleep in it.

                    I would say that point should come after they are disarmed of all nuclear weapons.

                    --
                    🌻🌻🌻🌻 [google.com]
                    • (Score: 2, Informative) by khallow on Friday January 30, @06:19PM (2 children)

                      by khallow (3766) Subscriber Badge on Friday January 30, @06:19PM (#1431895) Journal

                      As you said, it was ineffective. Just because I throw a beggar a quarter does not mean he won't be burglarizing somebody's home to get food to eat later tonight.

                      Russia wasn't a beggar and it wasn't a quarter [fpif.org]. That link (year 1998) indicates BTW what went wrong: "The chief beneficiary of these reforms has been a small clique of political and economic powerbrokers." The surest way to screw up aid is to give it to the wrong people. You're not throwing quarters to beggars, you're throwing billions of dollars to a clique of crime lords on the public rationalization that they'll help the beggars.

                      I would say that point should come after they are disarmed of all nuclear weapons.

                      So you're just going to take them away [wikipedia.org]? Nuclear disarmament worked in Ukraine for three reasons: Ukraine couldn't use the nuclear weapons they had (it would have required considerable resources to deploy those nukes effectively); Ukraine had a desperate need to become independent of the former USSR (Western aid and defense "assurances" provided that); and there was a balance of power between two nuclear forces (US and Russia). That balance is a large part of why they remain sovereign today. Russia can use its nukes; it wasn't as economically devastated (being the core of the USSR); and there was no nuclear-armed counterparty to keep the US honest, if Russia relinquished its nukes.

                      • (Score: 2) by JoeMerchant on Friday January 30, @07:40PM (1 child)

                        by JoeMerchant (3937) on Friday January 30, @07:40PM (#1431907)

                        >you're throwing billions of dollars to a clique of crime lords on the public rationalization that they'll help the beggars.

                        Not me, what did you expect from Slick Willie?

                        Ask Melinda Gates about the costs of administration of grants and aid, it's a significant overhead even without elements of corruption, which are inevitably present (and cost money to guard against) when hundreds of millions of dollars are on the move.

                        If Russia is politically stable, they can keep control of their WMD. I'm calling out lack of oversight through the post fall of Berlin Wall period. Instead of immediately collecting the peace dividend like we did, it should have been redirected into promoting long term stability.

                        --
                        🌻🌻🌻🌻 [google.com]
                        • (Score: 1) by khallow on Friday January 30, @11:47PM

                          by khallow (3766) Subscriber Badge on Friday January 30, @11:47PM (#1431933) Journal

                          Not me, what did you expect from Slick Willie?

                          I expect narratives that reflect reality.

                  • (Score: 1) by pTamok on Friday January 30, @07:30PM

                    by pTamok (3042) on Friday January 30, @07:30PM (#1431904)

                    At some point, we do have to recognize that Russia made its bed and now has to sleep in it.

                    If we see that somebody is bad at making beds, it might be better to incentivize them to learn how to make better beds. Although I don't see what could have been done better for Russia, given that sovereignty means you can't just go and take over government for a while. Centuries of experience with colonialism, foreign-backed coups, and invasions show that that tends to produce unstable institutions.

                    The problems come when countries export their toxic national politics internationally - leaving Russia to fester in a quagmire of its own making just tends to export the quagmire. I don't know what kind of magic external engagement can transform countries into well-governed economically-successful entities. The Marshall Plan after WWII seemed to work, but later opinion/analysis of it seems rather mixed, leaving what? Some post-Soviet Union countries have done well, but what could or should have been done that could have given Russia a future better than it is now? I have no idea. Compare and contrast Poland and Byelorussia; and Slovenia and Albania.

          • (Score: 4, Interesting) by Thexalon on Friday January 30, @09:18PM (3 children)

            by Thexalon (636) on Friday January 30, @09:18PM (#1431915)

            Meanwhile, let's consider the Soviet Union. It came into being in 1918, died in 1991 (73 years later), and we're still dealing with the fallout almost 35 years later (the growing Putin tyranny and the Ukrainian war). A full century of failure.

            Kinda sorta.

            You have to contrast where they started from with where they ended. And where they started from was absolutely abysmal: A large percentage of the population were living the life of medieval peasants, with not quite enough land to survive on. The Russian Czarist government had through complete and utter incompetence managed to be the first major power wrecked in The Great War. Infrastructure of any kind was pretty minimal. Basically, the country that Lenin et al took over was not all that radically different from the country that Napoleon had invaded a century earlier.

            The apartment blocks? That was a major upgrade to the standards of living for the people who lived in them. They were popular.
            The industrialization? That was a big improvement too.
            The cars and trains and roads to drive them on? Sure, they looked quaint by Western standards, but were brilliant by the standards of someone who was used to moving at horse speeds on a good day.
            Militarily, they had to deal with that whole Nazi problem. And the fact is they were more responsible for the defeat of the Nazis than anybody else, by a wide margin.
            The scientific efforts? Self-serving? Absolutely. Propaganda? Often. But for a Russian to go from growing rye and potatoes using hand tools to working on Earth-orbiting rockets in the space of 30-40 years was pretty ridiculous.

            Greatest country ever? Heck no. The atrocities were horrendous. The political shenanigans were awful. But the average Russian probably did significantly better than, say, the average North Korean or Cambodian under the Khmer Rouge. Which is why the USSR had the level of true believers it had for a long time, and also why there are lots of Russians today who are nostalgic for it. I think there's a good argument that the more liberal Soviet leaders were probably easier to live under than Vladimir Putin is.

            --
            "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
            • (Score: 1) by khallow on Friday January 30, @10:56PM (2 children)

              by khallow (3766) Subscriber Badge on Friday January 30, @10:56PM (#1431922) Journal

              But the average Russian probably did significantly better than, say, the average North Korean or Cambodian under the Khmer Rouge.

              That's an exceptionally low bar. And there was a period of time between the end of the Czar and the start of the USSR where a democracy was trying to take root. I think that would have been better.

              • (Score: 1, Interesting) by Anonymous Coward on Friday January 30, @11:22PM (1 child)

                by Anonymous Coward on Friday January 30, @11:22PM (#1431930)

                But the average Russian probably did significantly better than, say, the average North Korean or Cambodian under the Khmer Rouge.

                That's an exceptionally low bar.

                It might be a low bar, but every study out there shows that what matters is the direction of change, not the absolute level of a society. Going from medieval peasant to 1800s basic industry was a big improvement. Maintaining that positivity and direction of change would have been easier and resulted in a much better world than what we have achieved by successfully putting the USSR down.

                • (Score: 1) by khallow on Friday January 30, @11:49PM

                  by khallow (3766) Subscriber Badge on Friday January 30, @11:49PM (#1431934) Journal

                  It might be a low bar, but every study out there shows that what matters is the direction of change, not the absolute level of a society.

                  Do any of these "every" studies exist? And if they do, do they get the sign of the change right?

    • (Score: 2) by aafcac on Friday January 30, @03:14PM (1 child)

      by aafcac (17646) on Friday January 30, @03:14PM (#1431871)

      They also may have done the job at some point in the distant past before a bunch of bullshit got added to the job so that somebody could have something for the resume and it adds up over time.. I remember a retail job I had had stuff being added quickly enough that in the years I did the job I was never able to do my entire job because they were adding new tasks that quickly and everything was top priority with nothing being removed.

      • (Score: 3, Touché) by JoeMerchant on Friday January 30, @03:59PM

        by JoeMerchant (3937) on Friday January 30, @03:59PM (#1431874)

        > stuff being added quickly enough that in the years I did the job I was never able to do my entire job

        My first boss was a real piece of work. This wasn't just between him and me, he really tried for the underhanded back stab whenever he could get one in whether it was competitors in the market, employees he hired, students he taught...

        Anyway, a few months after hiring me it became apparent that the company (mostly him) hired too many people and needed to cut back. Instead of manning up and potentially paying a few bucks to unemployment for firing me, he started putting me on a rug-pull treadmill: "Here, work on this..." then when I'd show him some progress, he'd wait until I was about 80-90% done and "O.K. - put that on the shelf, we have a higher priority now, work on this..." He continued this bullshit for 6 weeks or so, then called me in for a "performance review" with a "witness" who spoke such broken english that most of us understood him less than half the time. His major complaint in the review "never finishes projects..." It seemed a total setup to fire me "with cause" - for whatever that was worth to him. The other shoe never fell. A couple of weeks later I was given a (nicer than his) office on the other side of the building, out of "his area." I started reporting directly to his boss in practice. A couple of years later I was given his job title and he got "a lateral move." Took him about another year to "move on to other opportunities."

        --
        🌻🌻🌻🌻 [google.com]
    • (Score: 2) by JoeMerchant on Friday January 30, @03:50PM (2 children)

      by JoeMerchant (3937) on Friday January 30, @03:50PM (#1431873)

      >they can be fairly confident nobody will discover that the machine is wrong until after they've moved on to do something else and thus will experience zero consequences for the ensuing disaster.

      I haven't seen any AI screwups that lay hidden for even 3 months, they're usually visible the first time anybody looks at them. And that's a path forward for "productive" AI - procedural development that tests for screwups and iterates until they're worked out - just like we've been doing for humans, forever, but more structurally procedurally under ISO9000 and friends for 30+ years.

      --
      🌻🌻🌻🌻 [google.com]
      • (Score: 2) by Thexalon on Friday January 30, @08:57PM (1 child)

        by Thexalon (636) on Friday January 30, @08:57PM (#1431914)

        I haven't seen any AI screwups that lay hidden for even 3 months, they're usually visible the first time anybody looks at them.

        I would guess it's more "the first time anybody who knows what they're doing looks at them". Like, for AI code output, a developer or QA analyst can tell that it doesn't work, or a good admin can tell that it's ridiculously inefficient in its memory usage.

        However, their target audience is other executives who also don't know what they're doing. So just hold off testing your code with any kind of rigor for long enough, and you can let stuff remain hidden.

        The other kind of thing that's very easy to have lurking is something that's technically correct (the best kind of correct!) in that it runs and finishes and produces some kind of output that bears some kind of relationship with the input, but the relationship between the inputs and outputs is wrong. And that's really hard to test for, because probabilities and stuff like that can get in the way of generating the sorts of scenarios that trigger the bad behavior.

        --
        "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
        • (Score: 2) by JoeMerchant on Friday January 30, @10:16PM

          by JoeMerchant (3937) on Friday January 30, @10:16PM (#1431919)

          >their target audience is other executives who also don't know what they're doing.

          I don't really see any flak out of Anthropic or similar places saying "CEOs: fire all your developers, you can do this yourself!" what I do see are developers advocating "developers: think more like a CEO, delegating specific tasks to different AI agents and have them work as a team on the problem."

          >the relationship between the inputs and outputs is wrong. And that's really hard to test for

          This is where 'expert' code review comes in. If you have a spec with examples like: output = input squared, e.g. 1 -> 1, 2 -> 4, 3 ->9, ... 7 -> 49. you should see some math in the code doing the implementation, not a lookup table for the example cases. AI doesn't always take such shortcuts, but it does once in a while...

          --
          🌻🌻🌻🌻 [google.com]
    • (Score: 4, Insightful) by krishnoid on Friday January 30, @07:02PM

      by krishnoid (1156) on Friday January 30, @07:02PM (#1431902)

      So it's easy for a not-terribly-smart AI to fool them into thinking that what they're producing is the same thing as what the expert who has been doing it for 40 years produces.

      And even when it can, "We've got some 40-year software guy who spends very little time coding, but mostly talks to other coders and does code reviews. He's quiet in a lot of meetings until he says something out of left field describing why we shouldn't do something. Sure, that sometimes turns out to be exactly what happens a quarter later, but what is this person producing by the *numbers* on a regular basis? We lay off people like that all the time. I mean, I'm never staying at a company long enough to see the fallout from that, but how useful is he?"

    • (Score: 0) by Anonymous Coward on Saturday January 31, @03:38AM

      by Anonymous Coward on Saturday January 31, @03:38AM (#1431949)
      You can see from some of the results.

      Lots of people notice when the Windows 11 updates are crap.

      If those are not produced with the help of AI then Microsoft should stop shoving AI onto everyone else.

      If those are produced with the help of AI then obviously AI isn't good enough for such stuff and Microsoft should stop shoving AI onto everyone else.
  • (Score: 2) by VLM on Friday January 30, @12:51PM (2 children)

    by VLM (445) Subscriber Badge on Friday January 30, @12:51PM (#1431857)

    My experience at big companies most likely to have eyes gloss over when demanding a silver bullet to fix everything, they're also simultaneously the LEAST likely type of company to encourage

    employees are encouraged to experiment and build their own solutions

    So they're gonna end up frustrated, unproductive, and unhappy.

    • (Score: 5, Interesting) by JoeMerchant on Friday January 30, @04:50PM (1 child)

      by JoeMerchant (3937) on Friday January 30, @04:50PM (#1431884)

      employees are encouraged to experiment and build their own solutions

      So they're gonna end up frustrated, unproductive, and unhappy.

      Not always. We don't do a lot of log file analysis, so our log file analysis tools are rather basic - mostly opening the files in a text editor and visually scanning (which is how the log files are designed to be used...)

      However, we do get the occasional question, accompanied by a folder full of dozens to hundreds of files ranging from 10K to 100K of text each... so, when that happens, it would be nice to have some tools to help analyze the log files.

      WRONG APPROACH: feed AI agent the e-mail asking for analysis of the log files plus the folder full of files.

      RIGHT APPROACH: analyze the problem, determine how best to filter the given data, ask AI to write a small parser / filter app that shows just those parts of the data that are relevant to the question du-jour.

      AI writes that kind of parser (stupid stuff like: convert this time in YYYY-MM-DD-HH:mm:SS.mmm format into Unix time in milliseconds... only pass lines that are within 10 seconds after a line that contains this string...) in no-time, and after two or three iterations it's working like you want - not only a 10x speedup over hand coding, also avoids having to lookup all the drudge details about various library functions that do the things you need.

      Having lowered the "barrier to entry" of parser/filters for the log data, now you can review the 1% most relevant content by hand using the AI written parser/filter output, instead of slogging through maybe 5% of the unfiltered content and drawing conclusions based on the first few instances of interesting events you see.

      The AI helps the humans do a better job, and faster than if they just did it all by hand.

      On the other hand, we have a team in India that has been working on "improving our logging capabilities for the next generation product" for over 2 man-years now. They're almost up-to-par with our existing logging capabilities, but yay! they did it in Rust!!!

      --
      🌻🌻🌻🌻 [google.com]
      • (Score: 3, Interesting) by krishnoid on Friday January 30, @08:12PM

        by krishnoid (1156) on Friday January 30, @08:12PM (#1431911)

        Those "drudge details" can be so drudgey, I even feel bad when delegating those sorts of things to a coding AI. They're capable of writing working code that gets those nitty-gritty things right, so I have to remind myself that they don't feel bad [youtu.be] when I request date format conversions or log extractions instead of implementing recently-discovered algorithms, improving data visualization, or even fixing bugs on obsolete platforms.

        On a related note [soylentnews.org], here's a detailed response [google.com] (using "Deep Research") to that request, with a link to the generated code [google.com] itself. I haven't tested it, but ... it probably works?

  • (Score: 4, Interesting) by VLM on Friday January 30, @01:03PM (2 children)

    by VLM (445) Subscriber Badge on Friday January 30, @01:03PM (#1431860)

    This might be similar to the "Open Office". Not the software, the floor plan architecture.

    Some folks REALLY fetishize a humiliation ritual of forcing people to be nonproductive while simultaneously demanding they be productive, sort of a "jump thru flaming hoops for my amusement".

    I can see how people in a W-2 job might resent being told they can't do their jobs anymore, they have to ask the AI to do it for them, the AI will screw it up enough resulting in minimal labor gain or improvement, but whats REALLY important is showing who's boss by forcing interference in formerly successful work processes. With a side dish of "we will fire you all once AI stops screwing up" which is believed with religious fervor will happen in the next model revision which never seems to happen.

    Compare to the open office, "we want you to concentrate and think really hard so here's a loud and disruptive adult daycare for you to sit in the middle of" also "we're family at $workplace and nothing says family like sterile unfriendly and unhappy sardine-like conditions"

    I don't have a dog in the fight either way. Probably the best way for the feudal employees/serfs to fight back is passively. Lower IQ "number goes up" leaders demand increasing AI metrics, its not a problem to provide prompts like "Repeat after me... printf("Hello World");" and if treated like grammar checker or linter, sometimes it'll even find stuff. Probably not often IRL. But, AI won't lower productivity much while also making the meaningless metrics go up. I think you'll see this strategy grow in the future.

    • (Score: 2) by JoeMerchant on Friday January 30, @02:42PM

      by JoeMerchant (3937) on Friday January 30, @02:42PM (#1431868)

      > resent being told they can't do their jobs anymore, they have to ask the AI to do it for them, the AI will screw it up enough resulting in minimal labor gain or improvement, but whats REALLY important is showing who's boss by forcing interference in formerly successful work processes.

      The administrative golden carrot I see here is: proceduralization and increased tracking / visibility of process execution.

      Right now you've got all those unruly cube trolls out there just writing e-mails free-text from their heads, that's such a pain to trawl through and analyze (though AI is making it somewhat easier to get a (mildly flawed) overview...)

      If you train the trolls to use the tool to write the drafts for them, the tool can give better visibility into procedural compliance, flag "off script" trolls faster, keep everybody more focused on the company mission!

      It's early days, but by forcing the trolls to use AI for ALL THE THINGS, they're identifying the things that AI produces value in faster... not that any of this is for the benefit of the cube farmed trolls.

      --
      🌻🌻🌻🌻 [google.com]
    • (Score: 3, Insightful) by JoeMerchant on Friday January 30, @02:45PM

      by JoeMerchant (3937) on Friday January 30, @02:45PM (#1431869)

      Try this prompt in Claude Opus 4.5:

      "Review this code for technical debt, report" followed by the code, or preferably in a tool like Claude Code or Cursor which has access to the whole folder and git history... it does some impressive stuff trawling through the commit history to find how things evolved... not a hard thing to do, but undeniably tedious.

      --
      🌻🌻🌻🌻 [google.com]
  • (Score: 4, Informative) by turgid on Friday January 30, @01:10PM (5 children)

    by turgid (4318) Subscriber Badge on Friday January 30, @01:10PM (#1431861) Journal

    Majority of CEOs report zero payoff from AI splurge [theregister.com]

    More than half of CEOs report seeing neither increased revenue nor decreased costs from AI, despite massive investments in the technology, according to a PwC survey of 4,454 business leaders.

    AI hasn't delivered the profits it was hyped for, says Deloitte [theregister.com]

    According to Deloitte's "State of AI in the Enterprise" report [PDF], 74 percent of organizations want their AI initiatives to grow revenue, but only 20 percent have seen that happen.

    More popcorn required.

    • (Score: 4, Informative) by HiThere on Friday January 30, @02:24PM (3 children)

      by HiThere (866) on Friday January 30, @02:24PM (#1431866) Journal

      But "74 percent of organizations want their AI initiatives to grow revenue, but only 20 percent have seen that happen" is what one should expect with a radically new technology. I think the early adoption of computers was worse than that. But it's the successes that will endure and be remembered.

      OTOH, AIs are currently changing so fast that current approaches are almost certainly only temporary. So don't tie your company or career to any of them.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Interesting) by JoeMerchant on Friday January 30, @04:55PM (2 children)

        by JoeMerchant (3937) on Friday January 30, @04:55PM (#1431885)

        >AIs are currently changing so fast that current approaches are almost certainly only temporary

        Indeed, "techniques" I was developing in Nov2025 to improve output usefulness (pretty obvious stuff that the current models just weren't doing) were somewhat built into the next gen models released in Dec2025.

        Still, I think the early system experience is good. Later systems are likely to fix up many of the complaints we have today, but many of those fixes are going to be superficial to some degree - understanding how to deal with the underlying problems (limited effective context window capacity being the biggest IMO) is going to be a skill with durable value.

        --
        🌻🌻🌻🌻 [google.com]
        • (Score: 1) by pTamok on Friday January 30, @08:50PM (1 child)

          by pTamok (3042) on Friday January 30, @08:50PM (#1431913)

          Strikes me that the limited effective context window is one of the things that distinguishes between 'AI' and human intelligence. Human context windows are indefinitely long, but variable in accuracy. Similarly, humans can, and do, update their training corpus and resultant model in near-real time (for some things), and can immediately benefit from that, whereas current methods for 'AI' don't have that to the same extent. Currently re-training is slow and expensive. It is too for humans (retraining a motor mechanic into an expert in Egyptian antiquities would be long, difficult, and expensive), but motor mechanics will learn the idiosyncrasies of a new car very quickly.

          'AI's that can continuously update their models with low overhead and also have context windows as large as necessary would be a great advance in their utility. You would still need to check their output, though.

          • (Score: 3, Informative) by JoeMerchant on Friday January 30, @10:06PM

            by JoeMerchant (3937) on Friday January 30, @10:06PM (#1431917)

            > Human context windows are indefinitely long, but variable in accuracy.

            Also prone to large and important "memory holes" - and AI is mimicking human behavior around those holes: when you don't know, fake it.

            > current methods for 'AI' don't have that to the same extent.

            Others may disagree, but I feel that the "context window" is that opportunity for retraining: 200,000 tokens, 200 tokens for the problem statement, and 199,000 tokens for the specific instructions about how to properly implement the solution.

            > motor mechanics will learn the idiosyncrasies of a new car very quickly.

            It's kind of the same for AI - except how it's different. RAG - research augmenteted generation, enables models to do things like reference the tech manual for the particular car you're working on at the moment...

            > and also have context windows as large as necessary

            I don't understand the why, I don't know if the people working in the field understand the why, but the what of it is: you can provide all the resources for larger context windows - that doesn't really make them perform better. Back with Sonnet 4.5 (200K tokens) and Opus 4.1 (1M tokens) Opus 4.1 wasn't significantly better than Sonnet 4.5 at a given task. Opus 4.1 tended more to "color outside the lines" cooking up custom solutions to problems whereas Sonnet 4.5 was more inclined to use the standard blocks out of the toolbox and stick them together for the solution... but Opus 4.1 would sometimes start hallucinating harder and faster than Sonnet 4.5... I notice with Opus 4.5 they've moved back to a 200K context window, which makes it faster and cheaper to use - on par with Sonnet 4.5, but a bit better at programming all around...

            > You would still need to check their output, though.

            Same goes for people, regulated industries (safety concerns) require documented validation testing - and more...

            --
            🌻🌻🌻🌻 [google.com]
    • (Score: 3, Interesting) by Anonymous Coward on Friday January 30, @06:36PM

      by Anonymous Coward on Friday January 30, @06:36PM (#1431898)

      So, RMS is right on the mark (again). It's not AI, it's PI [Pretend Intelligence].

  • (Score: 2) by jb on Saturday January 31, @07:07AM

    by jb (338) on Saturday January 31, @07:07AM (#1431956)

    Leaders think their AI deployments are succeeding

    ...but...

    Executives we surveyed overwhelmingly said...

    Don't confuse leaders with executives. The vast majority of leaders never hold executive roles; and the vast majority of executives couldn't lead their way out of a wet paper bag (at least today; there's some evidence to suggest that things were not always quite so bad).

    See the thing is, a good leader does not lie; whereas many (not all, but probably most) executives do so most of the time ... and claiming any sort of nett benefit from LLMs (which is what TFA clearly means by "AI", even though we know that LLMs are not AI at all) is clearly a lie.

(1)