
posted by hubie on Friday August 16 2024, @06:11AM   Printer-friendly

A recent study has found that 94% of spreadsheets used in business decision-making contain errors, posing serious risks of financial loss and operational mistakes. This finding highlights the need for better quality-assurance practices.

The study, led by Prof. Pak-Lok Poon in collaboration with Central Queensland University, Swinburne University of Technology, City University of Hong Kong, and The Royal Victorian Eye and Ear Hospital, shows that most spreadsheets used in important business applications have errors that can affect decision-making processes. "The high rate of errors in these spreadsheets is concerning," says Prof. Poon.

Errors in spreadsheets can lead to poor decisions, resulting in financial losses, pricing mistakes, and operational problems in fields like health care and nuclear operations. "These mistakes can cause major issues in various sectors," adds Prof. Poon.

More information: Pak-Lok Poon et al, Spreadsheet quality assurance: a literature review, Frontiers of Computer Science (2024). DOI: 10.1007/s11704-023-2384-6

PHYS.ORG

[Also Covered By]: Newswise


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2, Funny) by Anonymous Coward on Friday August 16 2024, @07:21AM (6 children)

    by Anonymous Coward on Friday August 16 2024, @07:21AM (#1368794)

    30 years later finds the error in the spreadsheet - not gonna give that Nobel Prize in economics back.

    • (Score: 3, Informative) by RamiK on Friday August 16 2024, @08:59AM (5 children)

      by RamiK (1813) on Friday August 16 2024, @08:59AM (#1368802)
      • (Score: 2, Interesting) by khallow on Friday August 16 2024, @11:48PM (4 children)

        by khallow (3766) Subscriber Badge on Friday August 16 2024, @11:48PM (#1368931) Journal
        I'd go with Nordhaus against those other guys. I doubt he mispriced emissions much, if at all.

        And the elephant in the room is the huge growth in human prosperity from those emissions. So a 1 C rise from CO2 might, as claimed above, result in a low-double-digit decline in global GDP, but global GDP would have doubled or more in the process - improving the lives of billions of people despite whatever environmental harm, including climate change, has occurred.

        Let's actually look at this in detail. Global GDP adjusted for inflation grew from roughly $10 trillion in 1950 to $135 trillion in 2021 (the last year on the graph). That corresponds to a growth rate of roughly 3.7% per year. CO2 emissions increase at roughly 1% per year - let's suppose that is exponential too. If climate sensitivity is the advertised 3 C per doubling, then to get to 1 C, one would have to increase CO2 by about 26% (1.26^3 ~ 2). That takes about 23 years at the above exponential rate for emissions (at 1% per year). So what did GDP, growing at 3.7% per year, do? Increase by a factor of about 2.3, minus the alleged decline. The decline would have to be on the order of 50+% for a 1 C increase in order to stay still GDP-wise - which it's not.
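        [Ed: the back-of-the-envelope arithmetic above can be checked in a few lines. This is only a sketch of the commenter's own stated inputs (GDP $10T to $135T over 1950-2021, 1%/yr emissions growth, 3 C per doubling), not an endorsement of the argument.]

        ```python
        import math

        # Commenter's inputs: global GDP ~$10T (1950) to ~$135T (2021),
        # CO2 emissions growing ~1%/yr, climate sensitivity 3 C per doubling.
        gdp_rate = (135 / 10) ** (1 / (2021 - 1950)) - 1     # implied GDP growth rate
        co2_factor = 2 ** (1 / 3)                            # ~1.26x CO2 gives +1 C at 3 C/doubling
        years_to_1c = math.log(co2_factor) / math.log(1.01)  # years at 1%/yr emissions growth
        gdp_factor = (1 + gdp_rate) ** years_to_1c           # GDP multiple over that span

        print(f"GDP growth rate: {gdp_rate:.1%}")   # ~3.7% per year
        print(f"years to +1 C:   {years_to_1c:.0f}")  # ~23 years
        print(f"GDP multiple:    {gdp_factor:.2f}")   # ~2.3x
        ```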

        This is a particularly remarkable dysfunction when you consider one of the big claims of the research of the first link:

        The conclusion they reach when analyzing this dataset is that each extra ton of CO2 emitted into the atmosphere carries with it a real cost to civilization of $1,056 per metric ton. They also contend that were the earth not to have already experienced fifty years of warming, our society would have been 37% more well off than it is today.

        This is part of the scam of climate change. In the real world, an almost complete disinterest in climate change has coincided with the greatest improvement of the human condition ever. While climate change mitigation beyond the level of planting trees and land conservation has been a dumpster fire [soylentnews.org]. What will you believe, our sweet computer models or your lying eyes?

        It's interesting how convenient climate change research is. When there's a serious problem with it, rather than adapting the science to the problem, like science does in other fields, they adapt reality to the science. Somehow the conclusions never change no matter how bad the research is: be it the hockey stick, extreme weather, the endless litany of things for which climate change is a vague factor (such as four ways [soylentnews.org] climate change makes you fat), or the scary economics (which completely ignore factors like the cheapness of adaptation).

        • (Score: 2) by RamiK on Saturday August 17 2024, @03:51PM (3 children)

          by RamiK (1813) on Saturday August 17 2024, @03:51PM (#1368998)

          I doubt he mispriced emissions much, if at all.

          His model was rerun using up-to-date datasets in 2020 and was found to be off by 1.5 C: https://en.wikipedia.org/wiki/DICE_model#2020_rework [wikipedia.org]

          To be clear, that's using his own methods and the same sources, only with updated figures for the ~10-year span. Conversely, the paper discussed in the Forbes article corrects the various selective data biases (being the four points covered in the introductory...).

          And the elephant in the room is the huge growth in human prosperity from those emissions...

          It's all done under the "social cost of carbon" assessments which are at the heart of the climate-economics papers. And this paper in particular covers roughly 120 years of GDP and emissions figures to specifically price the "cons and pros" as it were.

          CO2 emissions increase at roughly 1% per year - let's suppose that is exponential too...

          There's no need to guesstimate anything. Atmospheric CO2 is trivial to measure using ice core samples: https://climate.mit.edu/ask-mit/how-do-we-know-how-much-co2-was-atmosphere-hundreds-years-ago [mit.edu]

          It looks roughly like this: https://www.climate.gov/news-features/understanding-climate/climate-change-atmospheric-carbon-dioxide [climate.gov]

          If climate sensitivity is the advertised 3 C per doubling...

          We've had this discussion before but it comes down to delayed response and it was expected to be a delayed response from day 1: https://en.wikipedia.org/wiki/Climate_sensitivity#Equilibrium_climate_sensitivity [wikipedia.org]

          Btw, this paper isn't from my usual non-orthodox economics and/or "fringe" climate science picks. It's a Northwestern-Harvard neoclassical economics dept. paper from the free-markets people. The economists and climate scientists I follow would add a couple of zeros on top of this ~$1000 SCC figure since they predict human extinction / global nuclear war and winter.

          --
          compiling...
          • (Score: 1) by khallow on Saturday August 17 2024, @11:16PM (2 children)

            by khallow (3766) Subscriber Badge on Saturday August 17 2024, @11:16PM (#1369050) Journal

            His model was rerun using up-to-date datasets in 2020 and was found to be off by 1.5c:

            But was it actually off? The rerun was using data that is frankly garbage. Consider in particular:

            The PIK team employed current understandings of the climate system and more modern social discount rates.

            I doubt "current understandings" of the above is better than Nordhaus's. "Social discount rates" loudly signals garbage. It's just discount rates applied to valuation of social projects. The missing part is to realize that most social projects and policies are negative value in the first place and thus, going to be massively overvalued in any sort of economic projection that treats them at an ideological valuation rather than a real world valuation.

            It's all done under the "social cost of carbon" assessments which are at the heart of the climate-economics papers. And this paper in particular covers roughly 120 years of GDP and emissions figures to specifically price the "cons and pros" as it were.

            We don't need models to analyze the "cons and pros". We can just look at 120 years of history to see that a) we have massive growth and benefit to humanity from fossil fuel use over the past 120 years, and b) very little in the way of climate change impact. It is just barely visible, as a matter of fact. Huge benefit versus slight impact. There is an assertion here that this will somehow change rapidly as we pass some minor thresholds in the near future, but no evidence to support those assertions.

            There's no need to guesstimate anything.

            My argument gained or lost nothing by that observation. I'd rather you focus on arguments that actually challenge mine.

            If climate sensitivity is the advertised 3 C per doubling...

            We've had this discussion before but it comes down to delayed response and it was expected to be a delayed response from day 1: https://en.wikipedia.org/wiki/Climate_sensitivity#Equilibrium_climate_sensitivity [wikipedia.org]

            Btw, this paper isn't from my usual non-orthodox economics and/or "fringe" climate science picks. It's a Northwestern-Harvard neoclassical economics dept. paper from the free-markets people. The economists and climate scientists I follow would add a couple of zeros on top of this ~$1000 SCC figure since they predict human extinction / global nuclear war and winter.

            Way to argue from authority. Remember if the response is delayed enough then it doesn't actually happen. That's a real problem with these projections since the alleged response beyond about 2 C per doubling has yet to happen! And sounds like you might need to find a better class of economists and climate scientists. After all, predicting human extinction/global nuclear war/winter from zero evidence is not very sciency.

            • (Score: 3, Interesting) by RamiK on Sunday August 18 2024, @07:57AM (1 child)

              by RamiK (1813) on Sunday August 18 2024, @07:57AM (#1369097)

              I doubt "current understandings" of the above is better than Nordhaus's.

              Current climate models get a better p score than previous ones against the same datasets, whether updated or not. The economic papers that use climate models similarly get better p scores against past and existing datasets.

              In general, keep in mind that climate models are used by civil engineers and actuaries in the real world for real physical decisions that have in-our-lifetimes consequences (how to build a port... where to build roads... size of water reservoirs and conduits... valuations...) so the most predictive ones are the ones that survive.

              It's why orthodox economic models are such a mess: They make predictions about growth, and when they fail they say that "there were external factors" like forms of scarcity or conflict, as if there aren't any tools in their field to evaluate them.

              Anyhow, as the paper's intro covers in those 4 points, Nordhaus' model is particularly problematic for picking and choosing arbitrary data points from existing climate datasets and models while also scoring poorly in predictability.

              We don't need models to analyze the "cons and pros". We can just look at 120 years of history...

              The paper's model is the product of them looking at 120 years of history, finding the data points and equations that (more) successfully predict actual real world results (both climate and economic) and then applying those equations to future predictions. You closing your eyes to the data and methods of their analysis doesn't change this.

              My argument gained or lost nothing by that observation. I'd rather you focus on arguments that actually challenge mine...Way to argue from authority.

              You made those same arguments in our last discussion and I replied to you by pointing out the first 19th century papers on climate change already addressed what's wrong with your assertions. Your reply was to reject 100+ years of climate science and you're keeping up with your wilful ignorance in this post so I'm choosing not to respond directly but instead point out to how there's both scientific and economic consensus around my claims - which in our last discussion were only in partial consensus for a few of my claims.

              That's what my "fringe" quotation marks were about: Just a few months ago some (minority) of my stance was based on out-of-consensus climate change and economics papers. Now, however, it's firmly within the neoclassical economics mainstream and almost (3/4? Maybe 4/5 of the climate issues I can think of that the paper mentioned...) within the climate change models.

              Anyhow, this thread started with me naming Nordhaus as an example of an Economics Nobel Prize winner that was awarded for making a prediction that turned out to be false within a very short time based on methodologies that were rejected on both their scientific validity and predictive outcomes.

              --
              compiling...
              • (Score: 1) by khallow on Sunday August 18 2024, @09:20AM

                by khallow (3766) Subscriber Badge on Sunday August 18 2024, @09:20AM (#1369106) Journal

                Current climate models get a better p score than previous ones against the same datasets whether updated or not. The economic papers that use climate models similarly get a better p scores against past and existing datasets.

                So they p-hack better? You do realize the problems with extrapolation from a model with a huge number of parameters, right? They can easily get great p scores and still generate whatever outcome you want.

                In general, keep in mind that climate models are used by civil engineers and actuaries in the real world for real physical decisions that have in-our-lifetimes consequences (how to build a port... where to build roads... size of water reservoirs and conduits... valuations...) so the most predictive ones are the ones that survive.

                In other words, it'll be interesting to see what the failure modes are for engineering reliant on models that have been around for less time than the basic time scale of climate. 30 years right?

                The paper's model is the product of them looking at 120 years of history, finding the data points and equations that (more) successfully predict actual real world results (both climate and economic) and then applying those equations to future predictions. You closing your eyes to the data and methods of their analysis doesn't change this.

                So is my observation. And I don't p-hack.

                You made those same arguments in our last discussion and I replied to you by pointing out the first 19th century papers on climate change already addressed what's wrong with your assertions. Your reply was to reject 100+ years of climate science and you're keeping up with your wilful ignorance in this post so I'm choosing not to respond directly but instead point out to how there's both scientific and economic consensus around my claims - which in our last discussion were only in partial consensus for a few of my claims.

                Since the argument still works well, there's no point to changing it. No one has actually shown a significant effect, much less a significant cost from climate change to date. Certainly, not the existential threats you bandied about earlier.

                The models get more complex, the heavy breathing bullshit gets thicker. Yet the real world just isn't keeping up. My take is that we're a decade or two away from needing a complete rebuild of climatology. Perhaps even anything dependent on it like that engineering you spoke of. We don't need models that merely fit data well - with the reliability of the data being just as poorly understood as the models. We need accurate, understandable models of climate and the harm caused by said climate change. Hopefully we'll get that sooner rather than much later.

                As to the alleged pointing out of 19th century papers, keep in mind that those papers predict half or less the long term warming of modern models. You're glossing over a lot of backloaded, unproven, feedback effects of modern climate models.

                My take is that modern climatology is an evolving example of the failings of Eisenhower's "scientific-technological elite" [soylentnews.org]. We have a vicious cycle of science chasing the funding combined with massive public funding of climate change-related stuff justified by that biased science. I think it's an enormous danger that the only people who can contest the minutia of this research are all funded by the same corrupted sources. In particular, it remains telling that we get rhetorical rebuttals of Nordhaus's work rather than actual engagement, and that these rebuttals conveniently arrive at the same hysterical conclusions.

                Anyhow, this thread started with me naming Nordhaus as an example of an Economics Nobel Prize winner that was awarded for making a prediction that turned out to be false within a very short time based on methodologies that were rejected on both their scientific validity and predictive outcomes.

                And ended with you not actually showing that the prediction or the methodologies were false or invalid. What makes your citations more relevant than citing Nordhaus? It's a religious assertion with rituals of citation and rebuttal that have nothing to do with reason or science.

                What I can say is that the real world doesn't show the alleged harm of climate change to this date. But the study you linked claims we should be seeing large GDP hits from climate change right now. Well, where are they?

  • (Score: 3, Insightful) by PiMuNu on Friday August 16 2024, @07:27AM (15 children)

    by PiMuNu (3823) on Friday August 16 2024, @07:27AM (#1368795)

    Note that it's not just finance that uses spreadsheets. A shocking number of people use spreadsheets for analysis of technical data.

    • (Score: 3, Interesting) by gawdonblue on Friday August 16 2024, @07:41AM (14 children)

      by gawdonblue (412) on Friday August 16 2024, @07:41AM (#1368796)

      Engineers love Excel. It can bring in lots of data and create good reports. The problem is that they also create complex workflows based on it.
      And then they panic-call IT to fix it when it inevitably goes FUBAR.
      We've finally told them that since they love Excel so much, they can support that POS themselves. If they want proper applications (and reports) then we'll provide and support those.

      • (Score: 5, Interesting) by Snospar on Friday August 16 2024, @09:52AM (7 children)

        by Snospar (5366) Subscriber Badge on Friday August 16 2024, @09:52AM (#1368804)

        No, Engineers fucking hate Excel. The problem is our IT colleagues refuse to provide us with any proper tools to do any sort of data analysis and reporting. Excel doesn't scale, is slow with large data, still mangles CSV when it decides a field might be a date, relies on VBA for scripting/automation... the list goes on. Even some of its charting capabilities are dubious but something is better than nothing and anything in that regard is better than Python + matplotlib (IMHO).
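
        [Ed: the CSV date-mangling complaint is about silent type inference; tools that don't guess at types simply don't have the problem. A minimal stdlib Python sketch, with gene-style names as the classic victim:]

        ```python
        import csv
        import io

        # Excel's CSV import guesses field types, so gene names like "MARCH1"
        # famously become dates. Python's csv module makes no such guesses --
        # every field stays exactly the string that was written.
        data = io.StringIO("gene,count\nMARCH1,12\nSEPT2,7\n")
        rows = list(csv.DictReader(data))

        print([r["gene"] for r in rows])  # ['MARCH1', 'SEPT2'] -- untouched
        ```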

        --
        Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
        • (Score: 3, Insightful) by looorg on Friday August 16 2024, @11:04AM (6 children)

          by looorg (578) on Friday August 16 2024, @11:04AM (#1368810)

          I actually find Excel to be somewhat snappy with large data sets compared to a lot of the other "real" statistical packages. The only thing snappier is writing a little script in perl or some such to do cleaning tasks. But they all now try to emulate or look like Excel, and they do a worse job of it. In some regard the two, Excel and the statistical packages (SAS, SPSS, Minitab, etc.), are starting to overlap and merge. Excel is getting more graphs, plots and formulas, and the packages are trying to be more like Excel in how you manage the data. Perhaps not merge exactly; more like Excel is trying to become like the packages and do all the things they do while also doing what it does.

          But yes, Excel has a lot of disadvantages in that regard. The import function leaves something to be desired. You have to turn off that crap that tries to automatically identify or set things for field content. VBA is VBA, but the latest versions of Excel now allow for Python, so I'm not sure if that is helping or just making things worse.

          • (Score: 3, Informative) by shrewdsheep on Friday August 16 2024, @12:39PM (5 children)

            by shrewdsheep (5215) Subscriber Badge on Friday August 16 2024, @12:39PM (#1368824)

            The real statistical package nowadays is R and you will find it to be much more snappy than Excel (no GUI).

            • (Score: 3, Interesting) by nostyle on Friday August 16 2024, @03:20PM

              by nostyle (11497) Subscriber Badge on Friday August 16 2024, @03:20PM (#1368849) Journal

              Back in the day, _real_ statisticians used the "S" language to do statistical analysis. S-Plus [wikipedia.org] was a commercially supported implementation of this language - eventually acquired by TIBCO. "R" was a follow-along effort (late '90s to early '00s) to implement a FOSS version of S-Plus.

              AFAICT, R has pretty well matched S-Plus in power and features, so, indeed, it is probably the industry standard these days for PhD level statistical analysis.

              All that said, no matter which language/platform one does analysis on, it is impossible to avoid making mistakes in coding one's data analysis, and those mistakes can go unnoticed for decades, so "bad spreadsheets" is not exclusively an Excel problem.

              [disclosure] I wrote code for S-Plus - late '90s to early '00s [/disclosure]

              --

              I used to be a heart beatin' for someone
              But the times have changed
              The less I say, the more my work gets done

              -Elton John, Philadelphia Freedom

            • (Score: 3, Interesting) by looorg on Friday August 16 2024, @06:17PM (3 children)

              by looorg (578) on Friday August 16 2024, @06:17PM (#1368882)

              The real statistical package nowadays is R and you will find it to be much more snappy than Excel (no GUI).

              It's quite common. That said, it has not removed the need for SAS, STATA, SPSS, whatever you use. They seem to like different packages for different subjects and different universities. They all swear by the thing that they use. The thing is, you can teach a monkey, or a lower-tier student, to point and click in Excel or whatever package they want to use. Not much training required. Try teaching non-math students to code. It's like herding cats. So sure, R, Python, Julia or whatever is a nice niche for some of those students. The others need that office equivalent that is borderline WYSIWYG in function. After all, they are poor at math, they are poor at statistics and they can't program, or would run screaming if forced to. Even if or when they do it, they rarely venture outside of doing exactly what we taught them, so they might as well just click their way in whatever program was available.

              I like a good perl script as much as the next guy for my pre-data-cleaning - getting it ready to be imported into some other application. After all, I'm not the only user of the data, so you sort of have to make it usable by at least a handful of people with varying degrees of technical skill, most of them none.

              But I found a thing that I really like to use Excel for. As far as I know it's the only one that I can do it in; none of the other big packages do it, and programming something for it is not really a thing either. The thing I like is that I can colour code all the cells based on the content of said cell. This way I get a great instant overview of all the data: just scrolling thru it I can directly see anything that falls out of line, as the colours have changed. Or at least I know if it's within acceptable values. I find that invaluable for a lot of projects and early data management and cleaning, instead of just having that black and white wall of digits. Things get lost in that sea.

              As far as I know Excel is the only package that allows me to do that. Otherwise I would have to program a script to generate a report or HTML file or something with all the content and then if I update or change something I have to generate a new one etc. So much simpler when I do that in Excel. For each task there is a tool, it might not be the best for everything. But some of them have at least a tool for the task.
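
              [Ed: for what it's worth, the "colour the outliers" trick can be scripted outside Excel too. A minimal stdlib sketch that flags out-of-range values with ANSI terminal colours; the thresholds and readings are invented for illustration:]

              ```python
              # Mimic Excel's conditional formatting in a terminal: values outside
              # an accepted range are wrapped in red so they jump out when scrolling.
              # LOW/HIGH thresholds and the readings below are illustrative only.
              LOW, HIGH = 10.0, 50.0
              RED, RESET = "\033[91m", "\033[0m"

              def colorize(value: float) -> str:
                  """Return the value right-aligned, in red if out of range."""
                  text = f"{value:8.2f}"
                  return text if LOW <= value <= HIGH else f"{RED}{text}{RESET}"

              readings = [12.3, 48.9, 73.1, 9.4, 25.0]
              flagged = [v for v in readings if not (LOW <= v <= HIGH)]

              print(" ".join(colorize(v) for v in readings))
              print("out of range:", flagged)  # out of range: [73.1, 9.4]
              ```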


              • (Score: 1) by shrewdsheep on Friday August 16 2024, @08:28PM

                by shrewdsheep (5215) Subscriber Badge on Friday August 16 2024, @08:28PM (#1368912)
              • (Score: 1) by khallow on Friday August 16 2024, @11:52PM (1 child)

                by khallow (3766) Subscriber Badge on Friday August 16 2024, @11:52PM (#1368932) Journal

                Try teaching non-math students to code. It's like herding cats.

                It's more like teaching a dog to untangle its leash. I'm sure that there's plenty of dogs out there that have figured out the mysteries of leash entanglement - those would be the math-capable students. But there's plenty of dogs with the attitude "human will fix it".

                • (Score: 1, Funny) by Anonymous Coward on Saturday August 17 2024, @12:58PM

                  by Anonymous Coward on Saturday August 17 2024, @12:58PM (#1368978)

                  IANADO (not a dog owner), but I have walked friends' dogs and see plenty in the neighborhood. I have never seen one untangle its own leash. Do you have a citation for this claim?

                  Personally, once I use up the "herding cats" idiom, I switch to car analogies.

      • (Score: 1, Interesting) by Anonymous Coward on Friday August 16 2024, @10:12AM (1 child)

        by Anonymous Coward on Friday August 16 2024, @10:12AM (#1368805)

        Confession time: I’m on a voluntary rehab plan.

        Excel (on my work PC - or LibreOffice Calc on my home PC) is my go-to tool to write down a few numbers. To me, it’s basically a notebook and point-and-click calculator with virtually infinite but most importantly visual memory: The answer to "what was that number?" is not "$variable" but a mouse scroll and click away which suits my brain quite well. My right hand mainly switches from mouse to numpad and arrow keys.

        The problem is, it’s an addictive and slippery slope. A typical example:
        1. What's my solar cell power generation? That's cell B2.
        2. What about at a different sun angle? OK, formula in cell C2.
        3. What about at a different temperature? OK, formula in B3, drag to C3.
        4. What about many different angles and temperatures? OK, change increments and drag over multiple columns and rows.

        Only at:
        5. What about for a full panel? What about for a full array of panels? What about the voltage and current in different string configurations?
        do I start to think about using a tool better suited to multiple dimensions.

        So I've been forcing myself to do more python+pandas. But it's still not automatic. I feel like the mental overhead of editing and running a python script in 2 different windows (or worse, working directly in the python terminal) is much greater than opening Excel or Calc and immediately inputting stuff into a blank spreadsheet.
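
        [Ed: the angle-and-temperature sweep in steps 1-4 is exactly the kind of thing a short script handles without formula-dragging. A sketch using a toy power model; the 300 W rating, cosine loss, and 0.4 %/C derating coefficient are invented for illustration, not real panel data:]

        ```python
        import math

        def panel_power(angle_deg: float, temp_c: float) -> float:
            """Toy model: cosine loss with incidence angle, linear derating with heat.
            The 300 W rating and 0.4 %/C coefficient are illustrative assumptions."""
            rated_w = 300.0
            angle_factor = max(math.cos(math.radians(angle_deg)), 0.0)
            temp_factor = 1.0 - 0.004 * (temp_c - 25.0)
            return rated_w * angle_factor * temp_factor

        # Step 4 in one shot: sweep every angle/temperature combination at once,
        # instead of changing increments and dragging over rows and columns.
        angles = range(0, 91, 15)
        temps = range(0, 51, 5)
        grid = {(a, t): panel_power(a, t) for a in angles for t in temps}

        print(f"power at 30 deg, 25 C: {grid[(30, 25)]:.1f} W")
        ```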

        • (Score: 3, Insightful) by looorg on Friday August 16 2024, @11:10AM

          by looorg (578) on Friday August 16 2024, @11:10AM (#1368814)

          This is how all the behemoth Excel sheets start. It's something small and then it just grows and grows and grows until, in the end, the entire project is a multi-page Excel monstrosity and you have sort of just lost control of what everything is or does, or its actual location. You have a tab or two for the data, one for the formulas, one for the calculations, one that does the presentation. Somewhere in there is bad data. Or you think you changed the formula but you didn't, because one of the cells somewhere made a call to another place and that is now an error -- be it a zero, a reference to another cell or something such. As you sit there trying to track it down ...

          I have seen entire research projects exist in one Excel file. People running their entire small business in a sheet: orders, calculations, bookkeeping, finances of all sorts. They think it's handy. Which it is. But it's also a nightmare. That said, I'm guilty of this too. It's just so easy to start and then it just keeps growing; you know you should transition to something else, but you don't. After all, it works now, so why waste time on that.

      • (Score: 4, Interesting) by r_a_trip on Friday August 16 2024, @11:05AM (2 children)

        by r_a_trip (5276) on Friday August 16 2024, @11:05AM (#1368811)

        "If they want proper applications (and reports) then we'll provide and support those."

        Not an engineer, I am in finance, but the above gave me a chuckle. IT is usually belligerent, unhelpful, averse to any change and slow as molasses. Getting a new report takes over a year.

        The need for a new report in our department arises. In a fit of insanity we ask IT. First we need a needs assessment. Then we need a nebulous process owner (all processes are owned, but no one can tell you who the owners are). The process owner needs to approve the initial request for the new report.

        After approval, IT will try to make the report as company-wide as possible, so it requests comments for improvements, additions and any other thing. Since the scope is now becoming much larger, we need a Steering Committee. The Committee will establish a Working Group. The Working Group now restructures and alters the initial request for a simple report. The Steering Committee approves the work of the Working Group. The Working Group hands a functional design to IT. IT selects some godawful webplatform to build a highly inflexible report based on the functional design.

        After a year the "new and improved" report is introduced with great fanfare. Our department finds out the "new and improved" report has everything we need, except one crucially essential item that was removed by the Working Group. The report is now useless for our purposes. We ask for the "additional" feature to be added to the report through a change request. IT declines, stating a lack of budget, man-hours available and the difficulty to make changes in the godawful webplatform.

        My boss asks me if we can whip something up in Excel so we can at least get the report we wanted. A week later a new Excel sheet is in use and is an important part of the workflow. Some of these Excel reports have been in use for over a decade.

        • (Score: 3, Funny) by Snospar on Friday August 16 2024, @11:32AM

          by Snospar (5366) Subscriber Badge on Friday August 16 2024, @11:32AM (#1368816)

          It sounds like you work down the corridor from me, too many scary similarities!

          --
          Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
        • (Score: 5, Interesting) by datapharmer on Friday August 16 2024, @11:34AM

          by datapharmer (2702) on Friday August 16 2024, @11:34AM (#1368817)

          The problem is you are asking for a report from IT instead of a BI platform. IT should be in charge of implementing and maintaining the BI platform. If you need a new model or a change to a view, you call IT. If you need a report, you ask a business analyst. It sounds like your IT department got a little crazy on the governance juice and failed to ask the first question: "is this an IT request?"

          If you already have a BI platform then maybe your company needs to hire a business analyst? If you aren’t large enough for that you also aren’t large enough for an IT governance process with that much overhead.

      • (Score: 1, Interesting) by Anonymous Coward on Friday August 16 2024, @11:30AM

        by Anonymous Coward on Friday August 16 2024, @11:30AM (#1368815)

        Origin [originlab.com] was the 'go-to' spreadsheet thingy the engineers used in my last full-time IT gig 20-odd years ago. Manglement, probably having just seen a copy of Bloch's 'Excel for engineers and scientists', tried to persuade them that Excel was 'the thing' (as they hated paying for the site licenses for Origin and the other data analysis & visualisation software our engineers and scientists used for the majority of their heinous data fuckery needs...).

        Needless to say, manglement got laughed at, ISTR the chief critic on the engineering side making a snide comment at one meeting along the lines of 'Excel might just be ok for tracking your paperclip expenditure, just...' (though he liberally peppered it with more expletives, as per the Scottish manner)

  • (Score: 5, Insightful) by Thexalon on Friday August 16 2024, @10:54AM (1 child)

    by Thexalon (636) on Friday August 16 2024, @10:54AM (#1368808)

    Spreadsheets are beloved by MBA types. But even if the formulas are designed perfectly, it doesn't matter, because a huge amount of the time, most of the numbers that were inputted into that spreadsheet are of questionable truth value anyways, and GIGO still applies.

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
    • (Score: 2) by VLM on Friday August 16 2024, @07:52PM

      by VLM (445) Subscriber Badge on Friday August 16 2024, @07:52PM (#1368906)

      Adding to your comments: even when they're used, they were just busywork to begin with. No actionable item; it was just needed to report a number at a meeting that means nothing and influences less.
      I've worked at plenty of big companies in the past and they're uniformly self-destructive.

  • (Score: 2) by Gaaark on Friday August 16 2024, @09:53PM

    by Gaaark (41) on Friday August 16 2024, @09:53PM (#1368925) Journal

    Study Finds 94% of Business Spreadsheets Have Critical Errors

    How did they determine this?
    How many companies did they 'study'?
    How many spreadsheets did they 'study'?

    ---we have conducted a comprehensive literature review on the quality issues and related techniques of spreadsheets over a 35.5-year period (from January 1987 to June 2022) for target journals and a 10.5-year period (from January 2012 to June 2022) for target conferences

    ---Our extensive and systematic search of the literature involved a total of 32 target journals plus three reputable professional magazines

    Comprehensive, no? eye-roll...

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --