posted by cmn32480 on Saturday August 06 2016, @11:24PM   Printer-friendly
from the tread-carefully dept.

Arthur T Knackerbracket has found the following story:

Gartner defines Bimodal IT as: “the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed”.

I find myself more than a little bemused by the concept. First of all, why would I want to manage two separate modes of activity? That means that either I have to employ different people with specialisms in different approaches (expensive) or I have to take on people who are skilled in both areas (which by definition means they're not going to be best-of-breed in either).

Second, I have a strange liking for the concepts that Gartner mentions in Mode 1: safety and accuracy. I find it useful that my IT systems don't kill people; here in Jersey, for example, it's frowned upon if too many employees die in tragic business systems accidents. And in my experience, the CFO tends to be quite irritable if the month-end numbers don't add up. I also find security and integrity fairly useful too, along with availability – all things that can suffer in Gartner's so-called Mode 2 at the expense of agility and speed.

Although the term “bimodal IT” is relatively new, the concept isn't. Back in the 1990s I worked in an environment with two distinct approaches to IT: one slow, steady and methodical, and the other fast-moving and bleeding edge. Did the latter break more than the former? No, actually it didn't – but only because it was done by a small number of very technical people who could respond quickly to issues. Did it bring advantages? Yes: it was doing IP-based wide area stuff long before the other part of the IT world.

Would I go back to that setup tomorrow? Not on your nelly – it put a group of techies out on a limb, largely unsupportable by the other team and hence permanently lumbered with supporting bleeding-edge technology whenever it threw a tantrum and interfacing it tenuously into the core systems in the face of reluctant sighs from the core support group.

I had another of these “parallel” examples more recently, when a new senior techie decided that he would spin up cloud-based servers seemingly at random alongside the company's well-managed, well-documented and extremely stable infrastructure. He took exception, for some reason, when someone called him a “f**king cowboy”.


  • (Score: 4, Insightful) by physicsmajor on Saturday August 06 2016, @11:56PM

    by physicsmajor (1471) on Saturday August 06 2016, @11:56PM (#384851)

    I'm involved with the scientific Python community, which is an excellent example of where this kind of development is needed. There are core projects which provide important math/science routines and move slowly; as you move farther away from those, you get to the cutting edge where development is fast and furious. This works fantastically for science, though it's worth noting that having genuinely separate projects for all of this is probably why it works. That, plus true package managers like Conda/Canopy to handle inter-dependencies.

    The core - NumPy, SciPy, Pandas, Jupyter, Nose, and Sympy - moves slowly with extreme emphasis on stability. It wasn't always that way; NumPy itself is only a little over a decade old. Early on development was fast and looser than it is now. However, today these projects are depended on so heavily by the rest of the ecosystem that they've become very much Mode 1.

    The edges though? That's active research territory. The scikits have an intermediate development velocity, and beyond that projects are very agile in nature.

    • (Score: 0) by Anonymous Coward on Sunday August 07 2016, @03:14AM

      by Anonymous Coward on Sunday August 07 2016, @03:14AM (#384869)

      Works fine for research. Doesn't work at all for business.
      For one thing, it means paying to develop the same thing twice, which is usually frowned upon in a place where profits are valued. For another, when something works "well enough" in a business environment, it's generally delivered. Sure, time could be spent fixing up what's already finished, making it better or more robust or trimming out that rare bug, but that time could instead be used to produce 2.0. Businesses prefer to work on new profitable things rather than perfecting what they've already delivered.
      In business practice, the 'slow but sure' stream is quickly left behind unless it's the sole focus.

  • (Score: 1, Insightful) by Anonymous Coward on Sunday August 07 2016, @12:04AM

    by Anonymous Coward on Sunday August 07 2016, @12:04AM (#384853)

    There is *NO* pure Agile development.
    There is *NO* pure Waterfall development. (except maybe the software on the space shuttle)

    IT has always been a hybrid.
    A VP walks into IT and asks for a report by 5pm. Are you going to write specs, send them overseas, and wait 3 more weeks? No, you are going to get it done now. That is Agile. At worst, on-demand :).
    The same VP wants to know what is in the next release and whether Legal and Accounting have signed off; for all those groups to be part of the process, there are specs and details. Aging credits (legal in some countries and not in others), tax law changes that are enacted on Jan 1 or Jul 1. That is Waterfall.

    If a Garther report is just now asking... Well, Garther is out of touch, talking to his friends to post the article in the first place. Stop reading them and save the money. Get back to work making everyone happy.

    PS: with the money saved by dumping Garther, take your staff to lunch, to a ball game, and buy the beer.

    • (Score: 0) by Anonymous Coward on Sunday August 07 2016, @12:10AM

      by Anonymous Coward on Sunday August 07 2016, @12:10AM (#384856)

      It's Gartner, not Garther.

    • (Score: 0) by Anonymous Coward on Monday August 08 2016, @12:03AM

      by Anonymous Coward on Monday August 08 2016, @12:03AM (#385100)

      Being agile in the sense of flexibility and dynamic isn't the same thing as the agile development methodology. They are completely different things and far too many people, including you, confuse them.

  • (Score: 2) by archfeld on Sunday August 07 2016, @01:34AM

    by archfeld (4650) <treboreel@live.com> on Sunday August 07 2016, @01:34AM (#384861) Journal

    I worked for a large financial organization and we supported both the day-to-day stuff and R&D. The day-to-day stuff stayed the same, with the same procedures and equipment, until it HAD to change, and the R&D stuff was built for week-long tests of the newest breed of hardware on loan from some vendor trying to get a foot in the door. It kept the job interesting and gave you a varied task-list, plus it was a good excuse to muscle in on training $'s.

    --
    For the NSA : Explosives, guns, assassination, conspiracy, primers, detonators, initiators, main charge, nuclear charge
  • (Score: 1) by khallow on Sunday August 07 2016, @02:30AM

    by khallow (3766) Subscriber Badge on Sunday August 07 2016, @02:30AM (#384864) Journal
    I find it weird that the writer forgot about testing. Even in a safe, accurate environment you will on occasion need to upgrade (say because your machine is dying or because key parts of your infrastructure are no longer supported). No sane upgrade approach ignores aggressive and thorough testing of the new stuff and how it will behave with respect to the rest of your systems (needless to say, we've all seen the insane approaches). That testing in turn uses the "agility and speed" approach. You want to find problems now, not in a few years when it is safe to do so in some weird sense.
    • (Score: 2) by c0lo on Sunday August 07 2016, @03:01AM

      by c0lo (156) Subscriber Badge on Sunday August 07 2016, @03:01AM (#384867) Journal

      ok, where does "safe" come from?

      From Quality Assurance at all stages of the development process, not just a Quality Check at the end of a stage/the process.

      Doesn't matter if your product passes the tests (whose tests?) if the architecture is brittle, or your algos are poorly chosen, or your code looks like an entry in an obfuscation contest, or your hardware is top class but your customer's is just average.
      If it passes those tests, either your test set is incomplete**, or you just got lucky this time around - do you want to bet you'll continue to be lucky forever, release after release?

      I find it weird that the writer forgot about testing.

      My point? Testing is part of QA, and QA is part of the development of any safe system - why should testing get a special mention?

      ---
      ** Except for "hello-world" types of products, nobody is going to have all the money and time to thoroughly test all the possible combinations solely at the end of the cycle.

      --
      https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 1) by khallow on Sunday August 07 2016, @03:35AM

        by khallow (3766) Subscriber Badge on Sunday August 07 2016, @03:35AM (#384875) Journal

        I find it weird that the writer forgot about testing.

        My point? Testing is part of the QA and QA is part of the development of any safe system - why should testing get a special mention?

        Because it was a place where the so-called "agile" development is routinely used. After all, why have all those incomplete tests which only cover some small portion of your testing space when you can just have the right one which completely tests your program correctly? After all, how hard could that test be to build? \sarc

        • (Score: 2) by c0lo on Sunday August 07 2016, @09:16AM

          by c0lo (156) Subscriber Badge on Sunday August 07 2016, @09:16AM (#384926) Journal

          when you can just have the right one which completely tests your program correctly?

          You can't. There's no such thing as a "complete test plan". That's an ideal never to be reached (and that's true no matter the chosen dev life cycle).

          A system can go right in only a limited number of ways (what programmers do is "carve" a reduced set of states/transitions from the whole possible set; they implement the reduced model of reality that is required).
          However, for every "correct" way of doing things there are more ways in which things may go wrong; test partitioning or not, there is no way you can test all the wrong conditions and see that the program/system is "safe to use under adverse conditions". Why do you think fuzz testing [wikipedia.org] has a place in the toolset, even though passing such a "test" offers no warranties of correctness, completeness, or robustness of the tested product?

          Want another example? How many times did you see anyone testing the installer of a program against installing in a wrong location? How many "wrong locations" can you imagine?
          Does your list include installation on a removable device? Maybe a "write once" one mounted there?
          What about the "trash bin"? No?
          How many other cases of "wrong places" may exist that you (or anyone) cannot imagine right now?
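          To make that last point concrete, here is a minimal fuzz-testing sketch in Python. The `parse_config` function is a hypothetical stand-in for the code under test, not anyone's real installer; a clean run only shows that no generated input crashed it, never that it is correct.

```python
# A minimal fuzz-testing sketch (parse_config is a hypothetical target).
# Passing proves nothing about correctness -- it only shows that none of
# the generated inputs caused an unhandled exception.
import random
import string

def parse_config(text):
    """Naive 'key=value' parser used here purely as a fuzz target."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

def fuzz(iterations=1000, seed=0):
    """Throw random printable strings at the target; report any crash."""
    rng = random.Random(seed)
    for _ in range(iterations):
        blob = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 200)))
        try:
            parse_config(blob)  # we only check "no unhandled exception"
        except Exception as exc:
            return blob, exc    # a crashing input worth minimising
    return None                 # no crash found -- NOT proof of correctness

print(fuzz())  # prints None for this tame target
```

          Real fuzzers (AFL, libFuzzer, Hypothesis) add coverage feedback and input minimisation on top, but the contract is the same: they demonstrate the absence of crashes on tried inputs, not the presence of correctness.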

          • (Score: 1) by khallow on Sunday August 07 2016, @12:08PM

            by khallow (3766) Subscriber Badge on Sunday August 07 2016, @12:08PM (#384947) Journal
            Fine, but I still get the impression that you think there is something wrong with what I said in the first place. Are tests not lightweight? Are they not intended to be easy to add, remove, or change, with far less rigor than the code they test?
            • (Score: 2) by c0lo on Sunday August 07 2016, @02:06PM

              by c0lo (156) Subscriber Badge on Sunday August 07 2016, @02:06PM (#384967) Journal

              Fine, but I still get this impression that you think there is something wrong with what I said in the first place.

              If my understanding is right, you asked "Why is there no mention of testing when it comes to software safety?".

              And I said: "Testing is a part of development, as is coding, software arch/design and all the rest. Why do you think testing is worth a special mention, when all the others contribute equally to the safety?"

              • (Score: 1) by khallow on Sunday August 07 2016, @03:00PM

                by khallow (3766) Subscriber Badge on Sunday August 07 2016, @03:00PM (#384977) Journal

                And I said: "Testing is a part of development, as is coding, software arch/design and all the rest. Why do you think testing is worth a special mention, when all the others contribute equally to the safety?"

                I think testing warrants special mention, unlike the other parts you mention, because it is an important example of flexibility and "agility". For the most part, tests are lightweight. They're meant to be low effort to construct (otherwise you're not going to make many of them) and rapid to deploy and modify. If a new issue shows up, you don't want the tests coming out a few years later.
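                A quick sketch of what "lightweight" means here (the function and test names are hypothetical): each test is a few lines asserting one behaviour, so covering a newly reported issue is minutes of work, not a multi-year release cycle.

```python
# Hypothetical code under test: sum ledger entries, skipping the None
# placeholders left by missing data.
def month_end_total(entries):
    return sum(e for e in entries if e is not None)

# Each regression gets its own tiny test -- cheap to add, modify, or drop.
def test_empty_ledger():
    assert month_end_total([]) == 0

def test_ignores_missing_entries():
    assert month_end_total([10, None, 5]) == 15

test_empty_ledger()
test_ignores_missing_entries()
```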

  • (Score: 3, Insightful) by bradley13 on Sunday August 07 2016, @07:14AM

    by bradley13 (3053) on Sunday August 07 2016, @07:14AM (#384906) Homepage Journal

    Somehow, on first reading, I missed this critical intro: "Gartner defines Bimodal IT as..."

    Gartner likes to identify things that have become common practice, slap a label on them, and then charge you thousands to read a report about the label.

    I've lived through a few too many trends. Hire good people, give them competent management that shields them from outside interference and otherwise gets out of the way. They will use techniques appropriate to the project, whatever they may be called, and the project will be successful.

    Bad people or incompetent management? Agile development within a bimodal IT framework using devops to provide continuous deployment won't save the project.

    Bi-modal IT? As one article puts it: "This type of oversimplified and stilted approach has been failing to save innovation-hostile companies since Fred Brooks wrote about the infamous Silver Bullet. And this model will also fade into obscurity." [cioinsight.com]

    --
    Everyone is somebody else's weirdo.