
posted by LaminatorX on Thursday January 15 2015, @08:40AM   Printer-friendly
from the big-iron dept.

The death of the mainframe has been predicted many times over the years but it has prevailed because it has been overhauled time and again. Now Steve Lohr reports that IBM has just released the z13, a new mainframe engineered to cope with the huge volume of data and transactions generated by people using smartphones and tablets. “This is a mainframe for the mobile digital economy,” says Tom Rosamilia. “It’s a computer for the bow wave of mobile transactions coming our way.” IBM claims the z13 mainframe is the first system able to process 2.5 billion transactions a day and has a host of technical improvements over its predecessor, including three times the memory, faster processing and greater data-handling capability. IBM spent $1 billion to develop the z13, and that research generated 500 new patents, including some for encryption intended to improve the security of mobile computing. Much of the new technology is designed for real-time analysis in business. For example, the mainframe system can allow automated fraud prevention while a purchase is being made on a smartphone. Another example would be providing shoppers with personalized offers while they are in a store, by tracking their locations and tapping data on their preferences, mainly from their previous buying patterns at that retailer.

IBM brings out a new mainframe about every three years, and the success of this one is critical to the company’s business. Mainframes alone account for only about 3 percent of IBM’s sales. But when mainframe-related software, services, and storage are included, the business as a whole contributes 25 percent of IBM’s revenue and 35 percent of its operating profit. Ronald J. Peri, chief executive of Radixx International, was an early advocate in the 1980s of moving off mainframes and onto networks of personal computers. Today Peri is shifting the back-end computing engine in the Radixx data center from a cluster of industry-standard servers to a new IBM mainframe, and he estimates the total cost of ownership, including hardware, software, and labor, will be 50 percent less with a mainframe. “We kind of rediscovered the mainframe,” says Peri.

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by PizzaRollPlinkett on Thursday January 15 2015, @12:24PM

    by PizzaRollPlinkett (4512) on Thursday January 15 2015, @12:24PM (#135079)

    The "death of mainframe" meme was started by people trying to sell alternatives to the mainframe in the "client/server" days. Managers ate it up, because a commodity Intel server box was cheaper than a mainframe, but projects to migrate off of the mainframe were typically abysmal failures. The media took over "death of mainframe" as a storyline because journalists didn't know diddly-squat about mainframes and it sounded good. Even in 2015, this same story is being used because journalists still don't know diddly-squat about mainframes.

    The mainframe isn't dead, and it has never been dying. Once an organization's data processing reaches a certain scale, there's simply no other hardware that can handle the transaction volumes of the big companies on Fortune's lists. And these companies have invested decades in writing business process workflows on mainframes, so they're not about to throw this stuff out. They'll augment what they have with newer technologies, but they aren't going to rewrite core business logic. Your mobile app purchase is probably going through a CICS transaction.

    What's happening is that application development is moving from the native mainframe (COBOL + CICS) to things like WebSphere (J2EE), and the mainframe itself is becoming a big transaction processing engine using DB2 as the database. There's still nothing that can scale to the level of a mainframe for transactions. Commodity PC servers aren't going to cut it.

    So the mainframe has not gone anywhere, and won't go anywhere until some other hardware can match the mainframe's transaction processing abilities.

    Every few years, IBM tries to "brand" the mainframe with whatever buzzwords are in business computing trade magazines. Apparently mobile is the new thing. This doesn't really change anything, it's just marketing stuff.

    BTW... another meme you'll hear is that we need more COBOL programmers. NO ONE WANTS COBOL PROGRAMMERS. They want people with CICS programming experience, which happens to use COBOL as its language, but if you are not a CICS expert, learning COBOL is a waste of time. All the COBOL programming you're likely to see is maintenance work on CICS transactions. (If you don't know, CICS is a hideous thing. You will have nightmares if you even see CICS code. If you actually understand CICS, you'll evolve into a hideous Guild navigator from David Lynch's Dune movie and spend the rest of your life in a tank breathing spice capsules.)

    --
    (E-mail me if you want a pizza roll!)
  • (Score: -1, Offtopic) by Anonymous Coward on Thursday January 15 2015, @01:05PM

    by Anonymous Coward on Thursday January 15 2015, @01:05PM (#135090)

    Actually CISC is dead; it died around 1997. Now IBM uses RISC, but still supports all the CISC instructions, or "re-compiles" the object to the new machine's instruction set. Almost all of our objects have the "real" code (source or P-code) and an OS version number in them, so the just-in-time compiling is done at runtime, once, and saved for the new version. This even handles OS changes that don't change the hardware but make a software call perform better. Very handy when you are supporting multiple machines with different upgrade schedules: one object to install, and the systems adjust themselves.
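
    The scheme described above, objects that carry portable code plus the OS version they were last compiled for, recompiled once per new target and then cached, can be sketched roughly like this (all names hypothetical, in Python rather than anything mainframe-native):

```python
# Hypothetical sketch: objects ship portable "P-code" plus the OS/ISA version
# they were last compiled for; the system recompiles once per new target and
# caches the result, so later calls reuse the native version.

class PortableObject:
    def __init__(self, pcode):
        self.pcode = pcode         # portable representation (source or P-code)
        self.compiled_for = None   # OS/ISA version the cached native code targets
        self.native = None         # cached "native" code for that version

def compile_for(pcode, target):
    # Stand-in for the real translator: here we just tag the P-code.
    return f"{pcode}@{target}"

def run(obj, target, log):
    if obj.compiled_for != target:     # first run on this machine/OS version
        obj.native = compile_for(obj.pcode, target)
        obj.compiled_for = target
        log.append("recompiled")
    else:
        log.append("cached")           # later runs reuse the saved translation
    return obj.native

log = []
obj = PortableObject("ADD R1,R2")
run(obj, "z/OS 2.1", log)   # triggers the one-time recompile
run(obj, "z/OS 2.1", log)   # reuses the cached native code
run(obj, "z/OS 2.2", log)   # OS upgrade: recompile once more, then cache again
```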

    • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @04:58PM

      by Anonymous Coward on Thursday January 15 2015, @04:58PM (#135172)

      CICS (Customer Information Control System) is not CISC (complex instruction set computing).

      Here is a metaphor to guide you:
      CICS nowadays is like Cthulhu
      CISC nowadays is like an aggressive transvestite

      One is pure evil, the other is only bothering...

      • (Score: 2) by Marand on Thursday January 15 2015, @11:00PM

        by Marand (1081) on Thursday January 15 2015, @11:00PM (#135235) Journal

        One is pure evil, the other is only bothering...

        Hey, don't leave us hanging! Which is which?

  • (Score: 2) by Nerdfest on Thursday January 15 2015, @01:36PM

    by Nerdfest (80) on Thursday January 15 2015, @01:36PM (#135097)

    Pretty much everything is better at handling the transaction volumes large companies need, especially commodity hardware, and it does so at a small fraction of the cost. Do you see Google, Facebook, etc. using *any* mainframes? No. Mainframes are not powerful, they are simply reliable, assuming you eliminate human error from the equation, which in my opinion is much higher in mainframe environments with active development being done on them, because of an extremely poor tool set. They are extraordinarily expensive on a per-transaction basis. In environments requiring high availability you're generally better off using redundant commodity hardware. Some environments like banking make this difficult to implement because of the transactionality inherent in the business, but even then it can still be done.

    • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @02:27PM

      by Anonymous Coward on Thursday January 15 2015, @02:27PM (#135119)
      Did you just use Facebook as an example of how to do something well with computers? That's risky man, that's risky.
    • (Score: 2) by tibman on Thursday January 15 2015, @04:43PM

      by tibman (134) Subscriber Badge on Thursday January 15 2015, @04:43PM (#135170)

      I think the key difference is that some processes scale out and some can only scale up. You can design the processes to be either way. Most are not designed in any way at all and are forced into the "throw more hardware at it" category.
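
      The scale-out versus scale-up distinction can be sketched in a few lines (a toy Python illustration, with worker threads standing in for cluster nodes):

```python
# Toy sketch: a job designed to scale out keeps no shared mutable state, so
# the work can be partitioned across any number of nodes; a job that can only
# scale up funnels everything through one sequential bottleneck.

from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    # Pure function of its input: no shared state, safe to run anywhere.
    return record * 2

records = list(range(10))

# Scale-up style: one big machine, one sequential loop.
scaled_up = [process_record(r) for r in records]

# Scale-out style: the same work partitioned across workers ("nodes").
with ThreadPoolExecutor(max_workers=4) as pool:
    scaled_out = list(pool.map(process_record, records))
```

      Because `process_record` touches no shared state, the two runs give identical answers; a process that mutates one central ledger would not partition this way without redesign.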

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 3, Interesting) by Nerdfest on Thursday January 15 2015, @08:54PM

        by Nerdfest (80) on Thursday January 15 2015, @08:54PM (#135215)

        The problem is that mainframes don't scale up or out well. They traditionally had exceptional IO bandwidth, but even that is not so impressive these days. IBM is getting by almost exclusively on existing customers that can't afford to replace their software. The hardware is kept just within acceptable levels for them. The "MIPS" charges for actually using the stupid things come out of a different budget in most places as well, so that helps make them look less expensive than they really are.

        I would guess lots are looking at them every year and thinking "I wish I'd started replacing those systems last year". This is why it sucks having tightly coupled software (and hardware) that needs to be replaced as an "all or nothing" operation.

    • (Score: 3, Insightful) by maxwell demon on Thursday January 15 2015, @07:40PM

      by maxwell demon (1608) on Thursday January 15 2015, @07:40PM (#135205) Journal

      Of course you cannot really compare Google and Facebook with a bank. Nobody is going to sue Facebook if a single Facebook post gets lost due to a hiccup. A single bank transaction getting lost due to a hiccup is a completely different story.
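
      The difference is essentially atomicity: a transfer must either complete in full or leave no trace. A minimal sketch using Python's sqlite3 transactions (toy schema, hypothetical account names):

```python
# Toy sketch: why a lost post and a lost payment differ. A transfer must be
# atomic: either both the debit and the credit happen, or neither does.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])

def transfer(db, src, dst, amount):
    try:
        with db:  # one transaction: commits on success, rolls back on error
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                       (amount, src))
            if db.execute("SELECT balance FROM accounts WHERE name = ?",
                          (src,)).fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                       (amount, dst))
    except ValueError:
        pass  # rolled back: the "hiccup" leaves both balances untouched

transfer(db, "alice", "bob", 30)    # succeeds: balances become 70 / 30
transfer(db, "alice", "bob", 500)   # fails midway, rolls back: still 70 / 30
```

      A dropped Facebook post needs none of this machinery; a half-applied debit is a lawsuit.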

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by tibman on Thursday January 15 2015, @09:32PM

        by tibman (134) Subscriber Badge on Thursday January 15 2015, @09:32PM (#135224)

        Suggesting that a cluster is more prone to hiccups than a mainframe?

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 2) by Nerdfest on Friday January 16 2015, @04:53AM

          by Nerdfest (80) on Friday January 16 2015, @04:53AM (#135286)

          Well, it is. With a cluster, you need to handle the hiccups, failovers, etc. It's doable, but in general you need to do it yourself. With the mainframe it's handled for you, but at an obnoxious price, and with a very poor price/performance ratio. Pick your poison.
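
          The "do it yourself" part can be as simple as a failover loop over redundant replicas (hypothetical node functions, in Python):

```python
# Toy sketch of application-level failover: try a request against a list of
# redundant nodes, skipping any node that hiccups.

def call_with_failover(nodes, request):
    last_error = None
    for node in nodes:               # try each replica in turn
        try:
            return node(request)
        except ConnectionError as err:
            last_error = err         # node is down; fall over to the next one
    raise last_error                 # every replica failed

def dead_node(request):
    raise ConnectionError("node unreachable")

def healthy_node(request):
    return f"handled: {request}"

# The first node fails; the caller, not the platform, handles the failover.
result = call_with_failover([dead_node, healthy_node], "balance?")
```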

          • (Score: 2) by tibman on Friday January 16 2015, @03:29PM

            by tibman (134) Subscriber Badge on Friday January 16 2015, @03:29PM (#135385)

            I don't think a cluster could be a cluster if it didn't handle failing nodes. But you are right that it must be dealt with.

            --
            SN won't survive on lurkers alone. Write comments.
            • (Score: 2) by sjames on Saturday January 17 2015, @09:28AM

              by sjames (2882) on Saturday January 17 2015, @09:28AM (#135621) Journal

              It depends on the application. Many clusters handle it by re-starting the job from a checkpoint. In some cases there is no checkpoint and they just restart the job from scratch.

              It adds a fair bit to the complexity of the software to handle anything more fine grained than that.
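
              Checkpoint/restart can be sketched like this (a toy Python job that resumes from its last saved step instead of from scratch):

```python
# Toy sketch: a long-running job periodically persists a checkpoint; after a
# node failure, the restart resumes from the last checkpoint, not from zero.

def run_job(steps, checkpoint, crash_at=None):
    start = checkpoint["step"]        # resume point (0 on a fresh start)
    total = checkpoint["partial"]     # partial result saved so far
    for i in range(start, steps):
        if i == crash_at:
            raise RuntimeError("node died")   # simulated hardware failure
        total += i
        # In a real cluster this would be written to stable storage.
        checkpoint["step"], checkpoint["partial"] = i + 1, total
    return total

ckpt = {"step": 0, "partial": 0}
try:
    run_job(10, ckpt, crash_at=7)     # fails partway through
except RuntimeError:
    pass
result = run_job(10, ckpt)            # resumes at step 7, not step 0
```

              Anything finer-grained than this, such as migrating live state off a dying node mid-operation, is where the software complexity mentioned above comes in.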

              Mainframes tend to use expensive and exotic hardware such that more than one CPU performs the same operations in lockstep and they vote on the answer. That allows them to detect errors immediately and shut down the CPU that loses the vote. It makes them fantastically expensive and slows them down but makes them very reliable.
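
              The lockstep-and-vote scheme reduces to a majority vote over redundant units (a toy Python illustration; real machines do this in hardware, per instruction):

```python
# Toy sketch of lockstep voting: run the same operation on several redundant
# "CPUs", compare the answers, and shut down any unit that loses the vote.

from collections import Counter

def voted_execute(cpus, operand):
    results = {name: fn(operand) for name, fn in cpus.items()}
    majority, _ = Counter(results.values()).most_common(1)[0]
    failed = [name for name, r in results.items() if r != majority]
    for name in failed:
        cpus.pop(name)            # take the losing CPU out of service
    return majority, failed

cpus = {
    "cpu0": lambda x: x + 1,
    "cpu1": lambda x: x + 1,
    "cpu2": lambda x: x + 2,      # faulty unit returns a wrong answer
}
answer, failed = voted_execute(cpus, 41)
```

              The redundancy is why errors surface immediately, and also why the hardware costs what it does: every answer is computed more than once.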