posted by LaminatorX on Thursday January 15 2015, @08:40AM   Printer-friendly
from the big-iron dept.

The death of the mainframe has been predicted many times over the years but it has prevailed because it has been overhauled time and again. Now Steve Lohr reports that IBM has just released the z13, a new mainframe engineered to cope with the huge volume of data and transactions generated by people using smartphones and tablets. “This is a mainframe for the mobile digital economy,” says Tom Rosamilia. “It’s a computer for the bow wave of mobile transactions coming our way.” IBM claims the z13 mainframe is the first system able to process 2.5 billion transactions a day and has a host of technical improvements over its predecessor, including three times the memory, faster processing and greater data-handling capability. IBM spent $1 billion to develop the z13, and that research generated 500 new patents, including some for encryption intended to improve the security of mobile computing. Much of the new technology is designed for real-time analysis in business. For example, the mainframe system can allow automated fraud prevention while a purchase is being made on a smartphone. Another example would be providing shoppers with personalized offers while they are in a store, by tracking their locations and tapping data on their preferences, mainly from their previous buying patterns at that retailer.

IBM brings out a new mainframe about every three years, and the success of this one is critical to the company’s business. Mainframes alone account for only about 3 percent of IBM’s sales. But when mainframe-related software, services, and storage are included, the business as a whole contributes 25 percent of IBM’s revenue and 35 percent of its operating profit. Ronald J. Peri, chief executive of Radixx International, was an early advocate in the 1980s of moving off mainframes and onto networks of personal computers. Today Peri is shifting the back-end computing engine in the Radixx data center from a cluster of industry-standard servers to a new IBM mainframe, and he estimates the total cost of ownership, including hardware, software, and labor, will be 50 percent less with a mainframe. “We kind of rediscovered the mainframe,” says Peri.

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by Synonymous Homonym on Thursday January 15 2015, @08:45AM

    by Synonymous Homonym (4857) on Thursday January 15 2015, @08:45AM (#135038) Homepage

    Everything old is new again.

    It used to be that users connected to the network using their phones.

    And they do so today.

    It used to be that thin clients served as terminals to mainframes.

    And today browsers are terminals to the cloud.

    • (Score: 3, Funny) by davester666 on Thursday January 15 2015, @09:40AM

      by davester666 (155) on Thursday January 15 2015, @09:40AM (#135049)

      The cloud is totally different from the old days of connecting from a terminal to the mainframe.

      • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @09:51AM

        by Anonymous Coward on Thursday January 15 2015, @09:51AM (#135054)

        :s/the cloud/someone else's machine/g

      • (Score: 2) by pnkwarhall on Thursday January 15 2015, @06:17PM

        by pnkwarhall (4558) on Thursday January 15 2015, @06:17PM (#135189)

        Please back-up your statement with some kind of reasoning.

        --
        Lift Yr Skinny Fists Like Antennas to Heaven
    • (Score: 2) by mendax on Thursday January 15 2015, @10:16AM

      by mendax (2840) on Thursday January 15 2015, @10:16AM (#135060)

      I was at my local HMO today for a physical. The poor office assistant complained about the computer—a mainframe—lagging. I told her that 30 years ago, when I was in school, the university mainframe was often slow as molasses, and that even though computers (especially mainframes) have gained a couple of orders of magnitude in speed and capability, the applications running on them can still be very slow.

      --
      It's really quite a simple choice: Life, Death, or Los Angeles.
      • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @12:26PM

        by Anonymous Coward on Thursday January 15 2015, @12:26PM (#135080)

        Sounds like the computing equivalent of the Jevons paradox.

        https://en.wikipedia.org/wiki/Jevons_paradox [wikipedia.org]

        In essence, as cost per compute cycle drops, we come up with more ways to use them. This then quickly outstrips any gains in efficiency for old tasks.

        • (Score: 2) by LoRdTAW on Thursday January 15 2015, @09:39PM

          by LoRdTAW (3755) on Thursday January 15 2015, @09:39PM (#135225) Journal

          In essence, as cost per compute cycle drops, we come up with more ways to use them.

          Javascript.

      • (Score: 2) by mechanicjay on Thursday January 15 2015, @04:26PM

        by mechanicjay (7) <reversethis-{gro ... a} {yajcinahcem}> on Thursday January 15 2015, @04:26PM (#135164) Homepage Journal

        "The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry."

        -Henry Petroski

        --
        My VMS box beat up your Windows box.
    • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @12:28PM

      by Anonymous Coward on Thursday January 15 2015, @12:28PM (#135081)

      Yep. If one takes a step back and looks at the server rack as a computer, it very much resembles a mainframe.

      Server = CPU, NAS = IO Controller, etc.

      And web services quite often resemble X sessions...

  • (Score: 2) by Rosco P. Coltrane on Thursday January 15 2015, @09:43AM

    by Rosco P. Coltrane (4757) on Thursday January 15 2015, @09:43AM (#135050)

    yet my money transfers still take 3 business days to complete. Fucking banks make a fortune on interest with that...

    • (Score: 1) by radu on Thursday January 15 2015, @10:25AM

      by radu (1919) on Thursday January 15 2015, @10:25AM (#135062)

      yet my money transfers still take 3 business days to complete

      You know that doesn't have anything to do with computing power, don't you?

      • (Score: 2) by Nerdfest on Thursday January 15 2015, @11:17AM

        by Nerdfest (80) on Thursday January 15 2015, @11:17AM (#135070)

        It has to do with the mainframe "batch" mentality. MQSeries is kind of helping the old folks get past it, but many even run stuff over MQ in batch.

        • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @02:35PM

          by Anonymous Coward on Thursday January 15 2015, @02:35PM (#135124)

          Nah, it has to do with banking protectionism. Money transfers happen instantly in many countries that are not the USA.

        • (Score: 1) by TheRealMike on Thursday January 15 2015, @03:09PM

          by TheRealMike (4989) on Thursday January 15 2015, @03:09PM (#135139)
          More specifically, banks have a lot of very complicated algorithms and code bases that inherently assume batch processing. The calculation of interest makes everything insanely complicated. Interest calculations and payments can have knock-on effects throughout the bank's entire set of accounts. For example, maybe an account is relying on the interest payment to avoid going into overdraft that day. You have to ensure that everything happens in a specific order and that the databases aren't changing whilst all these calculations are in flight. Making all that 1970s-era COBOL work in a fully concurrent environment, against a fully live database, is ... hard. It pretty much means a rewrite.

          But it gets worse. Banks can roll back transactions, for example due to a court order from a bankruptcy court. Let's say a company makes a payment to a supplier and that supplier then gets paid the interest on the size of that payment in their account. Then a few days later a bankruptcy court decides that the company went bankrupt as of a week ago, and the payment to the supplier is to be rolled back in order to pay creditors instead. Then the interest payments must also be rolled back ... recursively! It's madness.

          And finally, the people who wrote this insanely complex business logic wrote it in long-since-obsolete languages that didn't place a premium on good documentation and style; 90% of them are retired, and the only ones left who had a clue were outsourced to India to save money. So these banks often cannot evolve their IT infrastructure at all.
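
          The ordering problem described above can be sketched in a few lines: whether interest posts before or after a debit decides whether an account dips into overdraft, which is why batch jobs must run in a fixed sequence. A toy Python illustration (hypothetical numbers, nothing like real banking code):

```python
# Toy illustration: the order in which a batch posts interest vs. a debit
# decides whether the account overdrafts. Hypothetical numbers throughout.

def run_batch(balance, operations):
    """Apply operations in sequence; flag an overdraft if balance goes negative."""
    overdrawn = False
    for op, amount in operations:
        if op == "interest":
            balance += balance * amount  # amount is a rate here
        elif op == "debit":
            balance -= amount
        if balance < 0:
            overdrawn = True
    return balance, overdrawn

# The same two operations in different orders:
_, overdrawn_a = run_batch(100.0, [("interest", 0.10), ("debit", 105.0)])
_, overdrawn_b = run_batch(100.0, [("debit", 105.0), ("interest", 0.10)])

print(overdrawn_a, overdrawn_b)  # False True
```

          Multiply that order-dependence across millions of accounts, add rollbacks, and the fixed nightly batch sequence stops looking optional.
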
          • (Score: 2) by Nerdfest on Thursday January 15 2015, @04:27PM

            by Nerdfest (80) on Thursday January 15 2015, @04:27PM (#135165)

            That's it. These systems need to be rewritten with this stuff in mind up front, in modern languages, and in a maintainable way. At this point people are terrified of touching stuff.

  • (Score: 2) by E_NOENT on Thursday January 15 2015, @09:54AM

    by E_NOENT (630) on Thursday January 15 2015, @09:54AM (#135056) Journal

    If one were, say, a burned-out mid-career developer interested instead in spending their last 25 years in the industry getting into something like this, how would one make the switch? Anyone ever tried this, or know someone who has?

    It's not like you can take a Coursera course, download a mainframe VM, mess with it for a few months, do some interviews, and become a "mainframe guy..."

    Just asking for a friend, here.

    --
    I'm not in the business... I *am* the business.
    • (Score: 2) by WizardFusion on Thursday January 15 2015, @10:42AM

      by WizardFusion (498) on Thursday January 15 2015, @10:42AM (#135068) Journal

      Don't. Find a nice hobby instead. Unless you are willing to join a company that already has one and to enter as a mainframe novice on a novice's wage, this is not for you.

    • (Score: 2) by Nerdfest on Thursday January 15 2015, @11:20AM

      by Nerdfest (80) on Thursday January 15 2015, @11:20AM (#135071)

      Unless they're only running zLinux, stay away. The OS and tools are an absolute nightmare to use.

    • (Score: 4, Insightful) by PizzaRollPlinkett on Thursday January 15 2015, @12:14PM

      by PizzaRollPlinkett (4512) on Thursday January 15 2015, @12:14PM (#135076)

      You ---do--- ---not--- want to get into mainframes. Too many unemployed people with 20+ years' experience. Forget being hired as an entry level person. People with experience can't even find jobs. It's a race to the bottom. Not much development is being done on mainframes these days. Maintenance work and stuff, but very little of it, and there's a glut of people with 20+ years of CICS programming experience. In terms of systems programming, mainframes don't require much care and feeding these days. A lot of the systems programming stuff of yesterday is now automated. So there's not much demand for people who know how to configure and keep mainframes running. There's a lot of mainframe outsourcing going on today. One company can run umpteen different mainframes with a small staff. Trust me on this one, run away from mainframes as fast as you can and never look back.

      --
      (E-mail me if you want a pizza roll!)
      • (Score: 2) by E_NOENT on Thursday January 15 2015, @01:55PM

        by E_NOENT (630) on Thursday January 15 2015, @01:55PM (#135103) Journal

        Thanks for the input, folks.

        I remember being pretty impressed by a system called Linux/390 (I think) about fifteen years ago. It seemed crazy and powerful at the time, but I'm guessing that whole mess is generally a dead end.

        Appreciate being brought back to earth :D

        --
        I'm not in the business... I *am* the business.
        • (Score: 2) by Common Joe on Thursday January 15 2015, @03:20PM

          by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Thursday January 15 2015, @03:20PM (#135144) Journal

          If you hadn't asked the question, I might have. That means there are at least two people on here appreciative of the answers.

          TL;DR: Thanks for the question and answers.

        • (Score: 2) by PizzaRollPlinkett on Thursday January 15 2015, @04:39PM

          by PizzaRollPlinkett (4512) on Thursday January 15 2015, @04:39PM (#135168)

          IBM chases fads. Right now, they're chasing mobile transactions for some reason. A few years ago, they were chasing virtualization when it was big. They ported Linux to the mainframe architecture and got it running under VM, their virtual machine manager. (It's like VirtualBox, without an easy-to-use interface, easy configuration, or anything that would make it easy to use. Imagine a lot of little tables mapping actual hardware to virtual hardware, and a bizarre shell-like scripting environment that actually has commands with open parens and no close parens. It's that awful.) The idea was that you could run a bazillion Linux instances on your mainframe. Well, the whole effort went nowhere, because people wanted cloud computing and didn't want their own mainframes. They wanted instances on demand without managing hardware, and a nice web interface for management. (As a bonus, with mainframe Linux, you had to port any code you had to the mainframe architecture, unless it was Java or something.)

          Believe it or not, the mainframe is POSIX compliant and has what they call Unix Systems Services, a complete shell and utilities. The environment is called OMVS. The motivation was to be able to run TCP/IP style servers easily, and run Java and Websphere on the mainframe. OMVS has been much more successful than virtualized Linux. Most of the systems programming stuff comes via OMVS now. In fact, you basically can't even run a mainframe without OMVS active, because the networking support has hooks deep into OMVS.

          --
          (E-mail me if you want a pizza roll!)
  • (Score: 4, Insightful) by PizzaRollPlinkett on Thursday January 15 2015, @12:24PM

    by PizzaRollPlinkett (4512) on Thursday January 15 2015, @12:24PM (#135079)

    The "death of mainframe" meme was started by people trying to sell alternatives to the mainframe in the "client/server" days. Managers ate it up, because a commodity Intel server box was cheaper than a mainframe, but projects to migrate off of the mainframe were typically abysmal failures. The media took over "death of mainframe" as a storyline because journalists didn't know diddly-squat about mainframes and it sounded good. Even in 2015, this same story is being used because journalists still don't know diddly-squat about mainframes.

    The mainframe isn't dead, and has never been dying. Once an organization's data processing gets to a certain scale, there's simply no other hardware that can handle the transaction volumes of the companies on Fortune's lists. And these companies have invested decades into writing business process workflows using mainframes, so they're not about to throw this stuff out. They'll augment what they have with newer technologies, but they aren't going to rewrite core business logic. Your mobile app purchase is probably going through a CICS batch transaction.

    What's happening is that application development is moving from the native mainframe (COBOL + CICS) to things like Websphere (J2EE), and the mainframe itself is becoming a big transaction processing engine using DB2 as the database. There's still nothing at all that can scale to the level of a mainframe for transactions. Commodity PC servers aren't going to cut it.

    So the mainframe has not gone anywhere, and won't go anywhere until some other hardware can match the mainframe's transaction processing abilities.

    Every few years, IBM tries to "brand" the mainframe with whatever buzzwords are in business computing trade magazines. Apparently mobile is the new thing. This doesn't really change anything, it's just marketing stuff.

    BTW... another meme you'll hear is that we need more COBOL programmers. NO ONE WANTS COBOL PROGRAMMERS. They want people with CICS programming experience, which happens to use COBOL as its language, but if you are not a CICS expert, learning COBOL is a waste of time. All the COBOL programming you're likely to see is maintenance work on CICS transactions. (If you don't know, CICS is a hideous thing. You will have nightmares if you even see CICS code. If you actually understand CICS, you'll evolve into a hideous Guild navigator from David Lynch's Dune movie and spend the rest of your life in a tank breathing spice capsules.)

    --
    (E-mail me if you want a pizza roll!)
    • (Score: -1, Offtopic) by Anonymous Coward on Thursday January 15 2015, @01:05PM

      by Anonymous Coward on Thursday January 15 2015, @01:05PM (#135090)

      Actually, CISC is dead; it died around 1997. Now IBM uses RISC, but still supports all CISC instructions or "re-compiles" the object to the new machine's instruction set. Almost all of our objects have the "real" code (source or P-code) and the OS version number in them, so the just-in-time compiling is done at runtime, once, and saved for the new version. This even handles changes to the OS that don't change hardware but might give a software call better performance. Very handy when you are supporting multiple machines with different upgrade schedules: one object to install, and the systems adjust themselves.

      • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @04:58PM

        by Anonymous Coward on Thursday January 15 2015, @04:58PM (#135172)

        CICS (Customer Information Control System) is not CISC (complex instruction set computing).

        Here is a metaphor to guide you:
        CICS nowadays is like Cthulhu.
        CISC nowadays is like an aggressive transvestite.

        One is pure evil, the other is only bothersome...

        • (Score: 2) by Marand on Thursday January 15 2015, @11:00PM

          by Marand (1081) on Thursday January 15 2015, @11:00PM (#135235) Journal

          One is pure evil, the other is only bothering...

          Hey, don't leave us hanging! Which is which?

    • (Score: 2) by Nerdfest on Thursday January 15 2015, @01:36PM

      by Nerdfest (80) on Thursday January 15 2015, @01:36PM (#135097)

      Pretty much everything is better at handling the transaction volumes large companies need, especially commodity hardware, and it does so at a small fraction of the cost. Do you see Google, Facebook, etc. using *any* mainframes? No. Mainframes are not powerful, they are simply reliable, assuming you eliminate human error from the equation, which in my opinion is much more likely in mainframe environments with active development being done on them, because of an extremely poor tool set. They are extraordinarily expensive on a per-transaction basis. In environments requiring high availability you're generally better off using redundant commodity hardware. Some environments, like banking, make this difficult to implement because of the transactionality inherent in the business, but even then it can still be done.

      • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @02:27PM

        by Anonymous Coward on Thursday January 15 2015, @02:27PM (#135119)
        Did you just use Facebook as an example of how to do something well with computers? That's risky man, that's risky.
      • (Score: 2) by tibman on Thursday January 15 2015, @04:43PM

        by tibman (134) Subscriber Badge on Thursday January 15 2015, @04:43PM (#135170)

        I think the key difference is that some processes scale out and some can only scale up. You can design the processes to be either way. Most are not designed in any way at all and are forced into the "throw more hardware at it" category.

        --
        SN won't survive on lurkers alone. Write comments.
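
          tibman's scale-out/scale-up distinction above can be made concrete: work partitioned into independent shards can spread across nodes, while work funnelled through one shared state can only buy a bigger machine. A minimal sketch, with purely illustrative data:

```python
# Sketch of the scale-out case: per-shard work shares no state, so each
# shard could run on a separate worker or node. Illustrative data only.
from concurrent.futures import ThreadPoolExecutor

def process_shard(shard):
    """Independent per-shard work; no global state is touched."""
    return sum(shard)

shards = [[1, 2], [3, 4], [5, 6]]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(process_shard, shards))

total = sum(partials)  # the combine step is the only point of coordination
print(partials, total)  # [3, 7, 11] 21
```

          A process designed the other way, where every step reads and updates one global ledger, has no such partition and is stuck in the "throw more hardware at it" category on a single box.
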
        • (Score: 3, Interesting) by Nerdfest on Thursday January 15 2015, @08:54PM

          by Nerdfest (80) on Thursday January 15 2015, @08:54PM (#135215)

          The problem is that mainframes don't scale up or out well. They traditionally had exceptional I/O bandwidth, but even that is not that impressive these days. IBM is getting by almost exclusively on existing customers that can't afford to replace their software. The hardware is kept just within acceptable levels for them. The "MIPS" charges for actually using the stupid things come out of a different budget in most places as well, which helps make them look less expensive than they really are.

          I would guess lots are looking at them every year and thinking "I wish I'd started replacing those systems last year". This is why it sucks having tightly coupled software (and hardware) that needs to be replaced as an "all or nothing" operation.

      • (Score: 3, Insightful) by maxwell demon on Thursday January 15 2015, @07:40PM

        by maxwell demon (1608) on Thursday January 15 2015, @07:40PM (#135205) Journal

        Of course you cannot really compare Google and Facebook with a bank. Nobody is going to sue Facebook if a single Facebook post gets lost due to a hiccup. A single bank transaction getting lost due to a hiccup is a completely different story.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by tibman on Thursday January 15 2015, @09:32PM

          by tibman (134) Subscriber Badge on Thursday January 15 2015, @09:32PM (#135224)

          Suggesting that a cluster is more prone to hiccups than a mainframe?

          --
          SN won't survive on lurkers alone. Write comments.
          • (Score: 2) by Nerdfest on Friday January 16 2015, @04:53AM

            by Nerdfest (80) on Friday January 16 2015, @04:53AM (#135286)

            Well, it is. With a cluster, you need to handle the hiccups, failovers, etc. It's doable, but in general you need to do it yourself. With the mainframe it's handled for you, but at an obnoxious price, and with a very poor price/performance ratio. Pick your poison.
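
            A minimal sketch of the "do it yourself" part of cluster failover: try each replica in turn and surface an error only when all are down. The replica names and request function here are hypothetical:

```python
# Minimal client-side failover: try each replica in turn, as a cluster
# client must do itself, whereas a mainframe handles this transparently.
# Replica names and the request function are hypothetical.

class AllReplicasDown(Exception):
    pass

def call_with_failover(replicas, request):
    """Try replicas in order; return the first successful response."""
    last_error = None
    for node in replicas:
        try:
            return request(node)
        except ConnectionError as exc:  # a node hiccup
            last_error = exc            # remember it and fall through
    raise AllReplicasDown(f"no replica answered: {last_error}")

# Usage: the first node fails, the second answers.
def flaky_request(node):
    if node == "db-1":
        raise ConnectionError("db-1 timed out")
    return f"ok from {node}"

print(call_with_failover(["db-1", "db-2"], flaky_request))  # ok from db-2
```
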

            • (Score: 2) by tibman on Friday January 16 2015, @03:29PM

              by tibman (134) Subscriber Badge on Friday January 16 2015, @03:29PM (#135385)

              I don't think a cluster could be a cluster if it didn't handle failing nodes. But you are right that it must be dealt with.

              --
              SN won't survive on lurkers alone. Write comments.
              • (Score: 2) by sjames on Saturday January 17 2015, @09:28AM

                by sjames (2882) on Saturday January 17 2015, @09:28AM (#135621) Journal

                It depends on the application. Many clusters handle it by re-starting the job from a checkpoint. In some cases there is no checkpoint and they just restart the job from scratch.

                It adds a fair bit to the complexity of the software to handle anything more fine grained than that.

                Mainframes tend to use expensive and exotic hardware such that more than one CPU performs the same operations in lockstep and they vote on the answer. That allows them to detect errors immediately and shut down the CPU that loses the vote. It makes them fantastically expensive and slows them down but makes them very reliable.
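
                The lockstep-and-vote scheme described above is triple modular redundancy. A software sketch of the idea (real mainframes do this in hardware, per instruction, so this is only an illustration):

```python
# Software sketch of triple modular redundancy: run the same operation on
# three "CPUs", majority-vote the result, and flag the dissenting unit.
from collections import Counter

def vote(results):
    """Return (majority_answer, indices_of_disagreeing_units)."""
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three units disagree")
    bad = [i for i, r in enumerate(results) if r != winner]
    return winner, bad

# One unit suffers a transient fault and computes 43 instead of 42:
answer, failed_units = vote([42, 43, 42])
print(answer)        # 42: the majority answer survives
print(failed_units)  # [1]: unit 1 lost the vote and would be shut down
```
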

  • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @02:49PM

    by Anonymous Coward on Thursday January 15 2015, @02:49PM (#135129)

    Take this Wired shit to Slashdot.

  • (Score: 2) by FatPhil on Thursday January 15 2015, @06:17PM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday January 15 2015, @06:17PM (#135188) Homepage
    Either Wired or IBM is more interested in bullshit than in getting facts across.
    Nobody serious quotes transactions per day. You do that kind of thing if you want to make the number appear bigger, which, if you're the vendor, normally implies you're actually ashamed of it.

    My cock's over a billion angstroms long!

    2.5 billion transactions per day isn't even 2 million tpm. People have been hitting multi-mega-tpm for ages in TPC-C, the non-specialised transaction benchmark; that's a factor of 6 higher than IBM's figure. But not all 't' are equal. Which transactions are being measured? There are at least half a dozen to choose from. Not naming the benchmark actually being measured is, again, more bullshit from either Wired or IBM.

    My 5 year old phone measured 14.8 - beat that!

    Me: Eat facts!
    Dr. Teeth: No, no, no! *Need* facts. *Need* facts.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
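
    The per-day-to-per-minute arithmetic in the comment above checks out; a quick sanity check of the headline figure (plain division, no assumptions about which benchmark IBM meant):

```python
# IBM's headline figure converted to transactions per minute.
transactions_per_day = 2.5e9
minutes_per_day = 24 * 60  # 1440

tpm = transactions_per_day / minutes_per_day
print(f"{tpm:,.0f} tpm")  # 1,736,111 tpm: indeed under 2 million
```
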
  • (Score: 0) by Anonymous Coward on Thursday January 15 2015, @06:31PM

    by Anonymous Coward on Thursday January 15 2015, @06:31PM (#135196)

    Mainframes are needed by companies and institutions that already had been using them, and had requirements for absolutely rock solid reliability-availability-security-concurrency, not just nice shiny whitepapers promising the same. IBM has provided all this for many decades, albeit at premium cost.

  • (Score: 2) by TheRaven on Friday January 16 2015, @09:45AM

    by TheRaven (270) on Friday January 16 2015, @09:45AM (#135321) Journal
    The team that gets to design the z series CPUs makes everyone jealous. They're basically the only CPU design team around for whom cost is not one of the constraints. You can do some pretty fun things if you have customers willing to pay whatever it costs at the end (oh, these might fail. Let's stick on three and compare the results, and abort all in-flight instructions and trap if they disagree. ECC in all registers? Sure, why not...)
    --
    sudo mod me up
  • (Score: 0) by Anonymous Coward on Friday January 16 2015, @01:29PM

    by Anonymous Coward on Friday January 16 2015, @01:29PM (#135356)

    Instead, CIOs go to conferences and bring back buzzwords for either a server-side or a client-side tech. If the status quo is client-side, everyone starts selling server-side as the next big thing... and vice versa.

    It's just a never ending time and money waste.