posted by requerdanos on Sunday September 05 2021, @11:05PM   Printer-friendly
from the virtually-indestructible dept.

A brief overview of IBM's new 7 nm Telum mainframe CPU:

From the perspective of a traditional x86 computing enthusiast—or professional—mainframes are strange, archaic beasts. They're physically enormous, power-hungry, and expensive by comparison to more traditional data-center gear, generally offering less compute per rack at a higher cost.

This raises the question, "Why keep using mainframes, then?" Once you hand-wave the cynical answers that boil down to "because that's how we've always done it," the practical answers largely come down to reliability and consistency. As AnandTech's Ian Cutress points out in a speculative piece focused on the Telum's redesigned cache, "downtime of these [IBM Z] systems is measured in milliseconds per year." (If true, that's at least seven nines.)
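As a quick sanity check on that claim, a short sketch (function and variable names are mine, not from the article) converts annual downtime into a count of "nines" of availability:

```python
import math

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def nines(downtime_seconds_per_year):
    """Count leading nines of availability for a given annual downtime,
    e.g. 3.15 s/year of downtime -> 7 nines (99.99999% uptime)."""
    return math.floor(-math.log10(downtime_seconds_per_year / SECONDS_PER_YEAR))

# Seven nines of availability allows roughly 3.15 seconds of downtime per year:
print(nines(3.15))   # 7
# Downtime of a few milliseconds per year would be around ten nines:
print(nines(0.003))  # 10
```

So "milliseconds per year," taken literally, is an even stronger claim than seven nines; seven nines is the conservative floor.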

IBM's own announcement of the Telum hints at just how different mainframe and commodity computing's priorities are. It casually describes Telum's memory interface as "capable of tolerating complete channel or DIMM failures, and designed to transparently recover data without impact to response time."

When you pull a DIMM from a live, running x86 server, that server does not "transparently recover data"—it simply crashes.
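IBM's mainframe memory subsystem has historically used RAIM (Redundant Array of Independent Memory), which spreads data plus parity across memory channels in a RAID-like fashion. The toy sketch below (purely illustrative, not IBM's actual implementation) shows the core XOR-parity idea: any one lost "channel" can be rebuilt from the survivors:

```python
from functools import reduce
from operator import xor

def make_parity(channels):
    """Parity channel: XOR of corresponding words on each data channel."""
    return [reduce(xor, words) for words in zip(*channels)]

def reconstruct(surviving, parity):
    """Rebuild the one failed channel from the survivors plus parity."""
    return [reduce(xor, words) ^ p for words, p in zip(zip(*surviving), parity)]

channels = [[0x11, 0x22], [0x33, 0x44], [0x55, 0x66]]
parity = make_parity(channels)

# Channel 1 "fails"; its contents are rebuilt transparently from the rest:
lost = reconstruct([channels[0], channels[2]], parity)
print([hex(w) for w in lost])  # ['0x33', '0x44']
```

The real hardware does this in the memory controller on every access, which is how a dead DIMM can go unnoticed by software.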

Telum is designed to be something of a one-chip-to-rule-them-all for mainframes, replacing a much more heterogeneous setup in earlier IBM mainframes.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Monday September 06 2021, @03:17PM (2 children)

    by Anonymous Coward on Monday September 06 2021, @03:17PM (#1174973)

    On a conceptual level, this is entirely possible.

    Consider ... oh, let's call it a stack of Raspberry Pis, all plugged together with Ethernet. Set it up so that multiples can gang together and literally duplicate each other's workload (thus offering redundancy with failure tolerance, analogous to the old Tandem systems). Have everything done with asynchronous message-passing so that any one failure doesn't cascade and lock up other units. Have them all able to do specialised jobs such as storage, transaction handling, UI, blahblahblah...

    You could even make it out of COTS units that one could plug together ad hoc to resize or repurpose your installation. You could have multiple parallel back ends, all hot-pluggable, so that losing your Ethernet switch wouldn't kill it all ... and so on.

    And in the end it wouldn't cost that much. The real expense would be in getting the software done, tested and verified.
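The duplicated-workload idea in this comment can be sketched in a few lines. This is a toy asyncio sketch (all names are mine; it is nothing like an actual Tandem or mainframe implementation): every replica runs the same job, one crash is absorbed rather than cascading, and the answer is taken by majority vote over the surviving results:

```python
import asyncio
from collections import Counter

async def worker(name, job, fail=False):
    """One node in a redundant gang; each runs the identical job."""
    if fail:
        raise RuntimeError(f"{name} died")
    await asyncio.sleep(0)  # stand-in for real asynchronous message passing
    return job * job        # the stand-in 'workload': square the input

async def redundant_call(job, replicas):
    """Run the same job on every replica; tolerate individual failures
    by majority vote over whatever results survive."""
    results = await asyncio.gather(
        *(worker(f"pi{i}", job, fail=f) for i, f in enumerate(replicas)),
        return_exceptions=True,  # one crash must not take down the gather
    )
    ok = [r for r in results if not isinstance(r, Exception)]
    if not ok:
        raise RuntimeError("all replicas failed")
    value, _ = Counter(ok).most_common(1)[0]
    return value

# Three replicas, one of which dies mid-job; the answer still comes back:
print(asyncio.run(redundant_call(7, replicas=[False, True, False])))  # 49
```

As the comment says, the hardware side of this is cheap; the hard (and expensive) part is making the real software behave as cleanly as the toy does.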

  • (Score: 0) by Anonymous Coward on Monday September 06 2021, @04:24PM (1 child)

    by Anonymous Coward on Monday September 06 2021, @04:24PM (#1175017)

    "Watch me pull a rabbit out of my ass"
    "But Raspfairy Pye Shit Never Works"
    "This time FOR SURE"

    PRESTO!!!

    • (Score: 0) by Anonymous Coward on Tuesday September 07 2021, @03:58AM

      by Anonymous Coward on Tuesday September 07 2021, @03:58AM (#1175212)

      That's not a rabbit!