posted by janrinok on Saturday May 11 2024, @01:02AM
from the old-space-heater dept.

Someone purchased the eight-year-old Cheyenne supercomputer for $480,085. The hardware is failing and the water-cooling system leaks. What would it be good for? Selling it for parts would flood the market, and testing those parts would take forever. The buyer also has to pay for transport from its current location. It was originally built by SGI.

https://gsaauctions.gov/auctions/preview/282996
https://www.popsci.com/technology/for-sale-government-supercomputer-heavily-used/
https://www.tomshardware.com/tech-industry/supercomputers/multi-million-dollar-cheyenne-supercomputer-auction-ends-with-480085-bid

Cheyenne Supercomputer - Water Cooling System

Components of the Cheyenne Supercomputer

Installed Configuration: SGI ICE™ XA.

E-Cells: 14 units weighing 1500 lbs. each.

E-Racks: 28 units, all water-cooled

Nodes: 4,032 dual socket units configured as quad-node blades

Processors: 8,064 units of E5-2697v4 (18-core, 2.3 GHz base frequency, Turbo up to 3.6GHz, 145W TDP)

Total Cores: 145,152

Memory: DDR4-2400 ECC single-rank, 64 GB per node, with 3 High Memory E-Cells having 128GB per node, totaling 313,344 GB

Topology: EDR Enhanced Hypercube

IB Switches: 224 units
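
As a quick sanity check on the totals above, here is a rough sketch; the nodes-per-E-Cell split is inferred from the counts in the listing, so treat it as arithmetic, not vendor documentation.

```python
# Sanity-check the listing's core and memory totals.
NODES = 4032
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 18                    # Xeon E5-2697v4

E_CELLS = 14
HIGH_MEM_E_CELLS = 3
nodes_per_cell = NODES // E_CELLS        # 288 nodes per E-Cell (inferred)

total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
high_mem_gb = HIGH_MEM_E_CELLS * nodes_per_cell * 128            # 128 GB/node cells
standard_gb = (E_CELLS - HIGH_MEM_E_CELLS) * nodes_per_cell * 64  # 64 GB/node cells

print(f"Cores:  {total_cores:,}")                   # 145,152
print(f"Memory: {high_mem_gb + standard_gb:,} GB")  # 313,344 GB
```

Both figures match the listing.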

Moving this system necessitates the engagement of a professional moving company. Please note that the four (4) attached documents detailing the facility requirements and specifications will be provided. Due to their considerable weight, the racks require experienced movers equipped with proper personal protective equipment (PPE) to ensure safe handling. The purchaser assumes responsibility for transferring the racks from the facility onto trucks using their own equipment.

Please note that fiber optic and CAT5/6 cabling are excluded from the resale package.

The internal DAC cables within each cell, although removed, will be meticulously labeled and packaged in boxes, facilitating potential future reinstallation.

Any ideas (serious or otherwise) about suitable uses for this hardware?


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by sgleysti on Saturday May 11 2024, @04:31PM (1 child)

    by sgleysti (56) Subscriber Badge on Saturday May 11 2024, @04:31PM (#1356555)

    I briefly contemplated making a small cluster. What seemed to give the best FLOPS/$ was the highest-end consumer-grade Ryzen CPUs. For the interconnect, I figured I could do a 5-node full mesh with 4-port 10G ethernet cards and give each node a direct connection to all the others. It didn't look too expensive because I wouldn't have needed a switch and could have used direct attach cables. I would have used the 1G motherboard ethernet and a switch for management.

    In the end, I figured that would have been a lot of money, time, and effort for something I didn't really need. Any modern processor these days is incredibly fast if you use a language that compiles to machine code and takes advantage of the vector units in the CPU.
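
    A rough sketch of the port and cable math for that switchless design (assuming the 5-node layout described above; the numbers are illustrative):

```python
from itertools import combinations

def full_mesh_requirements(n_nodes):
    """Ports per node and point-to-point cables for a switchless full mesh."""
    cables = len(list(combinations(range(n_nodes), 2)))  # one DAC per node pair
    ports_per_node = n_nodes - 1                          # a link to every other node
    return ports_per_node, cables

ports, cables = full_mesh_requirements(5)
print(f"5-node full mesh: {ports} x 10G ports per node, {cables} DAC cables")
# -> 4 ports per node (one quad-port NIC each) and 10 cables total
```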

  • (Score: 3, Interesting) by VLM on Saturday May 11 2024, @05:07PM

    by VLM (445) on Saturday May 11 2024, @05:07PM (#1356560)

    something I didn't really need

    Yeah, I wanted the experience of messing with very interesting infrastructure software, and I want reliable "production" storage (using Proxmox/Ceph right now); I was not motivated by FLOPS. Some are, which is fine. It's a big hobby, plenty of space.

    4-port 10G ethernet cards

    The economics are constantly changing. The 10G copper switch market right now seems to run a "meh" $70/port (fiber usually more), and PCI cards run $50 to $100 for single ports and $200 to $300 for multiports like your 4-port.

    So for five nodes your 4-port mesh would cost about $1250 just in cards alone, whereas using 1-port cards and a switch would cost 5 times $50 plus "about" $500 for a very small switch, so figure maybe $750. Cable cost can add up too: 5 cables for the switch design vs. 10 for the mesh.

    Strictly economically you'd do better with the switch design in 2024, although the economics have varied widely in the past.
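
    The same comparison in numbers, using the ballpark prices above (rough 2024 figures from this thread, not quotes):

```python
# Ballpark 2024 prices from the comment above (rough estimates, not quotes).
NODES = 5
QUAD_PORT_NIC = 250       # midpoint of the $200-$300 multiport card range
SINGLE_PORT_NIC = 50      # low end of the $50-$100 single-port range
SMALL_10G_SWITCH = 500    # "about $500 for a very small switch"

mesh_cards = NODES * QUAD_PORT_NIC                 # every node needs a quad-port card
switch_build = NODES * SINGLE_PORT_NIC + SMALL_10G_SWITCH

mesh_cables = NODES * (NODES - 1) // 2             # 10 DACs for a 5-node full mesh
switch_cables = NODES                              # one uplink per node

print(f"Full mesh:   ~${mesh_cards} in NICs, {mesh_cables} cables")
print(f"Switch star: ~${switch_build} in NICs + switch, {switch_cables} cables")
# -> roughly $1250 vs $750 before cabling
```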

    I'm using dual 10G copper ports as LAGs to each node and there are load balancing issues but I "can" run up to 20G in theory.

    Of course if you want the fun of running FRR or whatever routing platform on your partial mesh then you kind of have to run multi-port if the point is having fun with routing protocols. Although with a full mesh I guess you could just use static routing? I'd want to set up OSPF just for the LOLs "because I can" but not everyone enjoys routing as a hobby...

    The main rule of home lab is have fun, so here I am looking at ebay search results and daydreaming while writing this. Today on ebay I can get all the used Infiniband cards I want for about $50 each (probably at least half of them actually work), and switches that might or might not work run around $100 to $200. So for somewhat under a kilobuck (somewhere between the cost of your 10G multiport cards and my 10G switch design) I could play with Infiniband at home; how much is fun and bragging rights worth? I guess I could start small with one switch and two cards and see what happens...

    Another fun feature of InfiniBand, IIRC, is that there's a wide range of semi-compatible port speeds where the slowest is 10 Gb/s and the highest is unaffordable but something like 400 Gb/s, and secondly tuning Linux to use all that bandwidth under TCP/IP is an art form, which makes it "fun". At least I think it would be fun. Very few people can push 400 Gb/s in their basement; maybe I will be one of the first. Note that consumer SSDs crap out around 8 GB/s (roughly 64 Gb/s) because that's as fast as a PCIe 4.0 x4 link can run, which means I will have to do weird things with RAID arrays.
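
    To put that RAID remark in rough numbers (line rates only, protocol overhead ignored; the SSD figure assumes a PCIe 4.0 x4 consumer drive):

```python
import math

LINK_GBPS = 400        # fastest InfiniBand speed mentioned above, in gigabits/s
SSD_GB_PER_S = 8       # rough ceiling for a PCIe 4.0 x4 consumer NVMe drive

ssd_gbps = SSD_GB_PER_S * 8                   # convert GB/s to Gb/s
drives = math.ceil(LINK_GBPS / ssd_gbps)      # drives striped to fill the link

print(f"One SSD ~{ssd_gbps} Gb/s; need ~{drives} striped for {LINK_GBPS} Gb/s")
# -> roughly 7 fast NVMe drives in a stripe just to feed a single 400 Gb/s port
```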

    I've always wanted to have Infiniband in my basement so this might end up my summer project; we'll see. I'll probably do it "someday" anyway.

    So many other things I could do to fill my infinite spare time. Such as run 10G fiber from my basement "datacenter" to my office and put 10G in my office so I would have very fast access to my storage...