posted by janrinok on Sunday March 03 2019, @02:06PM
from the Robbie-the-Robot-had-solved-this-problem-in-1956 dept.

Back in 2017, two high-powered GNU/Linux computers were sent into orbit. They are long overdue for retrieval but, more than 530 days later, are still running. The goal of the project was to test the durability of such systems in preparation for travel to Mars, where data must be processed on site because of the delay in sending it to Earth and waiting for the results to be transmitted back. So far, autonomous management software has handled all of the hardware problems.

The servers were placed in an airtight box with a radiator that is hooked up to the ISS water-cooling system. Hot air from the computers is guided through the radiator to cool down and then circulated back.

Mr Kasbergen said there had been problems with the redundant power supply as well as some of the redundant solid-state drives.

But he said the failures were handled by the autonomous management software that was part of the experiment.
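The article doesn't say how HPE's management software actually works, but the general pattern for this kind of fault handling is a supervisory loop that health-checks redundant components and shifts work away from anything that misbehaves. A minimal illustrative sketch in Python - the unit names, the polling interval, and the is_healthy() placeholder are all assumptions, not details of the Spaceborne Computer:

    import time

    # Hypothetical redundant units; nothing here comes from HPE's actual software.
    UNITS = ["node-a", "node-b"]
    active = UNITS[0]

    def is_healthy(unit: str) -> bool:
        # Real checks might read SMART attributes, ECC error counters,
        # PSU telemetry, or a watchdog heartbeat. Placeholder: always healthy.
        return True

    def supervise(poll_seconds: float = 10.0) -> None:
        """Health-check every unit; fail over if the active one stops responding."""
        global active
        while True:
            status = {u: is_healthy(u) for u in UNITS}
            if not status[active]:
                survivors = [u for u, ok in status.items() if ok]
                if survivors:
                    print(f"{active} failed; moving workload to {survivors[0]}")
                    active = survivors[0]
                else:
                    print("no healthy units left; entering safe mode")
                    return
            time.sleep(poll_seconds)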

The devices will need to be inspected back on Earth to find out what went wrong.

Earlier on SN:
Supercomputer on ISS will soon be Available for Science Experiments (2018)
HPE "Supercomputer" on the ISS Survives for 340 Days and Counting (2018)
HPE Supercomputer to be Sent to the ISS (2017)



 
  • (Score: 5, Informative) by physicsmajor on Sunday March 03 2019, @03:34PM (5 children)

    by physicsmajor (1471) on Sunday March 03 2019, @03:34PM (#809428)

    Not sure if you're being serious, but the real reason for this is that really advanced technology - especially modern RAM, and to a lesser extent CPU/GPU dies and SSDs - is highly susceptible to radiation. And there's a lot of that in orbit, with way more beyond the Van Allen belts (so any mission to the Moon or Mars will have a bunch of it to deal with). Up until now, the actual flight computers have been incredibly simple and based on OLD tech: it's slower, but its larger features are also less susceptible. This experiment was to determine if we can properly harden more modern stuff.

  • (Score: 2, Disagree) by Runaway1956 on Sunday March 03 2019, @04:05PM (4 children)

    by Runaway1956 (2926) Subscriber Badge on Sunday March 03 2019, @04:05PM (#809440) Journal

    I was less than half serious - but there is some serious there. For the most serious part, the environment means nothing at all to the OS. Linux just doesn't care if some part of the hardware dies a fiery death, so long as the most vital portions of the hardware survive. The rest of the seriousness stems from the fact that a vital piece of hardware, such as navigation computers, can certainly be contained within a shielded bit of the spaceship. It's pointless and stupid to suppose that a ship can be swathed in a quarter inch of lead, but they can certainly install a lead box large enough to shield a typical EATX server board. For the rest, it's just snark.
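    As a rough sanity check on the lead-box idea: only the EATX footprint (roughly 30.5 × 33 cm) and lead's density (about 11.34 g/cm³) in the sketch below are standard figures; the enclosure dimensions are guesses for illustration.

        LEAD_DENSITY = 11.34            # g/cm^3
        WALL = 0.25 * 2.54              # quarter-inch wall, in cm

        # Assumed inner dimensions of a box that clears an EATX board
        # (30.5 x 33 cm) with some headroom -- these numbers are guesses.
        L, W, H = 40.0, 35.0, 15.0      # cm

        surface_area = 2 * (L * W + L * H + W * H)   # cm^2, thin-wall approximation
        lead_volume = surface_area * WALL            # cm^3
        mass_kg = lead_volume * LEAD_DENSITY / 1000

        print(f"~{mass_kg:.0f} kg of lead")          # roughly 36 kg

    Tens of kilograms is heavy but launchable for one critical box; scaling the same thickness up to a whole hull is where the mass budget explodes, which is the comment's point.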

    • (Score: 4, Informative) by physicsmajor on Sunday March 03 2019, @04:17PM (2 children)

      by physicsmajor (1471) on Sunday March 03 2019, @04:17PM (#809443)

      There's only so much you can do in terms of shielding. Alpha and beta particles you can shield against, but the worst of them emit secondary photons, which can be difficult. High-energy gamma bursts are frequent enough to be considered common and must be designed for - there is no practical way we have to shield against those (you'd need thousands of tons of lead surrounding everything, and even then it would merely reduce the dose, not eliminate it). Then we have neutrons. Those penetrate most stuff rather well, with water being the easiest shield we've got. It's even less feasible to carry both huge water and huge lead shields.

      So short of perhaps lassoing a big asteroid and putting your stuff in the very middle before you leave, which isn't feasible given current tech, these events are going to bake your computers.
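      To put numbers on "merely reduce, not eliminate": straight-through photon shielding follows an exponential law, I = I0 * exp(-mu * x). A quick sketch using a ballpark linear attenuation coefficient for ~1 MeV gammas in lead (the 0.8/cm figure is only illustrative; tabulated NIST data should be used for anything real):

          import math

          MU_LEAD_1MEV = 0.8    # 1/cm, rough value for ~1 MeV photons in lead (illustrative)

          def transmitted_fraction(thickness_cm: float, mu: float = MU_LEAD_1MEV) -> float:
              """Fraction of photons passing straight through a slab (ignores scatter buildup)."""
              return math.exp(-mu * thickness_cm)

          for t in (0.635, 2.5, 5.0, 10.0):   # ~1/4", ~1", ~2", ~4" of lead
              print(f"{t:5.2f} cm of lead -> {transmitted_fraction(t):.4f} transmitted")

      Even ten centimetres of lead still passes a small fraction straight through, and higher-energy cosmic-ray secondaries attenuate far less, which is why designing the electronics to tolerate hits beats trying to stop every hit.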

      • (Score: 2, Informative) by pTamok on Monday March 04 2019, @08:41AM (1 child)

        by pTamok (3042) on Monday March 04 2019, @08:41AM (#809723)

        There's more to cosmic rays [wikipedia.org] than just alpha, beta, and gamma rays (helium nuclei, electrons, and high-energy photons), and they can have energies far higher than those of radioactive decay events (there is a nice graph of cosmic-ray flux against particle energy in the article). That means what you might think of as adequate shielding on Earth against typical radioactive-decay 'rays' isn't anywhere near enough beyond the atmosphere, which by its very bulk shields us from a lot of nasties - that and the magnetosphere.
        As an illustration, there are ultra-high-energy and extreme-energy cosmic rays with energies exceeding the so-called Greisen–Zatsepin–Kuzmin limit (GZK limit): "This limit should be the maximum energy of cosmic ray protons that have traveled long distances (about 160 million light years), since higher-energy protons would have lost energy over that distance due to scattering from photons in the cosmic microwave background (CMB)." [wikipedia.org]
        A well known exemplar is the Oh-My-God particle [wikipedia.org]:

        In October of 1991, the FE1 detector observed an air shower with an energy of 3.2×10²⁰ eV. This corresponds to ~50 joules or ~12 calories, or roughly the kinetic energy of a well-pitched baseball.
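        The quoted figures check out; the conversion is just two standard constants (1 eV = 1.602e-19 J, 1 cal = 4.184 J):

            energy_ev = 3.2e20
            joules = energy_ev * 1.602e-19      # ~51 J
            calories = joules / 4.184           # ~12 cal
            print(f"{joules:.1f} J, {calories:.1f} cal")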

        If one of those hits your CPU or DRAM, it's not going to do a lot of good. If you want to design for seriously long uptime in space, your architecture needs to be able to cope with high-energy cosmic radiation, probably by having a certain minimum size for critical features, and also by duplicating features so they can run in resilient/redundant groups, such that knocking one out of operation doesn't stop things from working. You also need to think about how data is protected when in use - memory buses and other internal data-transfer components are not magically immune to the effects of cosmic radiation - so having the means to detect errors while data is in memory, in the CPU, in long-term storage, and at all points in between is necessary. If your life depends on a bit not being flipped when it shouldn't be by a capricious cosmic ray, a whole-system approach is needed.
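        One classic way to build the resilient/redundant groups described above is triple modular redundancy: run the same computation on three independent units and take a majority vote, so a single upset unit cannot silently corrupt the result. A minimal sketch - the voting logic is standard TMR, but the replica values are invented for illustration:

            from collections import Counter

            def vote(results):
                """Majority vote over redundant results; flag any disagreement for scrubbing."""
                value, hits = Counter(results).most_common(1)[0]
                if hits < 2:
                    raise RuntimeError("no majority -- more than one unit upset")
                if hits < len(results):
                    print("minority disagreement; schedule a scrub/reset of the odd unit out")
                return value

            # Simulate one replica taking a single-bit upset in its copy of the answer.
            replica_outputs = [42, 42, 42 ^ (1 << 3)]   # third unit's bit 3 flipped
            print(vote(replica_outputs))                 # -> 42, the fault is outvoted

        ECC/SECDED codes play the analogous role for DRAM and buses: add enough redundancy that a single flipped bit is detected and corrected instead of being fatal.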

    • (Score: 0) by Anonymous Coward on Sunday March 03 2019, @04:46PM

      by Anonymous Coward on Sunday March 03 2019, @04:46PM (#809452)

      Linux just doesn't care if some part of the hardware dies a fiery death, so long as the most vital portions of the hardware survive.

      I've had a soundcard disappear from a mobo. Linux didn't give a damn, just stopped loading the corresponding sound modules automatically. :)

      Easier than plug&play. It was the dying mobo of an old laptop that sometimes froze the system. Otherwise it worked fine, apart from first the freezing and then the soundcard getting fried. It was an HP laptop, so it was full of dust and impossible to clean; the fan was literally the last component to come off in disassembly. 18 different types of screws, if memory serves...