
posted by cmn32480 on Friday January 15 2016, @03:21PM
from the live-long-and-prosper dept.

El Reg reports

A chap named Ross says he "Just switched off our longest running server".

Ross says the box was "Built and brought into service in early 1997" and has "been running 24/7 for 18 years and 10 months".

"In its day, it was a reasonable machine: 200MHz Pentium, 32MB RAM, 4GB SCSI-2 drive", Ross writes. "And up until recently, it was doing its job fine." Of late, however the "hard drive finally started throwing errors, it was time to retire it before it gave up the ghost!" The drive's a Seagate, for those of looking to avoid drives that can't deliver more than 19 years of error-free operations.

The FreeBSD 2.2.1 box "collected user session (connection) data summaries, held copies of invoices, generated warning messages about data and call usage (rates and actual data against limits), let them do real-time account [inquiries] etc".

[...] All the original code was so tightly bound to the operating system itself that later versions of the OS would have required (and ultimately did require) substantial rework.

[...] Ross reckons the server lived so long due to "a combination of good quality hardware to start with, conservatively used (not flogging itself to death), a nice environment (temperature around 18C and very stable), nicely conditioned power, no vibration, hardly ever had anyone in the server room".

A fan dedicated to keeping the disk drive cool helped things along, as did regular checks of its filters.

[...] Who made the server? [...] The box was a custom job.

[...] Has one of your servers beaten Ross' long-lived machine?

I'm reminded of the Novell server that worked flawlessly despite being sealed behind drywall for 4 years.
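For the curious, the "warning messages about data and call usage (rates and actual data against limits)" part of a billing box like that can be pictured with a minimal sketch along these lines. Everything here (field names, thresholds, the function itself) is a hypothetical illustration; Ross's actual code was never published.

    # Hypothetical sketch of usage-vs-limit warning generation.
    # Names and thresholds are invented for illustration only.
    WARN_THRESHOLDS = (0.8, 0.9, 1.0)  # warn at 80%, 90%, and 100% of the plan limit

    def usage_warnings(account, used_mb, limit_mb):
        """Return a warning string for each threshold the account's usage has crossed."""
        warnings = []
        for fraction in WARN_THRESHOLDS:
            if used_mb >= limit_mb * fraction:
                warnings.append(
                    f"Account {account}: {used_mb:.0f} MB of {limit_mb:.0f} MB used "
                    f"({int(fraction * 100)}% threshold reached)"
                )
        return warnings

    if __name__ == "__main__":
        for line in usage_warnings("example-account", used_mb=3700, limit_mb=4000):
            print(line)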


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by SomeGuy on Friday January 15 2016, @04:37PM

    by SomeGuy (5632) on Friday January 15 2016, @04:37PM (#289934)

    The sad thing is most kids today will either laugh at this like it is not needed or be totally amazed that it is actually possible.

    From the perspective of consumer devices it might not matter so much, but when a big business spends huge bucks to put a big system in place it is perfectly reasonable and desirable for that system to remain in operation for perhaps 30 years.

    In the realm of computers, we got a bit deceived by the rapid POSITIVE changes: as parts got smaller and faster, it was impractical to keep the old stuff in place because the new stuff offered so much more. Devices that could have lasted a long time appeared to get "old" quickly. But now that things have plateaued, we are beginning to witness technological regressions as vendors sabotage things to bring in new sales, and for some reason people accept this.

    But when a device happens to meet everything that is needed of it, and nothing "new" has any advantage over it, then there is no reason it should not stay in place for many years.

  • (Score: 2) by opinionated_science on Friday January 15 2016, @04:51PM

    by opinionated_science (4031) on Friday January 15 2016, @04:51PM (#289943)

    The thing about continuity is that it allows the accumulation of project ideas over long periods of time.

    This is why I have not touched a M$ machine in a decade. My data is on NFS and ZFS (rolling systems), and I never have to worry about losing access to data because of some arbitrary corporate enforced obsolescence.

    And yes, keeping a bootable old image for virtualisation is an amazing advance.

    I cannot wait until the 3D Flash gets us all onto *really* low power storage, instead of spinning rust....

  • (Score: 2) by isostatic on Friday January 15 2016, @04:55PM

    by isostatic (365) on Friday January 15 2016, @04:55PM (#289945) Journal

    That's right, but the problem is that maintaining that device becomes an issue - if you can't get spare parts for it anymore because time has moved on, you'll need to replace it eventually, preferably at a time when the loss of the device will not cause an adverse business event (imagine your sales system breaking on Black Friday, or just before Christmas, and needing 5 days to rebuild).

    That's not to say 20 years isn't a long time for a device to perform a function, but at some point something bad will happen, and what do you do then?

    My laptop is 6 years old, and I don't expect to replace it for another year or two; however, if it breaks (perhaps I knock it off the balcony), I know I'm offline for a couple of days while I buy a new laptop from the shop and spend some time installing the OS and programs, and restoring critical stuff from backups.

    If I had a critical program that relied on the specific ThinkPad I use, though, I'd be screwed and hoping eBay would solve the problem.

    My oldest PCs in day-to-day usage were a few NT4 machines; they were finally removed in 2012 and hadn't been turned off for over 5 years - we were worried about whether they'd come back up. We've got an ATM router running a link from the UK to Moscow, and we had to move the UK end from one building to another. We moved it onto a portable UPS and shifted it across town in a van, continuously powered, because we were concerned it wouldn't start up.

    All things break. Some things can have the service they provide replaced easily (break a window, replace it with a new one), but in computing that's often not easy to do, and it never has been. A case in point: the space shuttle used 8086s, which were great in 1981, but 20 years later they were hard to get [nytimes.com].

  • (Score: 0) by Anonymous Coward on Friday January 15 2016, @08:11PM

    by Anonymous Coward on Friday January 15 2016, @08:11PM (#290012)

    I started my career in 1996 at a nuclear power station, one of the oldest commercial ones in the world. It was commissioned in 1962 and was very much the cutting edge of 1950s technology. The primary reactor temperature monitoring computer was a Honeywell 316, which was commissioned in 1972, the year my parents were married. It monitored temperatures in both reactors in real time. There were green-screen ASCII displays for each reactor and a real TTY for the console. It had 32k of magnetic core store, a built-in 160k hard disk, and a paper tape drive for bootstrap. The old Control and Instrumentation guys had a funny chant and a dance to help them remember the toggle switch pattern/sequence (16 switches on the front panel) for entering the boot loader. It was immune to the Y2K bug because it didn't care how many days there were in the month or what year it was. There was no filesystem on the disk; you accessed it with an octal monitor program, a sector at a time. There was nothing wrong with it other than the disk started to die in summer 2000, and it was replaced by a pair of PDP-11/70s running RSX-11M which had been the backup system since the early 1980s.
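    For readers who have never worked without a filesystem, a loose modern analogue of that sector-at-a-time, octal-monitor style of access might look like the sketch below. This is not Honeywell 316 code; the device path is hypothetical and raw reads usually require root.

        # Loose modern analogue of sector-at-a-time raw disk access with no filesystem.
        # The device path below is hypothetical, and raw reads typically require root.
        import os

        SECTOR_SIZE = 512  # bytes per sector on the hypothetical device

        def read_sector(device_path, sector):
            """Read one raw sector from the device and return it as bytes."""
            fd = os.open(device_path, os.O_RDONLY)
            try:
                os.lseek(fd, sector * SECTOR_SIZE, os.SEEK_SET)
                return os.read(fd, SECTOR_SIZE)
            finally:
                os.close(fd)

        def dump_octal(data, words_per_line=8):
            """Print 16-bit words in octal, roughly how an octal monitor would display them."""
            for i in range(0, len(data), 2 * words_per_line):
                chunk = data[i:i + 2 * words_per_line]
                words = [int.from_bytes(chunk[j:j + 2], "big") for j in range(0, len(chunk), 2)]
                print(" ".join(f"{w:06o}" for w in words))

        if __name__ == "__main__":
            dump_octal(read_sector("/dev/sda", sector=0))  # hypothetical device path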