posted by cmn32480 on Friday January 15 2016, @03:21PM
from the live-long-and-prosper dept.

El Reg reports

A chap named Ross says he "Just switched off our longest running server".

Ross says the box was "Built and brought into service in early 1997" and has "been running 24/7 for 18 years and 10 months".

"In its day, it was a reasonable machine: 200MHz Pentium, 32MB RAM, 4GB SCSI-2 drive", Ross writes. "And up until recently, it was doing its job fine." Of late, however the "hard drive finally started throwing errors, it was time to retire it before it gave up the ghost!" The drive's a Seagate, for those of looking to avoid drives that can't deliver more than 19 years of error-free operations.

The FreeBSD 2.2.1 box "collected user session (connection) data summaries, held copies of invoices, generated warning messages about data and call usage (rates and actual data against limits), let them do real-time account [inquiries] etc".

[...] All the original code was so tightly bound to the operating system itself that later versions of the OS would have required (and ultimately did require) substantial rework.

[...] Ross reckons the server lived so long due to "a combination of good quality hardware to start with, conservatively used (not flogging itself to death), a nice environment (temperature around 18C and very stable), nicely conditioned power, no vibration, hardly ever had anyone in the server room".

A fan dedicated to keeping the disk drive cool helped things along, as did regular checks of its filters.

[...] Who made the server? [...] The box was a custom job.

[...] Has one of your servers beaten Ross' long-lived machine?

I'm reminded of the Novell server that worked flawlessly despite being sealed behind drywall for 4 years.
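
Purely as illustration, since the article only describes what that code did, not how: below is a minimal, hypothetical sketch of the kind of usage-against-limit warning check mentioned above ("rates and actual data against limits"). Every name, field, and threshold in it is invented.

    from typing import Optional

    class Account:
        """Hypothetical per-customer record built from collected session summaries."""
        def __init__(self, name: str, data_used_mb: float, data_limit_mb: float):
            self.name = name
            self.data_used_mb = data_used_mb
            self.data_limit_mb = data_limit_mb

        def usage_warning(self, warn_fraction: float = 0.8) -> Optional[str]:
            """Return a warning message if usage is near or over the limit, else None."""
            if self.data_used_mb >= self.data_limit_mb:
                return f"{self.name}: over limit ({self.data_used_mb:.0f}/{self.data_limit_mb:.0f} MB)"
            if self.data_used_mb >= warn_fraction * self.data_limit_mb:
                return f"{self.name}: nearing limit ({self.data_used_mb:.0f}/{self.data_limit_mb:.0f} MB)"
            return None

    # Example run over a couple of made-up accounts.
    for acct in (Account("alice", 950, 1000), Account("bob", 120, 1000)):
        message = acct.usage_warning()
        if message:
            print(message)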


Original Submission

 
  • (Score: 1, Insightful) by Anonymous Coward on Friday January 15 2016, @03:27PM

    by Anonymous Coward on Friday January 15 2016, @03:27PM (#289908)

    (Warning: I may be tangential here) We live in a world where even the same products are not consistent from one batch to the next. Why would a brand produce the same quality of product as they did nearly 2 decades ago? Making products that last forever is a surefire way to put yourself out of business. You can just make an inferior product, pump money into marketing to get a decent market share, then outcompete the people with a quality product because your customers have to replace your product all the time. Win win win.

    • (Score: 0) by Anonymous Coward on Friday January 15 2016, @03:35PM

      by Anonymous Coward on Friday January 15 2016, @03:35PM (#289909)

      In this case, the brand was Ross Chap XL.

    • (Score: 2) by bart9h on Friday January 15 2016, @04:44PM

      by bart9h (767) on Friday January 15 2016, @04:44PM (#289937)

      That, and Seagate today is not the same Seagate as two decades ago.

      I always buy hard drives from the same brand, but that "favorite" brand sometimes changes. It was Seagate in the past; now it's Western Digital.

      • (Score: 2) by RedGreen on Friday January 15 2016, @05:06PM

        by RedGreen (888) on Friday January 15 2016, @05:06PM (#289946)

        Indeed, I gave up on Seagate after the 7200.11 nightmare. I have one drive of the seven left from that fiasco, and it was the only one that failed in warranty, so the one I have is the newer replacement model. Actually, now that I think about it, the external 1.5TB that I gave to my brother for a Time Machine backup is still alive and kicking; it was bought at the same time, so two left out of eight...

        --
        "I modded down, down, down, and the flames went higher." -- Sven Olsen
    • (Score: 1, Funny) by Anonymous Coward on Friday January 15 2016, @05:06PM

      by Anonymous Coward on Friday January 15 2016, @05:06PM (#289947)

      Soylent News: Slashdot's Yesterday News... Today!!!.

      • (Score: 0) by Anonymous Coward on Friday January 15 2016, @11:31PM

        by Anonymous Coward on Friday January 15 2016, @11:31PM (#290061)

        Here we can avoid the posts by Nerval's Lobster. This is much better, no?

    • (Score: 2) by http on Saturday January 16 2016, @12:31AM

      by http (1920) on Saturday January 16 2016, @12:31AM (#290073)

      I have to say this explicitly in case anyone misses it: you're ignorant and part of the problem.

      Making a product that lasts forever is a surefire way to drive everyone else out of the market. That many manufacturers are buying into Walmart's drivel that the purchase price is the only thing that matters doesn't negate that reality.

      --
      I browse at -1 when I have mod points. It's unsettling.
  • (Score: 5, Interesting) by opinionated_science on Friday January 15 2016, @03:41PM

    by opinionated_science (4031) on Friday January 15 2016, @03:41PM (#289911)

    At my dept as a grad student, there was an i286 box named after the man who built it ("Fred", say). It had software on it to work with an experiment constructed for it on a microscope, and to perform data capture and analysis.

    That machine might well still be there.

    This is why Microsoft is so desperate to get everyone onto a subscription. We simply don't need to upgrade a machine once it starts working. There is forced obsolescence via crippled software.

    So long as a machine is not constructed out of inferior components (I bet we all had a machine with those faulty capacitors - I had one monitor, a TV and a PSU!!), there's a good chance that 10 years is not a radical length of time.

    But you'll need a UPS.... poor power ruins computers...

    • (Score: 2) by dyingtolive on Friday January 15 2016, @03:53PM

      by dyingtolive (952) on Friday January 15 2016, @03:53PM (#289916)

      Did you go to your local electronics shop for the replacement capacitor to repair the TV and monitor? Those are hilariously braindead easy to fix, since they're usually in the internal power supply of the device, which is generally a very simple board as far as the soldering skills required. I made some easy money over the last 10 years finding dead monitors, spending a buck on a capacitor, and then selling them for 30-40 bucks. Keeps that stuff out of landfills too, at least for a while.

      I didn't try PSUs. I would much rather just replace one of those than risk taking out a motherboard due to poor soldering skills on my part. The few motherboards I tried I never had any luck repairing, but then again, that's to be expected. There's just too much going on there.

      --
      Don't blame me, I voted for moose wang!
      • (Score: 2) by opinionated_science on Friday January 15 2016, @04:22PM

        by opinionated_science (4031) on Friday January 15 2016, @04:22PM (#289925)

        I did something socially responsible. I printed up a note, and donated the monitor and TV, pointing out the fault. I repaired the PSU, but it failed for another reason.

        That was 5 years ago and I am *still* getting emails from Amazon trying to sell me another PSU.

        They were old, and I guess I was lucky to get some use out of them.

        Properly made hardware will last a long time, if it works for the first month!

  • (Score: 3, Interesting) by PizzaRollPlinkett on Friday January 15 2016, @03:45PM

    by PizzaRollPlinkett (4512) on Friday January 15 2016, @03:45PM (#289913)

    You couldn't do that now. Last year, I lost half a dozen hard disks, burning through a supply I thought would last me for years. They would literally shake apart and fail almost instantly. They were all Seagate, and I switched to WD, but I don't know if that will help. A gigabit switch with no moving parts that had been running for years suddenly failed. I had a power supply blow up and fry a motherboard. Two consecutive replacement Gigabyte motherboards were DOA, and I finally got an ASUS one that would work. I've never seen anything like this in my entire life. Stuff is just junk now. I'd pay more for quality, but all the hard disk vendors I used to use like Fujitsu and Maxtor are gone, and Gigabyte was one of the quality mobo brands.

    --
    (E-mail me if you want a pizza roll!)
    • (Score: 2) by goodie on Friday January 15 2016, @04:07PM

      by goodie (1877) on Friday January 15 2016, @04:07PM (#289922) Journal

      Interesting, I've actually always had horrible reliability from Fujitsu drives...

    • (Score: 2) by dyingtolive on Friday January 15 2016, @04:36PM

      by dyingtolive (952) on Friday January 15 2016, @04:36PM (#289933)

      It's always really weird for me to hear stories like this. I normally get at least 5-6 years out of a drive, even nowadays. I retire them early due to capacity before they go bad at this point. I had a 500GB that's about 8 years old that I gave to my brother, and to the best of my knowledge he's still using it. I mean, I HAVE had drives go bad on me, just not at the rates described here. The last hard drives I bought were in 2010 and 2011, and they're still *happily churning away. It's not that I doubt you; I just haven't seen results similar to what everyone talks about. Maybe I've just been lucky.

      * Well, the 2010 one (a Seagate, interestingly) is disturbingly noisy, but periodic sector checks don't show any issues. I'm keeping an eye on it, but if it goes, I'm not losing anything important.

      --
      Don't blame me, I voted for moose wang!
      • (Score: 3, Insightful) by PizzaRollPlinkett on Friday January 15 2016, @04:49PM

        by PizzaRollPlinkett (4512) on Friday January 15 2016, @04:49PM (#289939)

        Weird is right. I lost more hard disks last year than I have in my entire life. These drives had a few bad sectors, but then fell apart sometimes in a day or two. All drives have bad sectors, and if the count starts getting high, it's time to replace them, but these drives would go from having a few bad sectors to being unusable in a few days or even hours. I have never seen anything like it. I had several spare drives, and went through half a dozen or so in a few months. They were all Seagate, which I started getting when they bought Maxtor. After that string of bad drives, I switched to WD Black, and have not had a catastrophic failure like this in several months.

        In fairness to Seagate, the Seagate drive in the box where the power supply blew up did survive. I actually put it in another box and kept using it a few months, even though it acted flaky.

        --
        (E-mail me if you want a pizza roll!)
        • (Score: 0) by Anonymous Coward on Friday January 15 2016, @05:59PM

          by Anonymous Coward on Friday January 15 2016, @05:59PM (#289968)

          All drives have bad sectors, and if the count starts getting high, it's time to replace them, but these drives would go from having a few bad sectors to being unusable in a few days or even hours.

          A while back, Google did a study of their hard drives, comparing failures against SMART data and whatnot. One of their conclusions was that any drive should be replaced if it reports more than zero reallocated sectors.
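
          For anyone who wants to automate that check, here's a minimal sketch. It assumes smartmontools is installed, that smartctl is run as root, and that the drive reports the classic ATA attribute table; the device names are examples only.

            import subprocess
            import sys

            def reallocated_sectors(device):
                """Return the raw Reallocated_Sector_Ct (SMART attribute 5), or None if unreadable."""
                out = subprocess.run(["smartctl", "-A", device],
                                     capture_output=True, text=True, check=False).stdout
                for line in out.splitlines():
                    fields = line.split()
                    # ATA table columns: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
                    if len(fields) >= 10 and fields[0] == "5":
                        try:
                            return int(fields[9])  # first token of RAW_VALUE
                        except ValueError:
                            return None
                return None

            if __name__ == "__main__":
                for dev in sys.argv[1:] or ["/dev/sda"]:
                    count = reallocated_sectors(dev)
                    if count is None:
                        print(f"{dev}: could not read attribute 5 (not ATA, or smartctl missing?)")
                    elif count > 0:
                        print(f"{dev}: {count} reallocated sectors -- plan a replacement")
                    else:
                        print(f"{dev}: no reallocated sectors reported")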

      • (Score: 2, Insightful) by Anonymous Coward on Friday January 15 2016, @04:51PM

        by Anonymous Coward on Friday January 15 2016, @04:51PM (#289942)

        What most people are seeing now is survivor bias.

        Most computer equipment seems to have 3 different life strategies.

        1) burns out in under a month
        2) burns out just after warranty
        3) lasts 15 years
              3a) retired because it no longer serves purpose
              3b) eats itself because of some other issue (usually power)

        This has mostly held true for as long as I have messed around with computers. That flooding in Thailand really did a number on WD's and especially Seagate's reliability for 1.5/3TB drives, with Seagate at one point having a ~40% failure rate. Right now HGST is where the reliability stats seem to be leaning (1-2%). At least until WD finishes eating them.

      • (Score: 2) by RedGreen on Friday January 15 2016, @05:16PM

        by RedGreen (888) on Friday January 15 2016, @05:16PM (#289954)

        I have a 400GB Seagate that dates from 2008 or so, if the date on the dead WD 500GB beside me is any indication (it just died over Christmas, and had the dreaded click of death since I got it). The Seagate still runs fine to this day. The oldest I have in use is a 160GB Seagate from god knows when; it still runs in one of my backup servers and shows no sign of going tits-up any time soon.

        --
        "I modded down, down, down, and the flames went higher." -- Sven Olsen
        • (Score: 2) by dyingtolive on Friday January 15 2016, @05:34PM

          by dyingtolive (952) on Friday January 15 2016, @05:34PM (#289961)

          I used to do end user data recovery at a mom and pop computer shop that had a service department (think geek squad). Standing policy was that if there was a failed drive, we'd ask if they wanted us to attempt recovery, and offered no warranties or guarantee on the drive or the data.

          We did some really weird things to help increase chances of recovery. Freezing, replacing the little circuit board on the bottom of the drive. It all depended on what the issue was. Since we had no visibility into the drives, I have no idea what actually worked and what didn't. Eventually we figured out that when a drive had the clicks, you could sometimes stress it just right (read: rubber mallet taps while running) to keep it going just a little while longer. I wouldn't recommend it, but it did work for us at least a couple of times. Note that we only did that after we'd tried everything else, including calling the customer back and offering to send it to a clean-room recovery company that we knew of but weren't directly affiliated with; no one really wanted to pay their kind of costs. There were probably at least three or four drives we managed to get usable data from with the mallet.

          --
          Don't blame me, I voted for moose wang!
          • (Score: 2) by RedGreen on Friday January 15 2016, @07:02PM

            by RedGreen (888) on Friday January 15 2016, @07:02PM (#289996)

            Never heard of the mallet before, except when I got ready to vent some frustration and destroy the mofo, but I did do the freezing thing a few times. Sometimes it worked, sometimes not; it was the luck of the draw there.

            --
            "I modded down, down, down, and the flames went higher." -- Sven Olsen
            • (Score: 0) by Anonymous Coward on Friday January 15 2016, @08:05PM

              by Anonymous Coward on Friday January 15 2016, @08:05PM (#290011)

              The associated phenomenon even has a name: Stiction [google.com]

              -- OriginalOwner_ [soylentnews.org]

          • (Score: 2) by Kromagv0 on Friday January 15 2016, @07:28PM

            by Kromagv0 (1825) on Friday January 15 2016, @07:28PM (#290001) Homepage

            I remember doing the drop from a chair onto some berber carpet as a last-ditch effort at data recovery. Sometimes those shocks are enough to get things loose enough to work for a bit. Sometimes it worked, but most of the time it didn't. A similar thing can be done with car starters, where sometimes if you whack them with a screwdriver or non-marring hammer you can get them to work another couple of times so you can get to the parts store and buy a new one.

            --
            T-Shirts and bumper stickers [zazzle.com] to offend someone
    • (Score: 0) by Anonymous Coward on Friday January 15 2016, @08:26PM

      by Anonymous Coward on Friday January 15 2016, @08:26PM (#290016)

      Head crashes, firmware corrupting the disk, and thermal failure of the controller boards.

      How do I know this? I have had 5 Seagate devices in the past ~8 years. Of those, 4 are still operating, and of those, 3 will either go offline with errors or silently corrupt data on disk if adequate ventilation/cooling is not applied. (Sometimes even if it is: I've had disks running ~42C that exhibited these problems, likely due to heat buildup between the board and the drive casing. Stick a fan blowing between them and all of a sudden the drive works flawlessly.)

      That said, I've migrated to all WDs this iteration. HGST actually has a better reputation now, but not enough to offset the price difference between WD Green desktop drives and the bottom-tier HGST drives. As long as you eliminate the idle spindown timeout on the WD Green drives, they have excellent reliability and performance.

      Maybe in another generation or two I will go back to Seagate when their reputation has improved, but given that similar issues appear endemic across 2-3 generations of drives, I will be reluctant to buy new ones for a while.
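
      If you want numbers rather than guesswork for the heat and head-parking issues above, here's a small sketch that just polls two SMART attributes: temperature (194) and load-cycle count (193). It assumes smartmontools and root, and the device name is only an example; actually changing the WD Green idle timer is done with separate tools (WD's wdidle3, or the third-party idle3-tools), which isn't shown here.

        import subprocess
        import sys
        import time

        def smart_raw(device, attr_id):
            """Return the first token of an ATA SMART attribute's RAW_VALUE, or None."""
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True, check=False).stdout
            for line in out.splitlines():
                fields = line.split()
                if len(fields) >= 10 and fields[0] == str(attr_id):
                    return fields[9]  # RAW_VALUE column; trailing text like "(Min/Max ...)" is dropped
            return None

        if __name__ == "__main__":
            dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
            while True:
                temp = smart_raw(dev, 194)    # Temperature_Celsius
                cycles = smart_raw(dev, 193)  # Load_Cycle_Count (head parking on WD Green)
                print(f"{dev}: temperature={temp}C load_cycles={cycles}")
                time.sleep(600)               # poll every 10 minutes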

      • (Score: 2) by J053 on Friday January 15 2016, @10:39PM

        by J053 (3532) <{dakine} {at} {shangri-la.cx}> on Friday January 15 2016, @10:39PM (#290046) Homepage
        I don't know why my experience differs so greatly from others', but...
        I have over 50 2TB or larger drives in operation right now. Over the past 3-4 years, I've had at least 8 WD drives fail - and 1 Seagate. Most of the Seagates are their Barracuda consumer drives, although in the last 2-3 years we've been buying the Constellation series. The WDs have all been "Enterprise" drives. Don't even get me started about Maxtor - 5 or 6 years ago we were buying them by the carton because we had so many failures.
    • (Score: 3, Informative) by chewbacon on Saturday January 16 2016, @08:09PM

      by chewbacon (1032) on Saturday January 16 2016, @08:09PM (#290416)

      I can say the same. I used to use Seagate exclusively, but after the third disk failure in 2 years I backed away. Even when under warranty, the replacement process was a hassle and service sucked. I've been using WD and, other than a drive enclosure short thanks to Icy Dock, have had far fewer failures.

  • (Score: 4, Informative) by ThePhilips on Friday January 15 2016, @03:49PM

    by ThePhilips (5677) on Friday January 15 2016, @03:49PM (#289914)

    The drive's a Seagate, for those of you looking to avoid drives that can't deliver more than 19 years of error-free operations.

    20 years ago, Seagate was one of the top innovative manufacturers.

    Today, Seagate is just a shadow of its former self: yeah, they have some R&D and do something new, but overall they are just a bunch of factories which were either sold off by other manufacturers or were acquired because ex-competitors left the HDD business altogether.

    The internet is full of accounts not only of how bad the Seagate drives are today, but most disturbingly, of how uneven the quality/performance is between drives which, though they bear the very same model number, are made at different factories.

    I personally always avoided Seagate. 20 years ago their drives were noisy (the signature scratching noise). But today their drives are just junk. The two new Seagate drives which went through my hands in the last 5 years both had non-zero S.M.A.R.T. reallocation counters already after copying the initial images onto them!

  • (Score: 2) by isostatic on Friday January 15 2016, @03:55PM

    by isostatic (365) on Friday January 15 2016, @03:55PM (#289917) Journal

    The power to keep the machine running would have been about $100 a year, so it was barely worth replacing; however, what was the plan if the room caught fire? Did they really run 18 years without a backup system?

    • (Score: 2) by goodie on Friday January 15 2016, @04:12PM

      by goodie (1877) on Friday January 15 2016, @04:12PM (#289923) Journal

      Depends on the type of business... Many small shops don't have the means (or the idea) to implement redundancy. It's just the way it is. Dunno if anybody knows Steve's, the music store around here. When I went there a couple of years ago, they were still running on monochrome terminals held together with duct tape and printing on dot-matrix. It worked well, mind you. Anyway, last year I went back and it was the same system but running on thin clients (VMware terminals, I think) from what I could see. Staff didn't seem too happy; they were having printing issues, etc.

      • (Score: 2) by isostatic on Friday January 15 2016, @04:25PM

        by isostatic (365) on Friday January 15 2016, @04:25PM (#289928) Journal

        But in those cases, you don't necessarily need instant redundancy - some idea of how to rebuild the machine to fulfill the business function is what matters. In your example, "they were still running on monochrome terminals held together with duct tape and printing on dot-matrix".

        If one broke, they'd be able to use another. Worst case, they could use a pen, paper and mental arithmetic^W^W a calculator app.

        • (Score: 3, Informative) by goodie on Friday January 15 2016, @04:49PM

          by goodie (1877) on Friday January 15 2016, @04:49PM (#289938) Journal

          Absolutely. And it's one of the hallmarks of music stores in most places I've visited in North America. No matter how recent the store is, a lot of the sales process is still very manual and involves you paying the bill at one place, carrying your bill to another to get it stamped, then bringing it back so that somebody calls to get the gear from the warehouse. And it's the same process whether you're buying a $100 Fender knockoff or a $2000 Gibson.

          My example was also just meant to show that we lack some context for this. I mean, we get the idea of what the machine was used for, but given its processing power, I doubt it was for a telco or something similar ;). It would be interesting to see the type of company that had it running. Where I used to work (a small software company), there was 1 old machine (7+ years): the one handling the fax system, which is only fitting given the age of that tech :).

          • (Score: 4, Informative) by isostatic on Friday January 15 2016, @05:32PM

            by isostatic (365) on Friday January 15 2016, @05:32PM (#289958) Journal

            I work at a global broadcaster with several 24-hour news channels; I won't say which one. About half of the news video that's broadcast comes in from correspondents around the world, and it relies on one of three boxes - one c.10 year old HP server and two c.8 year old HP servers. If one of them breaks, I just drop a new machine in place, job done.

            That's fine, but if the software relied on a specific HP DL360 G5 feature, I'd be shitting myself.

  • (Score: 3, Interesting) by SomeGuy on Friday January 15 2016, @04:37PM

    by SomeGuy (5632) on Friday January 15 2016, @04:37PM (#289934)

    The sad thing is most kids today will either laugh at this like it is not needed or be totally amazed that it is actually possible.

    From the perspective of consumer devices it might not matter so much, but when a big business spends huge bucks to put a big system in place it is perfectly reasonable and desirable for that system to remain in operation for perhaps 30 years.

    In the realm of computers, we got a bit deceived by the rapid POSITIVE changes as parts got smaller and faster; it was impractical to keep the old stuff in place because the new stuff offered so much more. Devices that could have lasted a long time appeared to get "old" quickly. But now that things have plateaued, we are beginning to witness technological regressions as vendors sabotage things to bring in new sales, and for some reason people accept this.

    But when a device happens to meet everything that is needed of it, and nothing "new" has any advantage over it, then there is no reason it should not stay in place for many years.

    • (Score: 2) by opinionated_science on Friday January 15 2016, @04:51PM

      by opinionated_science (4031) on Friday January 15 2016, @04:51PM (#289943)

      The thing about continuity is that it allows the accumulation of project ideas over long periods of time.

      This is why I have not touched an M$ machine in a decade. My data is on NFS and ZFS (rolling systems), and I never have to worry about losing access to data because of some arbitrary corporate-enforced obsolescence.

      And yes, keeping a bootable old image for virtualisation is an amazing advance.

      I cannot wait until the 3D Flash gets us all onto *really* low power storage, instead of spinning rust....

    • (Score: 2) by isostatic on Friday January 15 2016, @04:55PM

      by isostatic (365) on Friday January 15 2016, @04:55PM (#289945) Journal

      That's right, but the problem is that maintaining that device becomes an issue - if you can't get spare parts for it anymore because time has moved on, you'll need to replace it eventually, preferably at a time when the loss of the device will not cause an adverse business event (imagine your sales system breaking on Black Friday, or just before Christmas, and needing 5 days to rebuild).

      That's not to say 20 years is a long time for a device to perform a function, but at some point something bad will happen, and what do you do then?

      My laptop is 6 years old, and I don't expect to replace it for another year or two; however, if it breaks (perhaps I knock it off the balcony), I know I'm offline for a couple of days while I buy a new laptop from the shop and spend some time installing the OS and programs, and restoring critical stuff from backups.

      If I have a critical program that relies on the specific thinkpad I use though, I'd be screwed, and hoping ebay will solve the problem.

      My oldest PCs in day-to-day usage were a few NT4 machines. They were finally removed in 2012; they hadn't been turned off for over 5 years, and we were worried about them coming back up. We've got an ATM router running a link from the UK to Moscow, and we had to move the UK end from one building to another. We moved it onto a portable UPS and shifted it across town in a van, continuously powered, because we were concerned it wouldn't start up.

      All things break. Some things can have the service they provide replaced easily (break a window, replace it with a new one), but often in computing that's not so easy to do - then again, it never has been. One case in point: the shuttle used 8086s, which were great in 1981. 20 years later, though, they were hard to get [nytimes.com].

    • (Score: 0) by Anonymous Coward on Friday January 15 2016, @08:11PM

      by Anonymous Coward on Friday January 15 2016, @08:11PM (#290012)

      I started my career in 1996 at a nuclear power station, one of the oldest commercial ones in the world. It was commissioned in 1962 and was very much the cutting edge of 1950s technology. The primary reactor temperature monitoring computer was a Honeywell 316 which was commissioned in 1972, the year my parents were married. It monitored temperatures in both reactors in real time. There were green-screen ASCII displays for each reactor and a real TTY for the console. It had 32k of magnetic core store, a built-in 160k hard disk and a paper tape drive for bootstrap. The old Control and Instrumentation guys had a funny chant and a dance to help them remember the toggle switch pattern/sequence (16 switches on the front panel) for entering the boot loader. It was immune to the Y2K bug because it didn't care how many days there were in the month or what year it was. There was no filesystem on the disk; you accessed it with an octal monitor program, a sector at a time. There was nothing wrong with it other than that the disk started to die in summer 2000, and it was replaced by a pair of PDP-11/70s running RSX-11M which had been the backup system since the early 1980s.

  • (Score: 3, Interesting) by tempest on Friday January 15 2016, @04:44PM

    by tempest (3050) on Friday January 15 2016, @04:44PM (#289936)

    I retired a box around 2009 that was used for employee check-ins, although I'm not sure it qualifies as a server. No one knows how old the box is, but it ran on a 486 of some sort. Its main task was running a DOS application on top of Windows 3.1.

    As far as I know it never died as long as it had electricity, but it suffered a lot of power outages. There were a few of these computers at the company originally, all in very dirty conditions, but even the "clean" PCs at our company didn't have working fans in the 90s due to our extremely high-dust environment, yet they still ran. The battery on the board probably died after a decade, causing the clock to drift, so the fix was to set the PC clock each time the punch-ins were read (once a day). There was a bit of head scratching with our remote locations when our network formally adopted TCP and our frame relay was to go away, as none of them had IP networking installed. This was in the early 2000s, mind you, but thankfully the Win3.1 IP stack was still available and installed without a problem.

    Originally there were 4 of these PCs, but one disappeared (can't remember why), and two died around 2003-2005. When I pulled the last one from service it was still working fine. It only had to be replaced because 1) I had no spare, 2) the AT keyboard situation became cumbersome, and 3) the company had moved to hardware time-clocks in other locations, so that was chosen to replace the PC.

    Currently it sits on the "hall of fame-er" shelf along with some serial terminals and another PC holding "important HR data"; no one has any idea how old that one is (likely one of the first PCs ever used at the company, and we were early adopters from what I hear - this is long before my time). Before I parked it, I released the turbo button to give it a rest (don't think it was hooked up though :)

  • (Score: 4, Informative) by NotSanguine on Friday January 15 2016, @06:14PM

    My oldest system acts as a router/firewall. It's a Pentium Pro with 96MB RAM purchased in 1994 and has been continuously in its current role since 1996.

    It's been rebooted numerous times, but the system itself has yet to fail.

    I've had to replace (cheapo) ethernet cards and I upgraded (by choice, not due to failure) the hard drive (IDE) to 8GB.

    Granted, routing/firewalling for a residential internet connection isn't very taxing, but this system may well be older than some Soylentils.

    WRT the system, the biggest question I have is what to replace it with when it finally fails. Most consumer-grade routers are crap and don't provide the level of flexibility of my 22 year old device. The Wifi routers (used in bridge mode on my network) I've purchased haven't lasted more than three years or so.

    If I'm lucky, the family members who clean out my house after I die will wonder if it's a museum piece and get confused when 'net access stops working when they unplug it.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 0) by Anonymous Coward on Friday January 15 2016, @06:48PM

      by Anonymous Coward on Friday January 15 2016, @06:48PM (#289991)

      The Wifi routers (used in bridge mode on my network) I've purchased haven't lasted more than three years or so.

      I would look to your power. It sounds like the wall warts are burning out and not giving out enough power. I went through 3 wall warts before I found a better solution: I plugged them into a decent power strip that smooths out power spikes. They need to be replaced every 10 or so years though.

    • (Score: 0) by Anonymous Coward on Friday January 15 2016, @07:38PM

      by Anonymous Coward on Friday January 15 2016, @07:38PM (#290003)

      K6-2 400 w/ 128MB, 10GB HD, 3x PCI NIC, but it's the backup... 486SX, 12MB, 273MB HD with 2x ISA 10baseT NIC.

      The K6-2's HD has been screaming for a while - 6 years. Just filled the box with cardboard. Quiet and working great.

      I do have an RPi as a bridge router via my phone, so all bases are covered.

    • (Score: 1) by Frost on Saturday January 16 2016, @11:45AM

      by Frost (3313) on Saturday January 16 2016, @11:45AM (#290231)

      Pentium Pro with 96MB RAM purchased in 1994

      Wow, 1994? I built a cutting edge PPro system in 1996 at great expense. I can't imagine using that level of hardware for just firewall/routing. Hmm ... wikipedia says [wikipedia.org] the PPro didn't go on sale until November 1995, which is consistent with my recollection. I suspect you got yours in 1996 too, if not later.

  • (Score: 4, Interesting) by Unixnut on Friday January 15 2016, @07:16PM

    by Unixnut (5779) on Friday January 15 2016, @07:16PM (#289997)

    Those old Sparc machines were built like tanks; I can attest to it. This was while I worked at a very large company that shall not be named (they are well known, and one of the biggest and richest companies on earth). They had an Ultra60 (running some ancient Solaris) which had been there since 1998, running non-stop. And it wasn't doing unimportant stuff; this Ultra60 was the fulcrum upon which the entire firm relied for its functioning. Some of the most important data paths through the company went through this one system (there was a pair of Ultra60s, in case failover was needed).

    Essentially, the bank's systems grew around the Ultra60, and people just came to rely on it always being up, there and doing its job. After years and years, nobody knew how the Ultra60 worked except one guy who was an old-school Unixbeard. He basically maintained that machine. When his retirement finally came up, upper management collectively shat a brick, worrying that everything was going to die once he walked out the door (they offered him so much money to stay on as a consultant, but he wasn't interested).

    After he left, I ended up maintaining the Ultra60 (because I was the only one interested in what the old guy did, and paid attention to what he told me), and it was a magnificent beast: very underpowered compared to the latest machines, but it would sit pegged at 100% for months on end without breaking a sweat. Push one of those new machines to 100% for longer than a day or so and they start keeling over, or having a performance drop-off (not to mention Linux just kicks the bucket).

    When I left the company, the Ultra60 was still going strong, and I don't doubt that deep in the bowels of the building, under the layers of "private cloud", "web 2.0 dynamic", "node.js" and other bloated crap, that machine is still chugging away, making sure work gets done.

    The main reason nobody replaced the machine (and why you end up with these ancient systems running forever) is that it ran some custom software that was written decades ago (in Perl + C), and

    a) the guy who wrote it has long since left and nobody can get in contact with him
    b) nobody can read the code, there was no source for the C, and the perl was just line noise
    c) It was hacked together in places, and there was no guarantee that someone didn't come to rely on some odd behaviour or some really old function that has been superseded

    and most importantly:

    d) If you do replace it without a hitch, nobody will notice nor will you get a raise/promotion. However if you mess it up and break it, the whole company stops working, and you will be kicked out of the company so fast you will leave skin marks on the parking lot.

    Essentially, it wasn't worth it for anyone to risk sticking their neck out for it, so it will keep chugging along, I guess, until it finally dies (if it does, the result will hit the news, so I will be informed one way or another :) )

    • (Score: 0) by Anonymous Coward on Saturday January 16 2016, @02:55PM

      by Anonymous Coward on Saturday January 16 2016, @02:55PM (#290281)

      Sun built solid hardware back in the day. The Sparc-based pizza boxes and bread boxes just ran & ran until the SCSI disks died. The Ultra 1 and Ultra 60s were solid too. They tried to cut costs with the Ultra 5s and 10s. We had some 440s and 880s that lasted too!

      • (Score: 2) by Unixnut on Sunday January 17 2016, @08:27AM

        by Unixnut (5779) on Sunday January 17 2016, @08:27AM (#290615)

        Indeed they did, and we still had V440s in production. Some teams swore by them for their reliability and sustained performance (especially compared to the x86 alternatives), and were unwilling to let go of the machines. It was only after Oracle took over Sun and hiked the licensing costs by 10 times that the accounting department and management started pressuring everyone to move off Sparc/Solaris.

        The Ultra60, however, didn't cost much (if anything), due to being out of support by Sun/Oracle - assuming they even knew of its existence at the company.

    • (Score: 1) by plnykecky on Sunday January 17 2016, @11:02AM

      by plnykecky (4276) on Sunday January 17 2016, @11:02AM (#290666)

      We have had a SPARC 5V working as a Scanning Tunneling Microscope control computer since 1999, but the computer was already refurbished when we got it. It runs Solaris 2, I think, and is hooked up by a 10cm-wide ribbon bus to an 8086 processor that collects data in real time and hands it over to the SPARC. The disk was replaced some 6 years ago, using dd to clone the contents. I also had to change the SPARC computer at some point and hack its hardware ID to force the old SW license to work.

      It hangs occasionally but otherwise works fine. We still have the original tapes with the software for it, but no one has needed a reinstall since 1999.

      Next week it is going to be replaced by a control computer running Win7. How sad...

  • (Score: 4, Interesting) by J053 on Friday January 15 2016, @09:41PM

    by J053 (3532) <{dakine} {at} {shangri-la.cx}> on Friday January 15 2016, @09:41PM (#290030) Homepage
    Still in use: a VAXstation 4000 Model 90, purchased in 1992, running VAX/VMS 5.4. It drives a SCSI-IEEE488 converter that controls a moving platform to switch the beam of a telescope to different detectors. We've replaced a power supply and a couple of disks (do you have any idea how hard it is to find 4GB SCSI disk drives these days?) - luckily we once had several of these, so we have parts. I'm hoping we can finally shut this thing down sometime this year (we're developing a VME-based system to do the same thing).
  • (Score: 0) by Anonymous Coward on Friday January 15 2016, @10:09PM

    by Anonymous Coward on Friday January 15 2016, @10:09PM (#290037)

    I am pretty sure this would run in a VM, so there's really no reason to have risked it that long on old hardware. (Still, it needed to be upgraded for security reasons, but at least there'd be no physical risk.)