"Remember that one bug that had you tearing your hair out and banging your head against the wall for the longest time? And how it felt when you finally solved it? Here's a chance to share your greatest frustration and triumph with the community.
One that I vividly recall occurred back in the early '90s at a startup that was developing custom PBX hardware and software. There was the current development prototype rack and another rack for us in Quality Assurance (QA). Our shipping deadline for a major client was fast approaching, and the pressure level was high as development released the latest hardware and software for us to test. We soon discovered that our system would not boot up successfully. We were getting all kinds of errors; different errors each time. Development's machine booted just fine, *every* time. We swapped out our hard disks, the power supply, the main processing board, the communications boards, and finally the entire backplane in which all of these were housed. The days passed, and still the system failed to boot, giving us different errors on each attempt.
What could it be? We were all stymied and frustrated as the deadline loomed before us. It was then that I noticed the power strips on each rack into which all the frames and power supplies were plugged. The power strip on the dev rack was 12-gauge (i.e., rated for 20 amps), but the one on the QA rack was only 14-gauge (15 amps). The power draw from spinning up the drives was just enough to leave the system board under-powered during bootup.
We swapped in a new $10 power strip and it worked perfectly. And we made the deadline, too! So, fellow Soylents, what have you got? Share your favorite tale of woe and success and finally bask in the glory you deserve."
It was the night of January 25, 2003. I was working at a web hosting company, and we were migrating to a new datacenter: prep the new datacenter, switch off all the servers in the old location, move them, and switch them on again.
Hours and hours of racking, stacking, pulling cables, testing cables. All through the night, from 23:00 till 06:00 or so. We were beat, but we were done. We started flipping on switches and routers. All looked good. We started flipping on servers. Right about the time we switched on the last 10 or so servers, all the switches and routers lit up like a Christmas tree. Blinking lights started to furiously flicker. We thought there was something wrong with the last couple of servers, so we switched those off. The problem persisted. We restarted the switches. That didn't solve anything. We started to switch off servers rack by rack. By this time, customers were starting to wake up and call as well, since we had drifted outside the maintenance window. Everything went crazy.
After some more trial and error, we noticed that whenever we turned on Windows servers, the problem would return. Right about that time, our upstream network provider called. They had noticed issues on IP addresses that were running SQL Server, and they were blocking that traffic from that point on.
We had migrated a datacenter on the exact date SQL Slammer became active. Shittiest timing. Ever.
Two different network-gone-apeshit stories for me:
Doctor: "Do you hear voices?"
Me: "Only when my bluetooth is charged."
I remember those PCMCIA card network adapters fondly. There were a number of different brands, and they all used a similar card-to-RJ45 dongle. They were even kind enough to share the same physical plug. Except there didn't seem to be any standardisation between brands on what pinouts to use for the plug.
In a lot of cases, plugging the dongle from one card into a different card would lock up the laptop hard. I had a drawer full of cards and dongles and had to work out which dongle went with which card. That was a fun day...