posted by janrinok on Friday January 27 2023, @01:51PM   Printer-friendly
from the windows-tco dept.

Developer Robert Graham has written a retrospective on how, thanks to two design choices, his proprietary software was able to detect the Microsoft Sapphire Worm, also known as SQL Slammer, as it hit. Those choices were, first, a poll-mode driver instead of an interrupt-driven one and, second, protocol analysis that recognized the worm's behavior signature rather than pattern matching.
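
The second choice is concrete enough to sketch. Slammer fit in a single UDP datagram to port 1434 (the SQL Server Resolution Service) whose first payload byte, 0x04, requests an instance by name; the exploit worked by making that name far longer than the small stack buffer meant to hold it. A length check on that field flags the behavior itself rather than any particular byte pattern. Below is a minimal illustrative sketch in C, not Graham's actual code; the 64-byte threshold is an assumption for illustration:

    #include <stdint.h>
    #include <stddef.h>

    #define SSRS_CLNT_UCAST_INST 0x04  /* "give me this instance by name" */
    #define SSRS_MAX_SANE_NAME   64    /* assumed threshold, for illustration */

    /* Return nonzero if a UDP/1434 payload looks like an SSRS overflow.
     * Legitimate instance-name requests are tiny; Slammer's payload was
     * 376 bytes, so checking the length catches the behavior no matter
     * how the exploit bytes themselves are arranged. */
    static int ssrs_overflow_suspect(const uint8_t *payload, size_t len)
    {
        if (len < 1 || payload[0] != SSRS_CLNT_UCAST_INST)
            return 0;               /* not an instance-name request */
        return len > SSRS_MAX_SANE_NAME;
    }

This is why pattern-matching engines needed a new signature while a protocol analyzer did not: the check fires on the overflow condition, not on the worm's specific bytes.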

An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2-million packets-per-second) was impossible, because everyone knew that computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better. Network appliances are now written using these things. Now you'd expect something like Snort to keep up and not get overloaded with interrupts. What makes me bitter is that back then, this was inexplicable magic.

I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this driver, if you want more info.
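
For readers who haven't seen a poll-mode receive path, here is a minimal sketch of the idea in terms of DPDK, one of the frameworks Graham mentions. All of the EAL and port setup (rte_eal_init(), rte_eth_dev_configure(), and so on) is omitted, and BURST_SIZE and the single queue are illustrative assumptions:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Busy-poll receive loop: instead of sleeping until the NIC raises an
     * interrupt per packet, the core spins and pulls packets in batches.
     * Amortizing the per-packet cost this way is what makes millions of
     * packets per second feasible. */
    static void rx_loop(uint16_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {  /* never block, never take an interrupt */
            uint16_t n = rte_eth_rx_burst(port_id, 0 /* queue */, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < n; i++) {
                /* ... inspect rte_pktmbuf_mtod(bufs[i], uint8_t *) here ... */
                rte_pktmbuf_free(bufs[i]);  /* return the mbuf to its pool */
            }
        }
    }

The trade is explicit: the polling core runs at 100% whether traffic arrives or not, in exchange for never paying interrupt latency or suffering livelock under load.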

When it hit in January 2003, the Microsoft Sapphire Worm, also known as SQL Slammer, spread across the Internet with the infected population doubling every 8.5 seconds, reaching more than 90% of vulnerable, networked Windows systems within 10 minutes.
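
The arithmetic behind that summary is worth spelling out: 10 minutes is 600 s, or 600 / 8.5 ≈ 70 doubling periods, and unchecked growth of 2^70 ≈ 1.2 × 10^21 dwarfs the roughly 75,000 vulnerable hosts counted in published post-mortems. In other words, the doubling rate guaranteed the worm would saturate the vulnerable population, limited only by available bandwidth, well inside that 10-minute window.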


Original Submission

  • (Score: 5, Interesting) by VLM on Friday January 27 2023, @02:16PM (3 children)

    by VLM (445) on Friday January 27 2023, @02:16PM (#1288911)

    This is not a sea story; I wuz there.

    I was on call that night (lucky me), and at about 2am I had to admin down an ethernet port on a smart ethernet switch because a data center customer's MSSQL server got infected and was impacting everyone. "Back in the day," ethernet switch fabrics were sold, even in "enterprise" switches, that could not handle line rate between all ports simultaneously, but I did not expect latency spikes when just one port was maxed out; kind of an interesting experience. There used to be quite a performance gap between consumer, prosumer, and "real pro" ethernet switches; I don't care enough to keep up with current-era hardware, but I do wonder whether that gap still exists.

    The overwhelming reaction from everyone at work the next morning was something like "Were they high, putting an unfirewalled MSSQL server port bare on the internet for anyone to mess with, and then never patching?" The customer, of course, was quite mad that I admined his port down, but his server was dead in the water anyway, so whatever.

    As an architectural issue there would be nothing "wrong" with using SQL ports instead of an HTTP-based REST API, since you could do all that CRUD stuff just fine with either. However, just like elasticsearch in 2023, back in 2003 you'd have corporate support only for MSSQL version 1.2.3.4.5.00000001.2, and if you upgraded to patch a security hole it was no longer certified; your app would probably still work, but nobody would support it. So you'd end up with a theoretically supported system that had a bare DB port on an unpatched server anyone could hit...

    • (Score: 4, Insightful) by canopic jug on Friday January 27 2023, @02:58PM (2 children)

      by canopic jug (3949) Subscriber Badge on Friday January 27 2023, @02:58PM (#1288922) Journal

      From what I remember, starting in December 2002 and running through to the attack, there were three-packet probes against the MS SQL ports of all the Internet-facing systems I was working on, with increasing frequency. After the attack these were identified as a preparatory part of the worm. Prior to the attack, however, they were just probes.
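
      For the curious, spotting probes like these doesn't require anything exotic. Here is a hedged sketch of a watcher using libpcap; the interface name and the print-per-packet handling are illustrative assumptions, and real monitoring would aggregate rather than print:

          #include <pcap/pcap.h>
          #include <stdio.h>

          /* Log every UDP datagram aimed at port 1434, the MS SQL Server
           * Resolution Service port that the pre-Slammer probes were hitting. */
          static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                                const u_char *bytes)
          {
              (void)user; (void)bytes;
              printf("udp/1434 probe: %u bytes at %ld.%06ld\n",
                     h->len, (long)h->ts.tv_sec, (long)h->ts.tv_usec);
          }

          int main(void)
          {
              char errbuf[PCAP_ERRBUF_SIZE];
              struct bpf_program prog;

              pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
              if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

              if (pcap_compile(p, &prog, "udp dst port 1434", 1,
                               PCAP_NETMASK_UNKNOWN) == -1 ||
                  pcap_setfilter(p, &prog) == -1) {
                  fprintf(stderr, "%s\n", pcap_geterr(p));
                  return 1;
              }
              return pcap_loop(p, -1, on_packet, NULL) < 0 ? 1 : 0;
          }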

      I considered reporting the ongoing probes to the network team several times that December but did not, because I had way more than enough on my plate and they had been much less than helpful over the past year. Furthermore, whatever it was looked unlikely to affect my systems or any of the servers in the departments where I was working (no Windows servers, and one largish department even had no Windows desktops). And, frankly, fuck 'em: whatever it turned out to be, the microsofters would eventually get what was coming to them, because neither MS SQL nor Windows should be connected to the net. All I would have gained by sacrificing time from my own work as end-of-year deadlines approached was the lame ability to say "I told you so" later. Even without knowing in advance how efficient the eventual worm would be, it seemed kind of obvious that something was brewing.

      Sadly, the managers never learn. Notice that despite the worm's globally successful rampage, the result was little to no decrease in the footprint of MS SQL, and no corresponding uptake of PostgreSQL or MySQL on GNU/Linux or FreeBSD.

      --
      Money is not free speech. Elections should not be auctions.
      • (Score: 3, Interesting) by VLM on Saturday January 28 2023, @04:08PM (1 child)

        by VLM (445) on Saturday January 28 2023, @04:08PM (#1289083)

        fuck 'em: whatever it turned out to be, the microsofters would eventually get what was coming to them

        Yeah, I feel the same way, but then the old ethernet switch fabric got all "weird" about being flooded, causing latency spikes on all the uninvolved ports, which were almost entirely Linux servers.

        The data center techs dumped an entire can of air duster through the switch, thinking the weird latency spikes were caused by overheating. Good thinking on their part, although nobody noticed the one port running at line rate, which was the actual problem.

        Most of network engineering, heck, most of engineering, is peeling away layers of the onion while weirdness pours out. Flooding one port of a prosumer 90s switch shouldn't make it freak out; therefore what was happening was impossible; therefore that couldn't be the cause. But it was. Welcome to engineering: every time you peel away a layer of the onion, toss out all the old layers.

        • (Score: 3, Insightful) by canopic jug on Saturday January 28 2023, @05:02PM

          by canopic jug (3949) Subscriber Badge on Saturday January 28 2023, @05:02PM (#1289091) Journal

          Good thinking on their part, although nobody noticed the one port running at line rate, which was the actual problem.

          They never do. Not only do you have to tell them which machines are the problem, you usually also have to have some leverage over them to get the systems pulled offline for "repair". If you have the misfortune of having even a single Windows server on the same pipes, you will occasionally get calls about the perceived slowness of your services. The actual situation will always be something similar to what you describe: one or more Windows systems maxing out their ports at line rate with malware, warez, or who-knows-what going to/from/via wherever, clogging up the server room LAN or even the externally facing connections. Every time.

          --
          Money is not free speech. Elections should not be auctions.