
posted by martyb on Tuesday March 22 2016, @04:47AM
from the commence-speculation dept.

Last week, several major eCommerce sites in Switzerland were targeted by DDoS attacks (German). As far as I have been able to discover, no one knows who was behind the attacks[*]. One might have thought the attackers would identify themselves and demand ransom to stop the attacks, but apparently not. Anyhow, I should hope that no company would be stupid enough to pay, since that would just put them on the list of "suckers" to be targeted again.

This past weekend, it was Swedish government sites, among others.

Today, I have come across two sites that I cannot reach: dilbert.com and an EU governmental site about a minor software project. Dilbert is definitely the target of a DDoS attack; I cannot confirm this for the .eu site, but it seems likely.

Here are a few random thoughts from a non-expert:

- Why would anyone bother with attacks without claiming credit or demanding ransom? The same reason kids throw rocks through windows? Showing off capability for potential paying customers? Something else?

- If the second (demonstrating capability), isn't this stupid? They've provided ample motivation to disable these attacks, or at least seriously filter them, thus reducing their impact in future attacks.

- The current DDoS attacks are apparently NTP-reflection attacks (send spoofed queries to vulnerable NTP servers, which then reply to the victim), and similar DNS-based attacks. Is it possible to eliminate these attack vectors, just as POODLE and Heartbleed have been largely eliminated? I.e., issue patches, offer free tests, even blacklist noncompliant servers? Or are the affected protocols so broken that this is not possible?
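For a sense of scale, here is a back-of-the-envelope sketch (in Python) of why reflection is so attractive to attackers. The request and response sizes are rough assumed figures, not measurements from these attacks; published estimates put NTP monlist amplification in the hundreds and DNS amplification in the tens.

    # Back-of-the-envelope amplification estimate for UDP reflection attacks.
    # Packet sizes are rough assumptions for illustration, not measurements.
    ATTACKS = {
        # name: (request bytes, typical reflected response bytes)
        "NTP monlist": (234, 48_000),  # monlist can return hundreds of host entries
        "DNS ANY":     (64,  3_000),   # large ANY answer with DNSSEC records
    }

    attacker_uplink_bps = 100_000_000  # what the attacker can send: 100 Mbit/s

    for name, (req, resp) in ATTACKS.items():
        factor = resp / req
        victim_bps = attacker_uplink_bps * factor
        print(f"{name}: ~{factor:.0f}x amplification, "
              f"~{victim_bps / 1e9:.1f} Gbit/s arriving at the victim")

The reflector does the heavy lifting: the attacker needs only a modest uplink plus the ability to forge the victim's address as the source of the queries, which is exactly what the comments below pick apart.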

The whole situation is strange - it seems like there are a lot of missing pieces to the puzzle. I'd be interested in hearing opinions from other Soylentils - what do you think?

[* My German is rusty, but the first-linked story references the "Armada Collective". -Ed.]


  • (Score: 1, Insightful) by Anonymous Coward on Tuesday March 22 2016, @05:33AM (#321432)

    I feel this issue of DNS amplification attacks goes beyond fixing specific software implementations and speaks to a larger problem we currently face: the stateless UDP protocol, by its very design, allows packet forgeries that cannot be easily detected by analysis of a single packet, and that is what makes these amplification attacks possible. Both NTP and DNS use UDP and are vulnerable to this. Patches band-aid the problem for those two specific implementations, but the real issue is the ability to send forged UDP packets unabated across the internet. Any program or protocol that uses UDP without considering this could become the next attack vector. I doubt the average software engineer designing around UDP is fully versed in just how vulnerable the protocol is.

    As far as I am aware, to detect a forged UDP packet you need a clear idea of the route the packet took. If it came in from China with a low TTL (so it probably didn't bounce out of the continent) but claims to be from the US, it's probably forged. How do you do the same from one datacenter to another, when all the traffic is going across similar backbones and passing through similar transit networks? I'm not a network engineer, so I can only speculate about what can be done, but I know enough to say that UDP itself should be part of any conversation about DDoS amplification attacks.

    The question I have is: who should be obliged to mitigate this? The software engineer? The network protocol designer? The network engineer? The ISPs themselves? It doesn't seem anyone has taken up that mantle yet, otherwise attacks like this would be a thing of the past. There will always be someone running old or badly designed software on the internet. If the protocol itself allows these attacks by default (when the developer doesn't account for them), it's hard to blame software developers for not anticipating every attack vector. Sure, we know now, but the DNS and NTP systems weren't designed and implemented yesterday.

    Am I way off base here?
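    To make the mechanism concrete, here is a minimal sketch in Python of why reflection works at all: a UDP service answers whatever source address the incoming datagram claims to have, because there is no handshake to verify it. The port number and the multiply-by-ten reply are made up purely for illustration; real reflectors are services like NTP and DNS that answer small queries with large responses.

        import socket

        # Minimal UDP "reflector": it replies to whoever the datagram *claims*
        # to be from. There is no handshake, so a spoofed source address is
        # trusted blindly and the reply goes to the victim, not the real sender.
        PORT = 9999  # arbitrary port, chosen only for this illustration

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", PORT))

        while True:
            data, claimed_source = sock.recvfrom(4096)
            reply = data * 10  # a real amplifier returns far more than it received
            sock.sendto(reply, claimed_source)

    TCP largely avoids this particular trick because the three-way handshake never completes for a forged source address, so nothing large gets reflected.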

  • (Score: 3, Interesting) by mth (2848) on Tuesday March 22 2016, @09:21AM (#321487)

    I think it would be fairly easy to stop this problem at the sender's ISP. If an ISP router sees packets being sent with a source IP address that doesn't belong on its network, it should drop those packets. I don't know why this isn't done, though. Is it laziness, or is there a technical reason it's more difficult than it sounds?
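    In pseudo-terms the check (BCP 38, a.k.a. ingress filtering) is trivial; here is a sketch in Python, with the customer prefixes and test addresses made up for illustration. The open question is whether doing this at line rate on real router hardware is as cheap as it looks.

        import ipaddress

        # BCP 38-style ingress filter: only accept packets whose source address
        # falls inside the prefixes actually assigned to that customer port.
        # The prefixes and test addresses below are made-up examples.
        CUSTOMER_PREFIXES = [
            ipaddress.ip_network("198.51.100.0/24"),
            ipaddress.ip_network("203.0.113.0/25"),
        ]

        def accept(source_ip: str) -> bool:
            addr = ipaddress.ip_address(source_ip)
            return any(addr in net for net in CUSTOMER_PREFIXES)

        print(accept("198.51.100.42"))  # True  -- legitimate customer source
        print(accept("192.0.2.7"))      # False -- spoofed, should be dropped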

    • (Score: 4, Informative) by VLM (445) on Tuesday March 22 2016, @12:29PM (#321558)

      I used to work in that environment.

      1) It's political. You're our upstream, you put in filters if you're so hot about filters and "best practices" and hippie RFCs. No, it's your network full of powned machines, you put in filters on your routers. You owe us one because we pay you. No, you owe us one because I advertised your IP space on your word before we got the LOA from legal, so you filter. It doesn't really matter whether you filter on the ISP's router or on the customer's router, therefore all answers are wrong and worth fighting about.

      2) You seem to think we don't. We did, for our known problem customers. We had guys who would try to take previous-provider IP space with them to us (no no), and god knows how many times I/we blocked some idiot from advertising RFC1918 address space in BGP, or even better a 0/0 route. And there were some guys we knew personally or by reputation, so they got a very light hand, but they get powned sometimes too. And then there's the mass in the middle who just kind of shuffled through life.

      3) Related to number 2 above, it's a technological problem. To make a very long story very short, router vendors "MOSTLY" don't dedicate fixed CPU horsepower to filtering individual ports, so it's the kind of game where filtering the crap out of all the customers isn't affordable at a raw CPU-power level, but we could do minimal filtering for the low-risk customers and filter the hell out of the nightmare customers, at significant tradeoff. In the "really old days", like the 90s, keeping BGP stable was a stereotypical "press your luck" game of CPU and memory... things run great right until they don't. And after the n-th round of emergency upgrades, you stop hitting your thumb with a hammer on purpose: you give the CPUs etc. as little to do as possible. Even just listing giant monolithic config files has a certain cost. Imagine if the entire Linux kernel had to be one extremely long .c file. That's how routers are. And we had some routers with 100 or so customers connected to them.

      4) Related to number 3 above: oh, given an infinite budget it's technologically possible. But everyone thinks they're a better admin than they really are, so would you pay an extra $50/month for a connection that filters the hell out of people who have no idea what they're doing? That's kind of a hard sell to the boss. So if you filter everyone, you'll be too expensive and go out of business, and if you segment it out as an add-on service, you'll get the customers who buy it fired.

      5) You'll just get Windows machines that are powned (but I repeat myself), which don't have to spoof addresses to generate tons of traffic. That's a natural effect of consolidation of the industry. What do you think happens when your security groups expand because you've gone from 10,000 little garage-scale webhosters to like 10? Naturally, internal attacks are going to be 1000x more likely than in the old days, so filtering at the border is becoming less important as the border-to-area ratio shrinks. The days of the DoS or attack across boundaries are shrinking, just like capitalism in general. Someday there'll be one ISP and one webhoster (the same one?), there'll be only one BGP AS number, and never again a cross-company border attack. Well, not in practice, but in theory we're working to get there ASAP.