
SoylentNews is people

posted by martyb on Tuesday March 22 2016, @04:47AM   Printer-friendly
from the commence-speculation dept.

Last week, several major eCommerce sites in Switzerland were targeted by DDoS attacks (German). As far as I have been able to discover, no one knows who was behind the attacks[*]. One might have thought the attackers would identify themselves and demand ransom to stop the attacks, but apparently not. Anyhow, I should hope that no company would be stupid enough to pay, since that would just put them on the list of "suckers" to be targeted again.

This past weekend, it was Swedish government sites, among others.

Today, I have come across two sites that I cannot reach: dilbert.com and an EU governmental site about a minor software project. Dilbert is definitely the target of a DDoS attack; I cannot confirm this for the .eu site, but it seems likely.

Here are a few random thoughts from a non-expert:

- Why would anyone bother with attacks without claiming credit or demanding ransom? For the same reason kids throw rocks through windows? To show off capability for potential paying customers? Something else?

- If it's the second (demonstrating capability), isn't this stupid? They've provided ample motivation to disable these attacks, or at least seriously filter them, reducing their impact in future attacks.

- The current DDoS attacks are apparently NTP reflection attacks (send spoofed queries to vulnerable NTP servers, which then reply to the victim) and similar DNS-based attacks. Is it possible to eliminate these attack vectors, just as POODLE and Heartbleed have been largely eliminated? I.e., issue patches, offer free tests, even blacklist noncompliant servers? Or are the affected protocols so broken that this is not possible?
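
[Ed: the arithmetic behind reflection attacks can be sketched in a few lines. The amplification factors below are rough published estimates (US-CERT has cited figures in this range for NTP monlist and open DNS resolvers), not measurements; treat them as illustrative assumptions. -Ed.]

```python
# Rough sketch of why reflection/amplification attacks are attractive:
# a small spoofed request elicits a much larger response, which the
# reflector sends to the spoofed (victim) address. The factors below
# are rough published estimates, not measurements.
AMPLIFICATION = {
    "DNS (open resolver)": 28,   # approximate lower bound for ANY queries
    "NTP (monlist)": 556,        # the vector named in the story
}

def reflected_mbps(attacker_mbps, factor):
    """Traffic arriving at the victim, given the attacker's outbound
    spoofed-query bandwidth and the protocol's amplification factor."""
    return attacker_mbps * factor

for name, factor in AMPLIFICATION.items():
    print(f"{name}: 10 Mbit/s of spoofed queries -> "
          f"{reflected_mbps(10, factor)} Mbit/s at the victim")
```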

The whole situation is strange - it seems like there are a lot of missing pieces to the puzzle. I'd be interested in hearing opinions from other Soylentils - what do you think?

[* My German is rusty, but the first-linked story references the "Armada Collective". -Ed.]


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Informative) by VLM on Tuesday March 22 2016, @12:29PM

    by VLM (445) on Tuesday March 22 2016, @12:29PM (#321558)

    I used to work in that environment

    1) It's political. You're our upstream, you put in filters if you're so hot about filters and "best practices" and hippie RFCs. No, it's your network full of pwned machines, you put in filters on your routers. You owe us one because we pay you. No, you owe us one because I advertised your IP space on your word before we got the LOA from legal, so you filter. It doesn't really matter whether you filter on the ISP's router or the customer's router, so all answers are wrong and worth fighting about.
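
    [Ed: the "hippie RFCs" here are presumably BCP 38 / RFC 2827 ingress filtering: on a customer-facing port, only accept packets whose source address falls inside that customer's assigned prefixes, so spoofed traffic never leaves the edge. A minimal sketch of the check; the prefix is made up for illustration. -Ed.]

```python
import ipaddress

# BCP 38-style ingress filtering, sketched: accept a packet on a
# customer port only if its source address is inside a prefix assigned
# to that customer. The prefix below is a documentation range chosen
# purely for this example.
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def accept_source(src_ip):
    """True if the packet's source address belongs to the customer."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(accept_source("203.0.113.7"))   # inside the customer prefix -> True
print(accept_source("198.51.100.9"))  # spoofed source -> False, drop it
```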

    2) You seem to think we don't. We did, for our known problem customers. We had guys who would try to take their previous provider's IP space with them to us (no no), and God knows how many times I/we blocked some idiot from advertising RFC 1918 address space in BGP, or even better, a 0/0 route. And there were some guys we knew personally or by reputation, so they got a very light hand, but they get pwned sometimes too. And then there's the mass in the middle who just kind of shuffled through life.
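
    [Ed: the checks described above (rejecting RFC 1918 space and 0/0 default routes in customer BGP advertisements) can be sketched with Python's ipaddress module. Real routers do this with prefix lists; this only illustrates the decision logic. -Ed.]

```python
import ipaddress

# Reject the advertisements described above: anything inside RFC 1918
# private space, and a 0.0.0.0/0 default route.
BOGONS = [ipaddress.ip_network(p)
          for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def reject_advertisement(prefix):
    """True if a customer's BGP advertisement should be filtered."""
    net = ipaddress.ip_network(prefix)
    if net.prefixlen == 0:                       # 0/0 default route
        return True
    return any(net.subnet_of(b) for b in BOGONS)

print(reject_advertisement("192.168.1.0/24"))   # RFC 1918 -> True
print(reject_advertisement("0.0.0.0/0"))        # default route -> True
print(reject_advertisement("198.51.100.0/24"))  # routable space -> False
```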

    3) Related to number 2 above, it's a technological problem in that, to make a very long story very short, router vendors MOSTLY don't dedicate fixed CPU horsepower to filtering individual ports. So it's the kind of game where filtering the crap out of all the customers isn't affordable at the CPU-power level, but we could do minimal filtering for the low-risk customers and filter the hell out of the nightmare customers, at a significant tradeoff. In the "really old days," like the 90s, keeping BGP stable was a stereotypical press-your-luck game of CPU and memory: things run great right until they don't. And after the n-th round of emergency upgrades, you stop intentionally hitting your thumb with a hammer by giving the CPUs etc. as little to do as possible. Even just stuff like listing giant monolithic config files has a certain cost. Imagine if the entire Linux kernel had to be one extremely long .c file; that's how routers are. And we had some routers with 100 or so customers connected to them.

    4) Related to number 3 above: oh, given an infinite budget it's technologically possible. But everyone thinks they're a better admin than they really are, so would you pay an extra $50/month for a connection that filters the hell out of people who have no idea what they're doing? That's kind of a hard sell to the boss. So if you filter everyone, you'll be too expensive and go out of business, and if you segment it out as an add-on service, you'll get the customers who buy it fired.

    5) You'll just get Windows machines that are pwned (but I repeat myself), which don't have to spoof addresses to generate tons of traffic. That's a natural effect of consolidation of the industry. What do you think happens when your security groups expand because you've gone from 10,000 little garage-scale webhosters to, like, 10? Naturally, internal attacks are going to be 1000x more likely than in the old days. So filtering at the border is becoming less important as the border-to-area ratio shrinks. The days of the DoS or attack across boundaries are waning, just like capitalism in general. Someday there'll be the one ISP and the one webhoster (the same?), and there'll be only one BGP AS number and never again a cross-company border attack. Well, not in practice, but in theory we're working to get there ASAP.
