These stories about mitigating DDoS attacks come up often, but there are seldom any actual details on how it was done. I guess that's because it's often a manual process: analyzing logs and banning IPs and/or IP ranges.
But for the more automated mitigation services, what techniques are used? I guess some of the techniques are based on complex machine-learning algorithms and considered company secrets. But there must be some general, better-known, automated mitigation techniques / indicators? For application-level attacks, is it enough to ban the offending IPs in the firewall? If so, how does one best identify those IPs? Assuming one has access to the access log, I can come up with a few ideas (there's a rough sketch of the simple counting approach after this list):

- Like mentioned above: banning IPs / IP ranges originating from countries normally not associated with traffic to the site (i.e. build a statistical profile beforehand)

- Simple request counting, per IP, and IP-banning users who are overly active (again based on the site's statistical profile)

- More advanced request counting, trying to create a fingerprint of each user's behaviour (time between requests, etc.) and comparing this to the site's average fingerprint

- Keeping a global requests-per-time count and using it to measure the current load on the site; the mitigation techniques would only be activated if the site is under heavy load / attack

Any other ideas?
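To make the per-IP counting plus global-load idea concrete, here's a rough sketch of what I mean. The window length, the thresholds and the ban action are all made-up placeholders; it just reads a combined-format access log on stdin (e.g. tail -F access.log | ./rate_watch.py) and uses arrival time instead of parsing the log's own timestamps:

    #!/usr/bin/env python3
    # Sliding-window request counting per IP; only flag IPs once the
    # global request rate suggests the site is actually under load.
    import re
    import sys
    import time
    from collections import defaultdict, deque

    WINDOW_SECS = 60      # sliding window length (made up)
    GLOBAL_LIMIT = 5000   # total requests/window before mitigation kicks in (made up)
    PER_IP_LIMIT = 300    # requests/window that marks a single IP as abusive (made up)

    # First field of a common/combined-format access-log line is the client IP.
    IP_RE = re.compile(r'^(\S+)')

    hits = defaultdict(deque)   # ip -> timestamps of recent requests
    total = deque()             # timestamps of all recent requests
    banned = set()              # already-flagged IPs

    def prune(now):
        """Drop entries older than the window."""
        cutoff = now - WINDOW_SECS
        while total and total[0] < cutoff:
            total.popleft()
        for ip in list(hits):
            q = hits[ip]
            while q and q[0] < cutoff:
                q.popleft()
            if not q:
                del hits[ip]

    for line in sys.stdin:
        m = IP_RE.match(line)
        if not m:
            continue
        now = time.time()
        ip = m.group(1)
        hits[ip].append(now)
        total.append(now)
        prune(now)

        # Only start banning when the site as a whole looks overloaded.
        if len(total) > GLOBAL_LIMIT:
            for client, q in hits.items():
                if len(q) > PER_IP_LIMIT and client not in banned:
                    banned.add(client)
                    print(f"ban candidate: {client} ({len(q)} req/{WINDOW_SECS}s)")

In practice you'd feed the candidates into the firewall (iptables/ipset or similar) instead of printing them, and tune the thresholds against the site's normal traffic profile.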
I was entirely wrong when I thought that I understood DDoS attacks fairly well. There are actually at least three different types of attack:

"Layer 3 and Layer 4 DDoS attacks are types of volumetric DDoS attacks on a network infrastructure. Layer 3 (network layer) and 4 (transport layer) DDoS attacks rely on extremely high volumes (floods) of data to slow down web server performance, consume bandwidth and eventually degrade access for legitimate users. These attack types typically include ICMP, SYN, and UDP floods."

"A Layer 7 DDoS attack is an attack structured to overload specific elements of an application server infrastructure. Layer 7 attacks are especially complex, stealthy, and difficult to detect because they resemble legitimate website traffic. Even simple Layer 7 attacks – for example those targeting login pages with random user IDs and passwords, or repetitive random searches on dynamic websites – can critically overload CPUs and databases. Also, DDoS attackers can randomize or repeatedly change the signatures of a Layer 7 attack, making it more difficult to detect and mitigate."

This glossary proved to be quite useful:
http://www.prolexic.com/knowledge-center-dos-and-ddos-glossary.html
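To make the Layer 3/4 side a bit more concrete: a SYN flood shows up in a packet capture as a disproportionate number of bare SYNs (SYN set, ACK clear) that never complete the handshake. Here's a rough sketch of counting those per source with scapy; the capture filename and the "top 10" cut-off are just placeholders:

    #!/usr/bin/env python3
    # Count bare SYNs (SYN set, ACK clear) per source IP in a capture file.
    from collections import Counter
    from scapy.all import rdpcap, IP, TCP  # pip install scapy

    syns = Counter()
    packets = rdpcap("capture.pcap")  # placeholder filename

    for pkt in packets:
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            flags = pkt[TCP].flags
            if flags & 0x02 and not flags & 0x10:   # SYN without ACK
                syns[pkt[IP].src] += 1

    for src, count in syns.most_common(10):
        print(f"{src}: {count} bare SYNs")

Note that with spoofed source addresses the per-source counts are less useful, and the overall ratio of bare SYNs to completed handshakes becomes the more telling signal.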
Fair warning - I just got pinged from corporate security that this site triggered an alert on our IDS: "the IDS reported the machine accessed a site containing a JavaScript known to contain hidden iFrames and malware redirects". He said it was named "tongji.js". Can't investigate further as I'm still on said network and don't want to re-ping it...
How would you differentiate the traffic coming from headless browsers from normal users? With a proper user-agent and JavaScript enabled, isn't it almost impossible?