By far the biggest part of attack mitigation, in my experience, is out-scaling the attack. A well-written, well-configured application stack can handle a decent amount of traffic on its own before it bogs down processing malicious requests, but if your application runs in just one place, at some point you'll cap out something: the application itself, the NIC, the upstream switch, the router, or the ISP line. To get around that, huge providers like the ones you listed are heavily multi-homed: they announce the same IP prefixes to the internet from multiple locations, so traffic naturally flows to the closest endpoint (closest in hops, not necessarily geographically).

From there, you can add layers of protection, ranging from simple things like dropping obviously malicious packets (bogus TCP flag combinations, blocked port numbers, etc.) to more complex things like pattern recognition, both in the overall traffic trends and on a per-packet basis (rough sketches of both are at the end of this comment). Once you've decided with reasonable certainty that the traffic isn't malicious, you pass it off to the actual backend service.

For systems designed to scale horizontally, that backend may be a neighboring machine (or even the same machine) in that data center. For single-homed backends that can't scale out to multiple locations, the "clean" traffic is then forwarded by some mechanism (possibly a GRE tunnel, possibly just raw internet traffic to a secret IP) to the backend service. Depending on the methodology, the filtering layer may be a true bidirectional proxy, in which case the reply goes back through the scrubber and then out to the original sender, or a unidirectional proxy, in which case the reply goes directly back to the original sender (sketches of the forwarding and the proxying are below as well).

All attack mitigation works in some way like this, whether by designing your application from the beginning to be multi-homed and able to run in multiple data centers, or by installing a separate mitigation layer that scrubs attack traffic.
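To make the per-packet filtering step concrete, here's a minimal Python sketch of the kind of cheap, stateless checks a scrubber might run first. The blocked ports and flag rules are made-up examples for illustration, not anyone's real ruleset:

    import struct

    FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20
    BLOCKED_DST_PORTS = {0, 19}  # port 0 is never valid; 19 (chargen) is widely abused

    def looks_malicious(tcp_header: bytes) -> bool:
        """Cheap, stateless checks on a raw TCP header (20+ bytes)."""
        _, dst_port = struct.unpack("!HH", tcp_header[:4])
        flags = tcp_header[13]  # byte 13 of the TCP header holds the flag bits
        if dst_port in BLOCKED_DST_PORTS:
            return True
        if flags & SYN and flags & FIN:  # SYN+FIN never occurs legitimately
            return True
        if flags == 0:                   # "NULL scan" packet
            return True
        if flags & FIN and flags & PSH and flags & URG:  # "Xmas scan" packet
            return True
        return False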
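The trend-level side can get arbitrarily fancy; a crude stand-in for that pattern recognition is a sliding-window rate check per source, something like this (window and threshold values here are arbitrary):

    import time
    from collections import defaultdict, deque

    class RateTracker:
        """Flags sources whose hit rate in a sliding window exceeds a threshold."""
        def __init__(self, window_s: float = 10.0, max_hits: int = 200):
            self.window_s = window_s
            self.max_hits = max_hits
            self.hits = defaultdict(deque)  # src_ip -> recent timestamps

        def suspicious(self, src_ip: str) -> bool:
            now = time.monotonic()
            q = self.hits[src_ip]
            q.append(now)
            while q and now - q[0] > self.window_s:  # drop entries outside the window
                q.popleft()
            return len(q) > self.max_hits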
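For the GRE forwarding path, the encapsulation itself is tiny: a four-byte header in front of the original IP packet, sent to the backend's tunnel endpoint. A sketch, assuming a hypothetical endpoint address (raw sockets need root, and a real deployment would normally just use a kernel GRE interface):

    import socket
    import struct

    TUNNEL_ENDPOINT = "203.0.113.9"  # hypothetical backend tunnel address

    def gre_wrap(inner_ip_packet: bytes) -> bytes:
        # Minimal RFC 2784 GRE header: no checksum/key/seq, payload type 0x0800 (IPv4)
        return struct.pack("!HH", 0, 0x0800) + inner_ip_packet

    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, 47)  # IP protocol 47 = GRE

    def forward_clean(inner_ip_packet: bytes) -> None:
        # The kernel adds the outer IP header; we only supply the GRE payload.
        sock.sendto(gre_wrap(inner_ip_packet), (TUNNEL_ENDPOINT, 0))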
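And the bidirectional-proxy case is essentially a TCP relay: the scrubber terminates the client connection, opens its own connection to the backend, and copies bytes both ways, so replies flow back through it on the way to the original sender. A minimal asyncio sketch, with a hypothetical backend address:

    import asyncio

    BACKEND = ("192.0.2.10", 8080)  # hypothetical "secret" backend address

    async def pipe(reader, writer):
        # Copy bytes in one direction until EOF, then close that direction.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        # In a real scrubber, filtering/scoring happens before this point;
        # only traffic judged clean reaches the backend connection.
        backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
        # Relaying both directions is what makes this a bidirectional proxy:
        # replies come back through the scrubber, not straight to the sender.
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())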