Research from Nominum, now part of Akamai, shows that roughly 15% of DNS DDoS traffic uses amplification, yet it still has an impact (the rest is random-subdomain attacks). The data also shows attackers continue to leverage open DNS resolvers, a technique that after more than two years might be considered "old-days," yet there are still around 17 million open resolvers on the Internet. More recently our research teams have seen bots sending amplification queries.
For those who are new to the topic, amplification attacks are simple to execute: attackers identify names where small DNS requests generate large DNS responses. Simple math reveals the amplification ratio:
R = answer size / query size
Legitimate domains with large resource records are readily available, and new "purpose-built" amplification domains continue to be registered.
The screenshot below shows a new purpose-built amplification domain name, mg1.pw, with an answer size of 3,944 bytes. Using the formula above with a request size of 24 bytes, the amplification ratio is R = 3,944/24, or about 164, an impressive return on investment!
Using this name to send just 30K queries per second to open resolvers generates roughly 1 Gbps of attack traffic. Spoofing the IP address of a victim site ensures the traffic is diverted from the attacker to the target, with provider resolvers acting as unwitting intermediaries.
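The arithmetic behind those figures is easy to check. A short back-of-the-envelope calculation, using the example sizes from the article (not measured constants):

```python
# Amplification ratio and attack bandwidth for the mg1.pw example.
answer_bytes = 3944          # observed answer size from the article
query_bytes = 24             # request size from the article
ratio = answer_bytes / query_bytes          # ~164x amplification

queries_per_second = 30_000  # "30K queries per second" from the article
attack_bps = queries_per_second * answer_bytes * 8
print(f"ratio: {ratio:.0f}x, attack traffic: {attack_bps / 1e9:.2f} Gbps")
# prints "ratio: 164x, attack traffic: 0.95 Gbps"
```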
To deal with amplification attacks, one idea that has emerged is to cap the size of UDP packets used for responses. When a UDP response exceeds the configured size limit, a truncated response is sent instead. For example, a resolver can be configured to limit the size of DNS responses to, say, 512 bytes.
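The truncation behavior can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's implementation; a real resolver would typically keep the question section and honor the client's EDNS buffer size:

```python
def enforce_udp_limit(response: bytes, max_size: int = 512) -> bytes:
    """If a DNS response fits under the UDP size cap, send it as-is.
    Otherwise return a header-only reply with the TC (truncated) bit
    set, telling legitimate clients to retry over TCP."""
    if len(response) <= max_size:
        return response
    header = bytearray(response[:12])   # DNS header is 12 bytes
    header[2] |= 0x02                   # set TC bit in the flags field
    header[4:12] = b"\x00" * 8          # zero QD/AN/NS/AR counts
    return bytes(header)
```

Note the trade-off described below: a spoofed victim never retries over TCP, so the resolver still spends cycles answering every forged query.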
It seems like a good approach, but can it really mitigate the impact of attacks and solve the problem? One obvious issue: since amplification attacks spoof the source IP address of the target, the target receives truncated responses to DNS queries it never sent and will of course simply discard them. Meanwhile the attacker continues to send queries. This means the resolver being attacked does a lot of work for limited return. In intense attacks the additional work can cause considerable stress and even bring down the server. What is gained is that the target of the attack does not receive massive waves of amplified traffic. But providers who want to protect their resolvers will need a different approach.
Another issue with very coarse-grained rate limiting based exclusively on the size of responses is the increasing prevalence of DNSSEC. Responses are getting larger to accommodate the signatures and keys needed to better secure the DNS. It is not uncommon for responses to exceed 2 KB, and some reach 4 KB! Below is an example for net.berkey.edu with a response size of 2,330 bytes, and there are hundreds, if not thousands, of other names with attractively large answers.
Today many legitimate query responses make good amplifiers. Simply relying on response size as a means of rate limiting won't work very well, because too much legitimate traffic will end up being rate limited and truncated. It's also simply not productive to create more work for resolvers and hosts in order for DNSSEC to work properly. A better approach is needed.
Rather than simply filtering based on response size, a robust policy framework can make better filtering decisions. Filters can be applied at ingress to the server to minimize the processing needed to determine whether queries are legitimate or malicious. Fine-grained policies can carefully target malicious queries while always answering legitimate ones. For instance, filtering based on an FQDN AND query type for commonly abused query types like ANY is far more effective than filtering on response size.
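As a rough sketch of that fine-grained matching, an ingress filter keyed on the (FQDN, query type) pair might look like the following. The rule set, names, and action strings here are illustrative assumptions, not a real policy API:

```python
# Hypothetical ingress policy: drop known-abused (name, type) pairs
# instead of rate limiting on response size.
ABUSED_QUERIES = {
    ("mg1.pw", "ANY"),   # purpose-built amplification name from the article
}

def ingress_filter(qname: str, qtype: str) -> str:
    """Return 'drop' for known-abused name/type pairs, else 'answer'."""
    key = (qname.rstrip(".").lower(), qtype.upper())
    return "drop" if key in ABUSED_QUERIES else "answer"
```

Because the match is on the query, not the response, the resolver spends almost no work on forged traffic, and ordinary lookups for the same name with other query types are still answered.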
Dynamic threat lists provide better visibility into malicious activity, with comprehensive coverage of fast-changing threats. Matching against dynamic lists, including blocklists AND whitelists, AND applying policy makes it possible to target even more sophisticated attacks while ensuring legitimate activity is protected. Coupled with fine-grained policies, the entire DNS infrastructure (authorities and resolvers) can be protected, and unwanted traffic that would otherwise saturate networks and targets can be deterred.
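One way to picture the list-based matching is a lookup where the whitelist takes precedence, so legitimate activity is always answered even if a fast-moving feed mistakenly lists it. The set contents below are placeholders standing in for dynamically refreshed feeds:

```python
# Sketch of list-based policy with whitelist precedence.
allowlist = {"berkeley.edu"}
blocklist = {"mg1.pw", "berkeley.edu"}   # deliberate overlap for the demo

def matches(name: str, domains: set) -> bool:
    """True if name equals or is a subdomain of any listed domain."""
    return any(name == d or name.endswith("." + d) for d in domains)

def policy(qname: str) -> str:
    name = qname.rstrip(".").lower()
    if matches(name, allowlist):
        return "answer"                  # whitelist always wins
    if matches(name, blocklist):
        return "drop"
    return "answer"
```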
Subscribers also get a better experience when the DNS is fast and predictable.