The following is a guest post from Akamai Principal Security Engineer Rich Salz.
The Internet Engineering Task Force (IETF) is becoming a center for the application of cryptography. There are a handful of factors contributing to this:
· It is the technical organization that defines the protocols and standards that enable the Internet.
· The recent Snowden revelations, which showed how much government spying there is on Internet traffic.
· The IETF response (RFC 7258) to treat pervasive monitoring as an attack that must be mitigated.
· Increasing recognition in the academic community that TLS is an important protocol; papers discussing attacks on it get noticed.
There are several players involved, and it's hard to keep track without a scorecard. Here's a rundown:
The HTTP/2 Working Group is just about finished with its work. Unlike previous HTTP efforts, the specification has an entire section on "Use of TLS Features." The requirements include TLS 1.2 with the Server Name Indication (SNI) extension, and a restricted set of cipher suites that provide Perfect Forward Secrecy and use only ciphers that provide authenticated encryption (AEAD). The net effect is that an eavesdropper who compromises one session key cannot recover keys for other sessions, and attacks on the venerable RC4 stream cipher that expose data or key material cannot happen. It is not a coincidence that these are the same features being built into TLS 1.3, as some key members participate in both groups.
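A profile like the one above can be approximated with Python's standard ssl module. This is a minimal sketch, not taken from any HTTP/2 implementation, and the exact cipher list returned depends on the underlying OpenSSL build:

```python
import ssl

# Build a client context roughly matching the HTTP/2 TLS profile:
# TLS 1.2 or later, with only ECDHE (forward-secret) suites that
# use AES-GCM authenticated encryption in the TLS 1.2 cipher list.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Restrict the TLS 1.2 cipher list. (TLS 1.3 suites are AEAD-only
# by design and are unaffected by set_ciphers.)
ctx.set_ciphers("ECDHE+AESGCM")

for cipher in ctx.get_ciphers():
    print(cipher["name"])
```

SNI itself is sent automatically when a client context connects with a `server_hostname` argument, which is also how certificate hostname checking is keyed.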
The TLS Working Group is defining a new version of the protocol. It has removed all static key exchange, so a compromise of the server's long-term RSA key will not expose recorded sessions. The NSA currently has the authority to hold encrypted traffic for as long as it likes; presumably that means until the traffic becomes decryptable. Using PFS algorithms in TLS 1.3 will make that stored data much less useful. Elliptic Curve Cryptography is also of great interest for TLS 1.3, because of the belief that shorter keys (and therefore less computation) will afford the same protection as much larger RSA keys.
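Forward secrecy comes from deriving each session key from fresh, ephemeral values that are thrown away afterward. The toy finite-field Diffie-Hellman sketch below shows the idea; the group parameters are illustrative only, and real TLS uses standardized groups or elliptic curves:

```python
import secrets

# Toy group: the Mersenne prime 2^127 - 1 with generator 3.
# Illustrative only -- far too weak for real-world use.
P = 2**127 - 1
G = 3

def new_session():
    """Each session draws fresh ephemeral exponents on both sides."""
    a = secrets.randbelow(P - 2) + 1   # client's ephemeral secret
    b = secrets.randbelow(P - 2) + 1   # server's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)  # public values, sent in the clear
    # Both sides compute the same shared secret: g^(ab) mod p.
    client_key = pow(B, a, P)
    server_key = pow(A, b, P)
    assert client_key == server_key
    return client_key

# Two sessions with the same server identity yield unrelated keys.
# Recording the traffic and later seizing the server's long-term
# signing key reveals neither, because a and b were discarded.
k1, k2 = new_session(), new_session()
print(k1 != k2)
```

With static RSA key exchange, by contrast, the client encrypts the session secret to the server's long-term key, so that one key unlocks every recorded session.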
NIST is the agency responsible for Federal IT standards, advised by the NSA. In mid-1999 it released a specification of curves recommended for Federal Government use. There are about a dozen curves of varying sizes (and therefore strengths). Many are named for the size of their prime, so P-256 uses a 256-bit prime number. The use of these curves in TLS is documented in IETF RFC 4492.
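These specifications pin the parameters down completely, and they are easy to sanity-check. The sketch below uses the published P-256 constants (the curve is y² = x³ - 3x + b over GF(p)) and verifies that the standard base point really lies on the curve:

```python
# NIST P-256 constants as published in the FIPS 186 standards.
# The prime has a deliberately sparse form chosen for fast reduction.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

# The prime is indeed 256 bits wide.
assert p.bit_length() == 256

# The base point G = (gx, gy) satisfies the curve equation mod p.
assert (gy * gy - (gx**3 - 3 * gx + b)) % p == 0
print("P-256 parameters check out")
```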
There are two problems with the NIST curve family. The first is that there are too many choices; one attendee at IETF last week said that "we should have picked two or three." Too many options make security analysis harder. The second is that, since the NSA influenced an EC-related random number generator (Dual EC DRBG) so that it had a back door, many people worry that it exerted similar influence on related cryptography.
There is no proof that this has happened. One fallout from the suspicion is the interest in "rigidity" when selecting curves. If I say a curve is generated by taking the first prime number larger than 255 bits, that carries a greater comfort factor than saying "take the third prime number you find after looking at digits 200-400 of Pi." The latter is reproducible, but the complicated procedure makes you look askance at the process.
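Curve25519 is the standard example of a rigid choice: its prime is simply the largest prime below 2^255, a description anyone can re-derive. The sketch below checks this with a probabilistic Miller-Rabin primality test:

```python
import random

def is_probable_prime(n, rounds=25):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

# 2^255 - 19 is prime, and nothing between it and 2^255 is:
# "the largest prime below 2^255" pins the choice down completely.
assert is_probable_prime(2**255 - 19)
assert not any(is_probable_prime(2**255 - k) for k in range(1, 19))
```

Nothing is hidden in such a parameter: anyone who distrusts the designer can rerun the search and arrive at the same number.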
Again, nothing has been found, and some experts label these concerns groundless.
The IRTF is the research arm of the IETF. Its Crypto Forum Research Group (CFRG) looks at cryptography used in IETF standards; its mission is to act as a bridge between theory and practice. It has recently become very busy.
The TLS WG has asked the CFRG to recommend curves for key exchange and signatures at a variety of sizes/strengths. This is to augment, not replace, the NIST curves; ECC has advanced greatly in the past 15 years. The CFRG has also been asked to look at a new cipher combination known as ChaCha20/Poly1305.
The interest here is in providing an alternative to AES-based authenticated encryption, since AES-GCM is "all we have left."
Fortunately, little new work is required for this, and it's mostly a matter of summarizing existing research papers.
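The ChaCha20 core is built from a single "quarter round" of 32-bit adds, XORs, and rotates. The sketch below implements it and checks it against the quarter-round test vector from the IETF's later ChaCha20/Poly1305 specification (RFC 7539):

```python
MASK = 0xFFFFFFFF  # work in 32-bit words

def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """One ChaCha quarter round: add, XOR, rotate."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

# Test vector from the ChaCha20/Poly1305 specification:
out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
assert out == (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
print("quarter round matches the published test vector")
```

ChaCha20 applies twenty such rounds to a 16-word state; because the primitive uses only add-rotate-XOR operations, it runs fast in software without the special hardware support that makes AES-GCM quick.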