“Cryptographically Strong”: Here Be Dragons

In Government, Technology by Landon Noll

The term “cryptographically strong” is often misused. In random number generation, for example, there is a very specific mathematical definition of a “cryptographically strong random number generator”, yet the term is routinely applied to generators that do not meet that definition.
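To make the distinction concrete, here is a minimal sketch in Python (the language choice is mine, not from the original article): the standard library’s general-purpose random module is a Mersenne Twister, which fails the cryptographic definition because its output is predictable once its internal state is known, while the secrets module draws from the operating system’s CSPRNG.

```python
import random   # Mersenne Twister: fine for simulation, NOT cryptographically strong
import secrets  # draws from the operating system's CSPRNG

# Predictable once the generator's internal state (or seed) is recovered.
simulation_value = random.getrandbits(128)

# Intended for security-sensitive uses such as keys, tokens, and nonces.
session_token = secrets.token_hex(16)

print(simulation_value)
print(session_token)
```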

If someone tells you not to worry because it is “cryptographically strong”, ask for a proof. That can be an unkind thing to do, because the answer is often “well, that is what the IETF says”, or “it is a NIST standard”, or “it is FIPS certified”, or some such rot. In some cases, someone will point to an actual proof. Be very skeptical of any “proof” that something is “cryptographically strong”. Usually the proof makes a very narrow statement under very specific conditions, and real-world conditions frequently fall outside what the so-called proof covers.

No proof can say “this is secure” in an absolute way. A proof might be able to say “finding the key by random search requires more than X attempts”, while in the real world the attacker goes after keys derived from common passwords that system admins chose so they would not have to type something long over and over.
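As a rough illustration of that gap (the key size and dictionary size below are assumed figures, not from the original), a little arithmetic shows how much the “more than X attempts” claim can overstate the real effort:

```python
import math

KEY_BITS = 128                # assumed key size for this illustration
common_passwords = 10 ** 6    # assumed size of an attacker's password dictionary

# Average attempts to find a uniformly random key: half the keyspace.
random_search = 2 ** (KEY_BITS - 1)

# Average attempts when the key is derived from a common password: half the dictionary.
dictionary_search = common_passwords // 2

print(f"random key:        ~2**{KEY_BITS - 1} attempts")
print(f"password-derived:  ~{dictionary_search:,} attempts")
print(f"effective entropy: {math.log2(common_passwords):.1f} bits vs {KEY_BITS} bits")
```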

I have seen more than one case where a group pushed an algorithm on the strength of a so-called “proof”, and the algorithm was later found to have a significant flaw that the proof did not cover.

The IETF has a long history of cryptographic failures. It can be good at codifying existing practice, but it has repeatedly produced flawed protocols and flawed cryptographic implementations; especially when the IETF is inventing something new, the result often has to be fixed and revised as flaws are discovered.

The NSA does not have a core competence in protecting data with cryptographic methods, and therefore should not be treated as a source of best-practice recommendations. That statement may come as a surprise to many, and may be hotly disputed by some. The NSA’s core competence is in breaking security, and in exploiting flaws in algorithms, in methodologies, and in how products are actually used. So if the NSA recommends against using something, and does so in an honest fashion, there is reason to be cautious about using that thing.

Think of the NSA as a building demolition company, not as a team of skilled architects and artistic interior designers. I mention the NSA here because “cryptographically strong” is often invoked simply because the NSA, via a NIST standard, suggests that something be used.

Certification processes such as FIPS or Common Criteria may be effective at determining that certain tests were passed, but that in no way guarantees that the product is secure. When someone claims that a product does not need to be tested because it is already certified, that is a very good reason to be skeptical of the product’s security.

It is interesting to note that government agencies with a strong interest in security almost always exempt themselves from FIPS or Common Criteria requirements. In many cases they require that FIPS mode be disabled before a product can be used within the agency. Part of the reason is that it takes a long time for a FIPS flaw to be fixed, for a revised certification to be established that guards against the flaw, and for the impacted products to be corrected and re-certified. By the time all of that has happened, the attack landscape has evolved out from under the certification process yet again.

A better, and perhaps more honest, approach would be to say that something “appears to follow some known best practices as of today”. It is not as sexy, but it is a more accurate statement.

When evaluating the security of something, say a function, it is better to make very careful and very limiting statements such as “assuming that the key is properly chosen from a source that is indistinguishable from a true random source, and assuming that the key is not a weak key, and assuming that the key is properly stored and protected throughout the life-cycle of the data it is protecting …”.

Mathematics cannot do much to protect algorithms from stupid users or incompetent developers. It might be able to say “do not use a given key to encrypt more than X bytes of data if you want the odds of a random guess to be less than 1 in Y”. But here again, mathematics does not guard against improper key use, nor against new attacks the algorithm’s developers did not anticipate.
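One common form of that kind of per-key limit is a birthday-bound calculation; the sketch below assumes that model, and the block sizes and odds are illustrative choices of mine rather than guidance for any particular algorithm:

```python
import math

def max_blocks(block_bits: int, one_in_y: int) -> int:
    """Largest q with collision odds q**2 / 2**(block_bits + 1) <= 1 / one_in_y."""
    return math.isqrt(2 ** (block_bits + 1) // one_in_y)

# Keep the odds of a block collision under roughly 1 in 2**32.
for bits in (64, 128):
    q = max_blocks(bits, one_in_y=2 ** 32)
    print(f"{bits}-bit blocks: at most ~{q:,} blocks (~{q * bits // 8:,} bytes) per key")
```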

I’m not trying to be cynical here. I’m just suggesting that terms such as “cryptographically strong” are too often used inconsistently, too often applied over-broadly, or, when applied with some formal rigor, too often turn out to be next to useless.
