Overshadowed by the far more serious X.509 parsing vulnerabilities disclosed at BlackHat, one of the problems noted by Dan Kaminsky et al. was the existence of an MD2-signed root certificate.
On the surface it looks bad. If an MD2 preimage attack were feasible, an enterprising attacker could forge other certificates chaining up to this one, "transferring" the signature from the root to the bogus certificate, compliments of the MD2 break. Root certificates are notoriously difficult to update: Verisign cannot afford (for business reasons, even if it is the "right thing" for the Internet) to risk revoking all certificates chaining up to the root. Re-publishing the root signed with a better hash function is a no-op: the existing MD2 signature would not be invalidated. The only option is to not trust any certificate signed with MD2, except for the roots themselves.
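That mitigation policy is simple enough to sketch. The `Cert` class and its field names below are hypothetical stand-ins for a parsed X.509 certificate; the point is only the rule: an MD2 signature is tolerable on a self-signed root (whose signature is never actually verified, since the root is trusted by fiat) and fatal anywhere else in the chain.

```python
# Sketch of the "distrust MD2 except on roots" policy. The Cert class
# is a hypothetical stand-in for a parsed X.509 certificate.
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str
    sig_algorithm: str  # e.g. "md2WithRSAEncryption"

def md2_policy_ok(chain):
    """chain[0] is the end-entity cert, chain[-1] the root."""
    for cert in chain:
        if "md2" in cert.sig_algorithm.lower():
            # A self-signed root's signature is never verified -- the
            # root is trusted by fiat -- so MD2 is tolerable there.
            if cert.subject != cert.issuer:
                return False
    return True

chain = [
    Cert("example.com", "Intermediate CA", "sha1WithRSAEncryption"),
    Cert("Intermediate CA", "Some Root", "md2WithRSAEncryption"),  # forged?
    Cert("Some Root", "Some Root", "md2WithRSAEncryption"),        # self-signed
]
print(md2_policy_ok(chain))  # -> False: the MD2-signed intermediate is rejected
```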
But looked at from another perspective, the MD2 problem is a tempest in a teapot. Luckily no CA is using MD2 to issue new certificates. (At least as far as anyone can determine; CA incompetence is generally unbounded.) This is important because the MD5 forgery from last December depended on a bone-headed CA continuing to sign new certificate requests with MD5. Without such a cooperating CA, the attacker needs a second-preimage attack; a simple birthday attack will not work. Finding a second message that hashes to the same value as a given message is a much harder problem than finding two meaningful but partially unconstrained messages that collide: against an ideal n-bit hash, the former costs about 2**n work while the latter costs only about 2**(n/2).
Eager to join in the fray against PKI, the researchers point to a recent result, An improved preimage attack on MD2, to argue that such a possibility is indeed around the corner. It turns out the feasibility of this attack, and the 0wnership of MD2, was slightly exaggerated, to paraphrase Mark Twain. The paper does indeed quote 2**73 applications of the MD2 hash function as the time required to find a second preimage. That is orders of magnitude beyond what any previous public brute-force effort has broken, but Moore's law could fix that in time.

What the Kaminsky et al. summary mysteriously neglected is a far more severe resource constraint, stated bluntly in the original paper: the attack also requires on the order of 2**73 memory. Outside the NSA, nobody is likely to have that kind of storage lying around. None of the existing distributed cryptographic attacks have come anywhere near this limit; in fact most of them made virtually no demands on space from participants. To put this in context: if one hundred million people were participating, each would have to dedicate more than a thousand terabytes of disk space. Not happening. And this does not even account for the communication and network overhead among participants, each holding one fragment of this massive table and needing to query the fragments held by everyone else.
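The back-of-envelope arithmetic behind the "thousand terabytes" figure, assuming each of the 2**73 table entries holds one 16-byte MD2 block (an assumption; the paper states the memory cost only as 2**73):

```python
# Storage needed per participant in a hypothetical distributed attack.
entries = 2 ** 73                    # memory cost quoted in the paper
bytes_total = entries * 16           # assume 16 bytes (one MD2 block) per entry
participants = 100_000_000           # one hundred million volunteers

per_person_tb = bytes_total / participants / 1e12
print(round(per_person_tb))          # terabytes of disk per participant
```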