Inspired by recent discussion, these are my theses, which I hereby nail upon the virtual church door:
1 If you can do an online check for the validity of a key, there is no need for a long-lived signed certificate, since you could simply ask a database in real time whether the holder of the key is authorized to perform some action. The signed certificate is completely superfluous.
If you can’t do an online check, you have no practical form of revocation, so a long-lived signed certificate is unacceptable anyway.
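The online check in thesis 1 can be made concrete with a minimal sketch: a relying party asks a live authorization database, at the moment of the action, whether a given key may perform it. The database here is a plain dict standing in for a real-time service, and all key names and actions are hypothetical illustrations, not anything from the post itself.

```python
# Thesis 1 as code: authorization is a real-time query, so no long-lived
# signed certificate is needed, and "revocation" is just deleting a row.
# AUTHORIZED_KEYS stands in for a live database; names are illustrative.

AUTHORIZED_KEYS = {
    "ed25519:alice": {"reboot-machine", "read-database"},
}

def is_authorized(key_id: str, action: str) -> bool:
    """Ask the live database whether this key may perform this action now."""
    allowed = AUTHORIZED_KEYS.get(key_id)
    return allowed is not None and action in allowed
```

Revoking Alice is a single database delete that takes effect on the very next query, which is exactly the property a long-lived signed certificate cannot provide.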
2 A third party attestation, e.g. any certificate issued by any modern CA, is worth exactly as much as the maximum liability of the third party for mistakes. If the third party has no liability for mistakes, the certification is worth exactly nothing. All commercial CAs disclaim all liability.
An organization needs to authenticate and authorize its own users; it cannot ask some other organization with no actual liability to perform this function on its behalf. A bank has to know its own customers, the customers have to know their own bank. A company needs to know on its own that someone is allowed to reboot a machine or access a database.
3 Any security system that demands that users be “educated”, i.e. which requires that users make complicated security decisions during the course of routine work, is doomed to fail.
For example, any system which requires that users actively make sure throughout a transaction that they are giving their credentials to the correct counterparty and not to a thief who could reuse them cannot be relied on.
A perfect system is one in which no user can perform an action that gives away their own credentials, and in which no action can be authorized without the user’s participation and knowledge. No system can be perfect, but that is the ideal to be sought.

4 A partial corollary to 3, but one worth stating on its own: if “false alarms” are routine, all alarms, including real ones, will be ignored. Any security system that produces warnings that must be routinely dismissed during everyday work, and which can be dismissed by simple user action, has trained its users to be victims.
For example, the failure of a cryptographic authentication check should be rare, and should nearly always actually mean that something bad has happened, like an attempt to compromise security, and should never, ever, ever result in a user being told “oh, ignore that warning”, and should not even provide a simple UI that permits the warning to be ignored should someone advise the user to do so.
If a system produces too many false alarms to permit routine work to happen without an “ignore warning” button, the system is worthless anyway.
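One way to read theses 4 is as an API design rule: a failed authentication check should raise a hard error with no "ignore and continue" parameter at all. The sketch below, with hypothetical names, wraps a standard HMAC check (standing in for whatever cryptographic authentication the system uses) so that the only outcomes are success or an exception.

```python
# Thesis 4 as code: verification either succeeds or stops the program.
# There is deliberately no ignore_errors= flag and no return code that
# a caller could quietly discard. Names here are illustrative.

import hashlib
import hmac

class AuthenticationFailure(Exception):
    """Raised when a message fails its integrity check."""

def verify_or_die(key: bytes, message: bytes, tag: bytes) -> bytes:
    """Return the message only if its HMAC-SHA256 tag verifies."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise AuthenticationFailure("message failed authentication")
    return message
```

Because the failure path is an exception rather than a warning dialog, there is nothing for a user to click through, and a failure is loud precisely because it should be rare.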
5 Also related to 3, but important in its own right: to quote Ian Grigg:
*** There should be one mode, and it should be secure. ***
There must not be a confusing combination of secure and insecure modes, requiring the user to actively pay attention to whether the system is secure and to make constant active configuration choices to enforce security. There should be only one mode, and it should be secure.
The more knobs a system has, the less secure it is. It is trivial to design a system sufficiently complicated that even experts, let alone naive users, cannot figure out what the configuration means. The best systems should have virtually no knobs at all.
In the real world, bugs will be discovered in protocols, hash functions and crypto algorithms will be broken, etc., and it will be necessary to design protocols so that, subject to avoiding downgrade attacks, newer and more secure modes can and will be used as they are deployed to fix such problems. Even then, however, the user should not have to make a decision to use the newer, more secure mode; it should simply happen.
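The "it should simply happen" requirement can be sketched as knob-free version negotiation: both sides advertise the protocol versions they support, the connection silently uses the newest mutual version, and anything below a hard floor is refused outright rather than falling back. The version numbers and the floor below are illustrative assumptions, not from the post.

```python
# Thesis 5 as code: no user-visible mode switch. The newest mutual
# version is chosen automatically, and versions below MINIMUM_VERSION
# (assumed broken and retired) are refused, never downgraded to.

MINIMUM_VERSION = 3  # illustrative floor: versions 1 and 2 assumed broken

def negotiate(ours: set, theirs: set) -> int:
    """Pick the newest protocol version both sides support, or refuse."""
    mutual = ours & theirs
    best = max(mutual, default=0)
    if best < MINIMUM_VERSION:
        raise ConnectionError("no acceptable protocol version; refusing to downgrade")
    return best
```

Deploying a fix is then just adding a new version number to both ends: the better mode is used as soon as it is mutually available, with no configuration decision asked of the user.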
Perry E. Metzger perry@
Posted to the cryptography mailing list on Jul 31, 2010