At the 1997 JavaOne conference, the Java Security Architect, Li Gong, gave a presentation on Java security. One of his slides is particularly useful for understanding Java security and cryptography. It contains a list of five inequalities, to which I’ve added explanations.
- Security != cryptography
Adding cryptography to an application will not make it secure. Security is determined by the overall design and implementation of a system; cryptography is a tool for building secure systems.
- Correct security model != bug-free implementation
Even if you have a great design (model), bugs in your implementation can be exploited by attackers. With a correct design, however, you can focus on debugging the implementation. If your design is not secure, you have to go all the way back to the drawing board before you even think about debugging.
- Testing != formal verification
Although testing is a great idea, it won’t prove to anyone that a system is secure. Strictly speaking, formal verification means a mathematical proof of a system’s correctness; in the real world, the closest most systems come is extensive review of their design and implementation by knowledgeable security people. A cheap way to get such review is to post your application’s source code to the Internet and invite people to poke holes in it.
- Component security != overall system security
System security is a chain, and any link can be broken. Even if the components of a system are secure, they may interact in insecure ways.
- Java security != applet containment
A lot of the buzz about Java security has centered around the applet “sandbox” and the security of applets running in browsers. (Go look at comp.lang.java.security, for example, and you’ll find it’s mostly filled with applet sandbox questions.) In truth, this is only a small part of the Java security picture. Most of this book is about the rest of the picture.
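To give a small taste of the picture beyond the sandbox: the core Java API has included cryptographic building blocks since JDK 1.1, such as java.security.MessageDigest for computing cryptographic hashes. The sketch below (written in modern Java; the class name and sample message are my own) computes a SHA-1 digest and prints it as hex.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch: Java security is more than the applet sandbox.
// java.security.MessageDigest, part of the core API since JDK 1.1,
// computes cryptographic message digests.
public class DigestExample {

    // Hash a message with SHA-1 and return the digest as a hex string.
    static String sha1Hex(byte[] message) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(message);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // SHA-1 produces a 160-bit digest, so the hex string is 40 characters.
        System.out.println(sha1Hex("Hello, world".getBytes("UTF-8")));
    }
}
```

Note that having MessageDigest available does nothing, by itself, to make an application secure; as the first inequality says, it is only a tool.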
The whole presentation is available at http://java.sun.com/javaone/sessions/slides/TT03/index.html.