Secrets and Lies

Digital Security in a Networked World

by Bruce Schneier

430 pages, ISBN 0-471-25311-1, Wiley, New York, 2000.

www.wiley.com

Reviewed by J. M. Haile, Macatea Productions, http://www.macatea.com/

Secrets and Lies presents Bruce Schneier's informed and informative take on computer and network security. The book divides into three parts: the landscape of digital threats, security technologies, and strategies for securing computers and networks. The presentation is accessible both to the individual computer enthusiast and to the professional system administrator.

Several important themes run through this book. The first is that security is a system (a process), not a product. Networks, computers, and software are all complex systems: they are composed of individual parts that interact with each other, the interactions include feedback, and the systems are susceptible to external disturbances—from users, from other computers, from other networks. As a result, we need security systems to protect them: simply installing an antivirus product on your computer will not provide much protection.

A second theme is that security in the digital universe poses exactly the same kinds of problems as security in the physical universe. Consequently, our schemes for protection should be of the same kinds. In the physical world, security involves prevention, detection, and reaction. A bank might place its funds in a safe; the safe sits inside a vault; the vault has a single entrance guarded by a combination lock and perhaps a locked gate. That's all prevention. In addition, if the safe is opened by bad guys, an alarm sounds; that's detection. And when the alarm sounds, the police show up; that's reaction.

The same three components apply to digital security: a firewall to prevent access, software to detect attacks that penetrate the firewall, and people and software that react to a detected intruder and nullify the attack. Further, in the physical world we use protection in depth: to steal bank funds, the bad guys must open a locked gate, open the locked vault, then find the safe and open it. But the conventional wisdom seems to be that digital security is only about prevention, and further, that protection in depth is unnecessary: a single firewall, by itself, is presumed to be enough to protect a network.
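
The prevention-detection-reaction split maps naturally onto code. The sketch below is purely illustrative: the function names, port numbers, and "attack signature" are my own inventions, not anything taken from the book or from a real product. It simply shows the three layers composed so that whatever slips past prevention is caught by detection and answered by reaction.

    # Illustrative only: hypothetical names and rules, not a real security API.
    def firewall_allows(packet: dict) -> bool:
        """Prevention: admit only traffic that is explicitly permitted."""
        return packet.get("port") in {80, 443}

    def looks_suspicious(packet: dict) -> bool:
        """Detection: flag traffic that got through but matches an attack pattern."""
        return "<script>" in packet.get("payload", "")

    def raise_alarm(packet: dict) -> None:
        """Reaction: alert a person (or automated responder) who can intervene."""
        print(f"ALERT: suspicious traffic from {packet.get('source')}")

    def handle(packet: dict) -> None:
        if not firewall_allows(packet):   # prevention layer
            return
        if looks_suspicious(packet):      # detection layer
            raise_alarm(packet)           # reaction layer
            return
        # otherwise the packet proceeds to the application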

Another of Schneier's themes is that while the theories underlying many of our digital security technologies are good, in practice they leave us vulnerable. For example, a common vulnerability occurs when 128-bit encryption keys are protected by user-remembered passwords (as in much of Windows NT security and in most hard-disk encryption products). In such cases, the theoretical strength of 128-bit encryption is wasted: a memorable password contains far fewer than 128 bits of randomness, so an attacker can simply guess passwords instead of attacking the key itself.
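
A quick back-of-the-envelope calculation shows the size of the gap. An eight-character password drawn from the roughly 94 printable ASCII characters carries only about 52 bits of randomness, nowhere near the 128 bits of the key it protects. The numbers below are my own illustrative choices, not figures from the book.

    import math

    key_bits = 128
    alphabet = 94          # printable ASCII characters
    password_length = 8    # a fairly typical password

    password_bits = password_length * math.log2(alphabet)
    print(f"encryption key space: 2^{key_bits}")
    print(f"password key space:   2^{password_bits:.0f}")   # about 2^52
    # Guessing the password is vastly cheaper than attacking the 128-bit key.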

A fourth theme is that as bad as many implementations of digital security actually are, the basic security problem is usually people, not technology. People choose bad passwords, and if they choose a good one, they stick it on their monitors so anyone can see it. They use the same password for many different accounts; they may even use a bank card PIN for a password. If they get frustrated with the slow performance of antivirus software, they'll disable it. If a pop-up window displays a security certificate and asks the user to verify the connection to a website, the user blindly clicks "OK". If manufacturers release software updates to plug security holes, people don't install the updates on their machines. If they have access to confidential or classified information, they'll load it onto a laptop and take it home to work with, ignoring the fact that many laptops containing such information are stolen. People just don't want to be bothered.

(On the other hand, some people don't want to be bothered with physical security either. A few years ago a colleague's car was stolen. He bought another. A month later, the new car was stolen right out of his driveway. A month later, he and I were walking out his front door and I automatically locked it behind me. He chastised me, "No need to do that—everything is perfectly safe around here." Some people refuse to learn.)

But users are not the source of all people-related security problems; software designers and developers also contribute. Before new software is shipped, it is tested for functionality: Does the software perform as advertised? Some developers do more thorough jobs of testing than others, but everyone tests for functionality, at least to some degree. However, functionality is not security; the fact that software performs as advertised says nothing about the security of that software. Software designs rarely take security into account and, in any case, no manufacturer tests new software for security flaws. In fact, software developers and engineers are not qualified to do such testing. Instead, the testing is done, after the software is released, by users and hackers. And even when security holes are discovered, the manufacturer may not create and release a patch unless the problem has received some amount of publicity. This curious situation is akin to building a house but not putting locks on the doors until after the house has been robbed and the event has been reported in the local newspaper.

A third kind of people-related vulnerability has, in the digital world, come to be called "social engineering"; in the physical world it's called "working a con." Some astute, determined attackers have realized that rather than obtain a password by attacking the technology, it's easier just to phone a secretary, gain her confidence (the con), and ask for the password. Such ploys take advantage of the human desires to be helpful and to display superior knowledge. They can be worked successfully on secretaries, janitors, and vice presidents, as well as on Joe User. Any technological security measure can be subverted by social engineering; the defense lies in properly educating all employees about the possibilities and giving them procedures for dealing with requests for information.

The last theme I'll mention here is that users and organizations must follow a process if they want to heighten the security of digital resources. Whether those resources are a single computer, a small network, or many networks in a large organization, a security plan must be developed and implemented. Planning starts by listing the resources to be protected and identifying the kinds of attacks that could be mounted against those resources. Schneier makes clear that identifying possible attacks is not a trivial task. But once you understand what you need to guard against, you can formulate a defense by combining prevention, detection, and reaction.
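
As a rough illustration of what such a plan might record, here is a tiny sketch pairing each asset and plausible attack with prevention, detection, and reaction measures. Every entry is a hypothetical example of mine, not material from the book.

    # Hypothetical security-plan sketch: assets, plausible attacks, and the
    # prevention/detection/reaction measures chosen for each.
    security_plan = {
        "customer database": {
            "stolen credentials": {
                "prevent": "require two-factor authentication",
                "detect": "alert on logins from unusual locations",
                "react": "lock the account and notify its owner",
            },
            "SQL injection": {
                "prevent": "use parameterized queries",
                "detect": "log and review anomalous query patterns",
                "react": "take the service offline and patch",
            },
        },
    }

    for asset, attacks in security_plan.items():
        for attack, measures in attacks.items():
            print(f"{asset} / {attack}: {measures}")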

Secrets and Lies is easy to read, filled with pertinent anecdotes, and takes a common sense approach to security issues. Schneier is careful to define all the many, many jargon words that have appeared in digital-security discourse (like script kiddies, exploits, page jacking, asymmetric encryption). He also goes to considerable pains to convince the reader that any technological defense can be broken or worked around by an attacker who has sufficient knowledge, time, and resources. Again, this does not differ from situations in the physical world. In summary, this book should be read by anyone who needs to protect digital resources and information.

(jmh 12 October 06) © 2006 by J. M. Haile. All rights reserved.