This week, I was forced to change my Gartner corporate password, and the password I use to access my online pay stubs. The latter was particularly aggressive in demanding a complex string that is impossible for me to remember. According to my password storage software, I've been forced to change 17 different passwords in the past six weeks. It's nice to know that so many different institutions are working so diligently to please their auditors, but I have to wonder just how deep this security commitment goes. After all, what good is a fresh password if it is sitting on top of stale security technology?
We spend a lot of time evaluating the performance of security operational processes, but I often wonder if this isn't a deliberate distraction, a sort of best-practice fig leaf meant to hide the fact that we often don't know how secure the underlying code actually is.
Many of the seminal works on computer security were collected and scanned by a team at UC Davis and can be found online at NIST. If you spend some time reading through these documents, which date back as far as 1970, you'll see relatively little concern about password complexity and aging. What you may well experience is a sort of déjà vu all over again, as you watch over the shoulders of the world's top computer scientists while they struggle to identify and compensate for the innate security weaknesses of networked, shared-resource systems. A prime example of what goes around (and goes around a lot) is James Anderson's 1972 paper for the Air Force, in which he discusses the inherent shortcomings of the typical penetrate-and-patch process. The only thing that has changed in 40 years is the amount of penetration testing and patching. It remains a hit-and-mostly-miss operation.
We keep applying vulnerability patches because we haven't figured out how to make the code safe in the first place. This is not to say that we haven't learned a great deal about security architecture, secure coding, and security testing. What we have not learned is how to reach any useful conclusion about just how secure any particular piece of code is. In contrast to the 1970s, today's architects and coders have a useful understanding of basic security principles. However, the threat and risk considerations for public cloud-based services are vastly greater than those that confronted a typical 1974 mainframe.
Vendors are constantly asking us to use systems based on highly complex and unproven software. When asked how secure this code is, providers tell us that they have the best people, and that an auditing firm has evaluated their operational processes. Google brags that its security staff has scooters, a mall-cop capability totally irrelevant to the attack resistance of its proprietary infrastructure. I'm afraid that history is going to get this one right. The integrity of your data and processes depends upon the robust design and careful coding of somebody else's shared-resource, network-based software. It might be great code, but the vendors struggle to provide useful evidence, and the buyers lack any agreed-upon practice for code evaluation.
Lacking any useful process for evaluating code risk, we choose instead to address questions that are relatively easy to answer, and pretend that the answers matter. Let the farce be with you!