Just as the transition from a physically-secure batch-oriented environment to a remotely-accessible multi-tasking/multi-user environment had huge implications for security, the evolution from multi-user host-based environments to multi-tenant cloud-based environments represents an equally significant security challenge. The very idea of computer security only became necessary in the early 1960s when a more flexible and complex architecture attempted to provide service to multiple entities simultaneously, without the benefit of their physical presence.
To the extent that cloud computing truly does represent a new model, with greater flexibility, higher complexity, and a multi-entity support concept, it represents a complex engineering and architectural task. A traditional operating system such as Linux must allow the safe and reliable co-existence of a set of users, ensuring that they all can gain access to memory, storage, processing, and peripherals, without interference or unauthorized access. It must allow for privileged users who have the ability to maintain, repair, back up, and restore the system and its users. A cloud computing environment has those same needs, but they are extended across a virtualized storage and processing environment that not only consists of multiple hosts but frequently spans multiple sites. Instead of a flat entity base with a loose sort of group support, cloud environments are intended to support multiple tenants, each of which has its own user base.
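The kind of per-user isolation a traditional OS provides is concrete and inspectable. As a minimal sketch (using Python's standard `os` and `stat` modules, not any cloud API), here is the classic Unix mechanism at work: a file restricted to its owner, with the group and other permission bits cleared so no unprivileged co-resident user can read it.

```python
import os
import stat
import tempfile

# Create a scratch file and restrict it to the owner only (mode 0o600),
# the basic Unix primitive for isolating one user's data from another's.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)

mode = os.stat(path).st_mode

# Render the mode the way `ls -l` would: -rw-------
print(stat.filemode(mode))

# The kernel will refuse access by group members and other users
# because those permission bits are zero.
assert mode & stat.S_IRWXG == 0
assert mode & stat.S_IRWXO == 0

os.remove(path)
```

In a multi-tenant cloud, the analogous guarantees are enforced by a hypervisor or fabric controller rather than a single kernel, and they are far harder for a customer to inspect this directly.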
Back in the day, I learned Unix security from the ground up, working my way through Kochan and Wood, recoding some of their security tools in C because the shell scripts didn’t all work on the CCI Power 6/32 where I cut my sysadmin teeth. I pored over Maurice Bach’s Design of the Unix Operating System. We used to argue about which version of Unix was most secure, but even the differences between SysV and BSD were pretty minor in comparison.
There are a huge number of people who understand *nix and Windows internals, but how many can claim the same about any of the proprietary public cloud environments? I’d like to think that quite a lot of people know quite a lot about Xen, but as you get more proprietary, where’s the Bach book on internals? How much like a hypervisor is a fabric controller? If Google and Salesforce aren’t using virtual machines, are they still clouds?
A wide variety of engineering tradeoffs are being made in the designs of different cloud models, and the degree to which the protection of data can be guaranteed, both while it is at rest and while it is being accessed, likely varies. We shouldn’t assume that all clouds are created equal, because they vary quite a bit under the hood. I’m quite happy to use any old cloud for personal tasks that don’t involve anything sensitive, as long as I can back up my data outside of the cloud and repurpose it if the cloud blows away. For enterprise use involving sensitive data or processes, I don’t know how you can make an adequately informed risk decision without knowing at least as much about the cloud internals as I know about Unix internals.
Unfortunately, in the face of a nested set of distributed services, any single one of which might be as complex as a traditional OS, and in the face of implacable vendor opacity, the cloud security community is mostly shying away from the same close examination of architecture, design, and coding that was once considered crucial in computer security. Cloud assessments have a discouraging tendency to concentrate on well-understood things that fit easily on a checklist, such as password aging and fire suppression. You may be comfortable with today’s complex computing environments, trusting the vendor assurances that your data is safe because it sits in a SAS 70 ‘certified’ facility, but it would be inaccurate to assume that the digital exploit community shares that lack of curiosity about the underlying technology.