Ten years ago this week, Gartner released "Assessing the Security Risks of Cloud Computing." Although we had written several research notes in 2007 discussing SaaS security, the 2008 note, co-authored by Mark Nicolett and me, was Gartner's first research using the term 'Cloud Security'.
Unsurprisingly for a new domain, we had more to say about the hypothetical risks associated with the public cloud than we had to say about the specifics of managing those risks. Most of the advice was generic, including bromides such as “Organizations that have IT risk assessment capabilities and controls for externally sourced services should apply them to the appropriate aspects of cloud computing.” Captain Obvious couldn’t have said it better. That lack of specificity was representative of the conundrum that continues to frustrate the IT world: the cloud is just like traditional forms of computing, except that everything is different.
Our 2008 research highlighted four key findings that have remained significant considerations for the use of public cloud computing:
Finding 1: The most practical way to evaluate the risks associated with using a service in the cloud is to get a third party to do it. Formal third-party evaluations (ISO 27001, SOC 2, FedRAMP) are the only scalable way to provide a useful level of assurance on cloud provider security. Unfortunately, it remains the case that only a small percentage of cloud service providers have undergone one. 'How do we evaluate the security of CSPs?' remains my most frequent inquiry topic.
Finding 2: Cloud-computing IT risks in areas such as data segregation, data privacy, privileged user access, service provider viability, availability and recovery should be assessed like any other externally provided service. In practice, the segregation of data between customers has not proven to be a significant problem, but the relative lack of transparency around the associated CSP technical and process controls remains a typical area of customer concern. A number of small CSPs have gone bankrupt, so vendor business viability remains a uniquely difficult aspect of cloud risk management; in most cases, however, the failed providers were so small that their loss was barely noticed by enterprises. CSP incidents resulting in permanent data loss do occur, but are relatively infrequent.
Finding 3: Location independence and the possibility of service provider "subcontracting" result in IT risks, legal issues and compliance issues that are unique to cloud computing. Increasingly strict privacy regimes, and the dominance of the cloud services market by US-owned CSPs, have made data location a growing concern. Today, new global norms for privacy or government data discovery look less likely than ever. The 'chain of service provider' model continues to grow in significance. The risks have mostly been sustainable, but it remains an area of ambiguity that would benefit from greater formalization of risk assessment and shared responsibility.
Finding 4: If your business managers are making unauthorized use of external computing services, then they are circumventing corporate security policies and creating unrecognized and unmanaged information-related risks. At one point, a significant minority of Gartner clients had policies that banned the use of unapproved external services. The rapid shift in software delivery from a licensing to a servicing model made this unsustainable, and many IT leaders flip-flopped, seemingly washing their hands of any responsibility for the implications of shadow IT. The number and significance of cloud services, especially SaaS, has exploded over the last 10 years. Several cloud access security broker (CASB) vendors provide reliable evidence that most organizations improperly expose large amounts of sensitive data in the public cloud—not because the cloud services are 'insecure', but because they are being used insecurely.
A decade ago, we used a tone of sober caution, pointing out that public cloud computing was a new model that raised significant risk and security concerns. What we did not fully appreciate in 2008 was that the majority of security and data loss incidents would not be due to service provider failures, but would be failures on the part of their customers to use cloud services appropriately. Our research did suggest asking “Are instructions provided to administrators and managers for setting and monitoring policies?” Perhaps we should have further asked “and if they do provide instructions, will you read and follow them?” Arguably, the typical cloud services user is not highly interested in discipline or governance, preferring instead to let everyone do whatever they want. CSPs normally do not emphasize the boring and unglamorous aspects of digital operations (indeed, a strong case can be made that most cloud services are marketed as a way to circumvent the IT department). Most cloud use scenarios remain a tacit agreement between the provider and customer to avoid awkward questions about user activity and responsibility.
As public cloud increasingly becomes the default model for software vendor delivery, and hosts a growing share of whatever computing is still left 'in house', it remains a marvelously ambiguous topic. In the previous decade, recognizing the inherent difficulty in definitively pronouncing a cloud service provider as being 'secure', we began speculating on a philosophical question: if public cloud experiences relatively few failures, will people eventually come to trust the delivery model, even if they don't have causal evidence? Under what circumstances would the default assumption flip to 'secure'? It took us until 2015 to declare "Clouds Are Secure", subtitling that research 'are you using them securely?' The caveat about 'secure use' remains our most important finding.
Awkwardly, organizations that undertake heroic levels of CSP risk assessment effort show no significant difference in outcomes compared with organizations that barely bother. We are not saying that you can always take CSP security for granted, but it is difficult to escape the conclusion that many enterprises would be better served by spending more time on the things they can control, instead of trying to manage the things they cannot. Most cloud security incidents involve avoidable customer misuse of the cloud service. Likewise, cloud service providers do a relatively good job of meeting their Service Level Agreements, but their customers often fail to take full advantage of configurable or optional resiliency mechanisms. The best practice for safe use of cloud computing is not crafting the ultimate questionnaire; it is knowing how to use the service appropriately. This is why we keep hammering home the message that responsibility is SHARED between provider and customer.
Ten years of focused research has reinforced Gartner’s observation that while public cloud is very much like traditional computing in the abstract, countless practices have changed in significance and form. It has proven a secure and reliable starting place for computing, with overall low levels of failure. Keeping it that way requires the will to do so, though, and the IT community continues to refine its understanding of ‘how to use it appropriately.’