Gartner Blog Network

Definition: Service Provider Security Evaluation

by Jay Heiser  |  August 10, 2012  |  2 Comments

The process by which the buyer asks a random list of questions that might have some minor relevance to some aspect of a provider’s security posture, and the prospective provider pretends to answer them.


Category: cloud-computing  risk-management  security  

Tags: cloud-computing-risk  cloud-security-standards  risk-assessment  security  

Jay Heiser
Research VP
6 years at Gartner
24 years IT industry

Jay Heiser is a research vice president specializing in the areas of SaaS and public cloud risk and control. Current research areas include SaaS governance, cloud provider transparency and digital business risks.

Thoughts on Definition: Service Provider Security Evaluation

  1. Craig Heath says:

    Very good 🙂

    What do you think of vBSIMM as a means to ask less random questions?

  2. Jay Heiser says:

    I don’t know of anything else like vBSIMM. Conceptually, it makes a lot of sense, and Cigital clearly has huge amounts of experience and expertise in the area of software security quality management.

    Typical security evaluations tend to dwell on ops, because that’s easy. You can send an auditor fresh out of university into a data centre, armed only with a checklist, and they can reliably tell you whether admins are rotating their passwords and whether vendor patches are being applied. So what good is that? For relatively simple systems running on well-understood operating environments, it is, as Sammy might say, ‘pretty not bad.’ But if the security quality of the code cannot be assumed, then an audit of operational processes is likely to produce a misleadingly positive result.

    It’s no wonder that the world avoids attempts to evaluate software security quality. How long did it take the NSA before it was willing to ‘trust’ an OS? Long enough that all the test platforms were obsolete hardware. That degree of code evaluation turned out to be totally impractical.

    Process quality is far easier to measure than output quality, so I think it’s the right approach. I’d love to see this kind of testing applied to some of the major SaaS providers. One of the huge challenges of ‘cloud computing security assessment’ is accommodating the dynamic nature of many of the service providers: they are swapping code in and out on a constant basis. A mechanism that evaluates the quality of code creation and testing could, in theory, scale to cloud speeds. Would it be a useful level of assurance?

    What do you think?


Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.