by Jay Heiser | April 11, 2014 | 2 Comments
Change all your passwords. Now. And then do it again in a week. Of course, there’s no evidence that any passwords have been exploited, but isn’t the lack of substantive evidence a suspicious fact in and of itself? It can be if you want it to be.
My favorite presentation at the RSA Conference was from Nawaf Bitar, who introduced the immediately popular hashtag #firstworldoutrage. It neatly captures the idea that when people are relatively comfortable and secure, they will start inventing things to be vocally outraged about.
As a case in outrageous point, I was disappointed with much of the recent media commentary on the GM ignition switch issue that misleadingly characterizes the fix as a simple matter of replacing a $.50 component. A recent article, “In Defense of GM: No one is asking the right question: Was the company’s risk assessment about the faulty ignition switch reasonable?”, actually did ask that question, and it provides a compelling explanation that it wasn’t worth the money to fix what is a statistically insignificant source of fatality.
So I ask the same ‘acceptable risk’ question about Heartbleed. Is this truly a Spinal Tap moment in the infosec world, in which every single Internet citizen needs to take heroic measures to change the majority of their passwords? While it has been demonstrated that the vulnerability can be used to collect random chunks of data from Internet servers, including username and password pairs, it has not been shown to be a practical mechanism for capturing large numbers of passwords.
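For readers who want the mechanics, the bug boils down to a missing length check in OpenSSL's TLS heartbeat handler (CVE-2014-0160). The toy Python sketch below is my own illustration, not OpenSSL's code; it models how trusting an attacker-supplied payload length leaks adjacent memory:

```python
# Toy model of the Heartbleed over-read: the TLS heartbeat handler
# trusted the attacker-supplied payload length instead of the actual
# payload size, echoing back adjacent process memory.

def vulnerable_heartbeat(memory: bytes, payload_start: int, claimed_len: int) -> bytes:
    # BUG (as in CVE-2014-0160): no check that claimed_len matches the
    # real payload length, so the reply can include whatever bytes
    # happen to sit next to the payload in memory.
    return memory[payload_start:payload_start + claimed_len]

def fixed_heartbeat(memory: bytes, payload_start: int,
                    claimed_len: int, actual_len: int) -> bytes:
    # The patch: silently discard heartbeats whose claimed length
    # disagrees with the record's actual payload length.
    if claimed_len != actual_len:
        return b""
    return memory[payload_start:payload_start + claimed_len]

# Process memory: a 4-byte heartbeat payload followed by a secret.
memory = b"PING" + b"password=hunter2"
leak = vulnerable_heartbeat(memory, 0, 20)   # attacker claims 20 bytes
safe = fixed_heartbeat(memory, 0, 20, 4)     # patched server refuses
```

The leak is probabilistic in practice: each 64KB over-read returns whatever happened to be adjacent, which is why harvesting specific credentials at scale is harder than the headlines suggested.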
Now that everybody knows about this bug, the race is on to close the SSL holes before they are significantly exploited. There’s no question that the code needs to be fixed, but it is going to cost the collected IT world a lot of time and money to identify all the vulnerable systems, and patch them.
The urgency of password change for the millions of Internet citizens is less obvious. What will be the net social cost of every Internet citizen changing a dozen passwords? I have over 250 myself, most of which probably haven’t even been used during the two-year vulnerability window. (Neustar told Gartner this week that the average is c. 50 passwords per user.) I wonder how much the overall support cost will be to recover from the inevitable password change failures. Will it all have been worth it?
One cost will be the cultural impact of one of the Internet’s biggest incidents of ‘crying wolf’. Most people assume that a wolf was sighted just outside the doors of Facebook. When the digital dust finally clears, my expectation is that very few password exploit incidents will be documented, but that will be old news for a world looking for new forms of outrage on a daily basis. But if we do experience more incidents like this, people will start asking whether these ignition switches always need to be changed, and over time, they will lose whatever appetite they have for the fun of warning their Facebook friends that they had better change all their passwords.
What this incident has turned into is yet another example of the inherently flawed nature of passwords. A more unusual lesson to derive from this incident is that the global Internet rests upon widely shared code that represents the potential for more single points of failure. Major public cloud service providers, financial service firms, social networking services, hardware devices, and countless other Internet-enabled technologies not only turned out to be dependent upon the same SSL source code, but like much of the open source code that defines our digital world, it was developed by a small group of part-time volunteers. That seems an insufficiently substantial foundation to support the global expectations of privacy, confidentiality, and reliability. Perhaps that’s why they call it the cloud.
Category: risk management security Tags: heartbleed, security
by Jay Heiser | April 8, 2014 | 5 Comments
It’s too bad that Donald Rumsfeld’s awkward little epistemological speech has been so thoroughly politicized, turning an important risk management principle into an opportunity for derision. Intelligence analysts, and IT analysts, need to be acutely aware of the limits of their knowledge, especially when making decisions about how to take advantage of public cloud services.
Anybody making risk decisions about public clouds needs a strong understanding of the degree to which they can trust the information they have about those services. To apply this particular Theory of Knowledge principle to Public Cloud Services:
1) Known Knowns: If data is encrypted before it is uploaded, we know that it is encrypted. If the data is not encrypted, we know that it can be read by anyone who accesses it, which leads to the second category.
2) Known Unknowns: If our clear text data is in someone else’s site, we know that it is vulnerable. What we cannot ever know for certain is whether an unauthorized person takes advantage of that vulnerability. It’s a level of ambiguity, but an understood one.
3) Unknown Unknowns: If there were some sort of vulnerability that we had never conceived of, it would be in this final category. The fact that you haven’t even thought it might exist means that you don’t know what to look for to ensure that it is controlled. An example might be a cloud service provider that exposes your email boxes to external surveillance in order to conduct load balancing and facilitate service continuity.
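To make the first category concrete: if you encrypt before you upload and keep the key, the provider holds only ciphertext, a known known. The sketch below uses a throwaway XOR pad purely to illustrate the principle; in practice you would use a vetted cipher from a library such as `cryptography`:

```python
# Toy illustration of a "known known": encrypt client-side before
# upload, and what the cloud provider sees is unreadable without the
# key that never leaves your side. XOR with a one-time random pad
# stands in for a real cipher here (do NOT use this for real data).
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time pad must match length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return encrypt(ciphertext, key)  # XOR is its own inverse

record = b"patient notes: confidential"
key = secrets.token_bytes(len(record))  # key retained by the data owner
uploaded = encrypt(record, key)         # what the provider actually stores

# With the key, the data is recoverable; without it, the upload is noise.
recovered = decrypt(uploaded, key)
```

The point is not the cipher but the knowledge boundary: category 1 questions ("is it encrypted?") are answerable by inspection, while category 2 questions ("did anyone read the clear text copy?") never are.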
In retrospect, if a cloud service provider claims to be spreading your data across multiple locations (which virtually all of them do), it would make sense, before storing your data in that service, to ask what mechanism they use to transmit your data between those locations, and how they protect it in transit. I see a lot of cloud service provider questionnaires, but I don’t ever remember this particular issue coming up.
An example has recently come to light of an ongoing confidentiality failure involving a CSP copying customer data between their data centers. For many cloud buyers the news of this unexpected form of exposure has moved this particular risk from category 3, into category 2. Now we all know that we don’t know how our data is protected when it is being replicated between CSP data centers.
We can only hope that not every virtual backend is flapping open in a packet storm.
Category: Cloud IT Governance risk management security Tags:
by Jay Heiser | September 25, 2013 | 2 Comments
Although the actual events took place at widely varying times, the summer of 2013 has witnessed the public release of 3 major ‘inappropriate use of the cloud’ incidents.
On July 28, Oregon Health & Science University (OHSU) felt compelled to notify 3,044 patients that while there was no reason to believe that their data had leaked, or been misused, it was in a place that it shouldn’t be, and they wanted to apologize. Several physicians had decided that their personal GoogleDrive accounts would be an appropriate place to share data, and while this undoubtedly was a convenient place to compare notes on their patients, they hadn’t executed a HIPAA business associate agreement with the service provider.
The following day, July 29, NASA’s Office of Inspector General released a report that “found that weaknesses in NASA’s IT governance and risk management practices have impeded the Agency from fully realizing the benefits of cloud computing and potentially put NASA systems and data stored in the cloud at risk.” (NASA has a LOT of data in the public cloud.) Citing a laundry list of weak cloud control practices, including not asking the permission of the non-existent Cloud Czar, the OIG further stated that “in four cases NASA organizations accepted the cloud providers’ standard contracts that did not impose performance metrics or address Federal privacy, IT security, or record management requirements,” concluding from this that “As a result, the NASA systems and data covered by these five contracts are at an increased risk of compromise.” (see page iv) I agree that most standard contracts are extremely non-committal about levels of security service, but such a direct correlation between risk and contract verbiage seems… well, cloudy to me.
A month later, on August 28, the US Federal Trade Commission issued a complaint against LabMD for a 2007-8 incident in which the Limewire client had been installed on one of their servers, resulting in personnel data being compromised through that P2P system. Reminiscent of the OHSU incident, it is yet another case of people of good will who are just trying to get their job done by using spreadsheets to supplement the weakness of IT-provided systems. It appears that Limewire was the private toy of the sysadmin, and not used to support the spreadsheet-based workgroup, but unfortunately, the directory they used was shared to that service. Unusually for a ‘privacy breach’, it seems that personal data was actually obtained by somebody who tried to use it to commit financial fraud. Five years after this undisciplined use of the cloud on the part of a sysadmin, LabMD is now required to spend the next 20 years allowing a CISSP to assess their posture.
Category: Cloud IT Governance Tags: Cloud, clouds, compliance, regulatory compliance
by Jay Heiser | September 18, 2013 | 1 Comment
You’ve got 2 weeks to get several Petabytes of data from a dissipating cloud. Will you get it all back safely? Hundreds of Nirvanix customers are asking themselves that question right now.
Although their web site remains blissfully mum about this unfortunate development, The Wall Street Journal is only one of several media organizations reporting that Storage as a Service provider Nirvanix has run out of money, and will cease service in 2 weeks.
Given that many of their customers are large media companies, I have to assume that they have an awful lot of data stored there. Two years ago, a Nirvanix press release bragged that USC would be moving 8 petabytes of data to the Nirvanix Deep Cloud Archive™.
What kind of a data storm do you get when a thousand customers all simultaneously start trying to copy out petabytes of data? How much technical support can a company offer when they are going bankrupt? Can you reasonably expect that their staff will be motivated to stay on, and undertake any necessary heroic efforts that might be needed to help you recover your data? Where are you going to put that data?
I continue to get a lot of questions from Gartner clients about cloud security. While there are issues associated with the confidentiality of your data in the cloud, for the majority of cloud customers, the potential loss of confidentiality is just not the biggest form of cloud risk. Cloud computing turns your software into a just-in-time supply chain maintenance issue. If a vendor goes bankrupt, or suffers a catastrophic failure, your data disappears immediately, and there may not be anybody around to help you find it.
What’s your contingency plan?
Category: Cloud risk management security Tags:
by Jay Heiser | September 13, 2013 | 2 Comments
I’m feeling the walls of our linguistic purity come crashing down, battered by the waves of language evolution. In short, I’m ready to acknowledge an increasingly popular usage, and start using the trendy term ‘Cybersecurity’.
Such terminological transitions are no new thing in a space that could still legitimately be labeled as ‘computer security’. Working for a beltway bandit in 1995, I have vivid memories of a passionate beer-fueled discussion over the relatively new term ‘information security’, and whether that was an appropriate designation for an increasingly significant discipline, or just a pretentious and hyped new label.
Since that time, my friends in the military-industrial ghetto have recharacterized the holistic approach to ensuring that nothing bad happens to stored communications as ‘information assurance,’ and arguably arriving several years later, the commercial world has an essentially equal set of expectations for the term ‘information risk management’.
Meanwhile, Gartner is fielding a record number of calls on ‘CYBER’ security topics. Unsurprisingly, the answers vary when we try to dig deeper into the underlying questions. When I asked one Cybersecurity vendor just what they thought the term meant, they explained that it referred to ‘computer security–with the Internet’. Given that I’ve been on the Internet, and involved in security topics, since 1987, I just didn’t find that a satisfactory answer at the time. Yet, the more I think about it, the more it rings true.
In today’s parlance, ‘cyber’ clearly equates to ‘digital’. With all due respect to Norbert Wiener, and his groundbreaking work in the field of cybernetics, a prefix inspired by the Greek word for ‘steersman or rudder’ has been hijacked by 30 years of speculative fiction, losing its association with the esoteric concepts of ‘control’ and ‘systems’.
For the overwhelming majority of people, ‘cyberspace’ refers to the Internet, and by extension, anything with an IP address. Cybersecurity essentially applies to the realm of all that is digital, be it an office computer, a personal tablet, operational technology, or next year’s digital refrigerator. While the term certainly implies the role of Internet connectivity, that distinction is becoming less significant for the inhabitants of an ‘Internet of things’.
The good news is that we no longer have to be worried about paper. The self-identified practitioners of ‘Information Security’ have spent the last 20 years grappling with the dilemma of the printed page, and to a lesser degree, with the implications of human memory. Cybersecurity means freedom from the thankless task of trying to protect information outside of the digital realm.
Computer Security is dead; long live computer security. I wonder what they will come up with next.
Category: risk management security Tags: cybersecurity, security, terminology
by Jay Heiser | June 14, 2013 | 1 Comment
Gartner security analysts are being bombarded with questions about CYBER security. Is this cyber reality, or cyber hype?
A few years ago, we had seriously entertained the idea of creating a sort of ‘IT Buzz Term Hype Cycle’ that would map overused prefixes across trigger, hype, disillusionment, and productivity. At the time, ‘I-’ had reached the peak of hyperfication. It’s not hard to envision a future in which the prefix ‘cyber’ goes the way of the dodo, trapped forever in a linguistic graveyard with the suffix ‘dot com’.
In Gartner, we actually do have a concept of cybersecurity, incorporating operational technology into a broader concept of digital domain protection. It is also fair to say that many uses of the term cybersecurity connote, if not denote, the concept of offensive digital warfare. I want to go on the record right now and say that we specifically do NOT recommend that commercial and non-profit users of digital technology develop hackback capabilities.
We live in a constant state of verbal inflation. I started my career in computer security, lived through long painful discussions on whether or not information security was a valid term, and have watched, without actually encouraging, adjectival divergence into information assurance, cybersecurity, and cyberassurance.
All of these terms originally arrived with the best of intentions, bringing new concepts and connotations to a complex and changing cyber world. They inevitably turn into positioning playthings, as commercial entities and government agencies use the latest buzzterms to position themselves as being leaders—in something. It’s anybody’s guess whether these various terms will evolve into sharply defined meanings, not just for small specialty domains, but for the IT world in general.
For the time being, if you want to ask us about cybersecurity, we are going to ask you to provide more details. Are you military? Are you considered critical infrastructure and are you responsible for OT? What is it that you want to protect from whom?
Fresh terminology doesn’t necessarily mean that the old concepts were stale.
Category: risk management security Tags: buzzwords, cyber, hype, Hype Cycle
by Jay Heiser | June 3, 2013 | 1 Comment
Life in the cloud would be so much easier if there were only some sort of ‘cloud risk seal of approval’. Most public cloud services seem to offer a reasonable risk proposition, but it’s extremely difficult to provide defensible evidence of this. A comprehensive and well-accepted ‘standard’ would go a long way towards bridging this gap.
Working towards the revision of the Hype Cycle for Cloud Security (which will be published in July), I wrote the following text: “Current standards only have a relatively small amount of material relating to the design, build and test phases of technology, which means that they are not yet able to fully address all risk-relevant aspects of a provider’s offering.”
In our internal peer review process, analyst Khushbu Pratap noted “This is because the move to cloud was meant to get rid of this headache. The service beneficiaries continue worrying about assurance in these areas. Cloud has taken away the whole implementation and maintenance piece but outsourcing cloud assurance is still a risky bet.”
I think that very neatly summarizes the inherent dilemma of using a commercial cloud service provider.
Category: Cloud security Tags: certification, Hype Cycle, standards
by Jay Heiser | May 29, 2013 | Comments Off
Gartner clients have a lot of questions about the topic of data classification. It is a primary concept that has long been enshrined in the canon of computer security, yet in practice, it remains a concept that is impractical for the majority of non-military organizations to successfully apply.
In 1998, information security pioneer Donn Parker wrote in Fighting Computer Crime: “All too often, organizations end up adopting a three- or four-level classification scheme with vague definitions of information sensitivity, then fail to apply it consistently or update it as the information sensitivity changes.” (p. 20) Fifteen years later, this observation remains entirely current. While he does say that classification can work for highly-motivated organizations, it is not one of the major themes of this fascinating and still highly-relevant book.
The growing availability of ‘rights management’ technologies, such as trusted viewing, board portals, VDI, and other ‘share your data without losing it’ technologies demonstrate the prescience of Donn’s observation that “we need a new concept of two-level information classification—public and classified….” and the suggestion that this should be supported by “mandatory and discretionary controls for the protection of information, rather than by subjective classification of its sensitivity into arbitrary levels.” (p. 370)
My advice to Gartner clients is that classification theoretically represents a useful way to ensure that security controls are proportional to data sensitivity, and that its primary use should be to facilitate decisions about what NOT to do, as much as what to do. I typically give a canned 2 minute speech on the history of military classification, explaining why that level of effort is not practical for commercial organizations.
For those who are motivated to learn more about the history of the use of classification, the Federation of American Scientists has a very interesting 2002 online essay by Arvin S. Quist, “Security Classification of Information: Volume 1. Introduction, History, and Adverse Impacts”. The most important lesson I took from this essay is that classification is a difficult proposition, even for the people who are hugely motivated by national intelligence, and even national survival considerations. Scheme complexity evolved over time, and not without a great deal of discussion, and even resistance. If NATO struggles with a 5-level scheme, any commercial organization should seriously consider that they likely have little appetite for more than 2-3 levels.
Lately, it’s become fashionable to criticize Wikipedia. To my mind, the recent controversies only provide evidence that this crowdsourced system does have mechanisms to ensure integrity and validity, and I remain both a financial and cultural supporter. Wikipedia has a lengthy entry on the topic of Classified Information. This article does not delve into the historical context, but does provide a great deal of information about current practice, and includes a table with 88 different national language classification markings, capsule summaries of government classification regimes in multiple jurisdictions, and links to nation-specific entries on classification practices. There’s a great deal of information here for the morbidly curious.
Few corporate IT risk managers will take the time to explore the intricacies of military classification, let alone its history, so let me boil it down for you. The most important lessons to be gained from this historical experience are, first, that classification schemes play a vitally important role in aligning levels of effort with data significance, and second, that they are difficult and costly to use. Commercial organizations, non-profits, and civilian agencies lack the motivation for a military-style scheme, which is why a growing number of government and non-government entities are choosing a simple Low/Medium/High scheme.
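A Low/Medium/High scheme earns its keep only if each level maps to a short, unambiguous control set. The Python sketch below is a hypothetical illustration (the level names and controls are mine, not a standard): its value is that it makes explicit what you do NOT have to do for low-sensitivity data.

```python
# A minimal sketch of a three-level classification scheme: each level
# maps to a small, explicit set of required controls, so the scheme
# answers "what must we do?" and, just as importantly, "what can we skip?"
CONTROLS = {
    "low":    {"encrypt_at_rest": False, "encrypt_in_transit": False,
               "access_review": "annual"},
    "medium": {"encrypt_at_rest": False, "encrypt_in_transit": True,
               "access_review": "quarterly"},
    "high":   {"encrypt_at_rest": True,  "encrypt_in_transit": True,
               "access_review": "monthly"},
}

def required_controls(level: str) -> dict:
    # Fail loudly on unknown labels -- vaguely defined levels are
    # exactly what sinks classification schemes in practice.
    if level not in CONTROLS:
        raise ValueError(f"unknown classification level: {level!r}")
    return CONTROLS[level]
```

The specific controls are placeholders; the design point is that three crisply defined rows that people actually apply beat a five-level military scheme that nobody uses.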
A simple scheme that is reliably used is infinitely more useful than a granular one that cannot get off the ground.
Category: IT Governance Policy security Tags: classification
by Jay Heiser | March 28, 2013 | Comments Off
We’ve riffed for years on the distinction between “Dr. No” and “Mr/Ms Yes”, but many enterprises continue to back the security professional into the awkward far corner of the Business Prevention Department. If the risk assessor is going to be blamed for security failures, then that person is always going to be motivated to make extremely conservative decisions.
The idea that risk can be understood and managed with the goal of reducing the potential for negative outcomes, and their impact, is not a radical one. This is what risk management is all about. Unfortunately, it can only flourish in an atmosphere of cooperation and team work. Blame cultures are not conducive towards making difficult decisions involving poorly understood forms of risk.
Employees operating within a culture of blame are motivated to value CYA at the personal level before the corporate one. If people feel they are going to lose their job, or experience losses of prestige or status, when they are associated with failures, then the organizational culture is providing them economic and social motivation to avoid risk. This counterproductive organizational dynamic plays out in spades in the intriguing yet ambiguous context of commercial cloud computing.
A blame culture typically approaches SaaS something like this:
- Somebody in the business thinks they can save money (or avoid IT’s annoyingly inflexible rules) by using some kind of cloud service.
- They put together a business case that contains nothing but good news and beneficial financial outcomes.
- Contracting staff is asked to provide contract language that a) ensures that nothing bad can happen, and b) will be completely acceptable to the service provider (which has a reputation of not negotiating substantive contractual provisions).
- The IT contracting staff balks at this impossible task, is treated harshly, and is accused of empire building and of being non-cooperative.
- Meanwhile, the security staff is asked to approve a deal in which the buyer hasn’t stated their security requirements and the seller refuses to explain how their system actually works.
- The security staff balks at this impossible task, and is treated harshly. Dismissed as deficient in imagination, it is accused of being out of touch and is characterized as playing business-disabling power games.
- Provided with the binary choice, the people who have the expertise to understand and mitigate the risk do what the blame culture motivates them to do and say that they cannot approve this deal.
- The line of business makes it clear that they believe these in house functions cause more harm than good, and strongly suggests firing the lot of them.
The tragedy of this all-too-common scenario is that few, if any, of these people were actually dead set against the externally provisioned service in the first place. Life is full of ambiguity, and significant business decisions always require someone being willing to accept a risk. If the person who benefits from the positive outcome of a decision is also the person who will accept the blame for a negative outcome, then an organization is positioned to take advantage of new forms of service. If somebody wants to save money, while dumping the negative consequences into somebody else’s lap, it should come as no surprise that the owners of those laps have developed mechanisms for pushing back.
It takes a well-coordinated team to say yes to an ambiguous risk question.
Category: Cloud IT Governance risk management security Tags: risk assessment, risk management
by Jay Heiser | March 20, 2013 | 1 Comment
It would be the rare soul indeed who, after spending hours or even days cleaning up from a hack, didn’t feel the strong red rage of vengeful urges. And how many PC owners or site managers, still recovering lost data, time, and pride, would not, if presented with an opportunity to strike back at their attacker and make that anonymous bully feel the same pain, be sorely tempted to undertake an act of violence and coercion themselves?
The idea that the victim of a computer crime might not only attempt to trace back the attack, but also to attempt some form of retaliation, is hardly a new one. It’s a Gibsonesque theme that resonates through decades of cyberpunk novels. But the volume of discussion around the topic has been ramping up, a form of legalistic debate that is probably indicative of the underlying smoke of mysterious attacks, and even more mysterious hackbacks. Now that the topic has been discussed in the hallowed halls of the US Congress, it’s more likely than ever to become a topic not just for the family dinner table, but for the corporate policy committee, and of course the national government.
It seems that the act of responding in kind to a computer attack is technically illegal in the USA, as it is in many places in the world. This has not been widely tested through case law, and as a general legal principle, the right to self-defense is widely recognized. But it’s a can of legal, practical, and moral worms.
Hackbacks are nothing new. Whenever value must be protected in an unregulated competitive system, individuals are economically incentivized to take the law into their own hands. Just as drug lords defend their honor and turf through physical violence, some cybercriminals resolve their disputes on servers with obscure domain names. Sometimes, a spammer, vandal, bot master, or criminal hacker has the misfortune to attack someone with the skills and personality necessary to respond in kind. This has literally taken place for decades, out of sight and out of mind for the overwhelming majority of Internet citizens.
As the impact of cyber crime continues to grow, it seems to inevitably lead to greater discussion about what to do about it. Historically, when populations become fed up with coercion and violence, they band together to promote self protection. Depending upon the degree of frustration, Neighborhood Watches can evolve into posses and even escalate to vigilantism. We are already seeing a form of that today with the self-styled Robin Hood approach of the loosely formed network army that refers to itself as Anonymous.
Without taking a stand on either the legality or appropriateness of hackbacks, I’m confident in saying that conducting reverse hacks is more than impractical for the overwhelming majority of Internet victims, and the potential for collateral damage to other hacking victims is extremely high. But I’m also confident in the expectation that as the feelings of digital victimhood continue to grow, the response will be demands for dramatic protective action. I really don’t know what form that will take, but the coming decade is likely to be an interesting one for both cops and robbers.
Category: Policy risk management security Tags: hack back, hackback, hacking, law, retaliation