by Jay Heiser | January 9, 2015 | 2 Comments
I put my money where my mouth was, and took my wife on a date last week. I’m sure that we were not the only people who saw The Interview out of a sense of duty. We expected it to be a tedious and silly movie, but we also felt that paying to watch it, in a regular movie theater, would be a statement in support of freedom of speech, and against cyber coercion. It turned out to be a much better movie than we expected.
No, I don’t think this was a cinematic masterpiece. No, I don’t think I’ll need to watch it again. No, it isn’t going to win any awards. The flick is immature and politically incorrect, and the basic concept is tasteless. Yet it has a certain charm to it, with Rogen and Franco successfully portraying a pair of lovable rogues who get in just a bit over their heads. It wasn’t great art, but it was entertaining, and surprisingly upbeat. It has a 7.2/10 rating on IMDb right now, a positive score only partially due to patriotism.
As 2014 came to a close, we had a lot of passionate discussion inside Gartner about the degree to which 2015 should be considered a new normal for information security. We didn’t fully agree on that, but we did agree that the world will not be a better place if society allows empty threats to shut down freedom of speech. The second week of 2015 has turned into a sobering reminder that not all such threats are empty.
I don’t really think that a company that makes movies should be considered a ‘critical’ part of the infrastructure, but the Sony hack did provide an effective way to leverage a relatively small level of effort into a message heard around the globe. Increasingly, it will be recognized that we are all together in the same cyberrealm, and our digital fates are related. Everyone in IT security can help make the world a safer place by ensuring that their own cyber infrastructures are well managed. Every IT professional can help reduce the potential for threat, coercion, and terror through well-governed infrastructures that resist cyberattack. It isn’t just a movie; sometimes it’s reality.
Category: risk management security Tags: freedom, Interview, new normal
by Jay Heiser | December 18, 2014 | Comments Off
The gist of a new lawsuit against Sony is that by failing to adequately protect social security numbers, they have doomed former employees to a lifetime of credit fraud.
“The class-action suit was filed by Michael Corona and Christina Mathis, both of whom had their social security numbers made public after a hacking group calling itself Guardians of Peace dumped studio documents, employee information and salary charts online….Corona was an employee from 2004 to 2007. The suit says that he has so far spent $700 for a year of identity theft protection.” Sony hit with Class Action Lawsuit by ex-Employees
It boggles the mind to think that every current and former Sony employee immediately needs to pony up $300-700 to hire a personal financial security guard. I don’t actually believe that they do, but if you are trying to generate sympathy for a lawsuit, why not claim that anyone ever employed by Sony has just lost hundreds of dollars?
All other ramifications of the Sony hack aside, as in the hack against JP Morgan Chase reported several months ago, the data in question in this lawsuit amounts to a phone book. The personal data stolen from Sony that is referred to in this lawsuit consists of personal identifiers. Names and addresses are matters of public record and can never be protected. Social Security Numbers (SSNs) are theoretically not matters of public record, but the awkward truth is that so many different parties use them that they can never be protected from somebody who is highly motivated to obtain them.
Motivation is the systemic problem here: if possession of someone’s SSN is sufficient information to commit credit fraud, then widespread credit fraud is inevitable. The CISO asked to protect names and SSNs has been handed a Sisyphean task that can never succeed. The banking, card processing, and legal systems have inadvertently contributed to an unsustainable situation in which individuals have no choice but to share their SSN and name with countless commercial and governmental organizations. Those organizations have no choice but to maintain multiple copies of that personal data, and often must share it externally in support of business processes. If that data can be exploited to commit fraud, it is inevitable that people will be motivated to steal it.
Further security failures are inevitable. No organization handling large amounts of personal data can ever hope to fully protect it from theft. All they can do is try to encourage the crooks to attack some other organization instead.
If Sony employees actually do need to pay money out of pocket to protect themselves, it should be viewed as evidence of a systemic problem with the legal structure of our financial system. The problem isn’t that SSNs are not being adequately protected–the problem is that they are being inappropriately used. Instead of pretending that we can actually protect personal information, we need to better protect consumers from creditors that are not adequately authenticating the individuals they give credit to.
Category: IT Governance risk management security Tags: privacy, security, SSN
by Jay Heiser | December 12, 2014 | 1 Comment
Getting attacked by the North Koreans for making a movie that spoofs their sad little country and its tinpot dictator makes Sony the most sympathy-worthy attack victim of the millennium.
No shopper is comfortable with the idea that a merchant might have leaked their credit card number, but nobody is going to boycott a movie maker because they leaked Sylvester Stallone’s social security number. On the contrary, the news of this dramatic hack is going to encourage huge attendance for a movie that otherwise doesn’t seem to have the ingredients typical of a cinematic masterpiece. They couldn’t have invented better PR than this.
The actual source of the attack remains to be seen, but given the glee expressed by officials from Asia’s answer to Grand Fenwick, for the time being we might as well treat this as a surprising act of technical competence from a place that is generally considered a digital trailer park. Can it be that the mouse that roared can’t take a bad and tasteless joke when it hits close to home? Fearless Leader fears satire?
In a period of days, Sony has suddenly become the globe’s cybersecurity poster child. It is morbidly fascinating to see a continuing series of news articles related to the material stolen from Sony, and anyone with any background in infosec is itching to learn more details about what level of protection effort was in place, and what form of attack managed to so thoroughly compromise such large chunks of their digital enterprise. However, it is too early to have a definitive opinion on the relative degree to which Sony may or may not have followed best security practices. It is uncertain how many additional negative consequences will accrue as embarrassing internal memos leak. It is premature for any other organization to use the example of Sony as a significant part of their business case for security program improvements.
What do I think? I’m pretty sure that there will be some important lessons that will come out of the analysis of this incident. I don’t think it represents a new normal in the degree and prevalence of digital compromise, but only time can establish norms. What I know for certain is that after all this buildup, I’m deadly curious about a flick that otherwise would have been pretty far down my list. I’m going to the theater, and I’m going to cheer for the good guys.
Category: security Tags: hacking, North Korea, security, security incident, Sony
by Jay Heiser | October 6, 2014 | Comments Off
The blogosphere and the punditerati are all in a tizzy this week with the titillating news that a major financial services firm has reported that a bunch of their services were compromised, and the attackers have purportedly stolen…the phone book. In other words, somebody obtained a list of publicly available names and email addresses.
Let’s not minimize the potential significance of a widespread hack on a set of digital components within our critical financial infrastructure. But if it remains the case that the only information stolen is a set of publicly available names and email addresses, then the impact on individuals will be low.
Instead of dwelling on the OMG factor of this latest newsoid, maybe this could be an opportunity to revisit the basic assumptions underlying a system that finds it increasingly awkward to attempt to prevent access to an exponentially expanding amount of data. It is impossible to protect data types that are explicitly designed to be publicly known and shared, such as personal names and email addresses. These are just not secrets.
If knowledge of an individual’s name or email address enables an attacker to perform a useful form of attack, then such attacks are inevitable. Publicly known, long-lived identifiers are not robust authentication factors, and that includes Social Security numbers and credit card numbers. Shared secrets are oxymoronic.
I’m not ready to fully admit that ours is a world without secrets, but it is difficult to avoid the conclusion that most data is going to have to take care of itself. We just cannot protect everything, and the more often a piece of data is used, the less useful it is in protecting our money. If our personal financial integrity is dependent upon protecting reusable long term identifiers, then we are doomed. The practical approach towards safe existence in an increasingly complex and hackable digital world is to explicitly not try to protect everything, concentrating instead on smaller and more robust quantities.
The inevitability of identifier compromise has long been recognized, which is why robust systems have a revocation mechanism. Credit card numbers can be relatively quickly revoked and then replaced. Personal names and email addresses cannot. Cryptographic challenge-response mechanisms, whether symmetric or public-key, give us the ability to verify possession of a secret without ever transmitting that secret, greatly reducing the exposure of ‘names’ to attack.
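That distinction can be made concrete. The sketch below, using nothing but Python’s standard library, shows a challenge-response exchange in which the verifier confirms possession of a key that is never sent during authentication; contrast that with an SSN, which must be handed over in full every time it is used. The key provisioning and function names here are illustrative, not any particular product’s protocol.

```python
import hashlib
import hmac
import secrets

# Shared key, provisioned once out of band -- never transmitted at login time.
KEY = secrets.token_bytes(32)

def make_challenge() -> bytes:
    """Verifier: issue a fresh random nonce for each authentication attempt."""
    return secrets.token_bytes(16)

def prove_possession(key: bytes, challenge: bytes) -> bytes:
    """Prover: answer the challenge with an HMAC keyed by the secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier: recompute the expected answer and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = prove_possession(KEY, challenge)
assert verify(KEY, challenge, response)
# An eavesdropper who captures (challenge, response) cannot replay it against
# a fresh challenge, because the nonce changes every time:
assert not verify(KEY, make_challenge(), response)
```

Because the secret never crosses the wire, interception of any single exchange reveals nothing reusable, which is exactly the property an SSN lacks.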
We have the technology, architecture, and experience–not to build systems that are ‘hack proof’, but instead to take the much more realistic approach of anticipating compromise, and building systems that have less to steal. We just need the courage to solve the right problem, but this week’s news and net chatter suggest that instead, we’re going to double down on the futile task of pretending that we can protect indefinitely large amounts of data.
We do not need to protect more data; we need to protect less.
Category: risk management security Tags: authentication, hack, hacking, PKI, secrets, security
by Jay Heiser | August 4, 2014 | 2 Comments
C: We are concerned about putting our email into the cloud.
C: Somebody might look at it.
J: Somebody can already look at it, even when you host your email server in house. SMTP is a data leakage protocol: it isn’t designed to secure your data, it is intended to disseminate it as widely as possible. Email has always broadly exposed your data across the Internet, with both deliberate and accidental addressing of sensitive messages resulting in a steady stream of undesirable data leakage.
C: So what do you suggest?
J: For a start, an enterprise-managed File Sync and Share (EFSS) service would be a much more controlled way to share sensitive data. If you truly have data that you are concerned about leaking, then you can protect it with any number of higher-end data sharing services that will maintain end-to-end encryption, and even control cut/copy/print/save on the endpoint.
C: Oh, we wouldn’t want to do that. We couldn’t transition away from email to EFSS, and we certainly won’t pay extra for something secure.
J: So what do you want to do?
C: We want to keep doing what we’ve always done, but we want to pay less for it, and if there is a failure, we want to be able to blame someone else.
J: I don’t think anybody sells a service like that.
Category: Cloud IT Governance risk management security Tags: email, email security
by Jay Heiser | June 19, 2014 | 1 Comment
Code Spaces, a vendor that claimed to provide secure source code hosting and project management support, has just been forced to admit to their customers that they’ve been sabotaged by a cyber extortionist, and they probably cannot fully recover. They put all their hopes, and all their customers’ data, into a single cloud, and it burst.
While Code Spaces was not an especially large service provider, the remains of their site on the Wayback Machine mention a number of blue-chip clients. I have to wonder how many Code Spaces customers didn’t bother to keep a copy of their code somewhere else.
At the time of this writing, Code Spaces has posted an explanation, with the unhappy news: “As such at this point in time we have no alternative but to cease trading and concentrate on supporting our affected customers in exporting any remaining data they have left with us.”
Business failure and client data loss are always unhappy events. It is particularly distressing when it happens to a cloud-based service that advertises “redundant, high specification servers with guaranteed uptime and availability.” Guarantees are empty paper when the data is permanently gone because the vendor failed to adequately protect it from hackers, and failed to adequately back it up. And if a vendor is no longer financially viable, service levels and contracts become moot.
As I stress in my latest research note, Everything You Know About SaaS Security is Wrong, the users of cloud services need to take responsibility for the care and feeding of their own data.
Category: Cloud IT Governance risk management security Tags:
by Jay Heiser | April 11, 2014 | 2 Comments
Change all your passwords. Now. And then do it again in a week. Of course, there’s no evidence that any passwords have been exploited, but isn’t the lack of substantive evidence a suspicious fact in and of itself? It can be if you want it to be.
My favorite presentation at the RSA Conference was from Nawaf Bitar, who introduced the immediately popular hashtag #firstworldoutrage. It neatly captures the idea that when people are relatively comfortable and secure, they will start inventing things to be vocally outraged about.
As a case in outrageous point, I was disappointed with much of the recent media commentary on the GM ignition switch issue that misleadingly characterizes the fix as a simple matter of replacing a $.50 component. In a recent article, “In Defense of GM: No one is asking the right question: Was the company’s risk assessment about the faulty ignition switch reasonable?” I actually had asked that question, and the article provides a compelling explanation that it wasn’t worth the money to fix what is a statistically insignificant source of fatality.
So I ask the same ‘acceptable risk’ question about Heartbleed. Is this truly a Spinal Tap moment in the infosec world in which every single Internet citizen needs to take heroic measures to change the majority of their passwords? While it has been demonstrated that the vulnerability can be used to collect random chunks of data from Internet servers, including password and username pairs, it has not been shown as a practical mechanism to capture large amounts of passwords.
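For readers who want intuition for how “random chunks of data” could leak, here is a deliberately simplified toy model in Python. It is not OpenSSL’s actual code; it only mimics the shape of the flaw, which was trusting the payload length claimed in the heartbeat request instead of the length actually received. The buffer contents and function names are invented for illustration.

```python
def handle_heartbeat(memory: bytearray, payload: bytes, claimed_len: int) -> bytes:
    """Vulnerable handler: copies the payload into the buffer, then echoes
    claimed_len bytes back without checking it against len(payload)."""
    memory[:len(payload)] = payload
    return bytes(memory[:claimed_len])  # over-read leaks adjacent memory

def handle_heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    """Patched handler: never echoes more bytes than were actually received."""
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload length")
    return payload[:claimed_len]

# Server memory adjacent to the request buffer holds other session data.
memory = bytearray(b"................session-key=hunter2;user=alice")
leak = handle_heartbeat(memory, b"ping", 64)
assert b"session-key=hunter2" in leak  # secrets leaked alongside the echo
```

Each request of this kind can return a different slice of whatever happens to sit near the buffer, which is why repeated probing could eventually turn up usernames and passwords, and why the severity of any individual probe was so hard to judge.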
Now that everybody knows about this bug, the race is on to close the SSL holes before they are significantly exploited. There’s no question that the code needs to be fixed, but it is going to cost the collected IT world a lot of time and money to identify all the vulnerable systems, and patch them.
The urgency of password change for the millions of Internet citizens is less obvious. What will be the net social cost of every Internet citizen changing a dozen passwords? I have over 250 myself, most of which probably haven’t even been used during the two-year vulnerability window (Neustar told Gartner this week that the average is c. 50 passwords per user). I wonder how much the overall support cost will be to recover from the inevitable password change failures. Will it all have been worth it?
One cost will be the cultural impact of one of the Internet’s biggest incidents of ‘crying wolf’. Most people assume that a wolf was sighted just outside the doors of Facebook. When the digital dust finally clears, my expectation is that very few password exploit incidents will be documented, but that will be old news for a world looking for new forms of outrage on a daily basis. But if we do experience more incidents like this, people will start asking questions about whether or not these ignition switches always need to be changed, and over time, they will lose whatever appetite they have for the fun of warning their Facebook friends that they had better change all their passwords.
What this incident has turned into is yet another example of the inherently flawed nature of passwords. A more unusual lesson to derive from this incident is that the global Internet rests upon widely shared code that represents the potential for more single points of failure. Major public cloud service providers, financial service firms, social networking services, hardware devices, and countless other Internet-enabled technologies not only turned out to be dependent upon the same OpenSSL source code, but like much of the open source code that defines our digital world, it was developed by a small group of part-time volunteers. That seems an insufficiently substantial foundation to support the global expectations of privacy, confidentiality, and reliability. Perhaps that’s why they call it the cloud.
Category: risk management security Tags: heartbleed, security
by Jay Heiser | April 8, 2014 | 5 Comments
It’s too bad that Donald Rumsfeld’s awkward little epistemological speech has been so thoroughly politicized, turning an important risk management principle into an opportunity for derision. Intelligence analysts, and IT analysts, need to be acutely aware of the limits of their knowledge, especially when making decisions about how to take advantage of public cloud services.
Anybody making risk decisions about public clouds needs a strong understanding of the degree to which they can trust the information they have about those services. To apply this particular Theory of Knowledge principle to Public Cloud Services:
1) Known Knowns: If data is encrypted before it is uploaded, we know that it is encrypted. If the data is not encrypted, we know that it can be read by anyone who accesses it, which leads to the second category.
2) Known Unknowns: If our clear text data is in someone else’s site, we know that it is vulnerable. What we cannot ever know for certain is whether an unauthorized person takes advantage of that vulnerability. It’s a level of ambiguity, but an understood one.
3) Unknown Unknowns: If there were some sort of vulnerability that we had never conceived of, it would be in this final category. The fact that you haven’t even thought it might exist means that you don’t know what to look for to ensure that it is controlled. An example might be a cloud service provider that exposes your email boxes to external surveillance in order to conduct load balancing and facilitate service continuity.
In retrospect, if a cloud service provider claims to be spreading your data across multiple locations (which virtually all of them do), before storing your data in that service, it would make sense to ask them what mechanism they use to transmit your data between those locations, and how they protect it in transit. I see a lot of cloud service provider questionnaires, but I don’t ever remember this particular issue coming up.
An example has recently come to light of an ongoing confidentiality failure involving a CSP copying customer data between their data centers. For many cloud buyers the news of this unexpected form of exposure has moved this particular risk from category 3, into category 2. Now we all know that we don’t know how our data is protected when it is being replicated between CSP data centers.
We can only hope that not every virtual backend is flapping open in a packet storm.
Category: Cloud IT Governance risk management security Tags:
by Jay Heiser | September 25, 2013 | 2 Comments
Although the actual events took place at widely varying times, the summer of 2013 has witnessed the public release of 3 major ‘inappropriate use of the cloud’ incidents.
On July 28, Oregon Health & Science University (OHSU) felt compelled to notify 3,044 patients that while there was no reason to believe that their data had leaked, or been misused, it was in a place that it shouldn’t be, and they wanted to apologize. Several physicians had decided that their personal Google Drive accounts would be an appropriate place to share data, and while this undoubtedly was a convenient place to compare notes on their patients, they hadn’t executed a HIPAA business associate agreement with the service provider.
The following day, July 29, NASA’s Office of Inspector General released a report that “found that weaknesses in NASA’s IT governance and risk management practices have impeded the Agency from fully realizing the benefits of cloud computing and potentially put NASA systems and data stored in the cloud at risk.” (NASA has a LOT of data in the public cloud.) Citing a laundry list of weak cloud control practices, including not asking the permission of the non-existent Cloud Czar, the OIG further stated that “in four cases NASA organizations accepted the cloud providers’ standard contracts that did not impose performance metrics or address Federal privacy, IT security, or record management requirements,” concluding from this that “As a result, the NASA systems and data covered by these five contracts are at an increased risk of compromise.” (see page iv) I agree that most standard contracts are extremely non-committal about levels of security service, but such a direct correlation between risk and contract verbiage seems….well, cloudy to me.
A month later, on August 28, the US Federal Trade Commission issued a complaint against LabMD for a 2007-8 incident in which the LimeWire client had been installed on one of their servers, resulting in personal data being compromised through that P2P system. Reminiscent of the OHSU incident, it is yet another case of people of good will who are just trying to get their job done by using spreadsheets to supplement the weakness of IT-provided systems. It appears that LimeWire was the private toy of the sysadmin, and not used to support the spreadsheet-based workgroup, but unfortunately, the directory they used was shared to that service. Unusually for a ‘privacy breach’, it seems that personal data was actually obtained by somebody who tried to use it to commit financial fraud. Five years after this undisciplined use of the cloud on the part of a sysadmin, LabMD is now required to spend the next 20 years allowing a CISSP to assess their posture.
Category: Cloud IT Governance Tags: Cloud, clouds, compliance, regulatory compliance
by Jay Heiser | September 18, 2013 | 1 Comment
You’ve got 2 weeks to get several Petabytes of data from a dissipating cloud. Will you get it all back safely? Hundreds of Nirvanix customers are asking themselves that question right now.
Although their web site remains blissfully mum about this unfortunate development, The Wall Street Journal is only one of several media organizations reporting that Storage as a Service provider Nirvanix has run out of money, and will cease service in 2 weeks.
Given that many of their customers are large media companies, I have to assume that they have an awful lot of data stored there. Two years ago, a Nirvanix press release bragged that USC would be moving 8 petabytes of data to the Nirvanix Deep Cloud Archive(tm).
What kind of a data storm do you get when a thousand customers all simultaneously start trying to copy out petabytes of data? How much technical support can a company offer when they are going bankrupt? Can you reasonably expect that their staff will be motivated to stay on, and undertake any necessary heroic efforts that might be needed to help you recover your data? Where are you going to put that data?
I continue to get a lot of questions from Gartner clients about cloud security. While there are issues associated with the confidentiality of your data in the cloud, for the majority of cloud customers the potential loss of confidentiality is just not the biggest form of cloud risk. Cloud computing turns your software into a just-in-time supply chain maintenance issue. If a vendor goes bankrupt, or suffers a catastrophic failure, your data disappears immediately, and there may not be anybody around to help you find it.
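One modest hedge against a vanishing vendor is to keep your own manifest of what you uploaded, so that a hurried export can at least be verified for completeness. A minimal sketch, with hypothetical function names and relying only on Python’s standard library:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large media files don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Checksum every file under root, keyed by relative path.
    Run this against your local copy BEFORE uploading to the provider."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify_export(manifest: dict[str, str], exported_root: Path) -> list[str]:
    """Return the relative paths that are missing or corrupted in an export."""
    exported = build_manifest(exported_root)
    return [rel for rel, digest in manifest.items()
            if exported.get(rel) != digest]
```

Kept somewhere outside the provider, a manifest like this turns a panicked two-week exodus into a checkable process: `verify_export` names exactly which files still need attention before the lights go out.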
What’s your contingency plan?
Category: Cloud risk management security Tags: