by Dan Blum | January 13, 2012
Last week Gartner SVP Peter Sondergaard announced I’d won Gartner’s Golden Quill Award (2011): “Dan Blum’s coverage of security topics is deep and engaging: he is a fount of knowledge. His readers are drawn in by a dry sense of humor and attention to detail that shines a light on the many dimensions of security and risk management.”
I learned that the award was based on several documents I produced during 2011; Gartner’s Senior Research Board (SRB), made up of the Chiefs of Research from all teams, judged the work outstanding.
I take great pride in this award because I enjoy writing. In fact, writing runs in the Blum (and Holmes) family; my mother, father, and at least one grandparent were authors. Already one of my sons is published as well!
Just one question – does this mean I now have to blog more often?
Here are the documents I wrote for Gartner in 2011 that the SRB may have looked at, each with a link and a brief summary for those who can’t access the Gartner research product.
Malware, APTs, and the Challenges of Defense
Malware is increasingly dangerous, and organizations must be vigilant. Even layered defenses that use the latest anti-malware technologies are not enough to eradicate the risk of APTs or automated malware exposure. Organizations must take proactive measures to operate in an IT environment that’s potentially already compromised.
Application Control and Whitelisting for Endpoints
Application control and whitelisting solutions can put endpoints into a stronger default-deny posture against unknown and potentially malicious software. Solutions come from a variety of market segments and, because they offer a potentially powerful endpoint protection alternative, are gaining mind share and deployment. Going forward, organizations should consider including application control and whitelisting in their endpoint security strategy…but recognize that difficult learning curves for administrators and cultural changes for users may lie ahead. Start with the easier, more static use cases and progress to the more complex, dynamic use cases using more advanced solutions that can handle changes in software, systems, threats, and user needs.
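As a toy illustration of the default-deny posture (my own sketch, not drawn from the Gartner document; real products also handle updaters, scripts, and publisher certificates), a hash-based whitelist check boils down to this:

```python
import hashlib
from pathlib import Path

# Default deny: only binaries whose SHA-256 appears in a trusted inventory
# may run; anything unknown is blocked until an administrator approves it.
APPROVED_HASHES = {
    # "9f86d081884c7d65...": "approved-app.exe",  # from the software inventory
}

def is_approved(binary: Path) -> bool:
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES
```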
Determining Criteria for Cloud Security Assessment: It’s More than a Checklist
Neither enterprises nor vendors can support many-to-many audits long term. The Cloud Security Alliance (CSA) and other industry organizations are preparing standard cloud security assessment frameworks. Enterprises should demand that service providers support the frameworks, but must also choose appropriate criteria to emphasize for given use cases. Gartner’s guidance helps organizations develop these criteria, but significant work lies ahead. Enterprises must model risks of specific use cases, factor cloud into security architecture, and specify service provider trust requirements. Also, organizations should instantiate repeatable architecture and process patterns for business adoption, vendor management, compensating controls, and assessment.
Endpoint Protection Platforms: Blending Security, System Management, and Data Protection
Traditional endpoint security markets for point solutions such as anti-malware, encryption, device control, and network access control are being eclipsed by endpoint protection platforms (EPPs). EPPs are available from vendors in the enterprise anti-malware, security suite, security and management, and security-as-a-service market segments. The market for EPPs is being disrupted by consumerization trends, handheld device deployment, and cloud-based delivery. Organizations must carefully consider their strategic direction and prioritize requirements before attempting to rank and select EPP solutions.
2012 Planning Guide: Security and Risk Management
Information security groups face economic volatility, a dangerous threat landscape, compliance and regulatory challenges, and sweeping changes across the IT landscape. Major macro trends in business and IT, as well as powerful security market drivers, are disrupting and transforming the security landscape. IT security professionals must think outside the box and establish versatile security programs that can adapt to trends such as mobility and cloud computing. But many traditional security practices — such as risk management, audit, zoning, and information classification — remain as important as ever.
by Dan Blum | December 28, 2011
“I’m sorry if I’m inconveniencing you and the teachers, but I will not allow a networked computer system to be placed on the ship while I’m in command,” said Commander Adama as I watched the first episode of 2004’s re-imagined Battlestar Galactica series. Immediately, I was hooked.
You see, ever since Gartner’s internal email post mortems that began in March 2011 after the RSA SecurID breach, I’ve been thinking that organizations should be more hard core about internal network security and administration than most actually are.
To understand why, consider this. The RSA breach followed a familiar pattern: intelligence gathering over social networks -> spear phishing email -> exploitation of Flash vulnerability to compromise a company system -> more intelligence gathering from within -> compromise of additional systems -> access to systems with critical data.
It’s at least that last link of the chain that I’d like to see our clients cut off by putting any systems with critical data (like the RSA token seed database) into a Restricted Zone. In such a zone, these systems aren’t accessible from the Internet, or even by administrators on the “trusted” internal network using the same endpoints employed in “dirty” email and web surfing environments.
Here I have to stop and give due credit to Gartner colleague Jay Heiser, the first of us to say in one of those internal emails: “As long as people with access to [critical data] are sitting on Internet-routed networks, and are reading email and surfing on the same systems that they use for privileged access, then simple attacks using sophisticated code are going to be commonplace.”
I also have to stop and deal with a few potential objections:
1) “We can’t completely cut off the critical data (e.g., customers lose their account information and call the help desk in a panic).” Understood; provide a single, heavily restricted query service that the help desk can use from a known machine for heavily monitored and rate-limited access into the restricted zone (a minimal sketch of the rate-limiting idea follows this list).
2) “We’re augmenting our endpoint security and anti-malware filtering. That should be good enough.” It isn’t. Time and again, advanced malware has overcome endpoint security. And security departments are getting pressured to reduce endpoint security in the name of consumerization. Endpoint security is a worthy goal, but trying to guarantee that every one of thousands of endpoints is malware free is like trying to boil the ocean. Don’t fight this losing battle. Don’t let the systems used for email and web surfing in dirty environments have direct access to critical data.
3) “Administrators need to get in and fix the system during an emergency.” Sorry. It’s only an “emergency” if the organization doesn’t hire and train enough administrators so that someone is always available to actually come into the highly secured building, strongly authenticate, log into the highly-secured dedicated administrative console, and manage the system in a secure way.
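To make the rate-limited query service from objection 1 concrete, here’s a minimal token-bucket sketch. The class, limits, and logging note are my own illustration of the concept, not a design from any product:

```python
import time

class TokenBucket:
    """Toy rate limiter for a hypothetical restricted-zone query service:
    the help desk gets a small, steady trickle of lookups, nothing more."""

    def __init__(self, rate_per_minute: float, burst: int):
        self.rate = rate_per_minute / 60.0   # tokens added per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True    # serve the lookup (and log it for monitoring)
        return False       # throttled: bulk-extraction attempts stall here

# e.g., 5 lookups per minute, bursting to 10, per help desk workstation
limiter = TokenBucket(rate_per_minute=5, burst=10)
```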
Yeah, I admit it, restricted zones may be a bit more expensive, a bit more inconvenient than business as usual. But breaches are even (much) more expensive and inconvenient. To pull a few more choice quotes from the 2004 pilot episode of Battlestar Galactica:
“You’ll see things that look odd or even antiquated to the modern eye. Phones with cords and computers that barely deserve the name. It was all in the face of an enemy who could infiltrate even the basic computer systems. Galactica is a reminder of a time we were so frightened by an enemy that we literally looked backward for protection.”
Maybe our “Adversary” isn’t as dangerous as the Cylons of Battlestar Galactica, but according to the Vanity Fair article Enter the Cyber-Dragon (and countless other articles about countless other breaches that I could cite all day) we seem to be a bit outgunned, at least for now. Let’s face this fact and use restricted zones as a starting point for enhancing the defense.
Let it be “all hands on deck” aboard Galactica, Commander Adama’s orders.
by Dan Blum | October 30, 2011
I feel that in the “what hath we wrought?” post I succumbed to the emotionalism around the cyberwar topic. This morning I was even thinking of changing the post to give it a more neutral bent, but I see Marcus Ranum commented on it, and if something draws his attention, it must be a good thing!
Anyway, to carry on with the closing thought in the wrought post, some security professionals seem to consciously or subconsciously avoid covering cyberwar – perhaps because it’s a difficult, inflammatory, and frustrating subject. Others plunge in for the professional opportunity. We will likely see many more of the latter.
Gartner analysts haven’t articulated any consensus position on cyberwar yet that I know of. So this one is just my own opinion: security pros, organizations, and individuals should collaborate more than ever before to peacefully resist both the threat of cyber-conflict and the regulatory or military over-reaction to it. “We” must resist through good security practice, information sharing, lobbying, and diplomacy.
And maybe the discussions of definitions that used to make me impatient because they didn’t lead to action aren’t such a bad thing after all. Avoiding “over-reaction” needs to be part of the “action.” I’m floating a suggestion in the title of this post: maybe we should call the problem “cyber-conflict” instead of “cyberwar.” The term is less exciting, but that could be a good thing when we’re in danger of a bit too much excitement.
Category: Uncategorized Tags: cyber-conflict, cyberwar
by Dan Blum | October 29, 2011
Rain falls outside an office window on a grey October morning, as if to usher in a moody Saturday with little to do. So my thoughts turn again to cyberwar.
In an earlier post I wrote I was “fascinated with cyberwar”. As one might be fascinated with a dangerous animal…
Yesterday three articles flew like ill-omened ravens into my browser and email inboxes:
- Schneier on Three Emerging Cyber Threats: The Rise of Big Data, Ill-Conceived Regulations, and the Cyberwar Arms Race. Bruce focuses like a laser; his last point captures exactly my fear that an arms race would create a proliferation of weapons ultimately worse than the imagined war they were built for, and that might ultimately cause it.
- Brian Krebs – in Who Else Was Hit by the RSA Attackers? – lists 650 organizations attacked, points the finger at China, and notes Congressional interest in the matter. Law enforcement is key to protecting against cyberattacks, but we must be careful what we ask for. It’s best to hope the issue doesn’t become overly inflamed and that cooler, more deliberative counsel and diplomacy prevail.
- On Techdirt, the post “The Non-Existent ‘Cyber War’ Is Nothing More Than A Push For More Government Control” forecasts a money grab, endless “faux” war, and loss of civil liberties.
The last post inspired my “what hath we wrought?” mood. Shall I now join some of my colleagues who seem to consciously or subconsciously avoid covering cyberwar because of its most ominous connotations?
Category: Uncategorized Tags: cyberwar
by Dan Blum | October 7, 2011
The Cloud Security Alliance (CSA) announced in late September that the Security as a Service working group has published its first white paper, “Defined Categories of Service 2011”. The purpose of this group’s research is to identify consensus definitions of what Security as a Service means, to categorize the different types of Security as a Service and to provide guidance to organizations on reasonable implementation practices.
The white paper covers 10 categories of service:
• Identity and Access Management
• Data Loss Prevention
• Web Security
• Email Security
• Security Assessments
• Intrusion Management
• Security Information and Event Management
• Encryption
• Business Continuity and Disaster Recovery
• Network Security
I took a look at the white paper yesterday and found it interesting as a starting point that describes each category and provides a non-exhaustive list of “cloud” and “non-cloud” vendors in the space. It doesn’t yet, however, provide much discussion of how these services would actually be provided as “cloud” and how that would differ from premises-based implementations.
Future versions of the white paper need to add more discussion of what it means to provide these services in the cloud. This has to cover the use cases, because many of these categories are pretty big. For example, what does it mean to provide web security in the cloud? A non-exhaustive, off-the-top-of-my-head list is:
• Secure web gateway
• Web application firewall
• Reputation database accessible from browser plug-ins
But that’s an easy one. Identity, or IAM, is harder. Here we have myriad use cases, including multi-protocol federation gateways, attribute authorities, entitlement brokers, claims transformers, and many types of identity provider (IDP) services ranging from OpenID consumer services to high-assurance IDPs.
These use cases need to be broken out and addressed individually before the cloud security service working group can really provide implementation guidance. In addition to use cases, the working group must consider architecture. What does it mean to provide data loss prevention (DLP) in the cloud? A secure web gateway service (see above) that acts as a proxy for mobile, roaming users could provide channel DLP for the web as part of its service. An “email security” service might cover another channel. But to really protect cloud-hosted data at the source, the DLP security service would have to integrate with major CSPs like Amazon, Google, and Salesforce. Then it would have to provide a common policy management and data discovery capability. Whew! One has to ask: is this one service, or bits of DLP capability strewn through many services? These would be radically different implementations.
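To picture what the channel-DLP piece might look like, here’s a deliberately simplified sketch of the kind of in-line pattern check a web gateway service could apply. The patterns and function are mine, for illustration only; real DLP detection is far more sophisticated:

```python
import re

# Toy detectors for sensitive data in an outbound request body.
# Real DLP adds validation (e.g., Luhn checks), fingerprinting, and context.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(body: str) -> list:
    """Return the names of sensitive-data patterns found in the body."""
    return [name for name, rx in PATTERNS.items() if rx.search(body)]

# A gateway acting as a proxy might block, quarantine, or just log a hit:
hits = scan_outbound("ship to 123-45-6789, card 4111 1111 1111 1111")
if hits:
    print("policy violation:", ", ".join(hits))
```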
Fortunately, not all categories are as hard to pin down as DLP. The working group has an opportunity to identify, divide, and conquer important use cases. There’s even the opportunity to recommend assessment criteria (based on the Cloud Security Alliance Cloud Controls Matrix) that providers could build to and customers could use to evaluate the services. Ultimately, that’s what I hope this group will start doing.
All this is a lot of work, but that’s the type of challenge an organization like CSA with thousands of members can take on. If you read this and you’re interested, know that you can get involved. The working group is open to new participants and it has a wiki through which volunteers can collaborate.
Category: Uncategorized Tags: cloud computing, cloud security
by Dan Blum | July 12, 2011
There is no dictionary.com definition of “cyberwar” but there’s plenty of colloquial use of the term. Especially lately.
As multiple breaches and DDoS attacks struck the U.S. and Europe in the savage spring of 2011, I became fascinated with the concept of cyberwar, and then increasingly appalled. In the heat of those news moments I felt we were approaching a state of cyberwar in the world.
I struggled to blog about it, but the initial efforts made trusted colleagues uncomfortable. One told me to “be wary of over-simplifying an area that has many actors and dimensions and largely speculation about the role of nation state intelligence agencies in several noteworthy and publicized attacks.”
Yet the unease with definitions isn’t universal. Andrew Walls at the Gartner Security Summit asked rhetorically: “Does the definition matter?” and noted that the last time the U.S. formally declared war was during World War II.
Indeed, many define cyberwar pretty broadly. According to the “free online dictionary,” cyberwar is just “an assault on electronic communications networks.” As described in another post on cyberwar definitions, even the experts’ opinions differ.
Yet another colleague, Ramon Krikken, got me wondering: should more sober heads prevail? Ramon comes from the Netherlands, a country on the plains of Northern Europe, one of history’s great invasion routes.
To paraphrase Ramon’s concerns, “I’d rather you didn’t call it war. I’d rather we didn’t get into another war. Definitions are important and calling something war when it isn’t is asking for trouble. I don’t see current cyber-attacks rising to the level of war, which would mean we’re considering all options including a kinetic response.”
To follow this train of logic, most of the cyber-attacks we’ve seen have been undertaken by individual criminals or hacktivists rather than nation states. And most suspected nation state advanced persistent threats (APTs) fall into the espionage bucket. Bitterly resented, maybe. Cause célèbre for war, rarely.
This isn’t to say we won’t see nation states (or their proxies) conduct digital attacks with kinetic consequences in the future. In fact, a spectacularly successful attack on the electrical grid or the financial system might cause enough economic damage to be called an act of war even if it doesn’t immediately kill anyone.
The good news is that we’re not there yet. The bad news is that nation states are building digital arsenals through cybersecurity programs just in case on-again, off-again efforts at diplomacy fail. Last month I received an inquiry from a client in a medium-sized country asking me to rate the comparative cyberwar capabilities of four other nations. I declined because we don’t provide that sort of research.
But it was one more data point that made me think one should be careful with the language one uses on this complex issue, and consider de-escalating the situation. But that’s for my next post.
Perhaps this sums it up best. Again, from Andrew Walls: “When I hear the term cyberwar, it tells me that the speaker is attempting to define their pursuits and interests as different and more important or arcane than mere information security. The term is political speech, not a meaningful term that defines or describes a group of activities.”
Category: Uncategorized Tags: cybersecurity, cyberwar
by Dan Blum | April 8, 2011
Having recently discovered Google’s 2-step verification feature, I found much to like, but also a few concerns.
What’s to like? I’ve been wondering for 10 years when it would become commonplace to use the ubiquitous mobile phone as a second-factor authentication device. Now it’s here from Google, and it’s free. After signing up for the feature in a Google account, you receive the one-time password (OTP) via an application on the phone, a Short Message Service (SMS) text, or even a voice call.
Now you may be wondering, is 2-step convenient? What if I don’t have coverage? The OTP app on the phone generates different codes every 30 seconds according to a time-based algorithm; it doesn’t need coverage. Also, if you find it inconvenient to deal with the two-step signon many times a day, you can set the Google Account to remember your PC, i.e., via cookies. This diminishes physical security but keeps the risk of remote attacks about as low as it can be. A ZDNet Tech Broiler blog entry put it more colorfully: “the bad guys won’t have a [rat’s] chance in hell of breaking into your account.” That is, as long as they don’t have spyware on your PC. (Note that spyware is always a threat to OTP, at least at the time of use.)
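The time-based algorithm here is the standard TOTP construction (RFC 6238): an HMAC over a counter derived from the clock, so the phone and the server agree without any network connection. A minimal sketch, on the assumption that Google’s app follows the published standard:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: only a shared secret and a clock are needed."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # changes every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The phone and the server compute the same 6 digits independently:
print(totp("JBSWY3DPEHPK3PXP"))   # example base32 secret, not a real account key
```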
What’s not to like? Tech Broiler complained that Google 2-step verification “also broke all the web sites which I use that have to cross-site authenticate using my Google account, of which there were about a dozen, including FaceBook and Quora.”
Google has (since then?) instituted per-application passwords. As described in Google Account help: “Some applications that access your Google Account (such as Gmail on your phone or Outlook) cannot ask for verification codes. To use these applications, you will not use verification codes. Instead, you’ll enter an application-specific password in place of your normal password.” The per-application password can also be remembered on the PC or device if that application allows it. Though to be honest I don’t completely understand this feature, it sounds like per-application passwords protect the main account but give up some of the convenience of single sign-on in return for not much incremental per-legacy-application assurance.
My final concern is around recovery, and I have a question in to Google that must be answered. Because the last thing I need is to be travelling in South America someplace and lose my phone AND my Gmail account. I’m going to need to be able to reset back to one step – and fast – in that scenario.
I did see some recovery features with Google Accounts that let you reset your password through a pre-configured alternate email address or use secret questions and answers, but I’m not sure how that works when two-step is turned on. Purists might say: “What good is two-step authentication if recovery (or exploitation of recovery) only takes one step?” To which the counterargument would be that the recovery mechanism also (sort of) takes two (weaker) factors and that all multi-factor authentication mechanisms have this kind of issue. Authentication is just plain HARD!
Personally, I just want to know how the recovery works before I turn on the feature. It may be that Google has brought better authentication a bit closer to the masses.
Category: Cloud IAM
by Dan Blum | February 21, 2011
This irresistible pun-and-metaphor popped into my head Sunday morning after my wife showed me the Economist’s Enomaly SpotCloud article, and it stuck, leaving no option but to succumb to writing what is for me an unusually short and chatty blog entry.
For all the upcoming sarcasm or irony, I believe cloud brokerage has a shot at being the wave of the future, and some of my colleagues are dead certain of it. Cloud brokerages will compete to commoditize cloud services while providing security functions.
But for now SpotCloud has high fees and no guarantees. The Google App Engine-hosted spot market for buyers and sellers of cloud computing skims 10%-30% off the top and offers no service level agreements (SLAs).
Further, the identity of the sellers of virtual machine (VM) capacity is opaque to SpotCloud buyers. Although according to the article “buyers can also specify in which country or even city they want their virtual machines to run,” they must rely on sellers such as an unnamed “entertainment company” to keep any border-sensitive data in the desired country.
On the bright side, the entertainment company’s 4,000 servers would “otherwise sit unused, probably in the lull between making animated movies.” I like that – it’s the idea that companies whose servers sell Christmas cards in December, Easter eggs in April, and pumpkins at Halloween could share and consolidate resources to make this a greener planet.
So take a look at the savings (as long as you don’t need security). The FAQ is here. You’ll find that, like a fabled Horatio Alger hero, Enomaly with SpotCloud is working its way up from the bottom.
Category: Uncategorized Tags: cloud computing
by Dan Blum | February 16, 2011
The ongoing turmoil in Egypt encourages me to finally write this long-fermenting blog post. I’m recalling one of the exchanges from a very interesting dialogue with a company considering a three-tier model of a U.S. hub, world region hubs, and country data centers. After brainstorming with me about hosting in Europe and Asia, he asked, “Where do we put a data center in the Middle East?” We then thought, “Qatar’s good, but too close to Iran. Maybe Egypt would be better ‘while Mubarak still rules.’” How times change: the stability once assumed is gone!
Many global, multinational companies are larger than small or even medium-sized nation states. Does globalization of entire industries and increasing political assertiveness by governmental regulatory agencies demand that IT, too, have a foreign policy? That puts the ball squarely in the CIO’s court. With the security and business continuity issues looming large among many others, CISOs must also get involved. I enjoy advising organizations thinking about these topics, which marry my lifelong interest in international affairs with my IT expertise.
In the age of abundant, low cost bandwidth, international affairs may drive decisions on siting even more than performance speeds and feeds. Global siting may be affected by any or all of the following concerns.
• Threats: Some organizations are facing ongoing waves of cyberattack from certain countries (see my blog post on Operation Aurora). Putting a data center in a “hot” country may expose the organization to more attacks or make it impossible to screen out traffic from that country.
• Compliance and identity: As I wrote in “The End of Identity Silos,” organizations face restrictions on cross-border transfers of personal data. Having data centers in multiple regions allows organizations to keep identity data localized in Europe for example, or U.S. ITAR data in the U.S. Some data would stay in the desired region, or country, under control of local data owners that work for the company.
• Lawful intercept: Google, Blackberry-maker Research in Motion (RIM), and many other companies that operate services internationally face increasing law enforcement demands for access to data files and data flows. Encryption without backdoors or escrow isn’t allowed everywhere. Private data center operators may face the same demands. Keeping data in a relatively unobtrusive jurisdiction helps preserve privacy and confidentiality.
• Business continuity: No organization wants to see its data center investment go down in riots, or up in flames. Part of the foreign policy is a search for stability.
• Patronage: Siting in an important country or regional market shows potential customers and partners that one’s organization is serious about doing business there. Getting orders from overseas customers may even demand reciprocity, with both the company and region investing in a common business venture.
• Cost: Also a major factor. Some world regions have higher land, labor, or energy costs.
To net things out, multinational corporation CIOs must develop a foreign policy that balances stability with patronage concerns, compliance with cost concerns, and so on. Of course one could always paraphrase Microsoft’s commercial: “To the cloud!” and outsource some of these concerns.
The question then becomes: what is the foreign policy of the cloud service? At the end of the day, public and private cloud operators alike might do well to put data centers in multiple regions but make the data (and applications) highly mobile. Picture that – barbarians at the gate, but the data is already in flight.
Category: Uncategorized Tags: cloud security, foreign policy
by Dan Blum | January 24, 2011
It was the “Hey You, Get Off Of My Cloud: Denial of Service in the *aaS Era” title that got me out of bed in the morning for the cold commute to the conference in Crystal City. In the best BlackHat form, Bryan Sullivan’s presentation uncorked a lot of information about attacks on the Web layers – and still deeper – into the cloud:
• Repetitive AJAX client calls on discretely callable web services in the midst of state transition to induce deadlocks (e.g. “hold seat” in reservation process)
• String to floating point conversion attack that puts old versions of PHP into infinite loops
• ZIP nested file bombs to fill cloud storage
• Billion laughs attack, exploiting nested entity resolution in the XML parser to hang the compute process
• Regular expression (regex) strings that tie the regex evaluation process in knots and hang the compute process (illustrated in the sketch after this list)
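The regex case is the easiest to demonstrate. Here’s a hedged sketch in Python (the talk’s own examples may have differed) of the catastrophic backtracking such strings trigger:

```python
import re
import time

# Nested quantifiers make the engine try exponentially many ways to split
# the input once the trailing character guarantees the match must fail.
evil = re.compile(r"^(a+)+$")

payload = "a" * 26 + "!"     # near-miss input; each extra 'a' roughly doubles the work
start = time.time()
evil.match(payload)          # ties up the process for seconds on typical hardware
print(f"{time.time() - start:.1f}s to reject {len(payload)} characters")
```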
To net it out, if you’re a user or service provider of a multi-tenant cloud – especially an IaaS or PaaS one with all kinds of different programs running in it – the “beast” is the volume of cybercrime and cyberbugs. Threat meets opportunity. Think botnet doing Google searches to find all the vulnerable web exposed services in a cloud, or just traversing its address range, and launching these (and more) attacks from thousands of bots. This is bad.
Which leads me to reflect on some larger issues raised in other conference presentations. Keynote speaker and former government security executive Franklin Kramer argued for a strategic emphasis on resilience that starts with the assumption that some attacks are going to get through. This blindingly obvious fact was empirically reinforced by Apple sandbox disassembler Dionysus Blazakis, who recounted software fuzzing work that first found bugs in Adobe products and concluded that “most software breaks a lot” and at least some of those bugs can be exploited. Kramer’s architectural prescription for resilience is:
• Distribution, isolation, and segmentation
• Integrity and least privilege
• Moving defenses
• Deception, and
• Adaptive management
Sullivan provided practical examples of ways that cloud services, or any web-based services, can become more resilient in his defense proposals for each of the cloud DoS attacks. The “number of the beast” attack uses an evil but valid number to throw the PHP string-to-floating-point conversion function into an infinite loop. Bryan said one can mitigate this by upgrading to PHP 5.2.17 or 5.3.5, setting a compiler flag to change the default way conversion is done, or (worst) writing a blacklist signature for a WAF to filter the number. For the others: don’t expose transactions in the midst of state transitions as web services, disable external entity resolution in XML if you don’t need it, do anti-virus scans on all uploaded files (e.g., .ZIP), and so on. Both Sullivan and Blazakis also mentioned that platform defenses (ASLR, NX, stack canaries) are working to make exploitation harder, so one needs to be sure they’re enabled for specific applications.
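To show why Bryan ranked the blacklist option last, here’s a sketch of what such a WAF signature amounts to. The regex and function name are mine, and the target is the widely reported value from that PHP bug; treat it as an illustration, not a recommended fix:

```python
import re

# Literal signature for the published evil value (2.2250738585072011e-308)
# that hung vulnerable PHP builds. Weakness: equivalent encodings of the
# same double (extra zeros, shifted exponents) can slip past this check,
# which is why patching PHP beats filtering at the WAF.
EVIL_FLOAT = re.compile(r"2\.2250738585072011e-?308", re.IGNORECASE)

def request_is_suspect(param_value: str) -> bool:
    """Return True if a request parameter carries the known-bad number."""
    return bool(EVIL_FLOAT.search(param_value))
```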
So many attackers, so many bugs, so little time. That’s the “beast,” and if we’re going to get its “number” we’re going to have to become more resilient.
So ask questions. Is the cloud OS enabling platform defenses and assessing DoS vulnerabilities? How can the provider and customers harden, whitelist, throttle, rate limit, isolate, monitor, etc., etc.? I’m currently working on a paper about “Determining Cloud Security Assessment Criteria.” I know where I have to tweak it after going to this conference!
Category: Cloud Tags: cloud security