Dan Blum

A member of the Gartner Blog Network

Dan Blum
Research VP
19 years at Gartner
33 years IT industry

Dan Blum, a VP and distinguished analyst, covers security architecture, cloud-computing security, endpoint security, cybercrime/threat landscape, and other security technologies. Mr. Blum has written hundreds of research…

Building Cloud Immune Systems with Security Services – February 12 webinar

by Dan Blum  |  January 25, 2013

Here is the link to my upcoming webinar on February 12:

 Building Cloud Immune Systems with Security Services

Abstract: Cyber-attacks, malware, and data leakage constantly threaten IT. Cloud-based security services are key to the future of defense, but providers must continually improve just to run in place.

Discussion topics

· What is the outlook for security services in the cloud?

· Why do cloud services need increased information sharing to preserve protection?

· How can the industry improve threat intelligence and security information sharing?

I hope to update this post soon with a more substantive discussion of what the presentation covers, but for now I encourage those interested to follow the link and put the date on their calendars.

Thanks, as always, for visiting my blog.

Dan


Playing chess with APTs

by Dan Blum  |  December 28, 2012

During a briefing from the top security analyst at one of the Washington-area cyber centers, I got the idea that resisting targeted attacks from sophisticated adversaries (so-called advanced persistent threats, or APTs) is a bit like playing chess at the grand master level.

Security efforts disproportionately emphasize endpoint anti-malware. But users, desktops and devices are only the pawns on the board (which, unfortunately, often hold the crown jewels – your data). Sophisticated attackers adeptly perform the necessary intelligence-gathering to find just the right social vulnerabilities for the person of interest and the right technical vulnerabilities for the device. Once exposed, most useful devices are easily compromised by targeted malware exploits riding on the back of spear phishing or similar attacks.

The rook, or castle, provides a strong defense in a chess game. On the anti-malware chess board we’ve tried to protect our pawns behind the rook’s analogues – firewalls and system hardening. But these technologies seem so 1990s and continue losing effectiveness today. Users and developers got around firewalls long ago, while locked-down endpoints have become a quaint concept for many in the bring your own device (BYOD) era. As I wrote in my Restricted Zones post, firewalls still have a crucial job protecting data centers and servers. But in the defense of users and devices you can’t think in terms of a Maginot line.

Instead, work with other IT groups to craft a mix of user capabilities and security mechanisms to suit the business use cases. In IT, just as in chess, you have to be smart to do this, but encrypted information containers, server-hosted desktops, contextual access management, system re-imaging and user profile management are some of the tools you can use in today’s data center and end-user computing environments to control the center of the game – the data itself.

You can also guard some user interactions outside the firewalls using security components such as secure web gateways (SWGs) and secure email gateways (SEGs) to cover the vectors of infection through which malware is delivered. Like bishops and knights slashing and leaping across the chessboard, SWGs and SEGs extend your protective reach.

Even with all these defenses, remember that sophisticated and persistent groups of adversaries can reconnoiter and work around any static defense. In information security as in chess, the board stays in constant motion. Assume you’re already compromised to greater or lesser degrees and try thinking a few moves ahead. Develop hypotheses about threats, targets and attack paths and use advanced monitoring to confirm or disprove these hypotheses. Also monitor for anomalies that suggest further hypotheses or are worrisome in themselves. Regain some of the home field advantage lost to BYOD through awareness programs and telemetry gathering tools that turn your users and devices into sensors.

Collect logs from networks, endpoints and applications and infuse your security information and event management (SIEM) system with local IT and global threat intelligence context. Use this context and data to correlate events and maintain your alertness throughout the game. Don’t underestimate the importance of threat intelligence. Share security information with vendors, law enforcement and peer company contacts to collectively learn more about the threat, or attacker. Only through insight into threats, vulnerabilities, attacks and targets can you hope to stay a few moves ahead of APTs.
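To make the correlation idea concrete, here is a minimal, hypothetical sketch (not any particular SIEM vendor’s API; the indicators and events are invented) of enriching log events with a shared threat-intelligence list and alerting when a single host touches multiple indicators:

```python
# Hypothetical sketch: enrich log events with threat-intelligence context.
# Indicator list and event schema are illustrative only.
from collections import defaultdict

# Indicators shared by vendors, law enforcement, or peer companies
threat_intel = {
    "203.0.113.50": "known C2 server (shared by peer contact)",
    "198.51.100.7": "bulletproof hosting range (vendor feed)",
}

events = [
    {"host": "desktop-042", "dest_ip": "203.0.113.50", "action": "outbound-connect"},
    {"host": "web-proxy",   "dest_ip": "192.0.2.10",   "action": "allowed"},
    {"host": "desktop-042", "dest_ip": "198.51.100.7", "action": "dns-lookup"},
]

hits_per_host = defaultdict(list)
for event in events:
    context = threat_intel.get(event["dest_ip"])
    if context:
        hits_per_host[event["host"]].append((event["action"], context))

# A host touching multiple indicators is a stronger signal than any single alert
for host, hits in hits_per_host.items():
    if len(hits) >= 2:
        print(f"ALERT: {host} correlated with {len(hits)} threat indicators: {hits}")
```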

One gets the sense that threat intelligence and reputation systems – delivered through cloud assist and security information sharing – could become the kings and queens of cyber-defense. At my cyber-defense briefing, I saw timeline slides depicting kill chains, or attack graphs, based on one of their most prolific attacker’s social engineering, vulnerability exploits, command/control, lateral movement and data exfiltration tradecraft. By studying the adversary’s techniques, the cyber-center was able to shut “him” down for a while after learning that attack waves always began with an increase in domain registration activity at a certain bulletproof hosting center in China.

Eventually the attacker must have realized he’d been made and changed some of his tradecraft because – ominously – detections further down the kill chain reappeared to the right of the timeline. But developing threat intelligence on the attacker bought some time, and sharing the information appropriately with security teams at other organizations helped them find ways to shore up defenses against similar exploits.
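As an illustration of that precursor-detection idea (the data and threshold below are entirely made up), flagging a spike in suspicious domain registrations can be as simple as comparing today’s count against a statistical baseline:

```python
# Illustrative sketch: flag a spike in domain registrations that, in the
# anecdote above, preceded each attack wave. All numbers are invented.
from statistics import mean, stdev

# Daily counts of new domains registered at the watched hosting provider
daily_registrations = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 3, 18]  # last value is today's

baseline, today = daily_registrations[:-1], daily_registrations[-1]
threshold = mean(baseline) + 3 * stdev(baseline)

if today > threshold:
    print(f"Precursor alert: {today} registrations today vs. threshold {threshold:.1f}")
```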

Finally, in this never-ending cyber-conflict we’re starting to realize that – just as in chess, soccer and war – sometimes the best defense is a good offense. Not having an offensive arm to information security is a huge liability under which we still labor while competent attackers operate with virtual impunity. While most organizations understandably don’t want to try to turn their security department into the NSA, some with advanced security programs do deploy threat attribution, legal sanctions, law enforcement contacts and so-called active defense techniques such as deception and information hiding in their networks to deter or confuse attackers. Although such thinking is still at an early stage outside the defense industry, keep your mind open to learning about these capabilities because – if you’re targeted – you may soon need the extra edge.


How to control appropriate use of the web in modern work environments

by Dan Blum  |  November 30, 2012

In general, organizations are finding a need to strike a balance between restricting employees’ use of the Internet (for reasons of security, liability or productivity) and allowing such use in order to create a more agreeable work environment. I wrote about this in my report “Assessing Secure Web Gateway Technologies”, saying:

 ” With all the Web’s risks, liabilities and time-wasting opportunities, employers might block all non-business-related Web categories but for constraining forces. One such force is social networking. Another is work-life convergence as millions of workers spend a significant amount of time traveling or telecommuting. Workers who are traveling can’t completely leave their home life behind; they may need to download personal email, shop, check online bank accounts and so on. Telecommuting gives workers more flexibility but tends to expand the number of hours they must be available to work or be “on call.” Workers tend to want more flexibility as part of the bargain. Employers must be careful not to create adverse conditions for morale or retention through appropriate use policy (AUP) enforcement. For organizations that want to embrace social networking and/or work-life convergence, secure web gateways (SWGs) are the technological equivalent of a knight in shining armor that enables liberal policies while still providing some opportunity for control.”

The question of whether to provide a relatively permissive web filtering environment is a close cousin of the questions around bring your own device (BYOD) usage. We’ve written about that extensively in “Creating a Bring Your Own Device (BYOD) Policy” and other documents.

At the same time, establishing a relatively permissive web filtering approach can increase the risks of malware infection via malicious web sites, data leakage, and liability to the organization from inappropriate use of the web. Generally, organizations seek to strike a balance so that they can gain the benefits of an employee-friendly web filtering policy while mitigating the risks.

Some of our business clients have tried blocking most categories of web sites used for personal reasons while whitelisting a few on an exception basis. While leading vendors such as Blue Coat or Websense do provide whitelisting capabilities, managing them can be problematic. For example, one client I spoke to had tried blocking all unknown sites until exceptions could be whitelisted but didn’t find the vendor’s functionality for this satisfactory. More typically, organizations take a blacklisting approach to block specific categories such as pornographic sites. Those most concerned about liability should weight the capability to perform dynamic classification highly in their evaluation criteria for a secure web gateway (SWG) solution. Leading vendors have this dynamic classification capability. For more information, see my report “Selecting and Deploying Secure Web Gateway Solutions.”
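As a simplified illustration of the policy logic involved (real SWGs use far richer category databases plus dynamic classification; the categories and sites below are invented), a blacklist-by-category policy with whitelisted exceptions might look like this:

```python
# Simplified sketch of SWG-style policy logic: block selected categories,
# allow whitelisted exceptions, and treat unknown sites per a configurable default.
BLOCKED_CATEGORIES = {"pornography", "gambling", "malware"}
WHITELIST = {"partner-site.example.com"}          # exceptions granted case by case
CATEGORY_DB = {
    "news.example.com": "news",
    "casino.example.net": "gambling",
    "partner-site.example.com": "gambling",       # mis-categorized, hence whitelisted
}
DEFAULT_FOR_UNKNOWN = "allow"                     # or "block" for stricter policies

def decide(hostname: str) -> str:
    if hostname in WHITELIST:
        return "allow"
    category = CATEGORY_DB.get(hostname)
    if category is None:
        return DEFAULT_FOR_UNKNOWN
    return "block" if category in BLOCKED_CATEGORIES else "allow"

for site in ["news.example.com", "casino.example.net",
             "partner-site.example.com", "unknown.example.org"]:
    print(site, "->", decide(site))
```

The hard part in practice is not the decision logic but keeping the category data and the exception list accurate, which is exactly where the client mentioned above ran into trouble.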

Note, however, that even with advanced web filtering it will be difficult to stop a user who is highly motivated to access illicit content using a blacklisting-by-category approach. For example, I’m familiar with a case involving an employee at a government agency where pornography was blocked; this individual had over 500 attempts to access pornography blocked, but a review of the logs found that he also got through the filters hundreds of times. Perhaps the web filtering wasn’t very good, but it is hard to imagine perfection in the face of such determined persistence on the part of a user. Web filtering technology must be supplemented by personnel training and management policies to deal with such cases, which typically represent only a small minority of users.

When allowing web sites such as youtube.com, facebook.com or gmail.com that are used for personal entertainment, social networking or email, there is an increased risk of contracting malware. Although these are “legitimate” sites, they carry a great deal of user-generated content that may not be legitimate. Organizations concerned with malware infection of endpoints via the web should weight the capability to perform advanced malware scanning, or content inspection (in addition to URL filtering), highly in their evaluation criteria.

 Social networking, personal email and other sites can pose a risk of data leakage in two ways. First, a malicious or policy-violating user may deliberately upload confidential company information to the sites. Note, however, that draconian web filtering policies will not be effective in preventing determined data leakage since such users can find many other avenues for exfiltrating the information.

Second, and more insidiously, well-meaning users can facilitate the intelligence-gathering efforts of adversaries (such as financially motivated cybercriminals, or fraudsters) simply by disclosing seemingly innocuous information about themselves or colleagues – for example, indicating they work for “Company X” in their profile and then posting something about their role as an “SAP administrator.” This and other more personal information could be used to craft spear phishing attacks against that user.

In order to deal with the risk of data leakage over the web, organizations should consider SWG data loss prevention (DLP) features for the web channel in their evaluation. Organizations should also evaluate SWGs’ application control features, such as the ability to allow users to read Facebook but not post, to scan posts to Facebook or Twitter for certain keywords, or to allow Facebook but block certain applications available through the social network. But understand that such features are only part of the DLP puzzle. Organizations must also address the supporting business processes: promoting user awareness of appropriate, discreet use of social networks through training, investigating DLP-related incidents, and classifying sensitive information to prevent its unnecessary spread.
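For a flavor of what the keyword-scanning piece of application control might look like (a toy example only; commercial SWG/DLP products use far more sophisticated matching, and the keywords below are hypothetical), consider:

```python
# Toy sketch of scanning outbound social-network posts for sensitive content.
# Keyword patterns and post text are illustrative.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\bproject\s+aurora\b", re.IGNORECASE),   # hypothetical internal codename
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-like number pattern
]

def allow_post(text: str) -> bool:
    """Return False (block) if the post matches any sensitive pattern."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

print(allow_post("Great lunch with the team today!"))                 # True
print(allow_post("Confidential: Project Aurora ships next quarter"))  # False
```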


Nowhere Man

by Dan Blum  |  October 1, 2012

“Nowhere Man” is not referring to the frequency of posts on this blog recently :-), nor to the Beatles song. I’m recalling a 1990s TV series about a journalist whose life was erased. Imagine (if it happened today) finding that none of your credit cards or ID cards work, your email’s locked out, your Facebook’s erased, and even your wife and friends don’t know you.

Erasing a life may require national intelligence agency level APT capabilities and even for “them” it would hopefully be hard – if, like me, you bought your wife flowers AND an iPhone 5 for her birthday :-)

Unfortunately, erasing all or part of – just – a person’s digital life is much easier and within the capabilities of the common scoundrel. Recall the epic hacking of Mat Honan, a writer for Wired whose Apple and Amazon accounts were exploited, online reputation attacked, and irreplaceable baby pictures deleted along with the rest of his Mac’s hard drive. And you thought the Mac was safe! In his story about it, Mat ruefully wishes he’d used Gmail’s free two-factor authentication capability. If you’re reading this and use Gmail, please do that!
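If you’re curious how app-based second-factor codes work under the covers, here’s a minimal sketch using the open-source pyotp library. This shows the generic time-based one-time password mechanism (RFC 6238), not Google’s specific implementation:

```python
# Minimal TOTP sketch using pyotp (pip install pyotp).
# Generic RFC 6238 mechanism for illustration, not any provider's implementation.
import pyotp

secret = pyotp.random_base32()       # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                    # what the authenticator app would display
print("Current code:", code)
print("Verified:", totp.verify(code))          # True within the time window
print("Stale code:", totp.verify("000000"))    # almost certainly False
```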

However, although two-factor authentication in Gmail would have stopped the exploit against Honan, it isn’t enough to stop a competent hacker (or an administrative screw-up at a personal cloud service) from finding some other attack vector. This is a business problem as well as a personal one – the same weak password authentication and account management techniques that left Honan vulnerable could threaten any organization through its users.

As a writer myself, the Wired story, like the Nowhere Man show before it, affected me personally. As a result, I’ve done some additional things to harden my digital life, and I’d like to do more to help friends and clients. In a coming series of blog posts I’m going to write about how account recovery is a weak link and explore what we can do about it, both personally and as businesses.


For Those in Glass Houses

by Dan Blum  |  August 1, 2012

Picture yourself in a large control room watching computer monitors with centrifuge displays when suddenly loud AC/DC music blares through the room. “Thunderstruck.” You have to watch the video or listen to a cover of the song on Spotify to imagine what it may have been like there in Iran.

“Seriously,” you may ask, “What are the security questions?”

What happened? There have been reports of another cyberattack on the Natanz nuclear site by parties unknown. The story is still young and could even turn out to be misinformation, but F-Secure has received email reports that malware based on Metasploit was used to deliver the raucous AC/DC payload.

Who was responsible? After the Flame and Stuxnet revelations, it’s natural that some would point the finger at the U.S. and its allies. But the AC/DC virus hardly sounds like a typical sophisticated and stealthy nation-state attack. Has cyberwar – not to put too fine a point on the definition – taken a turn for the bizarre? Has some U.S. defense or intelligence agency developed a sense of humor? Is it a form of psychological warfare? Who knows. The attack could equally have come from a hacktivist group or an individual prankster. It’s very important to attribute threats as much as possible, but attribution takes time.

What does it mean? Maybe the reports F-Secure received will turn out to be false and we’ll all have been thunderstruck by a bad song for nothing. But the implications of nation-state cyberattacks are so big that they’ve brought me out of my cave to write about them anyway.

If the U.S. was behind yet another cyberattack, I think we have to ask what kind of future we’re creating. President Obama himself, according to the New York Times, has repeatedly told his aides that there are risks to cyberattacks on nation states. No kidding! In fact, it may be that no country’s physical, financial and energy infrastructures are more dependent on computer systems, and thus more at risk of cyberattacks, than those of the United States.

As I wrote in Proposing an International Cyberweapons Control Protocol, it may be only a matter of time before we’re attacked and the arms race goes into overdrive. Cyberwar is destabilizing, as Bruce Schneier wrote. Shouldn’t the world’s nations attempt to deter military cyberweapons much as they’ve banned chemical weapons and struggled against nuclear weapons proliferation? The actual Chinese-Russian proposals to the UN for cyberweapons control are seen by some as yet another state censorship initiative or an attempt to stop the U.S. from developing an area of military advantage. But we have to keep talking.


Collective Defense or Collective Dissent?

by Dan Blum  |  April 16, 2012

In a recent “botnet brouhaha” post, Brian Krebs found that Microsoft stirred up a hornet’s nest when it moved aggressively through a civil law procedure (rather than the more cumbersome criminal law system) to shut down some Zeus and SpyEye botnets. Microsoft’s side of this is that the company is working to degrade the criminals’ operations. However, some security researchers argue that Microsoft staged an ineffective PR exercise for itself and accomplished nothing more than to compromise ongoing investigations by misusing information the community had shared in good faith.

I’m not here to say who’s right and who’s wrong in this debate, but I thought I’d share it with you to show how complex the world of threat intelligence and security information sharing can get. Reading through the comments on Krebs’ blog feels more like an exercise in collective dissent than collective defense. It’s sad to think that as a result of a company trying to take action against criminals, members of the security research community may trust each other even less and share even less information than they did in the past.

This quarrel may make the security community look like a lot of squabbling mercenaries, but I don’t think that’s the reality for most of us. Some people at Microsoft probably thought they were doing the right thing, but some security researchers may genuinely have been hindered in doing work that they also thought was good. It’s possible that both arguments have merit. Perhaps we shouldn’t begrudge Microsoft some PR kudos if it strikes a blow against botnets, and we should also want the security researchers to be rewarded for their work and information sharing.

How can we strike a balance between the need to take quick action to operationally degrade cybercrime and the need to keep chipping away at the problems of attribution and longer-term investigation, prosecution, and deterrence? This should not be an either-or equation. Both short-term and long-term action are needed.

How can we (the security community) establish protocols and networks of trust for sharing to help us work together more effectively in the common cause against cybercrime?


Are we Wildebeests or Are we Lemmings?

by Dan Blum  |  March 29, 2012

Last week I was closeted in our Cloud Adoption Contextual Research findings consolidation meeting. We were researching cloud adoption by the early adopters. We found a great variety of patterns, and in some cases anti-patterns; enterprises are all over the map on risk management, for example. Perhaps this is only what one should expect from the cloud computing phenomenon we’ve dubbed “the transformation of IT.”

The security non-findings were interesting. None of the large enterprises from our survey reported breaches. They’ve seen no major disasters. Implementation issues aplenty, and some outages, yes, but no “advanced persistent threat” activity. From the security perspective the migration seems to be proceeding smoothly. Concerns are holding some organizations back, but they are just concerns. These concerns, implementation issues and architecture changes are extremely interesting in themselves – and you may expect to hear more from me on that – but they aren’t the subject of this particular blog post.

Part of me was looking for breaches, and that dog didn’t bark, at least among the 15 large enterprises we interviewed. I also looked at other information sources to see how enterprises are faring in cloud security. For example, a survey of attacks by Alert Logic reports that enterprises that use both premises-based applications and cloud-based ones are finding fewer attacks in the cloud. Does that mean the cloud is more secure than the enterprise, or just that the other shoe has yet to drop? As I’ve written before, I think some cloud service providers (CSPs) operate with stronger security controls than many enterprises, but they face a potentially more serious threat landscape long term due to the risk aggregated in their volume of services. Thus, CSPs must be more secure than enterprises.

Clearly, the higher cloud risk from this aggregation has yet to materialize for most large end-user enterprise customers. (Notice the careful wording to exclude the likes of the Sony PlayStation Network, which is a service.) But one has to assume that as large amounts of sensitive and valuable IT reach the cloud, they will be breached much as they are (continually) on premises. Perhaps breaches of enterprise security objectives will be less frequent in the cloud, but when they happen they may be larger and more spectacular.

So far the breaches we’ve seen from Amazon, Azure, and others are mostly outages impacting our availability objectives. Bad enough in themselves, but not yet trampling enterprise confidentiality and integrity the way Operation Aurora, Shady RAT, Night Dragon, and Zeus did. I mean to say that while we’ve seen forceful browsing or phishing vulnerabilities from Amazon, Google, Microsoft, and Salesforce, these are still small potatoes that haven’t caused big losses. But it seems inevitable that larger breaches of confidentiality and integrity will come.

On the plains of the Serengeti, wildebeests conduct their annual migration. Some are pulled down by predators; many survive. An interesting risk management question lies there: what is an acceptable loss rate?
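One hedged way to reason about that question quantitatively (the figures below are entirely invented for illustration) is to compare expected annual loss under “frequent small breaches on premises” versus “rare but large breaches in the cloud”:

```python
# Illustrative expected-loss comparison; all numbers are invented.
scenarios = {
    "on-premises": {"breaches_per_year": 4.0, "avg_loss_per_breach": 250_000},
    "cloud":       {"breaches_per_year": 0.5, "avg_loss_per_breach": 3_000_000},
}

for name, s in scenarios.items():
    expected_annual_loss = s["breaches_per_year"] * s["avg_loss_per_breach"]
    print(f"{name}: expected annual loss = ${expected_annual_loss:,.0f}")

# Similar expected values can mask very different worst cases - the acceptable
# loss rate depends on the organization's tolerance for rare, spectacular events.
```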


We’re Right, We’re Free, We’ll Fight, You’ll See

by Dan Blum  |  February 29, 2012

Only in San Francisco would Art Coviello end a keynote address to a security audience with those lyrics, which he called “the immortal words of Twisted Sister.” But the feeling of inspiration soon changed into questioning: Amidst information security’s gathering storm, how do we “fight” but still be “right” and “free”?

I found this question woven into the subtext of two RSA Conference presentations (so far) and then in some discussions over dinner last night. It started with calls for the U.S. National Security Agency (NSA) to be given more power to combat cybercrime.

First – Mike McConnell, a former director of the NSA, kicked off the Cloud Security Alliance (CSA) Summit by saying that the U.S. has the most to lose from cyber-attacks. At one end of the spectrum is the chilling possibility that fanatic cyberterrorists who can’t be attributed or deterred obtain military-grade cyber-weapons and launch an attack. At the other end of the spectrum is cyber-espionage, “where our IP is being taken from us on a regular and consistent basis.”

McConnell said that “NSA is doing better at its mission than ever before.” The agency has a clear picture of global activity, but “the U.S. is a black hole” because by law the NSA can’t conduct warrantless electronic monitoring there. Thus, threat actors could cover their tracks by diverting communications through the U.S.

Second – in James Lewis’s panel on active defense – another former NSA director, Michael Hayden, said, “My instinct is that the NSA represents too much capacity to be [on the bench]. I’m comfortable with a dialogue that says, how do we want to get this team on the field?”

But other voices counsel moderation. Lewis asked “How did we get to the point that the best resources we have are in a top secret agency? It’s not too late to reverse course…”

Cut to dinnertime: I’m sitting next to Bob Blakley discussing the panel. We both agreed, by the way, on our respect for the integrity and skill of the people at the NSA.

But I noted my own occasional, confused frustration that whereas some nations conduct industrial cyber-espionage as a matter of policy, the U.S. does not. Although many nations’ intelligence agencies spy on citizens or visitors if they sense a threat, the U.S. seems to be taking all the flak over the Patriot Act for putting it on the record. And yet former NSA directors are saying they don’t have enough authority. Haven’t the NSA and other sponsors of the Patriot Act already gotten us into enough trouble with allies? Or is the U.S. too idealistic? Some countries would just spy away and cynically deny everything.

But Bob countered: “What I love about this country is its idealism. I don’t want to lose that. I want us to be right.”

It was one of those moments when the scales fall from your eyes. You see that when issues get confusing, one must return to one’s principles. I felt like we can’t just give lip service to “a balance of security and privacy” or something like that. We have to keep on being, in the words of Ronald Reagan, “a shining city upon a hill whose beacon light guides freedom-loving people everywhere.”

So what does that mean? Getting better at catching cybercriminals will require more electronic monitoring; there’s no getting around that. But why can’t monitoring be done with appropriate levels of accountability, transparency, and oversight? No one has shown why due process won’t work if you think outside the box. For example, what if electronic search warrants could be implemented with fast enough turnaround time but full accountability? What if a Patriot Act 2.0 could say that foreign governments, in general, would get notified when their citizens’ data is acquired from a provider via blind administrative subpoena – provided, of course, that the government in question offered reciprocity for us?

We have to “fight” cybercrime and cyber-espionage, but we still need to be “right.”


Proposing an International Cyberweapons Control Protocol

by Dan Blum  |  February 20, 2012

Stuxnet. Duqu. DigiNotar. Comodo. The names of exploits and breached organizations reel past like the dark clouds of a gathering storm. Cybersecurity programs spread ominously around the world. I’ve seen the importance of international cyberweapons control for some time and wondered why more people weren’t talking about it. But recently, a new voice from the other side of the world took up the call for a virtual détente.

I first saw Eugene Kaspersky on the beach in Cancun. Lulled to tranquility by tossing turquoise waves – like someone in a Corona commercial – I observed two individuals speaking Russian setting up microphone stands and cameras in the sand. I watched behind sunglassed anonymity as a few more came, one in a black jacket with bushy gray hair. As he sat on the contrasting white sand and began an interview I realized this must be THE Kaspersky.

Kaspersky’s Proposal

The next morning, at his company’s analyst conference, Kaspersky spoke about “The Internet as a military-free zone – A Dream or an Opportunity?” He began by saying that cybercrime has worsened, but governments now understand the problem and would solve it within a couple of years. “I’m not going to talk about cybercrime,” he said, “I’m going to talk about digital passports, social networks, and cyberwar.”

Unlike some who would have a narrower definition of cyberwar, Kaspersky uses the term expansively. He once forbade his employees from even talking about cyberterror or cyberwar publicly. But after watching Hollywood portray the subject quite accurately in the movie Die Hard 4, he decided it was time to tell the world.

Could Stuxnet’s sabotage of nuclear centrifuges be replicated on a broader scale against power plants and water plants? “I’m afraid yes.” Because so much of our physical infrastructure is Internet-connected and computer-controlled, it’s possible to stop critical equipment from working.  Once, Kaspersky told the audience, he toured an Internet-connected experimental nuclear fusion reactor facility.

Cyber-weapons are easier and cheaper to develop than physical ones and cyber-attacks tend to be less attributable. A number of governments have cybersecurity programs and some have announced they are developing cyber-weapons. Still, we’re unprepared for cyberwar consequences and we can’t reasonably harden the physical infrastructure against cyberattacks anytime soon. Kaspersky warns that the major victims of cyberwar will be developed countries.

“We are living in a very dangerous world,” he said. “I do my best to explain this to governments.” The only way to avoid a “cybershima” scenario is to create an international agreement not to develop and not to share cyber-weapons. Nuclear test bans and restrictions on biological or chemical weapons show that treaties can be effective in curtailing arms races.

Time’s not on our side

Over the rest of the meetings in Cancun, I talked with people and explored the implications and challenges of cyberweapons control. I’m concerned that the line will blur between well-heeled cyberterrorists and financially motivated criminals. The subject Kaspersky didn’t talk about – how governments may come to control cybercrime – is interwoven with creating a viable cyber-weapons disarmament protocol. Without a way to greatly deter, attribute, and prosecute cybercrime and cyberterror, it might be too easy for bad actors to sow discord among the nations in much the same way that extremists on both sides of the Middle East conflict and other conflicts have sabotaged peace efforts.

With multiple countries already developing cyberweapons, time isn’t on our side. What if weapons leak to criminals, or are reverse-engineered? What if cybersecurity programs and institutions grow larger and more lucrative, creating powerful and entrenched interests (like conventional arms dealers and defense industries) for developing yet more cyberweapons and ever fomenting distrust among their nation state customers, if not actual cyberwar?

Do you start to see the complications? There’s so much to do, and so many competing interests, that it boggles the mind. It’s enough perhaps to make some proponents of a cyberweapons treaty wish for an actual cybershima (picture cities without power for days, hospital generators failing, people in intensive care dying) that would foment the public outrage needed to compel a solution.

But I fear the protocol that would emerge from a post-traumatic atmosphere even more than our current state of confused purposes and discussions. What if political support in the wake of cybershima built for retaliatory cyber-weapon programs rather than détente? What if cybershima led to a legislated state of panoptical government surveillance – something many fear is already in the making?

The only way forward

Should such worst-case scenarios arise, events could spiral out of control. Official responses might take the form of arms race escalation, ride roughshod over civil liberties, or both. We might then see an escalation of conflict, with idealists and hacktivists taking up cyber-arms against governments that are in turn in conflict with each other. Rather than an open but secure Internet with the transparency so many people are demanding, we might see escalating suppression of free speech and anonymity, growing darknets and chaos, and an endless state of cyber-insecurity.

In my opinion, the protocols for cyberweapons control and law enforcement are linked. Both must operate in a form that enhances human dignity, privacy, and trust between people. It helps to know that problems and aspirations are similar worldwide; in Russia as elsewhere, restive hacktivists are compromising web sites, cracking email accounts, and dumping out embarrassing information. It is encouraging to find a voice from the other side of the world echoing sentiments I’ve long held myself.


The end of confidentiality?

by Dan Blum  |  February 2, 2012

Every day it seems that we have less control in the world of information security. Shadow IT rules some enterprises. Applications move to the cloud, IT’s buildings empty out, and security teams are reduced to skeleton crews. While a regulatory tide rises across the world in a tower of Babel, employees and contractors in the enterprise embrace mobility by any means necessary. And the information sprawls. BYOD is touted as a cost savings by some business executives.

In 2012 Gartner speaks of the nexus of forces – cloud, social, mobile, and information – yet for security staff this could be a dark place to stand, like deer in the headlights. Consumerization and compliance are at loggerheads. What happens when an unstoppable force meets an immovable object? Will it mean the end of confidentiality as we know it?

Before I get into this I must give due credit to my colleagues. What I’ve loved about working at Burton Group and now Gartner is that I stand on the shoulders of giants. This blog post was originally inspired by Bob Blakley’s posts on the end of secrecy. And I would not even be doing this if another colleague, Eric Maiwald, hadn’t been inspired to take up Bob’s original topic as a potential 2012 Catalyst session.

So what does this perplexing notion actually mean? It can’t be that we just give up and stop data protection efforts. But it does mean that we have to change our paradigms. We should try to centralize data access with server-hosted virtual desktops and enterprise content management systems. But this can only partially hold back the strong tides of data dispersion. We can monitor the flow of information with DLP. But malicious users will often evade surveillance – this is an old game of low assurance.

We can attempt some stronger techniques, as I advised in my restricted zones blog entry – stop assuming that we can win the futile battle to protect endpoints 100% and instead get more hard-core about building fortresses, or secure zones, around our most critical data. Done correctly, this can reduce the magnitude of worst-case consequences, but it still doesn’t represent a 100% compliance solution.

Yet compliance is a many-sided coin. It needn’t be achieved solely through security technologies. We can change the game by changing business processes; for example, some organizations have stopped storing credit card numbers. Our organizations can also use business process outsourcing, corporate subsidiary structures, and other business approaches to transfer risk or manage it in creative ways.

Creativity will be essential if the nexus of forces, coupled with an ever-more challenging threat and regulatory landscape, really brings the end of confidentiality as we know it. I recently heard the CISO of a large financial institution muse about what we would do if all our controls still proved ineffective against the threats. He spoke of then using business and information management techniques from the realm of espionage – counter-intelligence, deception – consciously and systematically varying the timing, audience, completeness, and accuracy of information flows, watching what happens, and adapting. This is not actionable yet – no more than a thought experiment. But could it represent the shape of things to come in the not-too-distant future?
