John Pescatore

A member of the Gartner Blog Network

John Pescatore
VP Distinguished Analyst
11 years at Gartner
32 years IT industry

John Pescatore is a vice president and research fellow in Gartner Research. Mr. Pescatore has 32 years of experience in computer, network and information security. Prior to joining Gartner, Mr. Pescatore was senior consultant for Entrust Technologies and Trusted Information Systems…

Twelve Word Tuesday: Sesquicentennial Anniversary of the First US Telegraph System

by John Pescatore  |  October 25, 2011  |  Submit a Comment

Within weeks of the first telegraph, DoS, MitM and phishing messages were commonplace.


Twelve Word Tuesday: Blackberry Outage Reinforces Security Through Diversity

by John Pescatore  |  October 18, 2011  |  1 Comment

Supporting more than Blackberries for email raises support costs and security risks.


Twelve Word Tuesday: Firewall Policy Management Tools As Rosetta Stones

by John Pescatore  |  September 27, 2011  |  Submit a Comment

Increasingly distributed/complex NGFW/IPS: netsec policy more dimensions than quantum physics.


SSL Is About As Useful As Dumbo’s Magic Feather, But Security Blankets Are Hard to Outgrow

by John Pescatore  |  September 22, 2011  |  1 Comment

Jim Crow: You wanna make the elephant fly, don’t ya? Well, you gotta use a lot of ‘chology. You know, *psy*-chology. Now here’s what you do. First, you’ll uh…
Jim Crow: [all the crows whisper]
Jim Crow: And then right after that, you’ll uh…
[whispers continue]
Jim Crow: [plucks a feather from the youngest crow's tail; he yelps] Use the magic feather. Catch on?
Timothy Q. Mouse: [accepting the feather] The magic feather?
[smiles, now getting the secret, then winks as he gives Jim an elbow in the wing]
Timothy Q. Mouse: Yeah! I gotcha.
[rushes joyfully to Dumbo, then places the feather at the end of his trunk]
Timothy Q. Mouse: Dumbo! Look! Have I got it! The magic feather! Now you can fly!
From the movie “Dumbo,” Disney, 1941

Secure Sockets Layer (SSL) was invented by Taher ElGamal at Netscape in the mid-1990s, back in the days when most network authentication protocols were totally open – they had all been written on the assumption they would only ever be carried over internal networks. When the Internet joined the mix in the 1990s, attackers found it easy to install network sniffers and capture network logons and credentials – causing a lot of resistance to the idea of ever logging in or transacting over the Web. This begat the need for something like SSL, and for the little key turning blue to make people feel safe.

However, SSL as actually used has always had major security holes, even while it made people feel safer. It has never been a strong security solution or a “natively secure protocol” by any means. Recently there has been a continuing stream of attacks against the use of Secure Sockets Layer (SSL)/Transport Layer Security (TLS). The lax security practices of certificate authorities have been exploited to issue fraudulent server certificates. The reality for years has been that SSL server certificates provided little to no authentication assurance to users; they mainly served to support transport security, making sure that password entry and cookie passing traveled over an encrypted pipe.
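To make the “transport security, not authentication” point concrete, here is a minimal sketch in Python (the hostname is a placeholder, and it uses the modern standard-library ssl module rather than anything from the 1990s) showing what a client actually gets from a validated server certificate:

```python
# Minimal sketch: what a validated SSL/TLS server certificate gives a client.
# "www.example.com" is a placeholder hostname, not a real target.
import socket
import ssl

host, port = "www.example.com", 443

# The default context checks the CA chain and the hostname match --
# that CA assertion is the entire "authentication" a user relies on.
context = ssl.create_default_context()

with socket.create_connection((host, port)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("subject:", cert.get("subject"))   # who the CA says this is
        print("issuer:", cert.get("issuer"))     # which CA said it
        print("cipher:", tls.cipher())           # the transport-security part
```

If a lax certificate authority issues a fraudulent certificate for that hostname, this check passes anyway – which is exactly why the certificates mainly buy you an encrypted pipe, not strong assurance of who is on the other end.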

More recently, though, researchers developed a tool (BEAST) that exploits a known vulnerability in TLS 1.0 and allows attackers to actually decrypt data carried in SSL sessions. Uh oh – now SSL isn’t even good for transport security??

This TLS vulnerability has been known about since the early days of SSL. It is not present in the latest versions of TLS, but TLS 1.0 is what is widely used. To exploit it, attackers must (1) inject code into the user’s browser and (2) hold a man-in-the-middle position as well. Doing both of those things makes this a non-trivial attack to launch, but the BEAST tool greatly simplifies it.
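A quick way to see whether a server you rely on will still negotiate the vulnerable protocol version is to ask the connection itself. A minimal sketch, again with a placeholder hostname (and noting that recent Python/OpenSSL builds may refuse TLS 1.0 outright):

```python
# Sketch: report which SSL/TLS protocol version a server actually negotiates.
import socket
import ssl

def negotiated_version(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1" (the BEAST-era 1.0) or "TLSv1.2"

# "www.example.com" is a placeholder, not a real audit target.
print(negotiated_version("www.example.com"))
```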

All of the major browser manufacturers do have patches to shield against this problem, but they have been slow to release them because using TLS versions later than 1.0 breaks many older applications. The availability of this new attack tool and the publicity around it should drive the browser vendors to accelerate efforts to release updated browsers, and Gartner’s standard advice is to prioritize all patches for critical vulnerabilities such as this one.

It took a loooong time for DNS security to get upgraded, even longer for BGP security to improve, and SSL improvement or replacement will take just about as long.


Twelve Word Tuesday: The Real Issue is Securing Heterogeneity

by John Pescatore  |  September 20, 2011  |  1 Comment

iPhones/iPads are a hurricane. Business demanding heterogeneous devices is climate change.


Twelve Word Tuesday: Forcing Standard Cloud Processes on Custom Business Problems Leads to New Vulnerabilities

by John Pescatore  |  September 13, 2011  |  Submit a Comment

Square pegs jammed into round holes lead to leaks and exposures galore.


Web Sites: Perennially Squishy, Time to Shield and Crunchify

by John Pescatore  |  August 9, 2011  |  Submit a Comment

Web-site vulnerabilities: hacker’s low-hanging fruit – don’t leave a ladder against the tree.


The Durability of the DMZ

by John Pescatore  |  August 5, 2011  |  Submit a Comment

I’ve done a lot of calls this year with Gartner clients reviewing and updating their DMZ designs. As I pointed out here, there is not a lot of “de-perimeterization” going on – and for the usual good reasons. Most of the redesigns are adjustments for dealing with virtualization in the data center, for changing patterns of B2B connections, or for taking advantage of security technology in new data center switches, App Delivery Controllers, etc. But the basic concept of separation and containment, with different security policy enforcement around external-facing and internal-facing resources, is still valid – even more so given the advanced targeted attacks we are seeing these days.
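The containment property itself is simple enough to state as a test. Here is a minimal sketch (the zone names and rule table are invented for illustration, not a Gartner reference design): external traffic terminates in the DMZ, only brokered hops reach internal resources, and there is never a direct external-to-internal path.

```python
# Sketch of DMZ containment: invented zones and rules, for illustration only.
ALLOWED_FLOWS = {
    ("external", "dmz"),       # inbound connections terminate in the DMZ
    ("dmz", "internal"),       # only brokered/proxied hops go deeper
    ("internal", "dmz"),
    ("internal", "external"),  # outbound, through policy enforcement points
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Allow a flow only if it follows an approved zone-to-zone edge."""
    return src_zone == dst_zone or (src_zone, dst_zone) in ALLOWED_FLOWS

# The invariant any DMZ redesign should preserve:
assert not flow_permitted("external", "internal")
```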

B2B connectivity is actually the area where I find myself recommending the most change. Those B2B connections are less likely to be site-to-site IPsec VPNs and more likely to be web services or SOA connections, and many of the business partners are more likely to be cloud service providers than individual companies. So I’m starting to recommend that network security managers check whether their SOA team or application architects are using any SOA governance technology (see Gartner RN here) – if so, it should be incorporated into the overall DMZ strategy and be a key component of B2B DMZs.

Just as the rise of suicide bombers brought the concept of a DMZ back to physical security in front of high-risk targets, the increasing sophistication of targeted attacks has reinforced the need for one in enterprise cybersecurity.


Cloud Security and Septic Systems

by John Pescatore  |  August 4, 2011  |  1 Comment

I grew up on Long Island, New York and pretty much took it for granted that when you flushed the toilet, the waste products went down a pipe out the front of your house to a bigger pipe, where professionals handled it all from there. When I moved to Maryland and bought a house, I learned about something called a “septic system,” where all that stuff went down a pipe, out the back of your house to another pipe, and then stayed in your back yard!

Pretty scary to a New Yorker, but over the years I learned septic systems were as reliable as, and often more reliable than, city sewer. It turns out that in both approaches, the weakest link is not the end destination of the nasty stuff; it is the pipes between the house and the final destination, which for purposes of illustration I will call Portapotty as a Service (PPaaS). And I will call the nasty stuff “data,” to clean up the analogy I will eventually get around to making.

You see, if the house settles, or a sinkhole forms in your yard under the pipes, or tree roots invade those pipes, or the guy pumping out your septic tank damages the pipe, or a city sidewalk repair cracks the pipe – or any of many other scenarios leaves the pipe no longer reliably carrying the “data” to the PPaaS “cloud” service – well, the data hits the fan is what happens. Never good, never career (or marriage) enhancing.

I’ve noticed that this scenario has been behind a lot of the major security incidents that have occurred where cloud-based services are used. It’s not that the cloud service wasn’t secure enough; the problem was that the business processes (the “gazouta” pipes) didn’t align with the cloud service provider’s processes (the “gazinda” pipes), and the data went spilling out onto the yard, making quite a stink.

I pointed this out in a Gartner Research Note back in March: “HBGary’s Gmail Hack Shows Process Is Vital in Managing Cloud Risk.” HBGary Federal’s CEO at the time had made statements about infiltrating hacking groups, and one of those groups targeted the company and compromised HBGary’s web site, which HBGary shut down. Realizing this compromise also put their Google Mail accounts at risk, HBGary attempted to turn off their Google service, but the process Google used for that made sense for a web search company – not so much for actual companies. It took so long to shut down the email service that thousands of HBGary Federal emails were exposed.

The “pipes” didn’t line up right – the data was flying out of the “PPaaS” service, but the shutoff valve wasn’t working. Google Mail wasn’t vulnerable or hacked, but part of Google’s incident response process couldn’t connect to its customer’s processes. HBGary was trying to turn the shut-off valve, but nothing was happening – and all that “data” out in the yard made quite a mess.

Now, part of the reason is that Google Apps is still primarily driven by consumer mail demands, not enterprise ones. Connecting a business HQ building to a “pipe” designed to carry out a family of four’s “data” would likely not have a happy ending, either. There is, and will always be, a huge difference between how much security consumers want and how much businesses need – and those consumer-grade pipes really, really need to be inspected by businesses attempting to use those consumer-oriented services.


Turning Penetration Testing Inside Out

by John Pescatore  |  August 3, 2011  |  2 Comments

Back in the late 1990s and early 2000s, penetration testing got a bad name, mostly because a lot of small security consulting firms were sprouting up and offering penetration tests for $500 or less – and those pen tests weren’t all that much different from what more established firms had been charging tens of thousands of dollars for. That caused conventional wisdom to basically dismiss pen testing as just vulnerability scanning with good PowerPoint to scare management.

But back in 2006, I saw a rapidly increasing number of Gartner clients getting hit by advanced, targeted attacks, and led a research note called “Penetration Testing Augments Vulnerability Management to Deal With Changing Threats” saying:

Deeper penetration testing (also known as pen testing) is needed to augment existing vulnerability management processes, especially in light of the rising level of targeted attacks, but the technique must be applied in the appropriate situations.

Flash forward five years to today, and the continued growth of targeted threats (and the recent hype around Advanced Persistent Threats) has led to a large increase in Gartner client calls around penetration testing. I go through a decision framework with Gartner clients (soon to be a Gartner Research Note) on contracting for pen testing vs. doing it yourself, and on how to choose the best product or service provider.

One recommendation I added a few years ago, driven by the growth in botnet threat delivery mechanisms: make sure penetration testing includes what I call “inside-out” pen testing – having one of your internal PCs access a “captive” malicious site to see if the first-stage dropper executable can get on, then seeing whether the second stage (communicating to bot command-and-control sites) and third stage (payload delivery) succeed. It is pretty scary how often this succeeds – which is why botnet delivery mechanisms are so prominent in advanced targeted threats.
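As a sketch of what the stage-two check can look like in practice – every hostname and port below is a placeholder for captive lab infrastructure you control, never a real command-and-control address – the test boils down to asking which outbound channels a supposedly contained internal PC can actually use:

```python
# Sketch: from an internal test PC, check whether outbound "phone home"
# channels are open. LAB_C2_HOST is a hypothetical lab host YOU control.
import socket

LAB_C2_HOST = "c2-simulator.lab.example.com"  # placeholder lab hostname
CHANNELS = [80, 443, 8080, 53]  # ports bots commonly use to blend in

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in CHANNELS:
    verdict = "OPEN - egress not blocked" if reachable(LAB_C2_HOST, port) else "blocked"
    print(f"outbound tcp/{port}: {verdict}")
```

If the lab “C2” host is reachable on any of those ports, the second and third stages of a real bot infection would likely succeed too – which is the finding this kind of inside-out test is designed to surface before an attacker does.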
