Erik Heidt

A member of the Gartner Blog Network

Erik Heidt
Research Director
1 year at Gartner
20 years IT Industry

Erik Heidt is a Research Director on the GTP Security and Risk Management Strategies team. His research focus areas include IT risk management, IT GRC, application security and cryptographic controls.

Webinar “When Encryption Won’t Work: Implementing Practical Information Protection”

by Erik T. Heidt  |  July 21, 2014  |  Comments Off

Enterprise data breaches are occurring all too often. Many enterprises have overestimated or misunderstood the protection provided by current, or planned, encryption deployments. This presentation focuses on the attacks that are resulting in expensive and embarrassing data disclosures, and provides prioritized actions for you to consider for addressing these threats. Portable media and data outside the data center are not discussed. We explore the limitations of database, bulk storage and application encryption approaches. Recommendations include a solid mix of controls – sometimes including encryption – that complement and strengthen each other.

Questions we will explore:

  • Whether your encryption strategy is poised to address likely threats
  • Whether you have invested properly in database, bulk storage or application encryption
  • How to leverage other preventative controls to complement or replace encryption

Click here to register.

Hope to see you there,

Follow me on Twitter (@CyberHeidt)


Category: Uncategorized

Trusting SaaS With Your Data, eh?

by Erik T. Heidt  |  June 19, 2014  |  2 Comments

Two significant SaaS data loss events in short order…

On May 6th, Dedoose, a SaaS solution for qualitative research, announced a major data loss event. Today (June 19), Code Spaces announced that they are down, have lost significant amounts of client data, and may be out of business.

What should current or prospective SaaS users learn from this right away?

  1. Take responsibility for having copies of your data! 
  2. Establish regular and routine procedures for backing up your data.
  3. Ensure you can use those backups!
  4. Be prepared to accept the consequences of provider failure.
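The steps above can be sketched as a small scheduled job. This is a minimal illustration only: `export_data` is a stand-in for whatever export mechanism your provider actually offers (an API call, svnsync, an export-to-Excel download, and so on), and the paths and retention window are placeholders.

```shell
#!/bin/sh
# Hypothetical SaaS backup sketch: export the data, keep dated copies,
# verify the copy is usable, and prune copies past the retention window.
set -eu

BACKUP_DIR="${BACKUP_DIR:-$HOME/saas-backups}"
KEEP_DAYS=30

mkdir -p "$BACKUP_DIR"

# Stand-in for your provider's real export mechanism.
export_data() {
    echo "pretend-export-of-client-data"
}

# Dated copies ensure a corrupted export does not overwrite a good one.
STAMP="$(date +%Y%m%d-%H%M%S)"
export_data > "$BACKUP_DIR/export-$STAMP.dat"

# Step 3 above: make sure you can actually use the backup (here, non-empty).
[ -s "$BACKUP_DIR/export-$STAMP.dat" ] || { echo "backup failed" >&2; exit 1; }

# Prune copies older than the retention window.
find "$BACKUP_DIR" -name 'export-*.dat' -mtime +"$KEEP_DAYS" -delete

echo "backup ok: export-$STAMP.dat"
```

Run it from cron (or any scheduler) and, critically, test a restore from the exported file on a regular basis; a backup you have never restored is a hope, not a backup.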

Both of these services provided mechanisms for their customers to create their own backups – but how many users used them? In the case of Code Spaces, their primary service was providing SVN-based code repositories, and SVN tools for creating backups are commonly available. Dedoose offers an export-to-Excel capability.

Many SaaS providers make no commitment about the availability of your data – none. And even for those providers that do, and with which you have a contract, you can't get data (or a settlement) from a company that doesn't exist anymore.

It’s a simple rule: if you care about that data, make sure you have copies of it.

If a supplier can’t provide you with a means to get copies of the data, then you need to have a contingency plan for when they are no longer able or willing to provide it. The most important component of any supplier relationship is a solid exit strategy.

Note that it doesn’t appear that the root cause of either of these events was an infrastructure failure. It sounds like it was an operations failure for Dedoose and a security failure for Code Spaces (similar to Wizard Lays Waste to Acme Data Analytics with Chef Spell…).



P.S. Here is a quote from the Code Spaces announcement:

“Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of on going credibility.”


Category: Cloud Risk Management, Real World Information Security, Risk Management, Uncategorized

Attending Gartner Security & Risk Management Summit 2014 Next Week?

by Erik T. Heidt  |  June 18, 2014  |  Comments Off

I am speaking at the Gartner Security & Risk Management Summit next week, and there are a few talks that I believe will be of particular interest to folks who follow my blog.

But first…
Please be aware that I am now also using Twitter as @CyberHeidt — my schedule next week is very booked, but if I get any down time for random and opportunistic meet ups I will tweet my location!

Details on these sessions, as well as other Security Architecture track sessions, can be found here

On to the sessions I would like to highlight for you…

Boeing Case Study: How We Secure 300 Key Applications
John Martin, Information Security Program Manager, Boeing

In this session, John will:

  • Outline the steps Boeing took to implement a structured approach to addressing third-party security that holds vendor-supplied software to the same security standards as internally developed applications.
  • Discuss how the global manufacturer worked with their vendors to create a successful vendor application security testing program and how the program continues to evolve.

I had the pleasure of meeting John yesterday and getting a preview of what he and his team at Boeing have accomplished, and believe this is a must-see session. In fact, if you are planning on attending my session “How to Assess Cloud Service Provider Security”, I would strongly recommend attending one of John’s sessions, as much of this content can be leveraged to support evaluating the application security practices of SaaS providers.

Two opportunities to attend:
TUESDAY JULY 1 11:00 AM EDT (3:00 PM GMT) OR 2:00 PM EDT (6:00 PM GMT)

The Security, Privacy and Ethics of Big Data
Ramon Krikken

Security for big data is an up-and-coming concern for many organizations. These organizations don’t necessarily have a handle on “traditional” data security, so big data seems all the more troublesome. But given the fuzzy dividing line between big data and not-so-big data, are these concerns overblown? Cutting through the industry and vendor hype around security for big data, this session will dig into what really is net-new and where old concepts and technologies can be applied with success.

Key Issues:

  • What changes in security and privacy when going from big to bigger data?
  • How should organizations include security and privacy in their big data initiatives?
  • Which existing concepts and technologies translate from small to big data, and which are new?

Security in a DevOps World
Ben Tomhave

DevOps has become a hot topic over the past couple years, but many technical security professionals wonder if there is a place for them in this world. The answer is not only a resounding “yes,” but also the revelation that security and risk management practices can be vastly improved in conjunction with these changes. This talk will discuss how security and risk management can get involved with DevOps practices to achieve meaningful, mutually beneficial outcomes.

Key Issues:

  • Does security have a role in DevOps?
  • How can security and risk management practices be improved with DevOps?
  • Is DevOps a benefit or a risk to security?

Security Incident Response in the Age of the APT
Anton Chuvakin

Increased complexity and frequency of attacks, combined with reduced effectiveness of preventative controls, elevate the need for enterprise-scale security incident response. This presentation covers ways of executing incident response in the modern era of cybercrime, APT and evolving IT environments.

Key Issues:

  • How to prepare for enterprise security incident response?
  • What tools, skills and practices are needed for APT IR?
  • How to evolve security IR into “continuous IR” or hunting for incidents?

Securing Cloud Services
Erik T. Heidt

Here we will focus on understanding what risk and security controls are necessary for successful CSP deployments. Software, Platform and Infrastructure as a Service will be examined and contrasted with one another as a set of operational and security risks are explored. A range of IT risks will be examined, including information security (such as network, host and application security), operational (such as availability and quality of service), and strategic (such as data residency issues and exit strategies) risks.

Key Issues:

  • From 10,000 feet, what would a CSP risk model look like?
  • How can each of these risks be addressed?
  • Do these controls align to defend against known and active threats?

How to Assess Cloud Service Provider Security
Erik T. Heidt

Enterprises are under increasing pressure to consider the adoption of a wide range of cloud services, while also increasing their governance and oversight of existing supplier relationships. This presentation will examine general practices for supplier governance, and discuss the nuances and particulars of cloud services.

Key Issues:

  • How can I improve the efficiency and effectiveness of my IT governance of suppliers?
  • How can I model and understand the risks associated with CSP engagements?
  • What gotchas and pitfalls need to be considered and avoided?

To the Point: When Encryption Won’t Work: Implementing Practical Information Protection
Erik T. Heidt

Enterprise data breaches are occurring all too often. Many enterprises have overestimated or misunderstood the protection provided by current, or planned, encryption deployments. This presentation focuses on the attacks that are resulting in these expensive and embarrassing data disclosures, and provides prioritized actions for you to consider for addressing these threats. (Portable media and data outside the data center are not discussed.) We explore the limitations of database, bulk storage and application encryption approaches. Recommendations include a solid mix of controls — sometimes including encryption — that complement and strengthen each other.

Key Issues:

  • Is my encryption strategy poised to address likely threats?
  • Have I invested properly in database, bulk storage, or application encryption?
  • How do I leverage other preventative controls to complement or replace encryption?

Hope to see you at the summit!



Category: Uncategorized

Heartbleed Exploit in OpenSSL – How Should You Respond?

by Erik T. Heidt  |  April 9, 2014  |  2 Comments

What is the fault?

It has been discovered that a coding error in OpenSSL enables attackers to examine memory on remote servers or devices.

Specifically: “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal … memory to a connected client or server.”


For those of you without a programming background, here is a good summary of the fault’s impact in business and user terms:

“We have tested some of our own services from attacker’s perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.”


What is the Impact?

OpenSSL is a well-respected, solid implementation of a broad set of cryptographic functions, and I have sometimes referred to it as “The crypto-child of a Swiss Army knife and a chainsaw”. OpenSSL is embedded in TONS of software solutions, including the dominant web servers that run on Linux and Unix operating systems – not to mention appliances and devices (Raspberry Pi, anyone?).

The fact that the vulnerability enables the examination of memory provides an attacker with a way of snooping details and bits of traffic off your server — and that is a big deal.

Additionally, because the attack can be used to recover the private keys that the server uses, those keys will also need to be retired and rotated.

Bottom line: Heartbleed lays bare and unprotected the contents of the server or device’s memory.

This in turn enables attacks on keys, or other important things in the memory of the server.
The stolen keys (or other critical information) can then be used in further attacks.

And more bad news…

“Exploitation of this bug leaves no traces of anything abnormal happening to the logs.”

As a result, there is no data on the server that can be used to determine whether or not you have been compromised.

You will need to patch, but you cannot assume the secrecy of key material that has been on these servers. As a result, you will need to reissue keys and certificates after the patch has been applied.

Step 1: Identify the real scope…

What about services other than HTTPS, like SSH, VPNs, email servers using TLS?
Are COTS applications OK?
What about those “appliances” we have?
Oh, and what about Internet of Things (IoT) or Operational Technology devices?

All services that run on Linux- or Unix-based operating systems and utilize SSL/TLS session encryption or authentication should be evaluated. The presence of any single exploitable service on a server or device impacts all the keys processed or stored on that device.

OpenSSL may be embedded in a number of your COTS applications, appliances, or devices.

Also, many tools and applications may have used OpenSSL in the past. Anything that utilizes SSL/TLS should be checked to see if it uses one of the impacted versions of OpenSSL now, or any time back to 2012 when the fault originated.
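For reference, the affected releases are OpenSSL 1.0.1 through 1.0.1f; 1.0.1g carries the fix, and the 0.9.8 and 1.0.0 branches never contained the heartbeat fault. A quick sketch for triaging the version strings you collect (note that some distributions backport fixes without changing the version number, so treat a “vulnerable” answer as “investigate”):

```shell
# Classify an "openssl version" string against the Heartbleed-affected range.
# OpenSSL 1.0.1 through 1.0.1f are affected; 1.0.1g is fixed, and the
# 0.9.8 / 1.0.0 branches never contained the heartbeat fault.
is_heartbleed_vulnerable() {
    case "$1" in
        "OpenSSL 1.0.1 "*|"OpenSSL 1.0.1a"*|"OpenSSL 1.0.1b"*|"OpenSSL 1.0.1c"*)
            echo "vulnerable" ;;
        "OpenSSL 1.0.1d"*|"OpenSSL 1.0.1e"*|"OpenSSL 1.0.1f"*)
            echo "vulnerable" ;;
        *)
            echo "not in the affected range (verify your distro's patch level)" ;;
    esac
}

# Check the local installation, if the openssl CLI is present.
is_heartbleed_vulnerable "$(openssl version 2>/dev/null || echo unknown)"
```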

Bottom line: Linux and Unix servers are the place to start, but you will need to examine any service, device or application that uses SSL/TLS encryption or authentication.

Step 2: Apply the Patch, Update COTS, …

Update OpenSSL as quickly as possible. At this time I am not aware of available attack code, but certainly the hacker community is working on this. Organizations should feel a sense of urgency to get the update applied, but should not risk the quality of their operations by circumventing good service management and change control practices.

Update: Attack code is already available! Keep in mind that hackers will not just use the exploit to view memory, but will immediately use that information to gain full control of the box. Imagine, for example, an attacker being able to observe administrators logging into the box and capturing their passwords! So, expect this attack code to be combined into compound multi-step attacks shortly.

Keep in mind that “update OpenSSL as quickly as possible” may require updates to COTS applications, “appliance” solutions, and infrastructure components that use Linux. There may be many uses that are not obvious at first blush…

Step 3: Reissue Certificates, BUT FIRST regenerate your key pairs!

The existence of this fault on a server undermines any confidence in the confidentiality of keys that have been used on that server. Issuing a new certificate is necessary, but not sufficient. Many organizations perform “lazy” certificate rotations and do not create new keys! This is a bad practice. Because this attack enables the recovery of the private key itself, certificate rotation alone will not protect you! New private keys must be generated.
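As a sketch, generating a genuinely new key pair and a certificate signing request with the openssl command line looks like the following. The file names, 2048-bit key size, and subject fields are placeholders; follow your CA’s actual requirements:

```shell
# Generate a brand-new RSA private key -- do NOT reuse the old key file.
openssl genrsa -out server-new.key 2048

# Create a certificate signing request (CSR) for the new key.
# The subject fields here are illustrative placeholders.
openssl req -new -key server-new.key -out server-new.csr \
    -subj "/C=US/O=Example Corp/CN=www.example.com"

# Sanity-check the new key before submitting the CSR to your CA.
openssl rsa -in server-new.key -check -noout
```

Once the CA issues the replacement certificate, install it alongside the new key and revoke the old certificate.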

Thanks and attribution

This analysis is based on public information on the Heartbleed web site.

Also, thanks to Ramon Krikken for spirited discussions that contributed significant value and content to this analysis.

Cheers, Erik



Category: Cloud Risk Management, Internet of Things, Real World Information Security, Risk Management, Uncategorized

CERT IT Risk Podcast

by Erik T. Heidt  |  March 26, 2014  |  Comments Off

Julia Allen invited Ben Tomhave and me to collaborate with her on a podcast for CERT, “Comparing IT Risk Assessment and Analysis Methods” (link). (Note, there is a full transcript available for folks who prefer to read their podcasts.) The podcast includes a summary of research that Ben, Anne E. Robbins, and I recently published.


Thanks Julia for approaching us with the project! It was fun collaborating with you!


Cheers, Erik




Category: IT GRC+, Risk Management, Uncategorized

Wizard Lays Waste to Acme Data Analytics with Chef Spell…

by Erik T. Heidt  |  March 10, 2014  |  Comments Off

As reported today on the front page of Cloud Wizard’s Journal:

Easy come, easy go. The same Cloud Wizard that created Acme Data Analytics’ cloud-based data services, the differentiator that has enabled their dominance, their literal Midas Touch in every market they have entered… undid it all when she cast an angry curse, scripted in her native Chef, in response to this year’s bonus and compensation letter. Regrettably, the bonus letter contained a typo and was missing four zeros. As Acme’s “Core Knowledge” 40-petabyte market intelligence database appears to be unrecoverable, as does the 4 billion lines of analytics and machine intelligence code, the company will likely be deemed worthless and delisted from the stock exchange…

Ok, so I am trying to have a little fun here, but there is a reality that many organizations are overlooking in their infrastructure and platform cloud deployments. The cloud’s great capacity for rapid creation is also a capacity for almost instantaneous destruction.

Importantly, destruction doesn’t have to be deliberate!

How often has non-production code been accidentally run in production environments? Or test systems run against production databases? Anyone who has worked in IT for any length of time has encountered these problems. The cloud, of course, is a whole new type of creature that is subject to this same type of error, and with devastating effect, as the cloud itself is software defined. In the past, the “worst case” scenario for this kind of error was rolling back the production environment to a known good point and then painfully working forward from there. “Painfully” could mean lost customer orders, dropped transaction information, scheduling problems — but usually doesn’t threaten the end of the enterprise itself.

The cloud is different

A simple script, say one designed to wipe away a test environment, can, if run in production, wipe away production in a whole new way. In a moment, not only can the machine instances be removed, but also data, storage and backups. In a few short moments an entire infrastructure can disappear. In the physical world such destruction is almost unthinkable — what kind of disaster can make all your computers, network configurations, data and backups disappear all at once? In the physical world that would require the total destruction of multiple sites, but not so in the cloud, as your CSP’s master account may allow management (aka destruction) of resources across the globe, all from a single console and account.
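One cheap, product-independent safeguard is to make destructive scripts refuse to touch production unless separately and explicitly confirmed. A sketch, with invented names for illustration (the real deletion calls would be your CSP’s CLI or API):

```shell
# Guard clause for a hypothetical environment-teardown script.
# Requires an explicit, separate confirmation before touching production.
teardown_environment() {
    target="$1"
    if [ "$target" = "production" ] && [ "${CONFIRM_PROD_TEARDOWN:-no}" != "yes" ]; then
        echo "REFUSED: set CONFIRM_PROD_TEARDOWN=yes to tear down '$target'" >&2
        return 1
    fi
    # ... the actual deletion calls (CSP CLI or API) would go here ...
    echo "tearing down '$target'"
}

teardown_environment "test"                # proceeds
teardown_environment "production" || true  # blocked by the guard
```

Pairing a guard like this with separate credentials per environment, so the test script literally cannot see production, is stronger still.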

Privilege Management

The good news is, this is a problem with available solutions.

The management and control of privileged accounts, those accounts with significant administrative capabilities, has always been an important aspect of effective risk management. In cloud environments, due to their agile, software-defined and scriptable nature, the danger associated with these privileges is amplified, which is bad news. The good news is that many of the same techniques, technologies and products that are effective at the control of privileged accounts in on-premises deployments work for privileged accounts no matter where they reside, or how they are used.

My colleague Nick Nikols has just published a research report that touches on these and many other issues associated with managing identities and access in the Amazon web services public cloud environment:

Identity and Access Management Within Amazon Web Services’ Public Cloud
March 2014
Analyst(s): Nick Nikols
(Gartner for Technical Professionals subscribers can click here)

Last August Nick published research from a broader perspective, again exploring the problem as well as viable solutions:

Managing Privileged Access in Private Clouds and IaaS
August 2013
Analyst(s): Nick Nikols
(Gartner for Technical Professionals subscribers can click here)

Nick’s research can help you avoid both kinds of disasters, accidental and purposeful, in addition to aiding in the adoption and management of cloud services — which can have pleasant benefits!



Category: Cloud Risk Management, Real World Information Security, Risk Management

New Self-Audit Toolkit

by Erik T. Heidt  |  September 25, 2013  |  Comments Off

In “Achieving IT GRC Success”, Gartner recommended that enterprises consider six core activities in the Execution phase of the IT GRC practice.

These included:

  • Risk Assessment
  • KRI Measurement and Management
  • Ad Hoc Risk Decision Support
  • Compliance Management
  • Audit Support
  • Policy Management

There are many aspects of Audit Support that are discussed in the document, and one of them is creating a partnership between the IT GRC practice and IT operations groups to improve audit outcomes through mock-audits and pre-audit preparation. My colleague Khushbu Pratap, who is a member of Gartner for IT Leaders Governance, Risk and Compliance research group, has just published a toolkit “Toolkit: Avoid Audit Headaches by Planning an Information Security Self-Audit” to jump-start or improve self-audit capabilities. The toolkit includes useful resources such as a PowerPoint presentation on self-audit planning and a sample self-audit plan in an Excel worksheet.

Check it out.

Thanks, Erik



Category: Compliance and Exam Tips, IT GRC+, Risk Management

Effective Selection and Implementation of IT GRC Solutions

by Erik T. Heidt  |  September 20, 2013  |  Comments Off

The basic question is: how do you select tools to support your IT Governance, Risk Management and Compliance (IT GRC) needs? This has been a major focus of my research over the last 10 months. The first phase of that exploration focused on defining a guidance framework that could be used to identify the IT GRC needs of your enterprise and then structure your IT GRC practice to address those needs. That research culminated in the publication of “Achieving IT GRC Success”.

The next natural questions were:

  • What are the critical capabilities for an IT GRC solution?
  • What should my requirements be for such a solution?
  • What tools are available that possess these critical capabilities, and may address my needs?

These questions are addressed in my latest document, “Effective Selection and Implementation of IT Governance, Risk Management and Compliance Solutions”. This document should also be useful to organizations that want to optimize their utilization of an existing solution. The document examines these capabilities in terms of how they are applied to improving IT GRC processes, and not in terms of technical characteristics of the tools – more on that later.

The six areas are:

  • Asset and Entity Management
  • Risk Management, Measurement and KRIs
  • Risk Register and Exception Tracking
  • Report and Dashboard Support
  • Policy Management
  • Risk Control Self-Assessment and Measurement

The document focuses on establishing requirements and evaluation criteria around these use cases, as opposed to focusing on a long list of technical features. RFPs and product evaluations that are focused around long checklists of features often overlook more important qualitative aspects, such as how the tool will be used and maintained. It is very easy for suppliers to add capabilities through integrations with commercial (or open source) code libraries. Bolting on these capabilities does allow suppliers to check a lot of boxes and demonstrate technical capabilities, but how difficult those capabilities are for your organization to implement, maintain and manage is a completely different story.

In discussions with clients about failed implementations, technical capabilities are rarely the issue — and really only become an issue for enterprises with data volume or performance needs that push extremes.

The vast majority of failed implementations stem from:

  • A lack of well defined requirements, resulting in classic project management failures.
  • Low quality configuration or implementation work, resulting in unreliable or unmaintainable deployments.
  • Inability to maintain a positive relationship between the supplier and client.

There really isn’t anything unique to this list. These are the same common failure modes that plague all types of complex enterprise software deployments. As a result, this research focused on establishing baseline and stretch requirements for each of these core capabilities that enterprises can use to optimize their selection and deployment plans.

Additionally, 17 pages (roughly 40%) of the document focus on an analysis of the products in this market in terms of these capabilities, and should enable organizations to identify suitable products for inclusion in product evaluations.

Thanks, Erik


Category: IT GRC+

Relativistic Control Theory

by Erik T. Heidt  |  September 19, 2013  |  Comments Off

A few weeks ago I had the pleasure of attending a roundtable of IT Risk Managers. Most of the participants were folks involved in day-to-day risk and governance in financial institutions. During one of the presentations there was an exchange between one of the speakers and me that helped me understand that “Relativistic Control Theory” isn’t well understood – something that has been a theme for me over the past few months.

During a presentation about risk assessment, the speaker wanted to establish as an assumption that there are always controls present – an assumption I take issue with. The exchange went something like this:

Speaker: “In my earlier career – as an auditor – I never encountered an environment without controls.”

Erik: Raised hand! “Oh, I am sure you did.”

Speaker: “No, there were always some controls present.”

Erik: “I think what you mean is that there was always something on the checklist that was present, but controls are relative to specific risks or threats. A locked door may protect against theft, but has no value in protecting a computer system from a logical attack over a network.”

Speaker: “But the locked door is a control!”

Erik: “But it has no impact on protecting the system from network-based threats. If the system were attached to the internet, the locked door has no impact on protecting it from being exploited by a network-based attacker. Furthermore, the locked door has no bearing on a number of other risks, such as the stability or availability of the system.”

At this point, it occurred to me that continuing this line of dialog was going to be (more) disruptive, and I dropped it. The conversation continued to disturb me, because I knew I had heard the platitude “There are always controls present” used time and again before – and that this was a symptom of a larger problem. That problem being a lack of understanding that controls are designed to address risks – there is an essential relationship between a particular control and a particular risk. Furthermore, there is a hypothesis that the presence of a particular control reduces the risk (either through a preventative, detective or corrective mechanism).

My colleague Ben Tomhave touched on this in his recent blog posting “Understanding “Why” Aids Policy Conformance”. There he discussed how linking policy statements with the “why” has a profound impact on awareness, acceptance and compliance. The point I want to make here is that for every control there is also a “why”, and it is critical to understand it in order to make appropriate decisions about control effectiveness, as well as the design of controls and compensating controls.

I think that this challenge may stem from the culture of many of the large firms concerned with audit as a primary business. The dominant model for such firms is to hire large numbers of freshly minted auditors to perform fieldwork. In the “up or out” performance management culture of these organizations, many of these individuals do not progress to the point of designing the audit processes, but only work with the fieldwork processes designed by others. This is the origin of the dreaded “audit checklist”.

To compound the issue, many regulatory or commercial compliance programs are very checklist heavy. Examiners are often testing for the presence of a control without understanding under what circumstances the control is meaningful, or the limits of its effectiveness. The use of encryption as a Silver Bullet is a great example of this. Often the presence of encryption is sufficient to reduce the scope of, or address large portions of, an examination without the examiner or auditor ever understanding what risks or threat vectors are and (more importantly) are not addressed.

Discussions of these relationships, between controls and the risks or threats they do and do not affect, are common on client calls, and at some point I coined the phrase “Relativistic Control Theory” (or “Theory of Relativity of Controls”) – only to provide people with an anchor to the concept.

So, the thing I wish I had been articulate enough to close that exchange with is: “Determining if controls are present, or if they are appropriate, must always begin with an understanding of the control objective. That is, to begin with the risks and threats and then to evaluate effectiveness. This is the only way you can find out if meaningful controls are present.”

Thanks, Erik



Category: IT GRC+, Risk Management

Raspberry Pi & Securing the DIY Internet of Things

by Erik T. Heidt  |  September 3, 2013  |  Comments Off

(Note, if you know what a Pi is and just want to jump-start the security posture of your device, skip to How do I secure this thing?)

What is a Raspberry Pi and who are these Makers?

You have probably heard a number of organizations discussing the “internet of things” or “industrial internet”, an emerging situation where almost every device is accessible via the internet. Most people picture this being driven by large enterprises adding features that leverage the power of the internet’s instant communications capability into consumer goods (like cars and refrigerators) and industrial products (like power meters and buildings). But there is also a powerful community of “Makers” who are creating their own Do It Yourself (DIY) Internet of Things.

In the 70’s and early 80’s the term Maker didn’t exist. The folks you saw purchasing soldering irons and electronics components were Nerds back then (or Hackers, in the pre-cybercrime meaning of the word). They read BYTE magazine for Steve Ciarcia’s “Circuit Cellar” column and opened Scientific American directly to the “Amateur Scientist” column.

The appearance, hair styles and labels for these folks have changed a bit, but not their passions. And the good news for the tinkerers, amateur scientists, innovators and home-brew engineers of 2013 is that advances in technology and manufacturing are providing access to inexpensive and powerful platforms.

Enter the Raspberry Pi Model B. My first computer had 48K of RAM, a 4 MHz (after modification) CPU, a 360K floppy disk, and a cost of – well, expensive. A $35 Raspberry Pi Model B has 512 MB of RAM, a 700 MHz ARM CPU (similar to an iPhone 3 or a Pentium II), USB, on-board Ethernet, and uses an SD card (gigabytes) for storage. It has sufficient capabilities that it really can be used as a proper computer, and many folks are making DIY set-top media centers, game consoles and laptops with them.

The really interesting feature of the Pi is that it is set up to communicate with the outside world. Its General Purpose Input/Output (GPIO) pins enable it to be interfaced with all manner of sensors, motors, displays or anything electronic you can think of. BTW, if you can’t imagine the fun all this can provide, YouTube is packed with videos of Makers showing off their Raspberry Pi projects. Want to build an internet-connected clock, thermostat, robot, weather station, email alarm, or … You get the idea, the Pi is for you.

Last week, all of this culminated in the Raspberry PI winning a 2013 INDEX Award.

So, when mine arrived, you can imagine that with this world of interesting stuff, my first concern was:

How do I secure this thing?

Most Raspberry PIs run a version of the Linux operating system. This is good news all around, as Linux is great fun to tinker with itself, brings loads of powerful software to the platform, and can be configured to be very secure.

Note: The recommendations here will be sufficient for many Makers and enthusiasts who want to build things with the Raspberry PI and are targeted at that community. If you are trying to develop a commercial product or have aspirations for the PI beyond “enthusiast” use, then additional security measures should be examined. The good news is that there are lots of resources on securing Linux for all manner of deployments, so leverage them!

If your solution is not properly secured, it will not be reliable. Few things will be as frustrating as having to re-image a device that has been pwned.

Once connected to the internet, the Raspberry PI is going to be subject to all manner of random attacks. As a result, we should take a few moments to apply common sense controls against the biggest threats. Generally attackers gain access through:

  • Unchanged default passwords
  • Known vulnerabilities in the OS or software that have not been patched
  • Services needlessly exposed to the internet or public networks

These are the dominant security issues for hosts from your PI project all the way up to the largest breaches you have read about affecting banks and government agencies. So let’s dig in…

Step 1: Change default account passwords.

The default distribution comes with an account called “pi” and a password of “raspberry”. This needs to be changed. To do this, first login using the “pi” account then enter the command “passwd”. You may be prompted to enter the current password, and then will enter your new password twice. “password updated successfully” equals success!

$ passwd
Changing password for pi.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Additional commands and concepts you may want to research include sudo, adduser and passwd. Try the “man” command to learn how to use Linux’s built-in documentation (i.e. man man, man sudo, man passwd, and so on).
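For example, here is a minimal sketch of creating a second account and giving it sudo rights; the account name “erik” is just an illustration, so substitute your own:

$ sudo adduser erik        (creates the account and prompts for a password)
$ sudo adduser erik sudo   (adds the account to the “sudo” group)
$ su - erik                (switches to the new account so you can test it)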

Step 2: Update it and keep it up to date.

The operating system and many of the applications on the PI can be managed through the Advanced Package Tool (APT):

$ sudo apt-get update

This command updates the information in APT’s libraries about available software and versions. In order to actually install the updates use:

$ sudo apt-get upgrade

The update command should always be run before using the other commands in the ‘apt’ family. Note that the upgrade process may ask you a few questions; I have always accepted the defaults when prompted and have not had any problems on the PI or other platforms.

The important thing is to keep the host up to date regularly. I have found it is best to automate the updates. I would suggest that you set up the “unattended-upgrades” package and do a weekly or daily update. Setting the package up requires editing a configuration file, and as I am trying to keep these recommendations accessible to folks who are new to Linux as well as the PI, I will skip the details here.
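For the curious, here is a minimal sketch. The package name and file path below reflect the Debian-based distribution the PI ships with, so verify them on your system:

$ sudo apt-get install unattended-upgrades
$ sudo dpkg-reconfigure --priority=low unattended-upgrades

Answering “Yes” at the prompt should create /etc/apt/apt.conf.d/20auto-upgrades, enabling daily package list updates and unattended upgrades:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";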

Note: It is also a good idea to update the firmware on the PI itself from time to time. Firmware updates do not warrant automation, but should probably be performed as a component of any major platform changes you install, or when trying to debug problems. Here is the command:

$ sudo rpi-update

Step 3: Block suspicious SSH connections.

Odds are you are going to want to use SSH in order to login to and play with your PI, especially if you want to access your PI from the internet. There are two things that are important to a solid SSH implementation: (1) ensuring SSH is configured properly and (2) using a password-guess countermeasure.

The security minded can find a number of great resources on configuring SSH, but SSH comes pretty well configured in the Raspberry PI’s standard Linux distribution – which is great news.

Strong passwords are great, but they work most effectively when they are paired with anti-guessing countermeasures. My favorite on Linux is fail2ban, which works by temporarily blocking the IP address of any host from which several bad login or password attempts are made in a short period of time. The default is three bad password attempts, which results in a 10 minute block on the IP address. This is a powerful tool, capable of providing anti-guessing support for a number of common Linux services and applications. Trivial to install:

$ sudo apt-get install fail2ban
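If you want to adjust the defaults, fail2ban reads local overrides from a configuration file. A minimal sketch, using the usual fail2ban file path and option names (check the documentation for your installed version):

Create /etc/fail2ban/jail.local containing:

[DEFAULT]
maxretry = 3
bantime = 600

Then restart the service to pick up the change:

$ sudo service fail2ban restart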

If you are going to be using SSH a lot, then you may want to consider setting up certificate based (vs. password) login. A quick search of “ssh certificate login” will provide a number of resources with instructions for setting up this kind of access.
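To get started, a minimal sketch of generating a key pair follows. The key file name, and the “pi@raspberrypi.local” account and hostname in the comments, are assumptions, so substitute your own values:

```shell
# Generate a 4096-bit RSA key pair in the current directory.
# (No passphrase here for brevity; setting one is more secure.)
ssh-keygen -t rsa -b 4096 -f ./pi_key -N "" -q

# Copy the public key to the PI, then log in with it (run these by hand;
# "pi@raspberrypi.local" is an assumed account and hostname):
#   ssh-copy-id -i ./pi_key.pub pi@raspberrypi.local
#   ssh -i ./pi_key pi@raspberrypi.local
```

Once certificate login is working, you can harden things further by disabling password login entirely in the SSH server configuration.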

Step 4: Controlling publicly available services.

Most home networks are behind a NAT-based router and, as a result, if you want a service to be available from the internet you have to deliberately configure both the router and the host. The host generally has to be set up with a static IP address, and the router has to know what ports/services to forward to that host. Most consumer routers can do this, but no two have the same configuration instructions, so you will have to research your router. From a security perspective, the good news is that you will likely have to take some deliberate action in order to put a service or application (like SSH or the Apache HTTP server) on the internet from your PI. As a result, you should have a pretty good idea about what services you are publishing on the internet and can quickly research the proper steps for securing them.

If you are going to run your PI using a router’s “DMZ mode”, where all inbound internet traffic is forwarded to the PI, or will be using the PI on an untrusted network (such as at a public event), then you may want to consider restricting the network-accessible services with a firewall. The good news is that Linux comes with a great firewall; the bad news is that the Linux firewall isn’t the simplest to manage. This is often the tradeoff between power or capability and ease of use. That said, there are a number of utilities that seek to simplify the Linux firewall, and there are also some good intrusion detection and prevention tools that are not too complex. If you are interested in taking the security of your PI to the next level, a few tools I recommend looking at are iptables, ufw (an iptables management tool), psad, and fwsnort.
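As an illustration, a default-deny policy that only permits SSH might look like this with ufw; this is a sketch, so confirm the commands against the ufw documentation before relying on them:

$ sudo apt-get install ufw
$ sudo ufw default deny incoming
$ sudo ufw allow ssh
$ sudo ufw enable

The second command drops all inbound traffic by default, the third opens port 22 for SSH, and the last activates the firewall (including at boot).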


Ongoing innovation by individuals continues to be an exciting source of invention and new ideas. The emergence of the Raspberry PI and other cost-effective, powerful platforms for innovators is very exciting. I hope this content is helpful to folks who want to experiment with the PI while avoiding common security pitfalls.

Thanks, Erik

Category: Internet of Things, Real World Information Security