My colleagues in IT Security have had a busy weekend. Since its discovery on Friday afternoon, the WannaCry ransomware attack has continued to spread, impacting over 10,000 organizations and 200,000 individuals in over 150 countries, according to European authorities. And while measures have been taken to slow the spread of the malware, new variants have begun to surface.
RIGHT NOW you must apply the MS17-010 patch. If it is not installed and TCP port 445 is reachable, your system is exposed to this ransomware.
Go ahead. I’ll wait.
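While you wait for patching to roll out, you can at least verify your exposure. Here is a minimal Python sketch, using only the standard library, that checks whether TCP 445 is reachable on a given host; the address shown is a placeholder, so substitute your own, and only probe machines you are authorised to test:

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address (TEST-NET); use your own host
    if smb_port_open(host, timeout=1.0):
        print(f"{host}: TCP 445 is OPEN - patch and firewall immediately")
    else:
        print(f"{host}: TCP 445 appears closed or filtered")
```

A closed or filtered port is not proof of safety, of course; it only tells you the most obvious door is shut from where you are standing.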
As luck would have it, I have a hospital appointment today. I was tremendously pleased to see the following tweet from my local hospital:
Services are running normally at our hospitals today. Systems have not been affected by the #nhscyberattack and we remain vigilant.
— FrimleyPark Hospital (@FrimleyPark) May 13, 2017
So, what can we learn from this attack, and what three things can we do to guard against future attacks of this nature?
- Stop blaming. While it’s tempting to point fingers at others, one of the key stages of incident response is to focus on root causes. Hindsight is always 20/20, and picking apart why systems were never migrated does not dig you and your enterprise out of the mire right now. Windows XP, a system which has been hit hard by WannaCry, can be embedded into key systems as part of the control package, where the firmware may be neither accessible nor under your control. Where you have embedded systems (for example POS terminals, medical imaging equipment, telecommunications, and even industrial output systems such as smart card personalisation and document production), make sure that your vendor can provide an upgrade path as a critical priority. This applies even if your embedded systems run other operating systems such as Linux or other Unix variants: it is safe to assume that all complex software is vulnerable to malware.
- Isolate vulnerable systems. There will be systems which, although not yet affected by the malware, are still vulnerable (see above). It’s important to realise that vulnerable systems are often the ones on which we rely the most, so a useful temporary fix is to limit their network connectivity. Ask which services you can turn off, especially exposed services like network filesharing, which can be disabled for the duration of the incident. In a crisis of this nature it is better to err on the side of caution: delayed business processes are preferable to total disruption and unrecoverable data loss!
- Stay frosty. Gartner’s adaptive security architecture emphasises the need for detection. Make sure your malware detection is updated. Make sure your intrusion detection systems are operating and examining traffic. Ensure that your UEBA (user and entity behaviour analytics), NTA (network traffic analysis), and SIEM systems are flagging up unusual behaviour, that alerts are being triaged, and that incident handlers are responsive. Bear in mind that additional resources may be required to handle the volume of incidents, liaise with law enforcement, and field questions from the public (and possibly the media). Keep your technical resources focused on resolving key issues, and let someone else handle external questions.
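To prioritise the isolation step above, it helps to know which machines on your network still expose SMB. The following Python sketch sweeps a small, hypothetical inventory list in parallel and returns the hosts still answering on TCP 445; the addresses are placeholders, and you should only scan networks you own or operate:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def tcp_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def hosts_to_isolate(hosts, port=445):
    """Return the subset of hosts that still expose the given TCP port."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        flags = list(pool.map(lambda h: tcp_open(h, port), hosts))
    return [h for h, is_open in zip(hosts, flags) if is_open]

# Hypothetical internal inventory; replace with your own asset list.
inventory = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
print(hosts_to_isolate(inventory))
```

The resulting list is your isolation worklist: firewall those hosts off, disable the service, or pull them from the network until they are patched.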
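As a crude illustration of the kind of behavioural flagging a SIEM or NTA tool performs, here is a toy Python sketch that flags hosts making an unusually high number of outbound port-445 connections, a tell-tale of WannaCry’s worm-like scanning. The host names, event format, and threshold are all illustrative, not taken from any real product:

```python
from collections import Counter

def flag_anomalies(events, threshold=100):
    """events: iterable of (source_host, dest_port) tuples.
    Return source hosts whose port-445 connection count exceeds threshold,
    a crude indicator of worm-like SMB scanning."""
    counts = Counter(src for src, port in events if port == 445)
    return sorted(src for src, n in counts.items() if n > threshold)

# Synthetic example: ws-07 fans out to port 445 far more than its peers.
events = ([("ws-07", 445)] * 250
          + [("ws-01", 445)] * 3
          + [("ws-02", 80)] * 40)
print(flag_anomalies(events, threshold=100))  # → ['ws-07']
```

A real deployment would baseline per-host behaviour over time rather than use a fixed threshold, but the principle is the same: the worm’s spread pattern is loud if you are listening.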
After the crisis, there will be time for lessons learned. There will be time to revisit vulnerability management (and you must!). There will be time to refocus not just on protective measures but also on key detection capabilities such as UEBA, NTA, and advanced SIEM. There will be time for additional threat modelling, and for careful consideration of which risks you can afford to tolerate: fewer than you think. Cloud security may come back into the risk management discussion, and that’s also useful.
But right now, you are in the swamp, and the alligators are still lurking beneath the surface. Patch, isolate, and stay vigilant.