By Tony Campbell, Editor, ChiefIT.me Magazine
With Oracle releasing 270 critical patches, many of which need to be applied to its E-Business Suite (EBS), it’s a good time to review your operational security posture and check that your cyber hygiene is as good as it should be.
In case you weren’t aware, the Australian Signals Directorate (ASD) has identified four mitigation strategies that introduce basic cyber hygiene into IT operations and will thwart 85% of the intrusion techniques that the Australian Cyber Security Centre responds to. Known as the “Top 4 Strategies to Mitigate Targeted Cyber Intrusions” (https://www.asd.gov.au/infosec/top-mitigations/top-4-strategies-explained.htm), these are basic operational security countermeasures; if you insist that your IT team gets them working properly within your IT environment, you’ll likely not hit the headlines as the next big breach story. The top four strategies are:
- Application whitelisting
- Patching applications
- Patching operating systems
- Minimising local/administrative privileges
Two of these mitigation strategies relate to patching and, interestingly, application patching ranks higher than operating system patching, so this is something that absolutely needs to be done well if you are to remain safe. Oracle’s latest set of patches contains 121 critical security patches for E-Business Suite alone, so you can see why ASD ranks application patching so high, especially when 97% of the EBS vulnerabilities are remotely exploitable without any authentication.
A chain is only as strong as its weakest link. This analogy explains why vulnerable, unpatched applications are the chinks in your security armour that an attacker will target. Now that Oracle has released these patches, most modern vulnerability scanners will be able to detect whether you’ve applied them to EBS. This kind of vulnerability audit is one of the first things an attacker does when trying to figure out how to compromise your systems.

So, patches should be applied as soon as the vendor releases them, right? Well, kind of. There are reasons to be cautious, some real and some myth. It’s true that patching can occasionally go catastrophically wrong. There have been patches that caused availability issues with operational software, where the patch breaks the application. These incidents are costly to remediate and the service provider almost certainly gets the blame for the loss of productivity. This makes IT managers err on the side of caution, especially since service level agreements are often focused on system and application availability. Patch testing is the obvious answer – put all patches through the change management process, rolling them out in the normal change windows, with appropriate testing and pilot deployment prior to enterprise roll-out. This approach is fine, until you consider critical patches in the context of normal changes and the timeframes for packaging, testing and piloting complex updates, such as the 121 patches required for EBS. Every day that goes by without those vulnerabilities being plugged is another day when you could be attacked. But if you apply 121 untested patches to the EBS servers and something breaks, you could be liable for the business’s loss of revenue.
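That gap check, whether a scanner or an attacker runs it, is conceptually simple: compare the patches actually applied to a server against those listed in the advisory. The following is a minimal, purely illustrative sketch in Python; the file names and patch identifiers are hypothetical, not Oracle’s actual inventory format or any scanner’s output.

```python
# Illustrative only: hypothetical patch identifiers and file names,
# not Oracle's real patch inventory format or any scanner's output.

def load_patch_ids(path):
    """Read one patch identifier per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")}

def missing_patches(required_path, applied_path):
    """Return the advisory patches that have not yet been applied."""
    required = load_patch_ids(required_path)   # e.g. IDs listed in the latest advisory
    applied = load_patch_ids(applied_path)     # e.g. IDs exported from your patch inventory
    return sorted(required - applied)

if __name__ == "__main__":
    gaps = missing_patches("advisory_patches.txt", "applied_patches.txt")
    if gaps:
        print(f"{len(gaps)} advisory patches missing: {', '.join(gaps)}")
    else:
        print("No gaps against the advisory; this server looks up to date.")
```

The uncomfortable part is that an attacker can run an equivalent check from the outside far faster than most change processes can close the gap.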
There must be a better solution. Most businesses have backup and recovery plans that allow them to roll back to a last known good state. Instead of running prolonged pre-production tests, roll the patches out to a set of pilot servers, with an agreement with the business that you may need to roll them back if there is a problem. Base your decisions on a risk assessment and take your executives on the journey with you – to patch or not to patch should not be an IT-only decision. In some cases, of course, you won’t be able to patch immediately, because the patch might break something that your bespoke application needs in order to keep running. If this is the case, look at compensating controls, such as firewalls, load balancers, protective monitoring and application whitelisting, as ways of mitigating the risk of not patching. However, even if you’ve decided not to patch for a good reason and have introduced those compensating controls, you need that risk to be recorded on your corporate risk register and re-evaluated on a regular basis. Just because the risk is controlled today doesn’t mean it will remain acceptable in the future.
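To make the pilot-first approach concrete, here is a rough, purely illustrative sketch; the server names and the apply, health-check and rollback helpers are hypothetical stand-ins for whatever deployment and backup tooling your environment actually uses.

```python
# Illustrative pilot rollout with rollback. The helpers below are stubs that
# stand in for your real patch deployment, smoke-test and restore tooling.

PILOT_SERVERS = ["ebs-pilot-01", "ebs-pilot-02"]                 # hypothetical hosts
PRODUCTION_SERVERS = ["ebs-prod-01", "ebs-prod-02", "ebs-prod-03"]

def apply_patches(server, patch_bundle):
    print(f"Applying {patch_bundle} to {server}")                # stub: call your patch tooling

def health_check(server):
    print(f"Running post-patch checks on {server}")              # stub: smoke tests, key transactions
    return True                                                  # assume success for the illustration

def roll_back(server):
    print(f"Rolling {server} back to last known good state")     # stub: restore from backup/snapshot

def pilot_then_rollout(patch_bundle):
    # Patch the pilot group first, under a rollback window agreed with the business.
    for server in PILOT_SERVERS:
        apply_patches(server, patch_bundle)
        if not health_check(server):
            roll_back(server)
            return False   # stop here: record the risk and consider compensating controls
    # Pilot held up, so proceed to the wider estate in the normal change window.
    for server in PRODUCTION_SERVERS:
        apply_patches(server, patch_bundle)
    return True

if __name__ == "__main__":
    pilot_then_rollout("latest critical patch bundle")
```

The code itself is not the point; what matters is that the decision points (pilot first, verify, roll back if needed, record the residual risk) are agreed with the business before anything touches production.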
Someone once said that skipping patches is like skipping car maintenance: the car will continue to run, but driving it becomes increasingly dangerous the longer you leave it. It’s time to view patching as a critical security function and put it at the top of the list of things that need to be done properly.