Following yesterday’s news about the LogJam patch making more than 20,000 websites unreachable, I asked my colleagues at Fidelis for their thoughts on the news and what it means for the industry. Below are some of their thoughts:
Look Beyond the Headlines
The impact of LogJam goes far beyond what has been reported thus far. The technical details are available here, but in general there are a few key takeaways from the LogJam vulnerability:
- It stands to reason that standards and code where significant bugs are revealed will be subjected to greater scrutiny, likely yielding yet more findings. This is clearly playing out with the TLS standard and various implementations. While there is some level of disruption in the short term, this only makes the Internet stronger.
- Similarly, another non-intuitive benefit is that there’s more coordinated and deliberate response to such findings, clearly involving the original researchers and other stakeholders in the process.
- Enterprises should proactively monitor for the use of weak keys in their critical communications. Even in the absence of these known vulnerabilities, communications protected by weak keys will always be susceptible to man-in-the-middle attacks, and enterprises should ensure that critical business services and third-party connections do not use weak keys.
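As a concrete illustration of the monitoring point above, here is a minimal sketch of a key-strength policy check an enterprise might run over an asset inventory. The inventory rows, the `is_weak` helper, and the 2048-bit thresholds are assumptions for illustration, not a prescribed tool; adapt the thresholds to your own policy.

```python
# Minimal sketch of a key-strength policy check, assuming an inventory of
# (service, key_type, key_bits) tuples gathered from scans or config review.
# Thresholds here reflect common guidance at the time of LogJam:
# DHE/RSA/DSA keys below 2048 bits are treated as weak.

WEAK_THRESHOLDS = {"dhe": 2048, "rsa": 2048, "dsa": 2048}

def is_weak(key_type: str, key_bits: int) -> bool:
    """Return True if the key falls below the policy threshold."""
    threshold = WEAK_THRESHOLDS.get(key_type.lower())
    if threshold is None:
        return False  # unknown key type: flag for manual review in practice
    return key_bits < threshold

# Hypothetical inventory rows: (service name, key type, key size in bits)
inventory = [
    ("partner-vpn", "dhe", 1024),   # 1024-bit DH group -> weak
    ("web-frontend", "rsa", 2048),  # meets policy
    ("legacy-ftp", "rsa", 512),     # export-grade RSA -> weak
]

flagged = [svc for svc, ktype, bits in inventory if is_weak(ktype, bits)]
print(flagged)  # -> ['partner-vpn', 'legacy-ftp']
```

Running a check like this on a schedule, rather than once, is what turns "ensure weak keys are not used" from a policy statement into something enforceable.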
Organizations are only as Strong as their Weakest Link
LogJam got me thinking about a breach case we worked last year, around the time Heartbleed emerged. The organization had a seemingly robust patch management program: critical vulnerabilities were patched within hours, and weaknesses found during assessments were addressed in short-term roadmaps. However, its test and development networks contained systems vulnerable to weaknesses exactly like those outlined in the LogJam news, and those systems were not centrally managed.
There were two flaws in this customer's engineering:
- Their test and development networks had connections through the main ingress/egress firewalls, but were not subject to the same level of patch management as their production counterparts.
- The test and development systems were administered with credentials that had administrative control over some of the production systems (i.e., shared admin credentials).
Ultimately, this created a fertile situation for attackers to find a weak ingress point through the vulnerable systems and quickly escalate their privileges to gain access to the production systems.
This is like putting guards and cameras on the front door of your house to protect your possessions while your back door has no locks and nobody watching it. You've wasted your entire investment in the security system because of a single weak link.
The solution is culture: understanding that cybersecurity is a function of risk, not simply a department within IT. Every employee in an organization, from production operators to developers to test engineers, needs a vested interest in understanding the security posture and vulnerabilities of their assets. Only when this becomes ingrained in an organization's culture can the company begin to state, with evidence, that flaws, exploits, or vulnerabilities will not affect it.
As for myself, my one comment is that this situation again reinforces the importance of robust network monitoring and advanced threat defense across the full threat lifecycle. It once again demonstrates that perfect perimeter defense is not possible, and the industry needs to spend more time on robust network monitoring in order to detect attackers when they exploit vulnerabilities like LogJam.
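To make the network-monitoring point concrete, here is a minimal sketch of a rule that flags TLS sessions negotiating export-grade cipher suites, the downgrade at the heart of LogJam. The CSV log format (`timestamp,client_ip,server,cipher_suite`) is a hypothetical stand-in for whatever metadata your sensor actually emits; the cipher-suite names are standard IANA identifiers.

```python
# Sketch of a monitoring rule over TLS session metadata: alert on any
# session that negotiated an export-grade or NULL cipher suite.
# The log format below is hypothetical; adapt parsing to your sensor.
import csv
import io

WEAK_MARKERS = ("EXPORT", "NULL")  # substrings indicating a weak suite

def weak_sessions(log_text: str):
    """Return (client_ip, server, cipher_suite) for each weak session."""
    alerts = []
    for row in csv.DictReader(io.StringIO(log_text)):
        suite = row["cipher_suite"].upper()
        if any(marker in suite for marker in WEAK_MARKERS):
            alerts.append((row["client_ip"], row["server"],
                           row["cipher_suite"]))
    return alerts

sample = """timestamp,client_ip,server,cipher_suite
2015-05-20T10:01:00,10.0.0.5,mail.example.com,TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
2015-05-20T10:02:00,10.0.0.9,www.example.com,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
"""
print(weak_sessions(sample))
```

A rule this simple will not catch every downgrade (for example, a 512-bit group negotiated under a non-export suite name), but it illustrates the principle: when perimeter defense fails, the exploitation still leaves observable traces on the wire, and monitoring for those traces is where detection happens.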