ACG Research

We focus on the Why before the What

Wednesday, September 7, 2011

Could the Linux kernel incident happen to your company?

In late August, the organization that maintains the Linux kernel announced that it had been compromised. At the time of writing, the following is known:

  • The attacker gained access to a user account via a developer’s compromised system.
  • An escalation in rights from this user account to a root account has occurred.
  • The vector may have been incidentally closed in the recent release candidate kernel (3.1-RC2).
Still unknown:

  • The exact attack vector: how the attacker escalated their rights from the developer's user account to a system administration account.
  • Whether (and how) changes in the release candidate kernel (3.1-RC2) have resolved the vulnerability.
With so much still unknown, a deeper analysis would be futile; however, this event has raised three points in my mind regarding security:

1. Unknown vulnerabilities
A recent article by Neils Johnson, security analyst, ACG, stated that there were 6253 potentially exploitable vulnerabilities (PEVs) discovered in 2010; that's 17 a day.

With such an overwhelming number of PEVs presenting themselves, I long ago stopped paying attention. It's even worse than you think: these vulnerabilities are never discovered the instant they are created. They lie dormant in the software for extended periods of time. It's not unheard of for newly discovered flaws to have been introduced nearly a decade earlier. You want some fear, uncertainty, and doubt? I bet that the system you are using right now has dozens, if not hundreds, of vulnerabilities that have not been published and may be altogether unknown at this time. What's more, there are likely still other flaws that are not currently exploitable, but some future change to your system will make them so.

You can't anticipate, much less fix, them all. Don't even bother trying. Try not to think about it at all; it'll only depress, frustrate, and anger you.

What you can do, however, is think about software differently. Don't count its known security flaws; just accept the inevitability of failure. Set security concerns aside for a moment and ask: Does this tool best support your core business? Do you like the software's features? Does it make your staff's lives easier? Use productivity as the deciding factor in software selection, not a fear of PEVs.

Obviously, we can't keep security off the table, as much as we might like to. So start from the assumption that the application will fail and work forward from there: Can you live with the failure? Can you minimize and isolate it? Can you detect it? At the end of the day, this is the best you can really hope for; anything beyond it is song and dance.
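To make the "can you detect it?" question concrete, here is a minimal sketch of a baseline-and-compare integrity check. It is an illustration only: the directory, file name, and contents are fabricated for the example and have nothing to do with the incident itself.

```shell
# Hypothetical sketch: a minimal baseline-and-compare integrity check.
# All paths and file contents below are illustrative, not from the incident.
mkdir -p demo/etc
echo "PermitRootLogin no" > demo/etc/sshd_config

# 1. Record a known-good baseline of checksums while you still trust the system.
( cd demo && find etc -type f -print0 | xargs -0 sha256sum ) > baseline.sha256

# 2. Later, re-verify; any drift is a detection signal worth investigating.
( cd demo && sha256sum -c ../baseline.sha256 )
```

Real tools (tripwire, AIDE, and friends) do this far more robustly, but the principle is the same: you cannot prevent every failure, but you can notice when one has occurred.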

2. Ideal considerations
Is your system configured or maintained as well as theirs? They literally authored Linux from the ground up. You’d be hard-pressed to find a more expert, not to mention passionate, staff. Such a team would typically only be found in the wildest of CISO dreams, yet here we are discussing their failure.

Sometimes things just break. Does it matter? I am always amazed by the number of companies that have not undertaken even the most rudimentary risk assessment, much less what I’d classify as a good one. You need to quantify your situation. In the face of the unknown, especially when feeling violated or hopeless, it's natural to assume the worst, get overwhelmed, and do nothing.

Focus on what you can fix and start with small changes that have the greatest impact. Do you have a sane password policy? Who controls what can run on the desktops? Has anyone from your security staff actually spoken to people involved with core business about what they do and how they do it?
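As one example of a small, high-impact check: password aging is a classic place where policy and reality drift apart. The sketch below runs against a fabricated shadow-style sample file so it needs no root access; on a real system you would point it at /etc/shadow, where field 5 holds the maximum password age in days.

```shell
# Hypothetical sketch: flag accounts whose password effectively never expires.
# The sample file below is made up; on a real system, audit /etc/shadow.
cat > shadow.sample <<'EOF'
root:$6$examplehash:15000:0:99999:7:::
alice:$6$examplehash:15000:0:90:7:::
daemon:*:15000:0:99999:7:::
EOF

# Skip locked/system accounts (password field starts with * or !), then
# report any account with no max age or an effectively infinite one (99999).
awk -F: '($2 !~ /^[!*]/) && ($5 == "" || $5+0 >= 99999) { print $1 }' shadow.sample
```

Here only root is flagged: alice has a 90-day maximum, and daemon has no usable password at all. A five-minute audit like this is exactly the kind of small change with outsized impact.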

3. Unforeseen consequences of modifications
The newest release candidate kernel (3.1-RC2) appears to be immune to this attack without an explicit security patch, which is particularly interesting to me. The maintainers state that they “don't know if this is intentional or a side effect of another bug fix or change.”

The straightforward implication is that the software is naturally improving in overall quality, and a byproduct of this is the elimination of security-relevant flaws. However, unless this flaw was present in the original Linux release 20 years ago, it was added at a later point as a side effect of a prior fix or upgrade.

Smart organizations know that every attempt to fix a known problem may surreptitiously create entirely new problems (as well as fix other unknown ones). Rapid, automatic patching is, unfortunately, somewhat of a crapshoot. Instead, structure your environment so that flaws are survivable, contained, and isolated as they arise, and use the time that buys you to evaluate each patch. Your own modifications may also break things, but at least you know exactly what was changed.
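The "evaluate the patch first" habit can be as simple as rehearsing it somewhere disposable. A hedged sketch, using git as the staging mechanism; the file name, the patch, and the branch name are all invented for the example.

```shell
# Hypothetical workflow: rehearse a vendor patch on a scratch branch before
# it goes anywhere near production. File names and the patch are made up.
git init -q patch-demo && cd patch-demo
echo "listen_port = 80" > app.conf
git add app.conf
git -c user.email=demo@example.com -c user.name=demo commit -qm "baseline"

cat > vendor.patch <<'EOF'
--- a/app.conf
+++ b/app.conf
@@ -1 +1 @@
-listen_port = 80
+listen_port = 8080
EOF

# Dry-run first: confirm the patch applies cleanly without changing anything.
git apply --check vendor.patch

# Then apply it on a throwaway branch; the committed baseline stays untouched
# until testing says the change is safe to promote.
git checkout -qb try-patch
git apply vendor.patch
```

The point is not git specifically; it is that you always know exactly what changed, and you can throw the change away without ceremony if it misbehaves.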

Essentially, everything you have is vulnerable. A perfect effort isn't going to prevent a compromise, and any response may make you worse off. This may seem like doom and gloom, if not outright hopelessness. My advice: don't panic! Ignore the incident. Also ignore the next time you hear about attackers gaining access to Microsoft, Oracle, Google, or your bank.

It doesn't impact you. To drop your OS/bank/cloud provider would be misguided and unproductive. Your best bet is to:
  • Understand your infrastructure; conduct a risk assessment.
  • Know your potential problems; accept the inevitability of failure.
  • Focus on survivability; employ defense-in-depth and isolation/minimization efforts.
  • Maintain an environment of continual, incremental improvement.
  • Learn from your own failures and those of others; follow best practices.
  • Employ a measured touch with regard to change.
  • Be proactive!
If any of the previous points are unclear or unfamiliar, a little bit of panic might be appropriate; bring in some outside help.
