A case for preventing vulnerabilities

Venkatesh-Prasad Ranganath
Sep 14, 2017

This morning, Equifax admitted that its recent breach was indeed due to an exploit of a vulnerability in software that Equifax had failed to patch in a timely manner. This made me think about efforts to secure mobile systems and the views I have come across in the community.

There are two common approaches to securing mobile systems (which I believe apply to other systems as well):

  1. Identify malicious behavior in software (to disallow it).
  2. Identify vulnerabilities in benign software (to harden it).

Looking at the efforts in this space, far more of them seem to focus on identifying malicious software. And when folks talk about efforts to flag/prevent vulnerabilities, they face numerous objections:

  1. It is a social engineering problem that cannot be solved technologically, e.g., vulnerabilities stem from user actions.
  2. We can never get developers to follow recommendations (use tools) to prevent vulnerabilities.
  3. We can never uncover all vulnerabilities.
  4. Vulnerability detection tools provide too many false positives.

I have seen such objections from reviewers of proposals/publications/efforts that intend to explore ways to prevent vulnerabilities in mobile apps, and I think these objections are wrong. Specifically,

  1. Not all vulnerabilities are triggered by user actions. In fact, many vulnerabilities do not require user interaction. The Equifax debacle is a great example in web-based systems; BlueBorne is a great example in mobile systems.
  2. How often do we provide enough information to developers about vulnerabilities? While recommendations are useful, the context in which recommendations are applicable is more important. Without knowing if and how a vulnerability is relevant to their context/system, developers are likely to reject (context-insensitive) recommendations. Maybe information about how a vulnerability transpires and how it affects a system and its users (e.g., code examples embodying the vulnerability and the corresponding exploits, like the sketch after this list) could sway developers to consider the corresponding recommendations.
    If you are familiar with mobile apps, then you may be thinking “well, we can check new apps for malware when they are submitted to app stores for publication. Can we do something similar to detect vulnerabilities in submitted apps?” Yes. If we have vulnerability detection tools that developers can use, then app stores can use them as well :) In fact, this would allow app stores to work with developers to provide stronger guarantees about the absence of known vulnerabilities in apps.
  3. Yes, it is highly unlikely that we will uncover all vulnerabilities. However, this should not be a reason to ignore the ones we can uncover and prevent.
  4. This is a genuine concern, as with any automation-based solution, and we need to address it. Perhaps the tools could improve by considering the context; perhaps developers could consider the context when weighing the recommendations from the tools.
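
As a concrete illustration of the kind of code example I have in mind, here is a minimal, hypothetical Android (Kotlin) sketch of a well-known vulnerability pattern: an exported activity that loads an attacker-supplied URL into a JavaScript-enabled WebView. The class name and the "url" extra are made up for illustration and are not taken from any particular app.

```kotlin
import android.app.Activity
import android.os.Bundle
import android.webkit.WebView

// Hypothetical activity, assumed to be exported in the app's manifest.
class HelpActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val webView = WebView(this)
        // JavaScript is enabled for the app's own help pages ...
        webView.settings.javaScriptEnabled = true

        // ... but the URL comes straight from the incoming intent. Any app on
        // the device can start this activity with an arbitrary "url" extra, so
        // attacker-controlled content ends up running inside this app's WebView.
        val url = intent.getStringExtra("url") ?: "https://example.com/help"
        webView.loadUrl(url)

        setContentView(webView)
    }
}
```

Paired with an exploit sketch (a second app firing an intent with a malicious "url" extra) and a context-aware recommendation (validate the URL against an allow-list, or do not export the activity), such an example shows a developer exactly how the vulnerability transpires and why the recommendation matters.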

Setting aside these objections, here are a few more reasons why preventing vulnerabilities is as good as (or even better than) identifying malware.

At their core, malicious behaviors and vulnerabilities are tightly coupled. The success of malicious behavior depends on the existence of vulnerabilities, and the existence of vulnerabilities opens up the possibility of malicious exploits. While malicious behavior exposes vulnerabilities, uncovered vulnerabilities help identify malicious exploits. Consequently, vulnerability detection and malware detection are complementary and can benefit from each other.

In terms of what we can do, identifying vulnerabilities involves looking for issues in a system built by us, while identifying malicious behavior involves looking for issues in a system built by others. Clearly, we have far more information about and control over the former scenario than the latter. Further, by identifying vulnerabilities in our system, we have the opportunity to fix them and prevent the corresponding exploits. So, why cure what can be prevented?

This brings us to cost. Finding and fixing vulnerabilities during development adds to development costs, while finding and fixing vulnerabilities after release adds to maintenance/development and deployment costs. We only need to crack open a software engineering textbook to know which of these situations leads to larger costs in general.

Besides dollars, fixing vulnerabilities after an exploit has been detected always contributes to operational costs in terms of assessing and mitigating any issues stemming from the exploit, e.g., the extent of an information leak or the effect of the exploit on users and organizations. It also entails a social cost in terms of users’ trust in (mind share of) our system, which is much more fluid, more important, and harder to earn and retain.

Also, there is the aspect of effectiveness. Upon detecting and fixing a vulnerability, all associated exploits will likely be thwarted without our having to explicitly identify them. Further, given our knowledge of our system, we may be able to uncover other variants of the vulnerability in it. On the other hand, identifying an exploit enables us to thwart only that single exploit, which is associated with a specific vulnerability, and we have to think outside our system to identify other variants of the exploit. So, why pursue an effort in the unknown domain of attackers when a comparable effort can be pursued more effectively in the known domain of our own system?

In short, both vulnerability detection+prevention and malware detection can be useful. However, the former has more benefits and possibly costs less than the latter. So, I lean towards vulnerability detection+prevention.
