Saturday 3 May 2014

Can we prevent another "Heartbleed"?

What could a government agency that has found a potent vulnerability in the latest version of a popular piece of software do to make its targets upgrade to that version? If you believe in conspiracies, or are simply paranoid, you might answer: publish "Heartbleed"!

The recent kerfuffle over OpenSSL's "Heartbleed" bug was a reality check for all of us about the risks of using the Internet. Major global companies scrambled to patch their systems, and any responsible IT security staff will have done the due diligence of checking across their company's infrastructure to verify that it is not exposed anywhere. The advice, if you are running a vulnerable version of OpenSSL, is to upgrade to the latest non-vulnerable version of the software.
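
As a concrete illustration, here is a small C sketch of my own - a rough heuristic, not an official tool - that reports which OpenSSL release a program is built and linked against. Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f and was fixed in 1.0.1g, although builds compiled with OPENSSL_NO_HEARTBEATS were not exposed, so your vendor's security advisory remains the definitive reference.

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    /* Rough Heartbleed exposure check (a sketch, not an official tool).
     * OPENSSL_VERSION_NUMBER encodes the release as 0xMNNFFPPS, so
     * 1.0.1 is 0x1000100fL and the fixed 1.0.1g is 0x1000107fL. */
    int main(void)
    {
        /* Release of the library linked at run time, which can differ
         * from the headers the program was compiled against. */
        printf("Linked against: %s\n", SSLeay_version(SSLEAY_VERSION));

        if (OPENSSL_VERSION_NUMBER >= 0x1000100fL &&
            OPENSSL_VERSION_NUMBER <  0x1000107fL)
            printf("Headers are from an affected release (1.0.1 to "
                   "1.0.1f); upgrade to 1.0.1g or later.\n");
        else
            printf("Headers are not from a known-affected 1.0.1 "
                   "release.\n");
        return 0;
    }

Compile it with something like "cc check.c -lcrypto" against the same OpenSSL installation that your services use.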

What if some super-powerful spy agency had managed to introduce an even more potent vulnerability into a later version of a widely used piece of software? It could then, in a ploy to make various targets upgrade to that version, publish a lesser "Heartbleed"-style bug, hoping that those targets would of their own accord upgrade to the newer version, through which it could gain even better access.

The hypothesis above might sound like extreme paranoia. It does, however, bring to the fore the general risks of operating on the Internet, which is powered by software and technologies developed by various people and companies, the majority of which the end user (be it an individual or a company) has little or no control over. Even if one had control over the software that powers a system one relies upon, there is no guarantee that the system could be made free of all possible defects by going over it with a fine-tooth comb.

As someone who develops software and reviews other people's software to verify it and eliminate security vulnerabilities, I know first-hand that writing secure software is hard and multi-faceted. Even experienced software developers with the best intentions make mistakes, which is what I believe happened with OpenSSL's "Heartbleed" issue. To write secure code that stands a chance, you not only have to be a top-notch developer with a deep understanding of your domain; you also need to think like an attacker and keep abreast of attackers' latest techniques.
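
To make that concrete, here is a minimal sketch of the class of mistake behind Heartbleed - my own simplification, not the actual OpenSSL code. The handler trusts a payload length claimed by the peer and copies that many bytes out of a record that may be far shorter, echoing adjacent heap memory back to the sender.

    #include <stdlib.h>
    #include <string.h>

    /* A simplified heartbeat handler, NOT the real OpenSSL code:
     * 'record' holds the received message and 'record_len' is how
     * many bytes actually arrived on the wire. */
    void heartbeat_reply_buggy(const unsigned char *record, size_t record_len)
    {
        (void)record_len; /* BUG: the real record length is never consulted */

        /* The first two bytes are the peer's *claimed* payload length. */
        size_t claimed_len = ((size_t)record[0] << 8) | record[1];
        const unsigned char *payload = record + 2;

        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return;

        /* BUG: copies 'claimed_len' bytes even when the record only
         * contained 'record_len - 2'; the excess is read from adjacent
         * heap memory and echoed back to the peer. */
        memcpy(reply, payload, claimed_len);

        /* ... transmit 'reply' back to the peer ... */
        free(reply);
    }

    /* The fix amounts to a bounds check before the claim is trusted. */
    void heartbeat_reply_fixed(const unsigned char *record, size_t record_len)
    {
        if (record_len < 2)
            return;                     /* malformed: silently discard */
        size_t claimed_len = ((size_t)record[0] << 8) | record[1];
        if (claimed_len > record_len - 2)
            return;                     /* claims more than was received */
        /* ... now safe to allocate and copy 'claimed_len' bytes ... */
    }

The fix that shipped in OpenSSL 1.0.1g is essentially the second form: heartbeat messages whose claimed payload length does not fit within the received record are silently discarded.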

I believe that the Open Source Software model provides a potentially better basis for obtaining a level of assurance about the security of software - not because there are "many eyes checking it", but because you have the opportunity to review and verify for yourself the properties that you care about, provided you have the knowledge and the resources to do so. Obtaining the necessary level of assurance will, however, not be possible for most people, even if they had the ability and the time: finding arbitrary flaws in software is an almost intractable problem.

It thus seems to me that there will always be a risk associated with using software (or any technology, for that matter), and that this risk cannot be completely eliminated. We are therefore left, in some cases, with having to trust software and various pieces of technology. Companies that host or process sensitive data need experienced security architects and analysts to design secure systems, and to identify and understand the security threats posed within the company's own infrastructure as well as by that of third-party providers. Whatever the case, we will have to do more risk assessment for sensitive data and Internet-connected systems, fortifying them where we can with limited resources, and balancing the criticality of each system against the benefits of entrusting such valuable data or such a critical system to an untrustworthy Internet platform.