Saturday 3 May 2014

Can we prevent another "heartbleed"?

What can a government agency do, having found a potent vulnerability in the latest version of a popular piece of software, to make its targets upgrade to that version? If you believe in conspiracies, or are simply paranoid, you might answer: publish "heartbleed"!

The recent kerfuffle over OpenSSL's "heartbleed" bug brought a new reality check to all of us about the risks of using the Internet. Major global companies were scrambling to patch their systems, and most responsible IT security staff will have done the due diligence of checking across their company's infrastructure to verify that the company is not exposed anywhere. The advice, if you are running a vulnerable version of OpenSSL, is to upgrade to the latest non-vulnerable version of the software.

What if some super-powerful spy agency has managed to introduce an even more potent vulnerability into a later version of a widely used piece of software? And what if, in a ploy to make various targets upgrade to that version, it publishes a lesser "heartbleed" bug, hoping that the targets will thus, of their own accord, upgrade to the newer version - through which it can gain even better access?

The hypothesis above might sound like extreme paranoia. However, it brings to the fore the general risks of operating on the Internet, which is powered by software and technologies developed by various people and companies, the majority of which the end user (be it an individual or a company) has little or no control over. Even if one had control over the software that powers a system one relies upon, there is no guarantee that the system can be made free of all possible defects by going over it with a fine-tooth comb.

As someone who develops software and reviews other people's software to verify it and eliminate security vulnerabilities, I know first-hand that writing secure software is hard and multi-faceted. Even experienced software developers with the best intentions make mistakes, which is what I believe happened with OpenSSL's "heartbleed" issue. You not only have to be a top-notch developer with a deep understanding of the domain, you also need to be able to think like a hacker, and to keep up with the latest attack techniques, for your code to stand a chance of being secure.

I believe that the Open Source Software model provides a potentially better basis for obtaining a level of assurance about the security of software - not because there are "many eyes checking it", but because you have the opportunity to review and verify for yourself the properties that you want, if you have the knowledge and the resources to do so. Obtaining the necessary level of assurance will, however, not be possible for most people, even if they had the ability and the time: finding arbitrary flaws is an almost intractable problem.

It thus seems to me that there will always be a risk associated with using software (or any technology, for that matter). This risk cannot be completely eliminated. We are therefore left, in some cases, with having to trust software and various pieces of technology. Companies that host or process sensitive data need experienced security architects and analysts to design secure systems, and to understand and verify the security threats posed within their own infrastructure as well as by that of third-party providers. Whatever the case, we will have to do more risk assessments for sensitive data and Internet-connected systems, fortifying them where we can with limited resources, and balancing the criticality of a system against the benefits of entrusting valuable data or critical operations to an untrustworthy Internet platform.

Friday 24 August 2012

Information Erasure and Release Duality

There is a duality between quantitative information erasure and information release in the sense that the sum of both is equal to the total body of information processed by a system:

Erasure + Release = Information Content you started with.

Why might this be useful, you ask? Well, since the information security community has developed various analyses for characterising information release, we can simply turn the result around to calculate the erasure! You buy one, and get the other for free.

I suppose you might also ask: why is information erasure useful? Here is an excerpt from a paper (shameless plug: it is mine), From Qualitative to Quantitative Information Erasure:

"There is often a need to erase information in real systems. In particular, a system that processes confidential data may be expected to remove pieces of sensitive information from the body of information that it propagates. For example, statistical databases may not propagate sensitive information, which must be erased; but the database must release sufficient non-sensitive information to be useful for statistical purposes. A more everyday example requiring information erasure is e-commerce, where various pieces of data on a credit card used must not be stored by the merchant. The Payment Card Industry, which specifies standards for payment processing, stipulates which data must not be retained by a merchant, even though the data may be required to complete a transaction. For example, the card verification code, which is used to prevent card-not-present frauds, must not be stored by the merchant. There are also restrictions on the display of the primary account number (PAN) on screens or receipts, e.g. the first six and the last four digits are the maximum allowed to be displayed - the other digits must be masked (erased). 

Note that in these examples, as with other situations where information erasure is desired, erasure often goes hand-in-hand with information release: e.g. some PAN digits may be released whereas others must be erased. So, it is reasonable to study erasure in the context of information release. It is even better if the two can be accommodated under a single uniform policy model, as we propose. As a general observation, it is desirable to be able to describe security requirements as an extensional policy statement independently of the operational properties or implementation of the system that satisfies the requirement. This separation of concerns is a well-understood design principle, allowing policies and systems to be developed separately. A verification mechanism then ensures that the implementation conforms to the desired policy. The policy model proposed in this paper is extensional and describes the information security requirements directly as constraints on information release and erasure independently of an enforcement mechanism."
The paper goes on to develop a mathematical theory of information erasure, and the statement above is one of its conclusions. The paper appeared at the Quantitative Aspects in Security Assurance (QASA) workshop (http://www.iit.cnr.it/qasa2012/) in September 2012. A copy of the paper may be obtained from here.
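
To make the duality concrete with the PAN example above, here is a back-of-the-envelope calculation, measuring information in bits and assuming, purely for illustration, that all 16 digits are uniform and independent (real PANs are more structured, e.g. the issuer prefix and the Luhn check digit carry less entropy):

\[
\underbrace{\log_2 10^{16}}_{\text{total}\ \approx\ 53.2\ \text{bits}}
=
\underbrace{\log_2 10^{10}}_{\text{released (first 6 + last 4)}\ \approx\ 33.2\ \text{bits}}
+
\underbrace{\log_2 10^{6}}_{\text{erased (masked digits)}\ \approx\ 19.9\ \text{bits}}.
\]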

Tuesday 2 November 2010

NIST 800-53 Security Controls Database

The NIST 800-53 special publication provides guidelines for selecting and specifying security controls for information systems to meet the requirements of FIPS 200 (Minimum Security Requirements for Federal Information and Information Systems). A database of the NIST 800-53 Security Controls may be downloaded from here. This database makes it easy to get at the security controls data for use within another application. The (Windows-only) zipped application from the NIST website supports data export in a variety of formats.

I was interested in populating an ontology with the security controls, so it suited me to have the data available as raw XML. A simple export of all data was all that was necessary. I then used the Apache XMLBeans tool inst2xsd to generate a schema from the exported XML data, and read the content back through a Java API generated from that schema with scomp. This makes it quite easy to extract just the parts of the document that are relevant within a Java application. Given the versatility of XML, other alternatives exist, such as XQuery or XPath.
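
For reference, the workflow was roughly as follows; the file names below (controls.xml, schema0.xsd, the //control path) are hypothetical placeholders, so substitute whatever your export actually produces. First, generate a schema from the exported instance and compile it into Java types:

inst2xsd controls.xml
scomp -out controls-types.jar schema0.xsd

With the generated jar on the classpath you can read the data back through the generated types. Alternatively, here is a quick sketch using the generic XMLBeans API (full XPath support in selectPath may require the Saxon jar on the classpath):

import java.io.File;
import org.apache.xmlbeans.XmlObject;

public class ReadControls {
    public static void main(String[] args) throws Exception {
        // Parse the exported security-controls XML.
        XmlObject doc = XmlObject.Factory.parse(new File("controls.xml"));

        // Select the parts of interest with XPath; the element name below
        // is made up - use the actual names from your export.
        for (XmlObject control : doc.selectPath("//control")) {
            System.out.println(control.xmlText());
        }
    }
}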

Saturday 30 October 2010

Kile + Okular on Mac OS X

For those that want to use Kile (2.1 beta 4) and Okular on Mac OS X, you may find here some useful information about configuring these applications. I am also hoping that someone will suggest better ways of doing things.

I currently use a MacBook Pro with Mac OS X 10.6.4 (Snow Leopard). Because I often write LaTeX documents, I needed a comfortable setup that suits my document-editing work-flow. I know that there are many useful editors and (PDF) document viewers for Mac OS X and Linux, but I think the Kile and Okular combination is excellent. I have been a long-time fan of Kile on Linux, and I have used it with several DVI/PS/PDF viewers on that platform. Kile, on the one hand, is full of features: its project management and auto-completion are very useful, and it is also very configurable. Okular, on the other hand, can automatically reload a PDF when it changes on the file system, which is useful because you want to see changes to your document immediately after compilation, without having to click a "reload" button. More importantly, Kile and Okular support forward and inverse PDF search (via SyncTeX). This can be very useful when editing large (multi-file) documents.

Now, Kile and Okular are KDE applications, and I could not find Mac equivalents with all the features of Kile in particular (after dabbling in many apps - TeXShop almost came close, but does not match the configurability and auto-completion features of Kile that I had got so used to). My temporary solution was to run Linux within a virtual machine (VirtualBox) for my document-editing work. But the launching and shutting down of a virtual machine soon became burdensome - I wanted an app that launches as easily as every other Mac app. There were also annoyances with permission issues when sharing folders between the ext4 Linux file-system in the virtual machine and the Mac file-system: the stable Kile version (2.0.3) would happily create files and projects on the shared folders, whereas the latest version (2.1 beta 4) would not, complaining of a lack of permission. I know it is a beta, but the stable version is for KDE3, and I use KDE4.

Anyway, I decided to install Kile natively on Mac OS X via MacPorts. Note that to use the embedded Konsole in Kile, the kdebase4 port must be installed. One of Kile's dependencies is the kdegraphics4 port, which provides Okular - which is great. Once installed, Okular initially would not open PDF files! I later found that installing poppler with the +qt4 +quartz variants on MacPorts, and rebuilding kdegraphics4, solved the problem.
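
For reference, the MacPorts commands involved were roughly the following sketch (port names and variants as I recall them; they may differ with your MacPorts tree):

sudo port install kile kdebase4
sudo port install poppler +qt4 +quartz
sudo port upgrade --force kdegraphics4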

To enable forward and inverse search in Kile and Okular, use the "modern" configuration for your build tools. For example, my PDFLaTeX build configuration looks like this:
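
In textual form, Kile's "modern" PDFLaTeX tool boils down to a command line along these lines (quoted from memory; check Settings > Configure Kile > Tools > Build for the exact flags):

pdflatex -synctex=1 -interaction=nonstopmode %source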

My "QuickBuild" tool contains only PDFLaTeX: I took out the additional default ViewPDF, because Okular is configured to reload on detecting a change to the file on the file-system. This bypasses a current bug in Okular, which always shows the "Navigation Panel" whenever Okular is re-launched, regardless of whether the panel was hidden beforehand. It would have been preferable to add ForwardPDF to my "QuickBuild" so that the PDF view in Okular is always synchronised with the point I am editing in Kile. However, the navigation-panel bug is so annoying that I settle for invoking ForwardPDF manually whenever I need it.

To ensure automatic document reload when your PDF file changes, make sure "Reload document on file change" is selected in Okular's settings (in my version, under Settings > Configure Okular > General).

For PDF inverse search, I found that selecting "Kile" as the Okular "Editor" does not work on Mac, although it works on Linux. To make inverse PDF search work, I chose the "Custom Text Editor" option and used the following command:

/Applications/MacPorts/KDE4/kile.app/Contents/MacOS/kile --line %l

If you installed Kile elsewhere, you only need to change the path accordingly.
That should be it. Shift + left-click within Okular should activate inverse search and take you to the corresponding location in your LaTeX document within Kile.

Tuesday 26 May 2009

How do you model information flow?

On the subject of secure information flow in computer programs, one of the questions that one has to answer is whether a program that has access to confidential data is free of unwanted information release. This leads to another question: what is information release, or information flow?

The classic technique used in this field is to specify the lack of information flow as a noninterference property. Noninterference intuitively means that confidential inputs do not affect the public outputs of a program; thus a program that satisfies this property is free of unwanted information flow. This kind of "no information flow" policy is suitable for military multi-level security systems, in the sense of the famous Bell-LaPadula "no-read-up" property, where information must not flow from a higher security classification to a lower one. In practice, however, many programs have to reveal some information as part of their functionality. Authentication programs, for example, reveal some information about the password: if authentication fails on an attacker's guess, the attacker learns what the password is not. Similarly, statistical software reveals, in the statistical result, some information about the data used in the analysis. Even encryption algorithms reveal some information about the secret keys and messages used. These are just a few of the everyday applications for which the noninterference approach fails.
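
To make the password example concrete, here is a minimal sketch (my own illustration, not taken from the literature) that models the attacker's knowledge as the set of secrets consistent with the observed output; a single failed login shrinks that set by exactly one candidate:

import java.util.HashSet;
import java.util.Set;

public class Knowledge {
    public static void main(String[] args) {
        // The attacker's initial knowledge: the 4-digit PIN could be anything.
        Set<String> candidates = new HashSet<>();
        for (int i = 0; i < 10000; i++) candidates.add(String.format("%04d", i));

        String secret = "4711"; // the (hypothetical) confidential PIN
        String guess = "1234";  // the attacker's guess

        // The program's public output: did authentication succeed?
        boolean accepted = guess.equals(secret);

        // Refine knowledge: keep only the candidates consistent with the output.
        // A failed guess rules out one PIN; a success rules out all but one.
        candidates.removeIf(c -> c.equals(guess) != accepted);

        System.out.println("Candidates remaining: " + candidates.size()); // 9999
    }
}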

So, how do we model more generally the notion of what information flows, as a step towards specifying the level of information that we actually intend to release - which is a statement of our security policy? One property of information being alluded to here is the notion of information ordering: the idea that one piece of information can be greater, that is, more informative, than another. This simple idea provides us with a basic but quite general way to specify the notion of secure information flow, whereby we say that the release of information by a program is safe, or secure, if and only if the level of information that the program releases is no greater than what we permit it to release. This statement relies only on the information ordering property as the basis of the security enforcement.

For this purpose, we can view information as elements of a partially ordered set, where the partial order captures the notion of degree of informativeness. So, our objective is to see how programs transform an attacker's knowledge within this set, and to check whether such a transformation is permitted.

We say that information flows when the attacker's knowledge is transformed from one level of information to a higher one. In this view, noninterference is a special case, corresponding to the situation where the attacker's knowledge remains the same even after observing the result of the program that processes the sensitive data. More generally, we can accommodate situations where the observer is allowed to gain some specified level of information. It is the job of our information flow policy to specify this acceptable level, and our enforcement mechanism must ensure that programs cannot violate the policy by releasing more than it allows.
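
In lattice notation, the enforcement condition can be stated compactly as follows (this is my paraphrase of the idea, writing $\sqsubseteq$ for the information ordering and $\mathit{release}(P)$ for the information that a program $P$ releases to the attacker):

\[
P \text{ is secure with respect to policy } p
\quad\Longleftrightarrow\quad
\mathit{release}(P) \sqsubseteq p,
\]

with noninterference recovered as the special case $\mathit{release}(P) = \bot$, the least element of the lattice (the attacker gains nothing).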

A recent work of mine (an LNCS paper titled A Policy Model for Secure Information Flow) looks into this problem, starting from a model of information based on complete lattices (a special kind of partially ordered set). Techniques to extract the information released by a program from its operational semantics, under a given attacker model, are then presented. Finally, the lattice-based policies are used to enforce secure information flow. The ideas in this paper are quite simple, and I hope to elaborate on some of them for non-specialists who might be interested in this very interesting subject.

In future editions, under this topic, I will explain how information flow may be modeled qualitatively and quantitatively, and how this fits into the lattice model mentioned earlier.

Language-based Security

Being a computer security researcher, I am interested in various aspects of computer security. A particular goal I have is to research, and if possible, contribute to the development of techniques whereby we may protect computers automatically through enforceable high-level security policies.

One specific area of interest is ensuring that programs do not violate local usage policies. For example, an accounting program, or say a tax-return calculator, which must necessarily be granted access to confidential information, should not reveal information that we do not intend to release. What is to prevent such a program from encrypting all the financial data and sending the result to an unintended recipient?
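
As a caricature of the problem (entirely my own illustration), the program below does exactly what a language-based information-flow analysis should flag: a secret reaches a public output channel, and "encrypting" it first does not help, because the output still depends on the secret:

public class TaxLeak {
    public static void main(String[] args) {
        String income = "123456.78";            // confidential input ("high")
        String payload = xor(income, "badkey"); // obfuscation does not lower the label
        System.out.println(payload);            // public output ("low"): an illegal flow
    }

    // A toy XOR cipher, just to show that the dependence on the secret survives.
    private static String xor(String s, String key) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++)
            out.append((char) (s.charAt(i) ^ key.charAt(i % key.length())));
        return out.toString();
    }
}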

This is where Language-based Security can be of use. Much work still needs to be done on writing high-level security policies that intuitively capture our intended information-release requirements, and on enforcing those policies automatically at an appropriate level (e.g. inside the operating system, a virtual machine, or a web browser), but the field has promise.

There are, of course, many aspects to this problem, and many interesting techniques have already been developed, each with its merits and obstacles. Needless to say, this is a generally difficult problem. I hope to document some of my thoughts and experience in this area, and on the general topic of computer security, on this blog.