jeudi 4 mars 2010

Guess my password!

This article is somewhat linked to the tagging game simulation, but takes a more security-oriented point of view. We will show some theoretical results on the security of password-protected resources, such as your favorite email account, your online banking account, or more simply your operating system account. We will consider two problems here, the second of which explains the logs you observe when you expose an SSH server to the open Internet - or why your credential choices matter a lot.
  • Suppose you are the bad guy and want to hack into a password protected resource by brute force guessing. What is your expected number of tries?
Well, that depends on the probability distribution of the occurrence frequencies of the passwords in use. In an ideal world, all $n$ possible passwords would be equally likely, so the distribution would be uniform. Thus, your expected number of tries would be $n/2=O(n)$. So far, so good.

However, this uniformity assumption is unrealistic in many cases. A simple example is when people choose the name of their loved ones as a password. I know what you are thinking: "hey, everybody knows that you should not use dictionary words or otherwise easily guessable passwords". First of all, not everybody knows, and some people do not even want to hear about it. Second, there are cases where these are exactly the kind of passwords you will get or create. Just think of those account recovery questions: "what is the name of your mother/your favorite pet?".

So, in most of the cases (and the studies), the probability distribution is highly non-uniform: a few passwords are far more common than all the others. Once again, we will postulate a power law for this distribution: the Zipf distribution, the usual quantitative signature of human-generated content. And according to this inspiring post on the blog of the Cambridge Security Research Lab, human names also follow this power law.

For $i=1..n$ passwords ordered by decreasing popularity and $s \geq 1$, the associated Zipf distribution is defined by:

$$p_i=\frac{1}{Z}\frac{1}{i^s}$$

where $p_i$ is the probability of occurrence of password $i$ among all the accounts, and $Z$ is the normalization factor such that all $p_i$ sum to $1$:

$$Z=\sum_{i}\frac{1}{i^s}$$

Suppose you, the bad guy, have a list of those passwords in decreasing popularity order - which is not hard to compile for names. Then, your strategy would be to try these one by one in that order. Thus, your expected number of tries is now:

$$E=\sum_i i p_i=\frac{1}{Z}\sum_{i}\frac{1}{i^{s-1}}$$

We'll suppose that $s \in (1, 2)$ - otherwise our model is not as interesting (usually, $s$ is taken to be $1.1$). Under these conditions, we obtain:

$$E=\frac{1}{Z(2-s)}n^{2-s}+O(1)$$

So, $E=O(n^{2-s})$ which is not so good compared to $O(n)$. For $s=1.1$, $E=O(n^{0.9})$ which is or is not acceptable, depending on your philosophy of life.
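The asymptotics above are easy to check numerically. Here is a minimal sketch (not from the original post; the parameters $n$ and $s$ below are just illustrative) that computes $E$ exactly under a Zipf distribution and compares it with the $n^{2-s}/(Z(2-s))$ approximation:

```python
# Expected number of guesses against a single Zipf-distributed password,
# trying candidates in decreasing popularity order (a sketch with
# illustrative parameters).

def zipf_expected_tries(n, s):
    Z = sum(1.0 / i**s for i in range(1, n + 1))      # normalization factor
    E = sum(i / (Z * i**s) for i in range(1, n + 1))  # E = sum_i i * p_i
    return E, Z

n, s = 100_000, 1.1
E, Z = zipf_expected_tries(n, s)
approx = n**(2 - s) / (Z * (2 - s))
print(E, approx)  # E grows like n^(2-s), far below the uniform case n/2
```

The exact sum and the asymptotic formula agree to within a fraction of a percent already at this size of $n$.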

But wait, this is not how bad guys act on the web: they rarely focus on a single target. Their problem is more like the following:
  • I know the $k$ most common passwords for a given service. How many accounts will I have to probe before succeeding?
It turns out that the answer is surprisingly low as we shall now see. The expected number of probed accounts is now:

$$E=\sum_i i(1-\bar p_k)^{i-1} \bar p_k$$

where $\bar p_k=\sum_{i \leq k} p_i$ is the probability that a given account uses one of the $k$ most common passwords. Unrolling the computation yields a surprise: this expected number is nothing else than:

$$E \sim \frac{1}{\bar p_k}$$

when $n \gg 1$. Now substituting the actual value of $\bar p_k$, we obtain:

$$E \sim \frac{Z}{\sum_{i \leq k} i^{-s}}$$
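For completeness, the $E \sim 1/\bar p_k$ step above is just the mean of a geometric distribution: writing $x = 1 - \bar p_k$ and letting the sum run to infinity,

$$\sum_{i \geq 1} i x^{i-1} \bar p_k = \bar p_k \frac{d}{dx}\left(\sum_{i \geq 0} x^i\right) = \frac{\bar p_k}{(1-x)^2} = \frac{1}{\bar p_k}$$

and truncating the sum at $n$ only contributes exponentially small corrections when $n \gg 1$.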

Here comes the final result: take $k=3$ (3 attempts on each account), $n=1,000,000$ (number of English proper names?) and $s=1.1$. You'll get $E \sim 6$. That's right, 6 accounts on average.
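A quick numerical check of this estimate (a sketch; the exact value depends on how the normalization $Z$ is truncated, but it comes out to a handful of accounts either way):

```python
# Expected number of accounts to probe before one of the k most common
# passwords hits, under a Zipf(s) model with n candidate passwords
# (parameters matching the example above).

def expected_accounts_probed(n, s, k):
    Z = sum(1.0 / i**s for i in range(1, n + 1))          # normalization factor
    p_k = sum(1.0 / (Z * i**s) for i in range(1, k + 1))  # P(account uses a top-k password)
    return 1.0 / p_k                                      # mean of a geometric distribution

E = expected_accounts_probed(n=1_000_000, s=1.1, k=3)
print(round(E, 1))  # a handful of accounts on average
```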

Edit: Obviously, 6 is an underestimate because the distribution of passwords does not exactly follow a power law, especially for the most common passwords. The bare Zipf distribution tends to overestimate the probabilities of the head events. A more realistic model would be a Zipf-Mandelbrot distribution. Do you think people are not so foolish? See how people choose their passwords.

Despite its simplicity, the model gives the basic intuition of why script kiddies can penetrate a decent number of systems in a short period of time.

mardi 2 mars 2010

Notes on the Therac-25 case

The Therac-25 device is now a classic example of lethal failure for a complex system with ill-designed software control over a physical process. The technical causes of the failure are extensively known today, as engineers learned (somewhat slowly) from their past mistakes. These aspects truly deserve to be known by anyone interested in the safety of critical systems, but another equally decisive factor in the failure was human response - or rather the lack of it.

The Therac-25 was a medical linear accelerator used for radiotherapy during the 80's, until the device was recalled for major changes when its unsafeness became all too blatant. Basically, the device could operate in two modes. In the first one, the accelerated electron beam was shaped and then directly targeted at the patient for skin-level treatment. In the second one, the accelerated electrons were collided with a tungsten target, producing X-rays which were shaped and directed at the patient for in-depth treatment. The X-ray mode used the machine's maximum beam energy of 25 MeV at a far higher beam current, while the electron mode operated at selectable, lower-intensity settings.

Guess what happened? From 1985 to 1987, not one or two, but at least 6 patients unfortunately acted as the tungsten target for the high-energy beam, ultimately leading to two radiation-induced deaths and various permanent disabilities for all the others. Their disturbing stories are briefly presented in Nancy Leveson's report of the case for the IEEE Computer journal (1995 update).

Of course, there are technical explanations for this failure. The software (written in assembly language, i.e. about the lowest level above raw '0's and '1's) was widely reported to be a shoddy reuse of an earlier version and presented several critical race conditions. In a nutshell, if instructions were given too rapidly to the machine (e.g. switch from electron mode to X-ray mode and fire), the tungsten target would not have enough time to move into position, while the electron accelerator would already be firing at full power, as if in X-ray mode.

Interestingly, the previous model, the Therac-20, whose code was reused by the Therac-25, had similar race conditions. However, the big difference is that the Therac-20 had built-in hardware interlocks which prevented the accelerator from firing at full power if the tungsten target was misaligned. Thus, software malfunctions occurred, but merely resulted in lost time (restart the device) and nothing more. On the Therac-25, there were no such mechanisms, so the software alone was expected to ensure total safety - after all, software is pure logic and pure logic never produces wrong results, right?
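The contrast can be illustrated with a toy model (entirely hypothetical code, not the actual Therac software): the unsafe firing routine trusts the software mode flag, while a Therac-20-style interlock checks the physical target position before letting the beam through.

```python
# Toy model of the race described above (hypothetical, for illustration only).
# The mode flag changes instantly, but the tungsten target moves slowly, so a
# quick mode switch can fire the beam while flag and hardware are out of sync.

class ToyAccelerator:
    def __init__(self):
        self.mode = "electron"        # operator-selected mode (changes instantly)
        self.target_in_place = False  # physical tungsten target (moves slowly)

    def switch_mode(self, mode):
        self.mode = mode              # flag updated; mechanical move only queued

    def finish_positioning(self):
        # The slow mechanical task eventually syncs the target with the mode.
        self.target_in_place = (self.mode == "xray")

    def fire_software_only(self):
        # Therac-25 style: trusts the flag, full power whenever mode == "xray".
        if self.mode == "xray" and not self.target_in_place:
            return "OVERDOSE"         # full-power beam with no target in the way
        return "ok"

    def fire_with_interlock(self):
        # Therac-20 style: a hardware interlock checks the physical position.
        if self.mode == "xray" and not self.target_in_place:
            return "blocked"          # beam inhibited; just restart the device
        return "ok"

m = ToyAccelerator()
m.switch_mode("xray")                 # operator switches mode and fires quickly,
unsafe = m.fire_software_only()       # before finish_positioning() has run
safe = m.fire_with_interlock()
print(unsafe, safe)                   # the race hits; the interlock contains it
```

The point is not the code itself but the design choice: the safe variant gates the beam on a physical measurement, not on what the software believes to be true.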

Moreover, the whole design of the interlocking between software and hardware was highly dysfunctional, as there was no position sensor that could have reported a misaligned target in X-ray mode.

It is nonetheless puzzling that 6 accidents were needed to recognize the Therac-25 as a fundamentally unsafe machine. Actually, in the first of the cases, such recognition never happened at all, neither from the hospital nor from the manufacturer. The very slow learning process appears to be a result of overconfidence in software, overconfidence in product reliability and, last but not least, overconfidence in the manufacturer's experts and practices.

To finish with, here is a memorable quote from N. Leveson's report:
Virtually all complex software can be made to behave in an unexpected fashion under some conditions: there will always be another software bug. [...] We cannot eliminate all software errors, but we can often protect against their worst effects, and we can recognize their likelihood in our decision making.
And don't take for granted that reused code is safer: it all depends on how you are using it.

Following are two free links about security for today's software developer: