mercredi 13 octobre 2010

The Social Network: a flawed voting scheme

As it happens, I saw The Social Network. In my defense, I should mention that this is solely the unfortunate result of my friends' perfidy: a moment of inattention and bim! you're suddenly learning about the romantic life of Mark Zuckerberg - or whatever. Nothing really made me go "wait a second", except at the very beginning of the film, when our protagonist happily codes a voting scheme for ranking Harvard's girls - by "hotness", of course. As the story goes, he calls his best friend over to explain "the chess playing algorithm". I am clueless as to what chess has to do with this, but let's move on. (EDIT: They are probably talking about the Elo rating system, which does indeed make some sense.)
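For the curious, here is a minimal sketch of the Elo update rule - assuming, as speculated above, that the "chess playing algorithm" is indeed Elo. The starting ratings and K-factor below are just illustrative defaults, not anything from the film:

```python
def elo_update(r_winner, r_loser, k=32):
    """Return updated (winner, loser) ratings after one pairwise comparison."""
    # Expected score of the winner under the Elo model
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    # The winner gains what the loser loses: the total rating is conserved
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

# A win against an equally rated opponent transfers k/2 = 16 points
a, b = elo_update(1500, 1500)
print(a, b)  # 1516.0 1484.0
```

The nice property for a "hot or not" application is that beating a highly rated candidate moves you up much more than beating a lowly rated one.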

Back at my university, the same thing periodically appeared on our local network (at the beginning of each academic year - I'll let you guess why). The concept is fairly primitive. You are presented with two girls' pictures and you click on the one you like best. Then two other girls are randomly paired and presented to you, and so on. In the end, after many, many votes, you magically end up with some ranking of the girls. No comment on the social value of this. So, what is really going on behind the scenes?

Let's get more precise. Basically, you have $n$ voters and $m$ choices. Each voter has a complete ordering over $\{1,...,m\}$, meaning that for any two choices $x,y$, one has either $x < y$ or $x > y$. Is complete ordering a realistic assumption? Well, it does not matter, as long as you are only given the $<$ or $>$ alternative in the voting scheme, so we will restrict ourselves to this case. There are exactly $2^{\frac{m(m-1)}{2}}$ possible distinct complete orderings, and some of them are not transitive: you can have $a > b$ and $b > c$ but $a < c$. But is it possible that if I prefer $a$ to $b$ and $b$ to $c$, then I actually prefer $c$ to $a$? The answer is negative if I am being self-consistent. Thus, an opinion is a transitive complete ordering, i.e. a permutation. Therefore, voters' opinions are reasonably accounted for by permutations over the $m$ choices, and there are exactly $m!$ possible distinct opinions.
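These counts are easy to check by brute force for tiny $m$. A quick enumeration sketch for $m = 3$: there are $2^3 = 8$ complete orderings, of which exactly $3! = 6$ are transitive:

```python
from itertools import permutations, product

m = 3
# Unordered pairs of choices; one True/False bit per pair encodes x > y or x < y
pairs = [(x, y) for x in range(m) for y in range(x + 1, m)]

# All complete orderings: one bit per pair
orderings = list(product((True, False), repeat=len(pairs)))
print(len(orderings))  # 8, i.e. 2^(m(m-1)/2)

def is_transitive(bits):
    prefer = {}
    for (x, y), bit in zip(pairs, bits):
        prefer[(x, y)] = bit          # True means x preferred to y
        prefer[(y, x)] = not bit
    # Check x > y and y > z implies x > z for every ordered triple
    return all(
        prefer[(x, z)]
        for x, y, z in permutations(range(m), 3)
        if prefer[(x, y)] and prefer[(y, z)]
    )

transitive = [o for o in orderings if is_transitive(o)]
print(len(transitive))  # 6, i.e. m!
```

The two non-transitive orderings for $m = 3$ are precisely the two cyclic ones.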

Thus, if $S_m$ denotes the group of permutations of size $m$, the voting scheme is adequately represented by a function $f$ whose type is:

$$f : S_m^n \rightarrow S_m$$
Functions such as $f$ are called aggregation functions, because their aim is to aggregate voters' information into a single, hopefully fair and representative global choice. Now, unpleasant things start happening here. Even without knowing how $f$ works, there are important limitations on what $f$ can achieve at best. Perhaps the simplest thing to think of is Condorcet's voting paradox: there exist configurations of voters' rankings such that $f$ will contradict a majority of voters' preferences.

Take 3 voters with personal preferences $a>b>c$, $b>c>a$ and $c>a>b$. We see that a strict majority of 2 voters prefers $a$ to $b$, $b$ to $c$, and $c$ to $a$. The majority ordering is therefore not transitive, but $f$ is restricted to deliver a permutation. Thus any resulting aggregated ranking will contradict at least one of the 3 majority preferences.
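The cycle is easy to verify mechanically. A small sketch with the three ballots above, each written from most to least preferred:

```python
# The three cyclic ballots from the paradox, most preferred first
ballots = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(1 for ballot in ballots if ballot.index(x) < ballot.index(y))
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle: a beats b, b beats c, c beats a
print(majority_prefers("a", "b"))  # True
print(majority_prefers("b", "c"))  # True
print(majority_prefers("c", "a"))  # True
```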
Arrow's impossibility theorem extends and sharpens this puzzling result: roughly, no aggregation function satisfying several natural fairness requirements exists for this problem. So, whatever the actual algorithm does, it cannot claim to find the majority ranking of the candidates, because no such thing exists in general. That's fairly poor for something supposed to give the general opinion of voters, right?

But in all honesty, we did not need Arrow's theorem to figure out that the idea does not work so well. The easiest way to implement this kind of thing is to keep track of the number of clicks on each candidate and to rank them accordingly. Cheating is then extremely easy: just click repeatedly on someone's face. And the more you click, the stronger your voice. From my perspective, this seems to be the number one reason why, after a short initial enthusiastic phase, such applications invariably die out. And that's for the best :-)
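To make the cheating concrete, here is a sketch of that naive click-count implementation (the candidate names are of course just illustrative):

```python
from collections import Counter

clicks = Counter()  # one counter per candidate, incremented on each click

def vote(winner):
    clicks[winner] += 1

def ranking():
    return [name for name, _ in clicks.most_common()]

# 100 honest voters prefer "alice", 40 prefer "bob"...
for _ in range(100):
    vote("alice")
for _ in range(40):
    vote("bob")

# ...but a single cheater clicking 200 times on "bob" flips the ranking
for _ in range(200):
    vote("bob")

print(ranking())  # ['bob', 'alice']
```

Note the contrast with the Elo-style update: there, an extra comparison against the same opponent yields diminishing returns, whereas here every click counts the same.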

mardi 12 octobre 2010

A poor man's fractal

Self-similarity is more common than one might think. Below is a succession of four x5 zooms into the middle part of a symmetric 1-dimensional random walk with a million steps. This extensively studied stochastic process is defined by the following simple iteration:

$$X_{t+1}=X_t + B_t$$
where the $B_t$ are independent random variables taking the values $+1$ and $-1$, each with probability $\frac{1}{2}$. Basically, you flip a coin at each time $t$ and go up or down accordingly.
Each shaded green region is reproduced in the graph immediately below it. The interesting thing to notice is how similarly the $X$ curve behaves at any zoom level. Of course, there are some artifacts due to the anti-aliasing algorithm - most notably in the variation of the perceived thickness of the curve - but overall, the eye notices the same general unevenness at every level. Put another way, if you ignore the labels on the $x$ and $y$ axes, looking at any such graph of a region of $X$ is not sufficient to determine at what scale you are observing $X$. It could be a thousand time steps as it could be billions of billions! Benoît Mandelbrot gave this example quite a long time ago, but I thought it would be nice (and easy) to get a better-looking picture than in the early publications.
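If you want to reproduce the experiment, the iteration takes only a few lines (a sketch; the plotting and zooming are left out, and the seed is arbitrary):

```python
import random

random.seed(0)  # arbitrary, just for reproducibility

def random_walk(n_steps, x0=0):
    """Simulate X_{t+1} = X_t + B_t with fair +1/-1 steps B_t."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += random.choice((1, -1))
        path.append(x)
    return path

path = random_walk(1_000_000)
# Plotting e.g. path[400_000:600_000], then an x5 zoom of its middle fifth,
# and so on, produces curves that look qualitatively alike at every level.
```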

By the way, if you have ever glanced at charts of stock market prices, you'll probably have noticed that they too look the same on many scales - and between different stocks. There are some elegant and appealing (but far from mainstream) mathematical developments in fractal methods applied to finance - developments that are rooted in this observation of self-similarity. The big buzzword here is stable distributions: probability distributions that do not necessarily have means or standard deviations, but still possess the wonderful property of being stable under addition, namely the sum of two independent random variables following a stable distribution itself follows a stable distribution. But I am already off-topic.

A last word about the symmetric 1-dimensional walk. No matter where you start ($X_0$), you'll get back to this value with probability 1 at some time in the future. The little surprise is that you may have to wait a very, very long time before getting back: the expectation of the return time is infinite. Your mileage may vary, infinitely in this case. Actually, it is possible to prove that the return time $T$ asymptotically follows a scaling distribution:

$$\mathbb{P}(T \geq t) = O(\frac{1}{\sqrt{t}})$$

$T$ is heavy tailed: its tail probability decreases very slowly. If stock prices followed symmetric 1-dimensional random walks (they do not appear to), then no matter what happens, you could always get your money back at some infinitely remote time in the future, provided the stock still exists...
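As a parting sanity check, the tail of $T$ can be estimated by simulation. A quick Monte Carlo sketch - the truncation at t_max and the number of samples are arbitrary choices, and the comparison constant $\sqrt{2/\pi} \approx 0.8$ comes from the classical asymptotics of the first return time:

```python
import random

random.seed(1)  # arbitrary, just for reproducibility

def first_return_time(t_max):
    """First time t >= 1 with X_t = X_0 = 0, censored at t_max."""
    x = 0
    for t in range(1, t_max + 1):
        x += random.choice((1, -1))
        if x == 0:
            return t
    return t_max  # censored: no return observed within t_max steps

t_max = 10_000
samples = [first_return_time(t_max) for _ in range(2_000)]

for t in (10, 100, 1000):
    p = sum(1 for s in samples if s >= t) / len(samples)
    print(f"P(T >= {t:4d}) ~ {p:.3f}  vs  0.8/sqrt(t) ~ {0.8 / t ** 0.5:.3f}")
```

The empirical tail probabilities should shrink roughly like $\frac{1}{\sqrt{t}}$: multiplying $t$ by 100 divides the tail by about 10 only.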