Searching for Sets: Jaccard Index and MinHash

Here’s a well-written intro to audio fingerprinting. One part of the article contains a clever trick that seems generally useful and interesting to think about. I will attempt to quickly describe the problem below and explain the solution.

The Problem: Fingerprints of Sparse Sets

To summarize the article in an extremely lossy fashion, the problem is that we want to take an audio clip, and find similar clips in a database. To do so, we can convert each clip into low dimensional “fingerprints”, and utilize nearest neighbor algorithms to find clips with the most similar fingerprints. Given any clip, one can cut it into small segments, run FFT to get spectrograms, then binarize the resulting images to get bit vectors. But even with, say, 128 x 32 = 4096 bits per fingerprint, it’s still too many to run nearest neighbors. How can we further reduce dimensions to facilitate faster matching?

It is important to note that in this context, the 0s and the 1s in the bit vectors aren’t symmetric. Instead, it is far more useful to think of the bit vectors as sets of 1s. Imagine if clip A has a C note and clip B has a D note. You would think they’re completely different because they don’t share any common notes; it would be silly to say they’re mostly the same because they have a lot of common missing notes.

If you think about it, finding similar sets is a general problem: you have a universe of unique items (pixels in the spectrogram), you have many sparse sets of them (spectrograms), and you wish to reduce each set’s dimensionality while preserving pairwise similarity. It’s just like comparing people’s book lists, movie lists, or interests.

The Trick: MinHash

First, we need to define a metric of similarity between two sets. It seems like a natural definition would be: the size of the intersection divided by the size of the union. If both sets are equal, you get 1; if both sets share no common elements, you get 0. It turns out this is called the Jaccard Index.

Here’s the interesting part: say you have two n-bit vectors. If you randomly permute both vectors in the same way, and then find the index of the first 1 bit (this is called the MinHash), then the probability that the two indices are equal is exactly equal to their Jaccard Index. Let’s go through a quick example before explaining why this works.

A = 0101 0001
B = 0011 0001

Random permutation 1:
abcdefgh becomes -> daefbcgh (1st position -> 2nd position, 2nd -> 5th, etc)
A becomes -> 1000 1001, first 1: 1
B becomes -> 1000 0101, first 1: 1, equal to A's

Random permutation 2:
abcdefgh becomes -> bcadfehg
A becomes -> 1001 0010, first 1: 1
B becomes -> 0101 0010, first 1: 2, different from A's

In the above example, the Jaccard Index is 2/4 (intersection / union), which means we should expect the test to return “equal” 50% of the time. So, why is this true?

First, note that positions where both vectors have a 0 cannot affect the result of the test: no matter where those positions end up after the permutation, neither vector has a 1 there, so they can never become the first 1 in either vector. Therefore, we can safely drop them from the inputs without changing the result.

A drops matching 0s -> 1011
B drops matching 0s -> 0111

After dropping matching zeros, we are left with positions that are either matching 1s or differing bits. It is now easy to see that the two first-1 indices are equal iff a position with matching 1s comes first among the remaining positions after the random permutation. Since every remaining position is equally likely to come first, that probability is (number of matching 1s) / (number of remaining positions), which is exactly the Jaccard Index.

In a universe of n things, any n-permutation will yield a MinHash function. Now, we just have to precompute a bunch of these functions, and apply them to each bit vector to get the fingerprints. Given a pair of fingerprints, we just have to count the number of matching MinHash outputs to get an estimate of the similarity. One cool observation is that more hashes only gives you more accurate similarity estimates, which means even when the universe becomes larger, or more sets are added, you still don’t really have to linearly scale up the total fingerprint size.
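To make this concrete, here is a small Python sketch of the whole scheme (my own illustration; it stores explicit random permutations, whereas real systems typically use cheap hash functions instead):

import random

def make_permutations(universe_size, num_hashes, seed=0):
    # Each MinHash function is a random permutation of positions 0 .. universe_size-1.
    rng = random.Random(seed)
    perms = []
    for _ in range(num_hashes):
        p = list(range(universe_size))
        rng.shuffle(p)
        perms.append(p)
    return perms

def fingerprint(ones, perms):
    # MinHash = index of the first 1 after permuting, i.e. the smallest
    # permuted position among the set bits.
    return [min(perm[i] for i in ones) for perm in perms]

def estimate_jaccard(fp_a, fp_b):
    return sum(a == b for a, b in zip(fp_a, fp_b)) / len(fp_a)

# The 8-bit example from above: A = 0101 0001, B = 0011 0001
A = {1, 3, 7}
B = {2, 3, 7}
perms = make_permutations(8, 1000)
print(estimate_jaccard(fingerprint(A, perms), fingerprint(B, perms)))  # roughly 0.5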

Given the fingerprints, we still have to figure out how to search efficiently, but that’s another complicated subject for another day.

Precisely Compare Ints and Floats

Here’s a seemingly trivial task that I ran into recently – given a 64 bit int i and a 64 bit floating point f, how can we tell which one is larger?

Well, duh! In a language with implicit type casting, this is hardly even a question. Just do i < f and it magically works, right?

This is almost correct, but the issue is that by turning the int into a float, we are dropping precision for large integers. Only 52 bits of the float are used for representing the “mantissa”, i.e. the binary digits after the “1.”; in other words, a float can only keep 53 binary significant figures. So if you have the int 2^53 + 1, the closest float is 2^53, and the naive code will think the values are equal, while one is actually numerically larger than the other. How hard is it to do this comparison exactly?
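A quick way to see the precision loss (a tiny Python check, purely for illustration):

# 2^53 + 1 is not representable as a double, so it rounds to 2^53.
print(float(2**53 + 1) == float(2**53))  # True: the floats collide
print(2**53 + 1 == 2**53)                # False: the ints are different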

Let’s say we’re writing a compare function that returns a positive number when the int is larger, a negative number if the float is larger, and 0 if they’re equal. And for simplicity let’s assume we already have a compare function for ints and floats respectively. Here’s a seemingly clever way to do it.

function compare_int_float(int i, float f)
  f_cmp = compare_float(int_to_float(i), f)
  if f_cmp != 0
    return f_cmp
  return compare_int(i, float_to_int(f))

There are a few observations here. I’ll just state them for now, and will explain them in comments in the final version of the pseudocode.

  • If the float comparison doesn’t say the numbers are equal, then we can trust the result.
  • If they are “float equal”, then f must be numerically an integer. Therefore, we can compare them as if they were ints.

But actually this code has a bug. Can you spot it?

The bug is that for certain inputs, this function can raise, specifically through calling float_to_int(f) on an out of bounds f. This happens when f = 2**63 and i rounds to f when converted to a float. Below is the final pseudocode:

function compare_int_float(int i, float f)
  f_cmp = compare_float(int_to_float(i), f)
  if f_cmp != 0
    // If i rounds to a float less/greater than f,
    // i must be less/greater than f, because otherwise
    // f would be a float that is closer to i,
    // and i should have rounded to f.
    return f_cmp
  else if f = 2**63
    // Large integers round up to 2**63, which is larger than
    // max int, 2**63-1. We need to handle this case, otherwise
    // float_to_int will raise.
    return -1
  else
    // When i is converted to a float, its significant digits
    // can be dropped. Regardless, it will still be an integer,
    // so f (which is equal to i rounded) is also an integer.
    // Therefore we can turn f into an int and compare exactly.
    return compare_int(i, float_to_int(f))
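Here is the same logic as a small Python sketch, assuming i fits in a signed 64-bit integer and f is a finite double. (Python itself already compares int and float exactly, so this is purely illustrative.)

def compare_int_float(i: int, f: float) -> int:
    fi = float(i)  # may round when |i| >= 2**53
    if fi != f:
        # If i rounds to a float below/above f, then i itself is below/above f.
        return -1 if fi < f else 1
    if f == 2.0**63:
        # i <= 2**63 - 1 < f, and converting f to a 64-bit int would overflow.
        return -1
    # f equals i rounded, so f holds an integral value; compare exactly as ints.
    fj = int(f)
    return (i > fj) - (i < fj)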

Paper Reading: Efficient Path Profiling

Recently I’ve been going through CS 6120 from Cornell (compilers), and one of the papers listed in the course was quite interesting, namely Efficient Path Profiling. Once in a while you see a solution so neat that it almost feels like the problem was created in order to make such a solution useful; this paper gave me that feeling. This blog post will give a high level understanding of the problem, the algorithm and some intuitions, while leaving out all the technical details.

The problem setting is that you have a control flow graph (CFG), where each node is a block of code that always executes together (no branches), and each edge is a branch/jump instruction. With huge loss of generality, we assume there will be an ENTRY node and an EXIT node, and the CFG will be a directed acyclic graph (DAG) always going from ENTRY to EXIT. This is clearly unrealistic for normal programs due to the lack of loops, but the paper provides workarounds that aren’t very interesting. The task is to record the paths taken in each execution (from ENTRY to EXIT), so that we can compute statistics about which paths are the most common and make compiler optimizations accordingly.

In other words, we’re doing something like the following. Say we give each node a unique identifier (e.g. 0, 1, 2 …). Each time the program runs, we maintain a list of these identifiers, appending to it every time we visit a new node. And by the end we can add the resulting list to some sort of aggregating data structure.

But that’s a horribly inefficient way to do it. Both appending to the list in each node and aggregating the resulting lists at the end of each execution are going to be expensive. Here, the authors propose: what if we could instead somehow give integer weights to the edges in the CFG such that each path from ENTRY to EXIT has a unique sum, to replace the list of node identifiers? What if those path sums are small enough numbers that you could just make an array and increment the element at the index equal to the path sum?? What if you can pick the edges that are triggered the most and set those weights to 0, so you don’t even need to do anything in the hot paths???

Compact Unique Path Sums

It turns out all of those are possible. First, it’s actually easy to weight the edges such that each ENTRY->EXIT path gives a different sum. You could just pick unique powers of 2, which could give you really large weights. But you can actually do much, much better and make the sums “compact”, meaning they form a range between 0 and the number of unique paths – 1, so the path sums are as small as possible. It’s also really simple to do so.

First, we define NumPaths(v) as the number of unique paths from node v to EXIT. This takes linear time to compute. Then, for each node, say we have an ordered list of outgoing edges. For the ith edge, we simply take the edges 0 to i – 1 before it, find their destinations, add up their NumPaths, and use that as the weight. The intuition behind this is that after taking an outgoing edge, there are NumPaths(destination) ways to get to EXIT, and each of those paths has a unique sum between 0 and NumPaths(destination) – 1 (by induction). Since the first edge already claimed the sums 0 to NumPaths(its destination) – 1, the second edge has to start counting from there, and we can achieve that by adding the first destination’s NumPaths to the second edge’s weight.
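Here is a small Python sketch of this weight assignment (my own illustration, not the paper’s code), using the little CFG that appears as the example later in this post:

from functools import lru_cache

# The example DAG from the figure later in this post:
# ENTRY -> A, C;  A -> B, C;  B -> EXIT;  C -> EXIT
edges = {
    "ENTRY": ["A", "C"],
    "A": ["B", "C"],
    "B": ["EXIT"],
    "C": ["EXIT"],
    "EXIT": [],
}

@lru_cache(maxsize=None)
def num_paths(v):
    # Number of distinct paths from v to EXIT.
    if v == "EXIT":
        return 1
    return sum(num_paths(w) for w in edges[v])

def edge_weights():
    # The weight of the ith outgoing edge of v is the sum of NumPaths of the
    # destinations of edges 0 .. i-1, so path sums form 0 .. NumPaths(v)-1.
    weights = {}
    for v, outs in edges.items():
        acc = 0
        for dst in outs:
            weights[(v, dst)] = acc
            acc += num_paths(dst)
    return weights

print(num_paths("ENTRY"))  # 3 distinct paths in this toy CFG
print(edge_weights())      # {('ENTRY', 'A'): 0, ('ENTRY', 'C'): 2, ('A', 'B'): 0, ...}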

If NumPaths(ENTRY), the total number of paths in the CFG, is small enough, we can just maintain an array of that length to count how frequently each path is taken across many runs. Otherwise, we can still maintain a hash table, incurring a larger overhead.

Choosing Weights to Zero Out

So far it’s been very simple. Notice that for each node, one of the outgoing edges has weight 0. Of course, at those edges, we don’t actually need to add an instruction to add the weight to the path sum. So we can actually just pick the most frequent edge and assign it 0, to minimize the overhead we’re adding to the program.

But we can actually have more flexibility than picking one outgoing edge per node to zero out. This part is a bit more involved to understand, and this paper basically says “just look at that other paper”. While that other paper has a proof, it still didn’t quite explain why it works. I think I have an intuition, which I will lay out below, but I’m Not a Computer Scientist, so it might be wrong, etc.

The way it works is that: you start off with some estimations for the relative frequencies of each edge being taken. You add an edge from EXIT to ENTRY with frequency 1 (fires every time the program runs), and compute the spanning tree of the resulting graph with the maximum weight (sum of edge frequencies), ignoring directions. All edges in that spanning tree will have an updated weight 0. Note that this is never worse than zeroing out the most frequent edge of each node as described above, because taking one outgoing edge per node also forms a spanning tree (n – 1 edges in total, all nodes are linked to EXIT).

For any edge not in the spanning tree, we call it a “chord”. Each chord f, when added to the spanning tree, forms a cycle C(f), also ignoring direction. Now, we have the weight assignments for any edge e, W(e), from the previous section’s algorithm (for the added edge, W(EXIT->ENTRY) = 0). The new weight of any chord f, W'(f), is the sum of W over C(f), but we negate W for edges that are in the opposite direction of f in the cycle. For example, say the chord A -> B has C(A -> B) = A -> B <- C -> A, then W'(A -> B) = W(A -> B) – W(C -> B) + W(C -> A). The claim is that, for any given path from ENTRY to EXIT, W and W’ yield the same path sum.

But why is that? Here’s the handwavy part. The intuition is that every program execution, when appended with the edge EXIT->ENTRY, becomes a loop. A directed program execution loop D must contain chords, since every loop contains at least one edge not in the spanning tree, and D is really just a “sum” of all C(f) for chords f in D. So the sum of W over D is equal to the sum of W over all C(f) for f in D. The sum of W over any C(f) is by definition equal to the sum of W’ over C(f).

sum W over D
= sum W over (C(f) for chord f in D)
= sum W’ over (C(f) for chord f in D)
= sum W’ over D

Here’s a simple example, not a proof, since I don’t have one.

program       spanning tree     execution loop
  ENTRY         ENTRY             ENTRY
   / \             \               /   \
  A   |         A   |             A     |
  | \ |         | \ |             |     |
  B   C         B   C             B     |
   \ /             /               \   /
   EXIT          EXIT              EXIT

In our execution loop, we have 3 chords, ENTRY->A, B->EXIT, and EXIT -> ENTRY.

C(ENTRY->A) = ENTRY – A – C – ENTRY
C(B->EXIT) = B – EXIT – C – A – B
C(EXIT->ENTRY) = EXIT – ENTRY – C – EXIT

Joining all three together (at A and EXIT), we have:

ENTRY – A – B – EXIT – ENTRY – C – EXIT – C – A – C – ENTRY

Cancelling out opposite edges (ENTRY – C with C – ENTRY, C – EXIT with EXIT – C, and A – C with C – A), this simplifies to:

ENTRY – A – B – EXIT – ENTRY

Which is exactly the execution loop.

With these two steps – compute weights, optimize the locations of the zero weight edges – we can insert instructions at the edges in the given program to efficiently compute the unique sum of the path taken each time a program finishes executing. In retrospect, the algorithm to assign weights to give compact path sums seems almost obvious, but that’s more a sign that we’ve asked the right question than that the problem is really trivial.

Random Heap Updates Are Cheap

A while ago I encountered an algorithmic challenge at work. Basically, the idea is that we have a bag of numbers, and we’d like to be able to update each number as well as insert and remove, and also occasionally pop the smallest number. All of these are simple and typical heap operations. But in our use case, we’re going to be updating numbers much more frequently than popping the smallest number. Recall from your data structure classes that removing from a heap costs O(log n), and updating is just removing followed by inserting, so logically if we make n updates followed by one pop min, we’re going to pay O(n log n).

Consider an alternative approach, where we put all numbers in an unordered array. To update, we just overwrite the old number, and to pop min we scan the array. Then, if we make n updates followed by one pop min, the total cost is now just O(n). The problem with that is that the worst case could grow to O(n^2) when we pop min a lot more than expected in production. Hence the question: is there a way to do roughly O(1) work per update, but still end up with O(log n) worst case for pop min?

In general, this is impossible. No heap supports O(1) update, because update is at least as hard as pop min: updating the min element to infinity achieves the same effect as popping. Perhaps we can relax the requirements to make progress. One way to do so is to assume that the updates are “random”.

It’s not entirely clear what the definition of random updates ought to be. To start, one reasonable definition would be that for each update, (A) an existing element is chosen uniformly at random, and (B) its updated rank is also independently chosen uniformly at random.

When I got to this point, I dived in and devised some complicated data structures which achieved the desired behaviors. But I later figured out that in fact the existing data structures I knew already satisfy the above requirements. Let’s take a look.

Binary Heap

The simplest heap in existence is the binary heap, where we have a binary tree embedded in an array. The element at index i has children at indices 2i+1 and 2i+2, and we maintain the heap property that a child must be no less than its parent. To update an element in the heap, we can just overwrite the old element in the array, and simply recursively swap elements until the heap property holds. The time complexity of update is just how many swaps we need. In the worst case, we need to make O(log n) swaps, e.g. when the min element is updated to become the max.
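Here is a rough Python sketch of such an in-place update on an array-based min-heap (my own illustration, using 0-based indices):

def update(heap, i, new_value):
    # Overwrite heap[i], then restore the min-heap property by sifting the
    # element up or down; at most one of the two loops actually moves it.
    heap[i] = new_value
    # Sift up while smaller than the parent.
    while i > 0 and heap[i] < heap[(i - 1) // 2]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
    # Sift down while larger than the smallest child.
    n = len(heap)
    while True:
        smallest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and heap[child] < heap[smallest]:
                smallest = child
        if smallest == i:
            break
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest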

What about the “average” case given our assumptions of randomness? First, for updates that increase an element, we have to swap it with its children recursively. The worst case is that we have to swap it all the way down. In that case, assuming the element is randomly picked, the expected number of swaps is roughly:
0 * (1/2) + 1 * (1/4) + 2 * (1/8) + 3 * (1/16) + …
= (1/4 + 1/8 + 1/16 + …) + (1/8 + 1/16 + …) + (1/16 + …) + …
= 1/2 + 1/4 + 1/8 + …
= 1
This is because roughly half the elements are already at the bottom so they never need to be swapped down, then the remaining half are one level up, and so on.

Then, for updates that decrease the element, it takes some reasoning to see that it’s symmetric with the previous case. Say in heap H1, we’re decreasing an element at rank R1 to rank R2. After that’s done, we have H2, and if we were to change the rank back to R1, we actually have to do the exact same swaps to move it back to its original position (this might not be very obvious, but you can work out an example to convince yourself). Now, we claim that H1 and H2 are equally probable configurations, since the probability distribution from which we drew H1 should be invariant through random updates. Hence, the expected number of swaps needed to decrease a rank is the same as that to increase a rank, which is 1. (By the way, I feel like there ought to be a better argument. This argument relies on H1 and H2 being in the same probability distribution, which might not hold when other heap operations are carried out.)

All in all, randomly updating in place for a binary heap is actually O(1). In other words, binary heaps support O(1) random updates and O(log n) worst case for everything, which is exactly what we desire.

Pairing Heap

That’s great, except that I only had access to a pairing heap implementation. Pairing heap is this cool data structure where we have a tree (no limit on number of children per node) that lazily rebalances itself on pop min.

Here’s an extremely simplified description. We start with a tree with (only) the heap property. To “meld” (combine) two trees, we just take the tree with the smaller root, and stick the other tree under that root as an immediate child. Inserting an element is melding with a tree of size 1. To pop min, we first remove the root, and now we have to merge a whole bunch of trees, which were the immediate children of the root. The naive way of melding all of them in one go will result in a bad time complexity, since we might have to go through all of them again for the next pop min. The trick is to first meld the trees in pairs, then meld all those results in (reverse) order. This cuts down the number of immediate children for the next round by at least half. Lastly, removing any given node is just: cut it out from its parent, pop min from the detached branch, then meld the rest of it back.
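Here is a condensed Python sketch of meld, insert and pop min, just to make the description concrete (a toy version; removing an arbitrary node would additionally require parent pointers):

class PNode:
    def __init__(self, key):
        self.key = key
        self.children = []

def meld(a, b):
    # Combine two trees: the one with the larger root becomes an immediate
    # child of the one with the smaller root.
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(root, key):
    return meld(root, PNode(key))

def pop_min(root):
    # Remove the root, meld its children in pairs, then meld the pair results
    # together in reverse order.
    kids = root.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    rest = None
    for t in reversed(paired):
        rest = meld(rest, t)
    return root.key, rest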

The exact time complexity of all operations of pairing heap is still an open problem, but for our purposes, let’s just say insert takes O(1), and removing any node has amortized worst case O(log n). The naive way to update an element would be to remove the old value and then insert the new value. To remove a randomly picked element, the expected amount of work is proportional to the expected number of children, which is less than 1. Insert is also O(1), so in total, a random update is O(1). Note that this analysis only assumes (A).

Again, we get what we want: O(1) for random updates, O(log n) for amortized worst case pop min and updates.


While I was figuring this out, I learned that there are quite a variety of these data structures out there. Fibonacci heap used to be the poster child of being theoretically great but not practical, but these days we have rank pairing heap that achieves the same asymptotic bounds and claims to be competitive in practice as well. Aside, there are a bunch of variants of pairing heap. I’m not sure whether all these different heaps have similar properties as discussed here, but at this point I don’t care enough to find out, since most of these heaps are probably never used in real life anyway.

Building an AVL Tree From a Sorted Sequence in One Pass

Recently I came across the function Map.of_increasing_sequence in the base library of OCaml. It might sound like a very simple and common function, but the implementation is actually quite cool. Let’s dive in. (Spoiler: it’s related to a weird number system.)

First Impressions

A sequence is like an iterator – you can either get the next value or reach the end. A map is an immutable AVL tree of key value pairs. An AVL tree is a binary tree with two properties – each node is larger than all nodes in its left subtree and smaller than all nodes in its right, and both subtrees can only have heights differing by at most 1. From an algorithmic standpoint, making a BST from a sorted array is trivial. You can achieve O(n) time complexity, and O(log(n)) space excluding the return value, just by simple recursion. But you can’t do that to a sequence, since sequences don’t permit random access. Now, we can always turn the sequence into an array first, but that would require O(n) extra space. Can we do better?

It turns out that library function is implemented with only one pass through the sequence, using only log(n) extra space. The code had almost no documentation, and I couldn’t find any description online (although I didn’t try very hard), so here I’ll attempt to motivate and derive the algorithm.

First Attempt: Try to Build a Tree

Imagine you’re given the task of building a balanced tree: you’re handed one number at a time, and you need to incrementally build a BST as quickly as possible. That might look like this.

I get a 1 – that’s easy. I’ll have a tree with one node.

2 – OK, that’s bigger than 1, I can make that the new root, and make 1 the left child.

3 – Let’s put that as the right child. So far so good.

4 – Hmm, maybe we could make that the new root, and have the tree rooted at 2 as the left child?

5, 6, 7 – That looks like 1, 2, 3 all over again, we can put those in the right subtree. Now we have a complete BST, looking good!

8 – That’s awkward again, let’s just say it’s the new root again.

9-15 – Looking like 1-7 again…

Maybe you can see the recursive pattern here. This looks like a procedure that produces reasonably balanced trees, and the height is always bounded by O(log(n)). That seems good, right?

It would be acceptable, but only if your map library has just two functions – build the tree, then look up values and never change it again. The problem is, the BST has to support adds and deletes as well. The most common BST types, like red black trees and AVL trees, all have their own invariants, and unfortunately we’re not meeting those standards with our almost-balanced trees. For example, at step 8, we have a root (8), a left subtree of size 7, and an empty right subtree. That’s not a valid red black tree or AVL tree, and you can’t just return that, since that would break the rest of your library. How can we fix this?

Second Attempt: Build Branches Instead

One might think that we could make this work somehow with some clever ordering of insertions to the tree. But the fundamental issue here is that with only one tree, we can never change the root – at the moment we make the newest element the root, the tree must become heavily imbalanced. So perhaps we can instead maintain a bag of branches, and quickly assemble them into a tree when we hit the end of the sequence.

Here, the defining characteristic of a branch would be its composability. Let’s define a branch (called a fragment in the source code) like the trees we had in steps 4 and 8 in the last attempt. In other words, a branch would be a tree with a complete left subtree and an empty right subtree. To merge two branches into a tree, you could put one branch as another branch’s right child. A branch of height n has 2^n nodes.

You could also merge two branches of the same size into a new branch. Here’s one way to do it: to merge X with Y, take the left subtree of branch Y and move it to the right subtree of branch X. Now X becomes a complete binary tree, and you can set it to be the left subtree of Y. This fits our branch definition, while also preserving order in the tree.

Merge branches as tree:
    X       Y        X
   /   +   /   =    / \
  A       B        A   Y

Merge branches as branch:
    X       Y        Y
   /   +   /   =    /
  A       B        X
                  / \
                 A   B
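In code, the two merge operations might look like this (my own Python sketch of the idea; the actual OCaml implementation in Base looks different):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None   # stays None while the node is still a branch root

def merge_as_tree(x: Node, y: Optional[Node]) -> Node:
    # x is a branch holding the smaller keys; hang y (the larger keys) in its
    # empty right slot.
    x.right = y
    return x

def merge_as_branch(x: Node, y: Node) -> Node:
    # x and y are branches of the same height, with x holding the smaller keys.
    x.right = y.left   # x becomes a complete tree
    y.left = x         # which becomes the left subtree of the new, taller branch y
    return y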

Now let’s try again:

1 – One node by itself is a branch.

2 – We could make 2 a branch and merge with 1.

3 – We could have 3 be its own branch. Now we have branches of size 2 and 1.

4 – Let’s merge 4 with 3, and then we have 2 branches of size 2. We could merge those two again into one branch.

5 – That’s a new branch.

6 – Add that to 5’s branch…

That starts to look recursive again. Now we always have a bunch of branches. And when we need to generate the final tree, we could just iteratively merge them together, from small to large. Is that good enough?

We Are Building a Binary Number

It’s not. To see this, we can frame this algorithm a bit more abstractly.

Consider the sizes of our branches. At any point in time, for each size we can have either 0 or 1 branch of that size, because once we have 2, we merge them together. We can visualize the branch building in this table.

# branches of size 4   # branches of size 2   # branches of size 1   # nodes
          0                      0                      1               1
          0                      1                      0               2
          0                      1                      1               3
          1                      0                      0               4
          1                      0                      1               5
          1                      1                      0               6
          1                      1                      1               7

Our branch sizes correspond to the binary representation of the total tree size.

From here, we can see that we’re abstractly incrementing a binary counter. Now the problem is we can have a lot of gaps, or 0s, in the binary number. For example, for 17 nodes, we’ll have a large branch of size 16 and a small branch of 1 node. Now if the sequence terminates, we’ll have to merge those branches into a tree – but that again will be a heavily one-sided tree.

Third and Final Attempt: Keep 2 Branches at Each Level

Since gaps are causing us problems, maybe we could just, like, not have them. And in fact we could. This is hinted at in the code – “using skew binary encoding”. (Although from Wikipedia, skew binary system actually refers to a slightly different definition.)

In this new “binary” encoding, we could use the digits 0, 1 and 2, as opposed to just 0 and 1. Each position in the number would still have the same weight. So for example, 212 = 2*4 + 1*2 + 2 = 12. Here’s how to count in this new number system.

Counting in the new system goes: 1, 2, 11, 12, 21, 22, 111, 112, 121, 122, 211, 212, 221, 222, 1111, …

Basically, to add one, we flip 0 to 1 and 1 to 2, but we flip 2 back to 1 and carry forward. There are never any 0s in any number.

Translating this back to our branch building, that means we don’t merge when we have two branches. We merge when we have three – we merge the older two branches, “carrying” the result forward to the next level, so that we always keep at least one branch at each height.

Let’s convince ourselves that at every point in the process, we can merge all branches and end up with a valid AVL tree (i.e. the algorithm is correct).

Say we are n steps into the branch building process, and we have to make a tree. We can convert n into a string of 1 and 2 in this number system. Starting from the least significant digit, we either have 1 or 2. Together, that gives us either a tree of height 1 or 2. Moving onto the next digit, we again have either 1 or 2 branches of size 2. At the max, we have 3 branches, each of size 2. If we first merge the left two branches into a branch, then merge that with the right branch to a tree, that leaves us with a maximum tree height of 3 (while minimum is 2). At the next level, we can at max have 3 branches of height 3. Similarly, we end up with a tree with height between 3 and 4.

Illustrating the 222 case, with 14 nodes.

Level 1:
13  +  14
->
  14
13

Level 2:
 10  +  12  +  14
09     11     13
->
      12
  10      14
09  11  13

Level 3:
      04          08          12
  02      +   06      +   10      14
01  03      05  07      09  11  13
->
              08
      04              12
  02      06      10      14
01  03  05  07  09  11  13

In this process, we always create trees that preserve order. And after level n, the tree that we end up with always has height n or n+1. That satisfies the AVL tree invariant that the two subtrees’ heights differ by at most 1.
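Putting the pieces together, here is a rough Python sketch of the whole construction, reusing the Node, merge_as_tree and merge_as_branch definitions from the earlier sketch (this is my reading of the algorithm, not the actual Base implementation):

def of_increasing_sequence(seq):
    # levels[h] holds 1 or 2 branches of height h+1; a third branch triggers a
    # merge of the two oldest and a carry into the next level.
    levels = []
    for key in seq:
        carry, h = Node(key), 0
        while carry is not None:
            if h == len(levels):
                levels.append([])
            levels[h].append(carry)
            carry = None
            if len(levels[h]) == 3:
                a, b, c = levels[h]
                levels[h] = [c]          # keep the newest branch at this level
                carry, h = merge_as_branch(a, b), h + 1
    # Final assembly: fold the levels from the smallest branches (largest keys)
    # upward, hanging the tree built so far under the next branch's root.
    tree = None
    for branches in levels:
        if len(branches) == 2:
            branches = [merge_as_branch(branches[0], branches[1])]
        tree = merge_as_tree(branches[0], tree)
    return tree

print(of_increasing_sequence(range(1, 15)).key)  # 8, the root in the 222 example above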

That’s It!

Now we should be reasonably convinced that this algorithm produces valid AVL trees. But a lot of details were glossed over. To be completely rigorous, we would need to formalize the observations into claims and prove them by induction. But I believe this process captures the key ideas already.

We also skipped the time/space complexity discussion. There are two slightly nontrivial details here. First, each insertion could lead to a cascade of branch merges (or carries), so we need to argue that insertion has an amortized cost of O(1). Then, we need to realize that the final tree merging takes O(log(n)) branches, and each tree merge is O(1) as well. As an aside, this number system has a unique representation for each number, which is perhaps not totally obvious.

I am not able to identify the inventor of this algorithm. It doesn’t seem particularly likely that the author of this code was also the inventor.

There are still some aspects of that source file that I don’t quite understand, but is perhaps not closely related. In particular, the invariant is that subtrees have heights differing by at most 2, not 1, like normal AVL trees. Maybe I’ll find out why another day.

What I Learned in Two Years’ Tech Work in Finance

It’s been two years since I started working full time as a software engineer. As I accumulated experience, I became a lot more hesitant to write, because I start to feel that I can’t contribute anything new on top of what everyone else already knows. And I even felt bad for having written some of the old posts, since they now seem quite silly and naive.

In some sense those feelings must reflect a lot of truth, but that still shouldn’t stop me from writing. Perfect is the enemy of good, and if I wait until I know everything, I’ll never write again; hence this post. Random thoughts will be laid out in no particular order.


One mistake that I’ve repeated is to optimize prematurely. For a recent college grad who has done a lot of brain teasers, it’s really tempting to work clever algorithms into the job. But a lot of the time, it’s just unnecessary. In coding competitions, we are only rewarded for writing correct, fast and short programs; nothing else matters. In professional work, we need to add a few more terms to the equation – the cost of human effort (to write, test and review the code), flexibility for future modifications, and simplicity of the solution. A simple solution that gets 90% of the cases right is perhaps even better than a complicated solution that gets 99.99%, if in the former case humans can much more easily understand the failure modes and manually fix things. After all, the alternative is to spend a lot of time debugging when that one edge case happens and breaks the system.

I think this is important enough to deserve an emphasis – simplicity is valuable. A prediction system that gives you a slightly inaccurate number in a predictable way is much better than one that gives you a more accurate magic number that no one understands. In financial markets, complexity creeps in wherever competition is fierce, but the simplicity of many models would probably still be surprising to outsiders (hint: it’s not all machine learning).

There’s another form of short-sighted cleverness, which is to tweak the system very slightly to achieve what I want without learning about the whole system or understanding the full consequences of those tweaks. The smallest diff isn’t necessarily the best fix. Adding a patch that isn’t well thought out and that doesn’t fit in with the rest of the code is just incurring tech debt. Perhaps we could call this under-engineering.

Don’t Pretend You Understand

One thing that is common, perhaps more so among newer folks, is to pretend to understand, whether listening to a conversation or getting answers from teammates. Having been on both sides, I believe that this is a really bad habit. Of course it is only human nature to hide your inexperience, and I’ve also heard criticism about people asking for help before spending a lot of time on the issues. But no, I think a lot of the time, asking questions eagerly is way more productive overall, especially for new teammates.

I say this multiple times to my interns: if a problem can possibly take you half an hour to figure out, and I already know the answer, then the decision here is between one minute of my time versus thirty minutes of yours. Is my time 30x more valuable? I wish! If it’s a tough question and the mentor doesn’t already know the answer, then it seems even sillier for the intern to struggle alone for a long time.

There are times when I’m answering questions from newer folks, and I know that there’s no way they understood some statements I made. However, they were still nodding and reacting as if they did. Invariably they will return in a few days with the same questions. This is counterproductive.

This problem is less common, although perhaps more serious for more experienced people, since the pride has built up. I’ve told myself “this is something I should know by now” multiple times and stopped myself from asking my coworkers. But the truth is no one knows every corner of the system, and everyone knows that.

There is no shame in not knowing things. People expect that. Just ask.

Aligned Incentives

One idiosyncrasy about the finance industry is that a lot of compensation comes in the form of variable and unpredictable year end bonuses, as compared to stock offerings in tech companies. Of course people like certainty. But I think there’s a case for an opaque process of reward in the form of bonuses.

From first principles, employees have their own incentives, and they usually aren’t the same as the company’s goals. Whenever incentives diverge or even conflict, we can get serious issues.

There are countless examples in real life. One example in finance is the reward curve for some hedge funds. Hedge funds are roughly companies that take investor money and help them pick investments. Some hedge funds collect a fixed fee, plus a significant fraction of any gains. That means when the investments increase in value, they make more money. That’s a good incentive, right? The problem is that when the investments lose money, they aren’t affected – they still take the fixed fee, regardless of how much was lost. (This is not entirely accurate, since they will lose clients if they keep losing money.) Therefore the funds will tend to make riskier investments. If your investment is going to make on average 10% a year, you’d rather make 100% this year and lose ~80% next year, as you can collect a much larger fee. This is worse for the clients.

Now let’s look at employee compensation. We want to reward employees in a way that encourages them to help the company make more money (assuming making money is the goal of the company). One thing we can do is to measure the amount of contributions for everyone, and reward accordingly. For example, we could measure hours spent in the office, or number of lines of code written, or survey people for their estimations of their teammates’ contributions, etc. The problem is these are only proxy measurements, and once you start measuring them, people will optimize for the proxies instead of the actual goal. If you measure lines of code written, you’ll encourage verbose and redundant code; if you measure hours in the office, people will stay longer but not necessarily work at the same speed, and so on.

But the problem is fundamental – you can’t measure the actual contribution and hard work, and by measuring proxies you’ll encourage cynical behavior. A fix, if not a complete solution, is through obfuscating the reward function. If I tell you that I’ll give you an unknown amount of money by the end of the year based on How Well You Did™, and let you fill in the rest, then you won’t be encouraged to write bad code, or only focus on projects that had Impact, or other things bad for the firm.

I feel that this has worked quite well in my company. But there are a lot of assumptions for this to work. One is that employees have to be OK with not knowing how much they’ll make. There also needs to be a lot of trust between employees and managers, so that employees can trust that they’ll be evaluated fairly by the end of the year.

And More…

This post is getting long and messy, so maybe I’ll call it a day for now. There are a lot of smaller lessons that come from trading and recruiting. Trading is arguably the best arena to hone one’s rational decision making skills, and interesting stories come up every now and then. Maybe I’ll write a follow up some day.

Thoughts on Fooled by Randomness

Just finished Nassim Nicholas Taleb’s well-known book, Fooled by Randomness. Here are some brief thoughts, in no particular order.

The Birthday Irony

Despite the author’s years working in trading and writing a book on probability, in one of the few cases where he did actual math, he did it wrong. Here’s the original:

If you meet someone randomly, there is a one in 365.25 chance of your sharing their birthday, and a considerably smaller one of having the exact birthday of the same year.

Nassim Nicholas Taleb, Fooled by Randomness

It seems like he was trying to say – on average, there are 365.25 days a year (first order approximation of leap years), so you have a \frac{1}{365.25} chance of meeting someone of the same birthday.

If you do the math though, here’s the actual probability: every four years (365 \times 4+1 = 1461 days), there are 1460 days in which your probability of sharing a birthday is \frac{4}{1461}, and 1 day in which it is \frac{1}{1461}. So, the probability is \frac{1460}{1461} \times \frac{4}{1461} + \frac{1}{1461} \times \frac{1}{1461} \approx \frac{1}{365.44}. That’s far enough from 365.25 that you can’t really say “I just made a first order approximation”.
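A quick numeric check of that expression (plain Python arithmetic):

p = (1460 / 1461) * (4 / 1461) + (1 / 1461) * (1 / 1461)
print(1 / p)  # ~365.44, not 365.25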

To fully understand this error, let’s say there is one extra day in n years, instead of 4. Then the number, instead of 365.25 or 365.44, will be (\frac{365n^2}{(365n+1)^2} + \frac{1}{(365n+1)^2})^{-1}. After taking Taylor series expansions, we get 365 + \frac{2}{n} - \frac{364}{365n^2} + O(n^{-3}), or 365 + \frac{2}{n} + O(n^{-2}), instead of the 365 + \frac{1}{n} that the author had guessed.

Let’s spend a little time to gain intuitions on why it’s 365 + \frac{2}{n} instead of 365 + \frac{1}{n}. Consider Alice and Bob, and suppose a year is exactly 365 days. Then the chance of sharing a birthday is 1 in 365. Now say we add x days to Bob’s calendar only, so Bob’s birthday has 365+x possible choices while Alice still has 365. Then, the probability that they have the same birthday is 1 in 365+x. At this point, it is clear that if we add x days to Alice’s calendar, the chance of sharing a birthday goes down, therefore we know that the author’s estimate of probability is too high. Then, add x to Alice’s calendar. If x is small, we can ignore the probability that their shared birthday is on one of the days in x (that probability is second order). Then, approximately we have the probability of sharing a birthday as \frac{365}{(365+x)^2}, which is close to \frac{1}{365 + 2x}, again ignoring the second order term. Substituting \frac{1}{n} for x, we have arrived at the desired result. The factor 2 comes from the fact that we added a leap day not only to Bob, but also to Alice.

Anyway, on a higher level, the lesson is that you should fully justify your simplifying assumptions, instead of jumping to conclusions.

Wittgenstein’s Ruler

This idea has never explicitly come to my mind, so I thought it was interesting. It says something like if you don’t have a reliable ruler, and you use it against a table, you might be measuring your ruler with the table. One example he mentioned was that some people in finance claimed that a ten sigma event happened. Using the principle – if you measured a ten sigma event, your ruler (mathematical model) is probably seriously flawed.

One takeaway from this is that statistics is merely a language to simplify and describe the real world, the world does not run according to the rules. It would be ridiculous to plot data points under a bell shape, and say that the world is wrong when the new data point doesn’t fit under it.

Another way of saying the same thing is conditional probability.

One way I’ve seen it in real life is the current political situation in Hong Kong. Say there’s a certain probability that one citizen goes nuts and riots in the street, and there’s a certain probability that the government has done something terribly wrong. If you have very few people rioting, then the ruler tells you that those guys are probably at fault. But if you have a majority of citizens supporting the riots or rioting, then those guys become the ruler, and you’re measuring the government.

Think about All Possibilities

One very valid point in the book is that you should think of the world as taking one sample path in infinitely many possibilities. When you evaluate an outcome, you should think of all the things that could have happened. For example, if your friend did something and had a huge success, it doesn’t mean he made a good decision or that you should’ve done the same, or even that you should follow suit. We have only one data point; we don’t know what the probability distribution looks like. Maybe he could have lost it all. When you think about all that could have happened, you will have less jealousy toward the lucky and more sympathy for the unfortunate.

Happiness is Relative

This is a tangential point to randomness, but still important to keep in mind. Given that you have basic human needs fulfilled, your happiness often doesn’t depend on how much you have, but how much more you have compared to those around you. More generally, it’s not the absolute well-being that matters, but the changes. So to be happy, don’t be the medium fish in the big pond, go to the small pond and be a king. If you start out at the top, tough luck, because chances are your status will revert to mean over time.

Limit Your Loss

If there’s one actionable item from the book, it’s to always remember to limit your worst case scenario. Between a steady increase in personal well-being with no risk of going bankrupt, and more income but also a chance of losing everything, you should prefer the former, because eventually the unfortunate thing will happen. That’s ergodicity – any event with a nonzero probability will eventually happen, mathematically.

The Author’s Conspicuous Faults

I believe most readers will often find the author’s comments controversial and provocative, if not arrogant and overgeneralizing. There’s a bunch of stuff he said that is just plain wrong.

He said in the beginning of the book that he didn’t rewrite according to his editor’s suggestions, because he didn’t want to hide his personal shortcomings. But the point of a nonfiction book that is non-autobiographical is not to convey who you are, but to give readers inspiration and positive influence. If you say a bad thing in the book that you believe in, you’re not “being true”, you’re a bad influence! I don’t know what exactly he was referring to, but I suspect it includes the following points.

He’s exceptionally arrogant, way off the charts. You’ll see him saying things like “I know nothing about this, despite having read a lot into it” and “I know nothing, but I am the person that knows the most about knowing nothing”. He just couldn’t write one sentence that ends in a defeated tone. Before he puts a period down, he must add another clause to the sentence to remind the readers that he’s just being humble, he didn’t mean it. It’s quite funny when you look for it.

He also loves stereotyping people to the extreme. He would say things like “journalists are born to be fooled by randomness”, “MBAs don’t know what they’re doing”, “company executives don’t have visible skills” and “economists don’t understand this whatever concept”. One thing he said in the beginning of the book was that he didn’t need data to back up his claims, because he’s only doing “thought experiments”. I think he mistook that for “unfounded personal opinions”. When you make claims about journalists and economists being dumb, that’s hardly a thought experiment. You absolutely need to back up your claims.

Overall, this book has some good ideas, but not that many. If you already have a decent background in math, maybe you can skip this book without harm.

Fast RNG in an Interval – Fast Random Integer Generation in an Interval

Just read this interesting little paper recently. The original paper is already quite readable, but perhaps I can give a more readable write-up.

Problem Statement

Say you want to generate a random number in range [0, s), but you only have a random number generator that gives you a random number in range [0, 2^L) (inclusive, exclusive). One simple thing you can do is to first generate a number x, divide that by s and take the remainder. Another thing you can do is to scale the range down by something like this: x * (s / 2^L), with some floating point math, casting, whatever works. Both ways will give you a resulting integer in the specified range.

But these are not “correct”, in a sense that they don’t generate random integers with a uniform distribution. Say, s = 3 and 2^L = 4, then you will always end up with one number being generated with probability 1/2, the other two numbers 1/4. Given 4 equally likely inputs, you just cannot convert that to 3 cases with equal probability. More generally, these simple approaches cannot work when s is not a power of 2.

First Attempt at Fixing Statistical Biases

To fix that, you will need to reject some numbers and try again. Like in the above example, when you get the number 3, you shuffle again, until you get any number from 0 to 2. Then, all outcomes are equally likely.

More generally, you need to throw away 2^L mod s numbers, so that the rest will be divisible by s. Let’s call that number r, for remainder. So you can throw away the first r numbers and use the first approach of taking remainder, as shown in this first attempt (pseudocode):

r = (2^L - s) mod s // 2^L is too large, so we subtract s
x = rand()
while x < r do
  x = rand()
return x mod s

That’s a perfectly fine solution, and in fact it has been used in some popular standard libraries (e.g. GNU C++). However, division is a slow operation compared to others like multiplication, addition and branching, and in this function we are always doing two divisions (mod). If we can somehow cut down on our divisions, our function may run a lot faster.

Reducing number of divisions

It turns out we can do just that, with just a simple twist. Instead of getting rid of the first r numbers, we get rid of the last r numbers. And we can verify whether x is in the last r numbers like so:

x = rand ()
x_mod_s = x mod s
while x - x_mod_s > 2^L - s do
  x = rand ()
  x_mod_s = x mod s
return x_mod_s

The greater-than comparison on line 3 is a little tricky. It’s mathematically the same as comparing x - x_mod_s + s with 2^L, but we do this instead because you can’t express 2^L with L number of bits. So basically, the check is saying if the next multiple of s after x is larger than 2^L, then x is in the last r numbers and must be thrown away. We never actually calculate r, but with a little cleverness we manage to do the same check.

How many divisions are we doing here? Well, at least one on line 2, and possibly 0 or many more, depending on how many times the loop is run. Since we’re rejecting less than half of the possible outcomes (we’re at least keeping s and at most rejecting s - 1), we have at least 1/2 chance of breaking out of the loop each time, which means the expected number of loops is at most 1 (0 * 1/2 + 1 * 1/4 + 2 * 1/8 … = 1). So we know that the expected number of divisions is at worst 2, equal to that of the previous attempt. But most of the time, the expected number is a lot closer to 1 (e.g. when s is small), so this can theoretically be almost a 2x speed up.

So that’s pretty cool. But can we do even better?

Finally, Fast Random Integer

Remember other than taking remainders, there’s also the scaling approach x * (s / 2^L)? It turns out if you rewrite that as (x * s) / 2^L, it becomes quite efficient to compute, because computers can “divide” by a power of two by just chopping off bits from the right. Plus, a lot of hardware has support for getting the full multiplication results, so we don’t have to worry about x * s overflowing. In the approach using mod, we inevitably need one expensive division, but here we don’t anymore, due to quirks of having a denominator of power of 2. So this direction seems promising, but again we have to fix the statistical biases.

So let’s investigate how to do that with our toy example of s = 3, 2^L = 4. Let’s look at what happens to all possible values of x.

x    s * x    (s * x) / 2^L    (s * x) mod 2^L
0      0            0                 0
1      3            0                 3
2      6            1                 2
3      9            2                 1

Essentially we have s intervals of size 2^L, and each interval maps to one single unique outcome. In this case, [0,4) maps to 0, [4, 8) maps to 1, and [8, 12) maps to 2. From the third column, we have two cases mapping to 0, and we’d like to get rid of one of them.

Note that the fundamental reason behind this uneven distribution is that 2^L is not divisible by s, so any contiguous range of 2^L numbers will contain a variable number of multiples of s. That means we can fix that by rejecting r numbers in each range! More specifically, if we reject the first r numbers in each interval, then each interval will contain the same number of multiples of s. In the above example, the mapping becomes [1, 4) maps to 0, [5, 8) maps to 1, and [9, 12) maps to 2. Fair and square!

Let’s put that in pseudocode:

r = (2^L - s) mod s
x = rand ()
x_s = x * s
x_s_mod = lowest_n_bits x_s L // equivalent to x_s mod 2^L
while x_s_mod < r do
  x = rand ()
  x_s = x * s
  x_s_mod = lowest_n_bits x_s L
return shift_right x_s L // equivalent to x_s / 2^L

Now that would work, and it would take exactly 1 expensive division on line 1 to compute r every single time. That beats both of the above algorithms! But wait, we can do even better! Since r < s, we can first check x_s_mod against s, and only compute r when x_s_mod < s (if x_s_mod >= s, it can’t possibly be below r). This is the algorithm proposed in the paper. It looks something like this:

x = rand ()
x_s = x * s
x_s_mod = lowest_n_bits x_s L
if x_s_mod < s then
  r = (2^L - s) mod s
  while x_s_mod < r do
    x = rand ()
    x_s = x * s
    x_s_mod = lowest_n_bits x_s L
return shift_right x_s L
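As a concrete illustration, here is the same algorithm in Python for L = 64 (my sketch, not the paper’s code), assuming a rand64() helper that returns a uniform 64-bit integer:

import random

def rand64():
    # Stand-in for a uniform 64-bit generator.
    return random.getrandbits(64)

def bounded_rand(s):
    # Uniform integer in [0, s) for L = 64. Python ints are arbitrary
    # precision, so the full multiplication is free here; in C you would use
    # the 128-bit product of two 64-bit numbers instead.
    x = rand64()
    x_s = x * s
    x_s_mod = x_s & ((1 << 64) - 1)   # x_s mod 2^64
    if x_s_mod < s:
        r = (1 << 64) % s             # same as (2^64 - s) mod s
        while x_s_mod < r:
            x = rand64()
            x_s = x * s
            x_s_mod = x_s & ((1 << 64) - 1)
    return x_s >> 64                  # x_s / 2^64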

Now the number of expensive divisions is either 0 or 1, with some probability depending on s and 2^L. This looks clearly faster than the other algorithms, and experiments in the paper confirmed that. But as often is the case, performance comes at the cost of less readable code. Also in this case, we’re relying on hardware support for full multiplication results, so the code is less portable and in reality looks pretty low level and messy. According to the author’s blog, Go and Swift have adopted this, deciding the tradeoff worthwhile, and C++ may also adopt it soon.

How Many Divisions Exactly?

There’s still one last part we haven’t figured out – we know the expected number of divisions is between 0 and 1, but what exactly is it? In other words, how many multiples of s, in the range [0, s * 2^L), have a remainder less than s when divided by 2^L? To people with more number theory background, this is probably obvious. But starting from scratch, it can take quite a lot of work to prove, so I’ll just sketch the intuitions.

It’s a well known fact that if p and q are co-prime (no common factors other than 1), then the numbers { 0, p mod q, 2p mod q, 3p mod q ... (q-1) p mod q } will be exactly 0 to q-1. This is because if there is any repeated number, then we have a * p mod q = b * p mod q (assuming a > b), which indicates (a - b) * p mod q = 0. But we know that 0 < a - b < q, and p has no common factor with q, so if we multiply those two together, it cannot be a multiple of q. So it’s impossible to have duplicates, and multiples of p will evenly distribute among [0, q) when taken mod q.

Now if s and 2^L are co-prime, there will be exactly s multiples of s whose remainders fall in the range 0 to s - 1. That means the expected number of divisions in this case is s / 2^L.

If they aren’t co-prime, that means s is divisible by some power of 2. Say s = s' * 2^k, where s' is odd. Then, s * 2^(L-k) = s' * 2^L will be 0 mod 2^L. So the multiples of s, taken mod 2^L, will go back to 0 after 2^(L-k) steps. And you have 2^k iterations of that cycle. So if you go through the final count of how many multiples land on each remainder, it goes 2^k, followed by 2^k - 1 zeros, rinse and repeat. How many are below s? You have s' nonzero counts, each one equal to 2^k – the total is again, unsurprisingly, s. So the expected number of divisions is still indeed s / 2^L.
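A tiny brute-force check of this count, for illustration only (small s and L so the loop stays cheap):

def rejection_probability(s, L):
    # Fraction of x in [0, 2^L) for which (x * s) mod 2^L < s,
    # i.e. how often we hit the expensive division.
    hits = sum(1 for x in range(1 << L) if (x * s) % (1 << L) < s)
    return hits / (1 << L)

for s in (3, 5, 6, 12):
    assert rejection_probability(s, 8) == s / 2 ** 8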

Final Thoughts

Earlier I said each time you need to throw away 2^L mod s numbers to make an even distribution, but that’s not completely necessary. For example, if s = 5 and 2^L = 8, you don’t have to fully reject 3 cases. In fact, you can save up that little bit of randomness for the next iteration. In the next iteration, say you get into 1 of the 3 cases again. Then, combined with the 3 cases you saved up last time, you are now in 9 equally likely events. If you are in the first 5, then you can safely return that value without introducing biases. However, this is only useful when generating the random bits is really expensive, which is totally not the case in non-cryptographic use cases.

One last note – we have established that the expected number of divisions is s / 2^L. As s gets close to 2^L, it seems like our code can become slower. But I think that’s not necessarily the case, because the time division takes is probably variable as well, if the hardware component uses any sort of short-circuiting at all. When s is close to 2^L, 2^L mod s is essentially one or two subtractions plus some branching, which can theoretically be done really fast. So, given my educated guess/pure speculation, s / 2^L growing isn’t a real concern.

Catenary Inversion: Curves of Sagrada Familia

Sagrada Familia is stunning and beautiful. If you ever go visit, don’t miss out on the bottom level: there are exhibitions about the constructions and history of this masterpiece by Gaudi. When I visited a while ago, I was surprised to find a model that explained how the curves of the arches of the church were designed, and it was really cool.

To start out, imagine you’re building the roof of a house. Usually they are like this: /\. Modern houses also look like this: Π. The point is, a flat roof is hard to support, so the older houses are all angled. If you hold a dumbbell horizontally to your side, you’ll feel tired a lot faster than holding it angled upwards. This is because materials in general are a lot better at handling compression than bending forces. By holding your arm at an angle, you are supporting part of the weight by compressing your arm along its own direction, reducing the amount of force perpendicular to that direction. Back to the roof: a flat one is fine if made by concrete and steel, but if we use a long piece of wood, maybe not.

Anyway, using the same materials, an arch shaped building will last much longer than a flat topped one, simply because the bricks are subject to bending forces to a lesser extent. The problem then becomes: how can we find the shape that minimizes bending force at every point on the arch (to zero, actually)?

If you remember high school physics, we can dive right in. Say we draw an arch like this: ∩, and we pick any brick on it (say on the left half). Let’s pretend this is the ideal curve, such that there is no bending force anywhere. The little brick you picked has a tiny bit of mass, and the slope of the arch changes a little bit before and after it. There are three forces acting on it: gravity pointing down, the force from the brick on its left supporting it, and the force from the brick on its right. The latter two have slightly different slopes, and the three must add up to 0 (otherwise the arch would collapse; note that there are no forces perpendicular to the arch between bricks, which is the whole point). Oh no, we have a differential equation! It’s been 2 days since I took that exam, I have forgotten everything! What do I do?

The Catenary Curve

Consider this seemingly unrelated physical problem: take a string with beads on it at regular intervals, and hold the two ends loosely so it forms a U shape. What properties does this shape satisfy? Again, consider a single bead (on the left side, but as they say: WLOG). There are three forces acting on this bead: gravity pulling it down, the force from the bead on its left pulling it up, and the force from the bead on its right. You probably saw this coming: these three forces are the same as the ones we just talked about on the bricks, but pointing in exactly the opposite directions! (I am too lazy to draw a picture, but you can try to get the idea.) What’s more, these three forces also add up to 0, since the beads are not moving. This means we can take the shape of the string, turn it upside down, and make a perfect arch out of it. To see why, imagine drawing the perfect string curve on paper, along with the forces. Now rotate the paper 180 degrees and negate all three forces on each bead: (1) the three forces still sum to 0; (2) gravity still points down with the right magnitude; (3) the remaining two forces still balance against the forces on the neighboring bricks, since those are negated too. Hence, this curve satisfies all of our requirements. We have found the answer! If I recall correctly, architects actually used this method to draw the curves for blueprints. The shape of beads on a string is called the catenary curve, and the solution has the form of the hyperbolic cosine, which is essentially a sum of two exponentials with equal but opposite exponents.
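For reference, the textbook form (a standard result, not something derived in the exhibition): a uniform string hanging under gravity satisfies y'' = (1/a) * sqrt(1 + (y')^2), and the solution is y = a * cosh(x/a) = (a/2) * (e^(x/a) + e^(-x/a)), where a is the horizontal tension divided by the weight per unit length. Flip it over and the ideal arch is y = -a * cosh(x/a) plus a constant.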

As a personal anecdote, a physics professor of mine once called a problem “physically solved” after he wrote out the equations that uniquely determine the answer, because the rest can be done by mathematics, either analytically or numerically. This can lead to very uninteresting tests, for example simply writing out the Maxwell equations for every single EM problem. In our case, the arch problem is not only “physically solved” because we can derive the curve from the same differential equation as the catenary, but also in the sense that we can “physically solve” it using beads, a string, and actual physics.

Theoretically Or Practically Fast: Two Types of Hash Tables

When people say a program is fast, they usually mean it takes a short time to execute; but when computer scientists say an algorithm is efficient, they mean the growth rate of its run time with respect to the input size is slow. These two notions often roughly coincide, in that a more efficient algorithm usually yields a faster program, though not always. I recently learned about two types of hash tables that show this difference: Cuckoo hashing and Robin Hood hashing. Even though in theory Cuckoo is more efficient, Robin Hood is in practice a lot faster.

Vanilla Hash Tables

Before going into details, let’s revisit how a hash table works. Simple thing: you have a hash function that takes a key and yields a random-looking (but deterministic) number as the “hash”, and you use this hash to figure out where to put the item in the table. If you want to take something out, just compute the hash again and you know exactly where the item is stored.

That’s not the whole story though. What if you want to insert item A into spot 1, but item B is already there? You can’t just throw away item B, what are you, a savage? There are multiple ways to deal with this problem, which is called a collision. The easiest is chaining: make each slot a list of items, so inserting is just appending to the corresponding list. Sure, that works, but now your code is a lot slower, because each operation takes one lookup to get the list and another lookup to get the contents of the list. This is really bad because the table and the lists will not line up nicely in cache, and cache misses are BAD BAD things.
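To make that concrete, here’s a minimal chained table in Python (my own toy sketch, not any particular library’s implementation):

class ChainedHashTable:
    # Each slot holds a list ("chain") of (key, value) pairs.
    def __init__(self, num_slots=8):
        self.slots = [[] for _ in range(num_slots)]

    def _slot(self, key):
        return hash(key) % len(self.slots)

    def insert(self, key, value):
        chain = self.slots[self._slot(key)]        # lookup #1: find the chain
        for i, (k, _) in enumerate(chain):         # lookup #2: walk the chain
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

    def get(self, key):
        for k, v in self.slots[self._slot(key)]:
            if k == key:
                return v
        raise KeyError(key)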

Linear Probing

To improve cache performance, people started to throw out bad ideas. One bad idea that kind of worked was: if this slot is occupied, just take the next one! If the next one is taken, take the next next one! This is called linear probing, and it is pretty commonly used. Now we’ve solved the caching problem because we only access nearby memory slots, so our programs run a lot faster. Why is this bad? Well, it turns out that as the table fills up, the probe count can get really high. The probe count is how many spots need to be checked before an item is found, or equivalently, how far away the item is from where it should be. Imagine that out of 5 spots, the first 3 are occupied. Then the next insertion has a 60% chance of collision, and if it does collide, the probe count is 2 on average (probe count = 3 if the hashed index is the first slot, 2 if the second, 1 if the third). And after a collision, the occupied segment grows by one, making the next insertion even worse.
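In code, linear-probing insertion is roughly this (again a toy sketch of mine; a real table would also track the load factor and resize, which I’m leaving out):

def probing_insert(table, key, value):
    # `table` is a plain list of slots, each either None or a (key, value) pair.
    # Returns the probe count: how far the item ends up from its home slot.
    n = len(table)
    home = hash(key) % n
    for probe in range(n):
        j = (home + probe) % n
        if table[j] is None or table[j][0] == key:
            table[j] = (key, value)
            return probe
    raise RuntimeError("table is full")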

Robin Hood to the Rescue

This brings us to Robin Hood hashing. It is actually very similar to linear probing, but with one simple trick. Say in the example above we have item A sitting at slot 1, B at 2, C at 3, like this: [A B C _ _], and we have another collision trying to insert D at slot 1. Instead of placing D like [A B C D _], we arrange them as [A D B C _], so the probe counts become [0 1 1 1 _] instead of [0 0 0 3 _]. Now there are two things to understand: (1) What exactly happened? (2) Why is this better?

To answer (1), imagine we’re inserting D at slot 1. We see that A is there, so just like linear probing, we move on. Now B is there with probe count 0, while D already has probe count 1. That’s not fair! How dare you have a lower probe count than I do! So D kicks out B, and now B has to find a place to live. It looks at C, yells, and kicks C out; now C has to find a place, which happens to be slot 4. In other words, during probing, if the item already in the slot has a lower probe count than the item being inserted, the two swap and probing continues with the displaced item.
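Roughly, in the same toy style as the linear-probing sketch above (duplicate keys and deletion are ignored for brevity):

def robin_hood_insert(table, key, value):
    # Like linear probing, but whoever is further from its home slot keeps the spot.
    n = len(table)
    entry, probe = (key, value), 0
    for _ in range(n):
        j = (hash(entry[0]) % n + probe) % n
        if table[j] is None:
            table[j] = entry
            return
        resident = table[j]
        resident_probe = (j - hash(resident[0]) % n) % n
        if resident_probe < probe:          # "That's not fair!": swap and keep probing
            table[j], entry = entry, resident
            probe = resident_probe
        probe += 1
    raise RuntimeError("table is full")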

But (2) why is this better? If you think about it, the total probe count (which is 3) is unchanged, so the average probe count is the same as with plain linear probing. It should have exactly the same performance! The reason this approach is superior is that the highest probe count is much lower; in this case it drops from 3 to 1. If we can keep the maximum probe count small, then all probing can be done with almost no cache misses, which makes Robin Hood hash tables really fast. Having a low maximum probe count also solves a really painful problem in linear probing: when you remove an item from the table, you leave a gap. When we look for items, we can’t just declare them missing the first time we probe an empty spot, because maybe our item is stored further ahead and this spot was simply vacated later. With the maximum probe count really low in Robin Hood hashing, we can say “I’m just going to look at these n spots, and if I can’t find it, it ain’t there,” which is nice and simple. (It also enables shift-back deletion instead of tombstones, which is another performance boost, but I don’t want to get too deep.)

A Theoretically Better Solution

Even though the highest probe count is small, it still grows as we put more items into the hash table (the probe count goes up as the load factor goes up). What if I told you I don’t want it to grow at all? Meet Cuckoo hashing, which has a maximum probe count of 1.

The algorithm is slightly more complicated. Instead of one table, we have two tables, each with its own hash function, chosen randomly from the space of all hash functions (not practical, but close enough). When we look up an item, we just compute the hash for the first and the second table and look at those two spots. Done!

I mean, sure, but that’s the easy part: how do you insert? What if both spots are full? That’s where the name cuckoo comes from. You pick one of the two spots, kick that guy out, and put your item there. Then, for the newly evicted guy, you go to its other available spot, kick out whoever is there, and so on and so forth. If you end up in a loop (or spend too long and decide to give up), you pick two new hash functions and rebuild the hash table. As long as at least half of the slots are empty, the whole operation is highly likely to take constant time in expectation. I don’t have any simple intuition for why this is true; if you do, please let me know.
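Here’s a stripped-down sketch (mine, not from any particular paper or library): the two hash functions are just Python’s built-in hash salted differently, which is nowhere near “chosen randomly from all hash functions”, and instead of picking new functions and rebuilding, it simply gives up after too many kicks.

class CuckooHashTable:
    def __init__(self, num_slots=16, max_kicks=32):
        self.tables = [[None] * num_slots, [None] * num_slots]
        self.num_slots = num_slots
        self.max_kicks = max_kicks

    def _spot(self, which, key):
        # Stand-in for two independent hash functions, one per table.
        return hash((which, key)) % self.num_slots

    def get(self, key):
        for which in (0, 1):                               # at most two lookups, ever
            entry = self.tables[which][self._spot(which, key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        raise KeyError(key)

    def insert(self, key, value):
        entry, which = (key, value), 0
        for _ in range(self.max_kicks):
            i = self._spot(which, entry[0])
            if self.tables[which][i] is None:
                self.tables[which][i] = entry
                return
            # Kick out the occupant; it must move to its spot in the other table.
            self.tables[which][i], entry = entry, self.tables[which][i]
            which = 1 - which
        raise RuntimeError("too many kicks; a real table would pick new hash functions and rebuild")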

Theoretically, Cuckoo hashing beats Robin Hood because the worst-case lookup time is constant, unlike in Robin Hood. Why is it still slower than Robin Hood in practice? As it turns out, the constant is both a blessing and a curse. Looking up a missing item always takes two lookups; even looking up an existing item takes 1.5 lookups on average (an item is roughly equally likely to live in either table, so about half the time the first spot misses). To make matters worse, since the two spots are spatially uncorrelated, they usually cause two cache misses. Once again, cache performance makes a big difference in overall speed.

Having said all that, performance in real life is a very complicated matter. Cache performance is just one of the unpredictable components. There are also various compiler optimizations, chip architectures, threading/synchronization issues, and language designs that can affect performance, even given the same algorithm. Writing a fast program is all about profiling, fine-tuning, and finding the balance*.

*My coworker once said, “At the end of almost any meeting, you can say ‘it’s all about the balance’, and everyone will agree with you.”