Building an AVL Tree From a Sorted Sequence in One Pass

Recently I came across the function Map.of_increasing_sequence in OCaml's Base library. It might sound like a very simple and common function, but the implementation is actually quite cool. Let's dive in. (Spoiler: it's related to a weird number system.)

First Impressions

A sequence is like an iterator – you can either get the next value or reach the end. A map is an immutable AVL tree of key-value pairs. An AVL tree is a binary search tree with two properties – each node is larger than everything in its left subtree and smaller than everything in its right subtree, and at every node the heights of the two subtrees differ by at most 1. From an algorithmic standpoint, building a balanced BST from a sorted array is trivial: simple recursion gives O(n) time and O(log(n)) space excluding the return value. But you can't do that with a sequence, since sequences don't permit random access. We could always turn the sequence into an array first, but that would require O(n) extra space. Can we do better?

It turns out the library function is implemented with only one pass through the sequence, using only O(log(n)) extra space. The code had almost no documentation, and I couldn't find any description online (although I didn't try very hard), so here I'll attempt to motivate and derive the algorithm.

First Attempt: Try to Build a Tree

Imagine you are handed one number at a time, in increasing order, and you need to incrementally build a balanced BST as quickly as possible. That might look like this.

I get a 1 – that’s easy. I’ll have a tree with one node.

2 – OK, that’s bigger than 1, I can make that the new root, and make 1 the left child.

3 – Let’s put that as the right child. So far so good.

4 – Hmm, maybe we could make that the new root, and have the tree rooted at 2 as the left child?

5, 6, 7 – That looks like 1, 2, 3 all over again, we can put those in the right subtree. Now we have a complete BST, looking good!

8 – That’s awkward again, let’s just say it’s the new root again.

9-15 – Looking like 1-7 again…

Maybe you can see the recursive pattern here. This looks like a procedure that produces reasonably balanced trees, and the height is always bounded by O(log(n)). That seems good, right?

It would be acceptable if your map library had only two functions – build the tree, then look up values and never change it again. The problem is that the BST also has to support inserts and deletes. The most common balanced BSTs, like red-black trees and AVL trees, each have their own invariant, and unfortunately our almost-balanced trees don't meet those standards. For example, at step 8 we have a root (8), a left subtree of size 7, and an empty right subtree. That's not a valid red-black tree or AVL tree, and you can't just return it, since that would break the rest of your library. How can we fix this?

Second Attempt: Build Branches Instead

One might think that we could make this work somehow with some clever ordering of insertions to the tree. But the fundamental issue here is that with only one tree, we can never change the root – at the moment we make the newest element the root, the tree must become heavily imbalanced. So perhaps we can instead maintain a bag of branches, and quickly assemble them into a tree when we hit the end of the sequence.

Here, the defining characteristic of a branch would be its composability. Let's define a branch (called a fragment in the source code) like the trees we had in steps 4 and 8 of the last attempt. In other words, a branch is a tree with a complete left subtree and an empty right subtree. To merge two branches into a tree, you can put one branch as the other branch's right child. A branch of height n (counting a single node as height 1) contains 2^(n-1) nodes, so branch sizes are always powers of two.

You could also merge two branches of the same size into a new branch. Here’s one way to do it: to merge X with Y, take the left subtree of branch Y and move it to the right subtree of branch X. Now X becomes a complete binary tree, and you can set it to be the left subtree of Y. This fits our branch definition, while also preserving order in the tree.

Merge branches as tree:
    X       Y        X
   /   +   /   =    / \
  A       B        A   Y
                      /
                     B

Merge branches as branch:
    X       Y        Y
   /   +   /   =    /
  A       B        X
                  / \
                 A   B
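
To make the two pictures concrete, here is a minimal OCaml sketch of the two merges, using an ordinary unbalanced node type. The names (tree, merge_as_tree, merge_as_branch) are mine for illustration, not the ones used in the Base source.

type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* A branch is Node (complete_left_subtree, value, Leaf). *)

(* Hang branch y as the right child of branch x; the result is a tree.
   Assumes every element of x is smaller than every element of y. *)
let merge_as_tree x y =
  match x, y with
  | Node (a, xv, Leaf), Node (_, _, Leaf) -> Node (a, xv, y)
  | _ -> invalid_arg "expected two branches"

(* Move y's left subtree under x to complete it, then hang x as y's new
   left subtree; the result is again a branch, twice the size. *)
let merge_as_branch x y =
  match x, y with
  | Node (a, xv, Leaf), Node (b, yv, Leaf) -> Node (Node (a, xv, b), yv, Leaf)
  | _ -> invalid_arg "expected two branches"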

Now let’s try again:

1 – One node by itself is a branch.

2 – We could make 2 a branch and merge with 1.

3 – We could have 3 be its own branch. Now we have branches of size 2 and 1.

4 – Let’s merge 4 with 3, and then we have 2 branches of size 2. We could merge those two again into one branch.

5 – That’s a new branch.

6 – Add that to 5’s branch…

That starts to look recursive again. Now we always have a bunch of branches. And when we need to generate the final tree, we could just iteratively merge them together, from small to large. Is that good enough?

We Are Building a Binary Number

It’s not. To see this, we can frame this algorithm a bit more abstractly.

Consider the sizes of our branches. At any point in time, we have either 0 or 1 branch of each size, because as soon as we have 2 of the same size, we merge them together. We can visualize the branch building in this table.

# branches of size 4 | # branches of size 2 | # branches of size 1 | # nodes
          0          |           0          |           1          |    1
          0          |           1          |           0          |    2
          0          |           1          |           1          |    3
          1          |           0          |           0          |    4
          1          |           0          |           1          |    5
          1          |           1          |           0          |    6

Our branch sizes correspond to the binary representation of the total tree size.

From here, we can see that we’re abstractly incrementing a binary counter. Now the problem is we can have a lot of gaps, or 0s, in the binary number. For example, for 17 nodes, we’ll have a large branch of size 16 and a small branch of 1 node. Now if the sequence terminates, we’ll have to merge those branches into a tree – but that again will be a heavily one-sided tree.

Third and Final Attempt: Keep 2 Branches at Each Level

Since gaps are causing us problems, maybe we could just, like, not have them. And in fact we could. This is hinted at in the code – “using skew binary encoding”. (Although from Wikipedia, skew binary system actually refers to a slightly different definition.)

In this new "binary" encoding, we use the digits 0, 1 and 2, as opposed to just 0 and 1. Each position in the number still has the same weight as in binary. So for example, 212 = 2*4 + 1*2 + 2*1 = 12. Here's how to count in this new number system.

4s digit | 2s digit | 1s digit | value
    0    |     0    |     1    |   1
    0    |     0    |     2    |   2
    0    |     1    |     1    |   3
    0    |     1    |     2    |   4
    0    |     2    |     1    |   5
    0    |     2    |     2    |   6
    1    |     1    |     1    |   7

Counting in the new system

Basically, to add one, we flip 0 to 1 and 1 to 2, but we flip 2 back to 1 and carry forward. There are never any 0s in any number.

Translating this back to our branch building, that means we don't merge as soon as we have two branches of the same size. We merge when we have three – the two older branches get merged and "carried" forward to the next level, and we always keep one branch at each level.
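
Here is a rough sketch of that step in OCaml, reusing merge_as_branch from the earlier sketch. The representation (a list of levels from smallest to largest, each holding one or two branches, newest first) is my own illustration, not how Base actually stores its fragments.

(* Insert one new (largest-so-far) element into the collection of branches. *)
let add levels x =
  let rec carry branch = function
    | [] -> [ [ branch ] ]                       (* new most significant digit: 1 *)
    | [ b1 ] :: rest -> [ branch; b1 ] :: rest   (* digit 1 becomes 2 *)
    | [ b2; b1 ] :: rest ->
      (* digit 2 becomes 1: keep the newest branch at this level, merge the
         two older ones (b1 holds the smallest keys) and carry the result *)
      [ branch ] :: carry (merge_as_branch b1 b2) rest
    | _ -> assert false
  in
  carry (Node (Leaf, x, Leaf)) levels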

Let’s convince ourselves that at every point in the process, we can merge all branches and end up with a valid AVL tree (i.e. the algorithm is correct).

Say we are n steps into the branch-building process and we have to make a tree. We can write n in this number system as a string of 1s and 2s. Starting from the least significant digit, we have either 1 or 2 single-node branches; together, they give us a tree of height 1 or 2. Moving on to the next digit, we have either 1 or 2 branches of size 2, plus the tree we just built – at most 3 pieces in total. If there are two branches, we first merge them into one bigger branch; either way, we then attach the smaller tree as the branch's right child, which leaves us with a tree of height at most 3 (and at least 2). At the next level, we again have at most 3 pieces, and similarly end up with a tree of height 3 or 4.

Illustrating the 222 case, with 14 nodes.
Level 1:
13 + 14 
-------
  14
13

Level 2:
 10  +  12  +  14
09     11     13
-----------------
      12
  10      14
09  11  13

Level 3:
      04          08          12
  02      +   06      +   10      14
01  03      05  07      09  11  13
------------------------------------
              08
      04              12
  02      06      10      14
01  03  05  07  09  11  13  

In this process, we always create trees that preserve order. And after processing level n, the tree we end up with always has height n or n+1. That satisfies the AVL invariant that the two subtrees' heights differ by at most 1.
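
Continuing the same sketch, the final assembly is then a fold over the levels from smallest to largest, hanging the tree built so far (which holds the largest keys) as the right child of that level's branches, merged first if there are two of them:

let set_right branch tree =
  match branch with
  | Node (complete, v, Leaf) -> Node (complete, v, tree)
  | _ -> invalid_arg "expected a branch"

let finish levels =
  List.fold_left
    (fun tree level ->
      match level with
      | [ b ] -> set_right b tree
      | [ b2; b1 ] -> set_right (merge_as_branch b1 b2) tree
      | _ -> assert false)
    Leaf levels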

That’s It!

Now we should be reasonably convinced that this algorithm produces a valid AVL tree. But there were a lot of details that were glossed over. To be completely rigorous, we would need to formalize the observations into claims, and prove them by induction. But I believe this process captures the key ideas already.

We also skipped the time/space complexity discussion. There are two slightly nontrivial details here. First, each insertion could lead to a cascade of branch merges (or carries), so we need to argue that insertion has an amortized cost of O(1). Then, we need to realize that the final tree merging takes O(log(n)) branches, and each tree merge is O(1) as well. As an aside, this number system has a unique representation for each number, which is perhaps not totally obvious.

I am not able to identify the inventor of this algorithm. It doesn’t seem particularly likely that the author of this code was also the inventor.

There are still some aspects of that source file that I don't quite understand, but they are perhaps not closely related. In particular, the invariant there is that subtrees have heights differing by at most 2, not 1 as in textbook AVL trees. Maybe I'll find out why another day.

What I Learned in Two Years’ Tech Work in Finance

It's been two years since I started working full time as a software engineer. As I accumulated experience, I became a lot more hesitant to write, because I started to feel that I can't contribute anything new on top of what everyone else already knows. And I even felt bad for having written some of the old posts, since they now seem quite silly and naive.

In some sense those feelings must reflect a lot of truth, but that still shouldn’t stop me from writing. Perfect is the enemy of good, and if I wait until I know everything, I’ll never write again; hence this post. Random thoughts will be laid out in no particular order.

Over-engineering

One mistake that I've repeated is optimizing prematurely. As a recent college grad who has done a lot of brain teasers, it's really tempting to work clever algorithms into the job. But a lot of the time, it's just unnecessary. In coding competitions, we are only rewarded for writing correct, fast and short programs; nothing else matters. In professional work, we need to add a few more terms to the equation – the cost of human effort (to write, test and review the code), flexibility for future modifications, and the simplicity of the solution. A simple solution that gets 90% of the cases right can be better than a complicated solution that gets 99.99%, if humans can much more easily understand the failure modes and manually fix things in the former case. After all, the alternative is to spend a lot of time debugging when that one edge case happens and breaks the system.

I think this is important enough to deserve an emphasis – simplicity is valuable. A prediction system that gives you a slightly inaccurate number in a predictable way is much better than one that gives you a more accurate magic number that no one understands. In financial markets, complexity creeps in wherever competition is fierce, but the simplicity of many models would probably still be surprising to outsiders (hint: it’s not all machine learning).

There’s another form of short-sighted cleverness, which is to tweak the system very slightly to achieve what I want without learning about the whole system or understanding the full consequences of those tweaks. The smallest diff isn’t necessarily the best fix. Adding a patch that isn’t well thought out and that doesn’t fit in with the rest of the code is just incurring tech debt. Perhaps we could call this under-engineering.

Don’t Pretend You Understand

One thing that is common, perhaps more so among newer folks, is to pretend to understand, whether listening to a conversation or getting answers from teammates. Having been on both sides, I believe that this is a really bad habit. Of course it is only human nature to hide your inexperience, and I've also heard criticism of people asking for help before spending much time on the issue themselves. But no, I think a lot of the time, asking questions eagerly is way more productive overall, especially for new teammates.

I say this multiple times to my interns: if a problem can possibly take you half an hour to figure out, and I already know the answer, then the decision here is between one minute of my time versus thirty minutes of yours. Is my time 30x more valuable? I wish! If it’s a tough question and the mentor doesn’t already know the answer, then it seems even sillier for the intern to struggle alone for a long time.

There are times when I'm answering questions from newer folks, and I know that there's no way they understood some statements I made. However, they were still nodding and reacting as if they did. Invariably they will return in a few days with the same questions. This is counterproductive.

This problem is less common, although perhaps more serious for more experienced people, since the pride has built up. I’ve told myself “this is something I should know by now” multiple times and stopped myself from asking my coworkers. But the truth is no one knows every corner of the system, and everyone knows that.

There is no shame in not knowing things. People expect that. Just ask.

Aligned Incentives

One idiosyncrasy about the finance industry is that a lot of compensation comes in the form of variable and unpredictable year end bonuses, as compared to stock offerings in tech companies. Of course people like certainty. But I think there’s a case for an opaque process of reward in the form of bonuses.

From first principle, employees have different incentives, and they usually aren’t the same as the company’s goals. Whenever incentives diverge or even conflict, we can get serious issues.

There are countless examples in real life. One example in finance is the reward curve for some hedge funds. Hedge funds are, roughly, companies that take investor money and help them pick investments. Some hedge funds collect a fixed fee, plus a significant fraction of any additional returns. That means when the investments increase in value, they make more money. That's good incentive, right? The problem is that when the investments lose money, they aren't affected – they still take the fixed fee, regardless of how much was lost. (This is not entirely accurate, since they will lose clients if they keep losing money.) Therefore the funds will tend to make riskier investments. If your investment is going to make on average 10% a year, you'd rather make 100% this year and lose ~80% next year, as you can collect a much larger fee. This is worse for the clients.

Now let’s look at employee compensation. We want to reward employees in a way that encourages them to help the company make more money (assuming making money is the goal of the company). One thing we can do is to measure the amount of contributions for everyone, and reward accordingly. For example, we could measure hours spent in the office, or number of lines of code written, or survey people for their estimations of their teammates’ contributions, etc. The problem is these are only proxy measurements, and once you start measuring them, people will optimize for the proxies instead of the actual goal. If you measure lines of code written, you’ll encourage verbose and redundant code; if you measure hours in the office, people will stay longer but not necessarily work at the same speed, and so on.

But the problem is fundamental – you can't measure the actual contribution and hard work, and by measuring proxies you'll encourage cynical behavior. A fix, if not a complete solution, is to obfuscate the reward function. If I tell you that I'll give you an unknown amount of money by the end of the year based on How Well You Did™, and let you fill in the rest, then you won't be encouraged to write bad code, or only focus on projects that had Impact, or other things bad for the firm.

I feel that this has worked quite well in my company. But there are a lot of assumptions for this to work. One is that employees have to be OK with not knowing how much they’ll make. There also needs to be a lot of trust between employees and managers, so that employees can trust that they’ll be evaluated fairly by the end of the year.

And More…

This post is getting long and messy, so maybe I’ll call it a day for now. There are a lot of smaller lessons that come from trading and recruiting. Trading is arguably the best arena to hone one’s rational decision making skills, and interesting stories come up every now and then. Maybe I’ll write a follow up some day.

Thoughts on Fooled by Randomness

Just finished Nassim Nicholas Taleb’s well-known book, Fooled by Randomness. Here are some brief thoughts, in no particular order.

The Birthday Irony

Despite the author’s years working in trading and writing a book on probability, in one of the few cases where he did actual math, he did it wrong. Here’s the original:

If you meet someone randomly, there is a one in 365.25 chance of your sharing their birthday, and a considerably smaller one of having the exact birthday of the same year.

Nassim Nicholas Taleb, Fooled by Randomness

It seems like he was trying to say – on average, there are 365.25 days a year (first order approximation of leap years), so you have a \frac{1}{365.25} chance of meeting someone of the same birthday.

If you do the math though, here's the actual probability: every four years (365 \times 4+1 = 1461 days), there are 1460 days on which your probability of sharing a birthday is \frac{4}{1461}, and 1 day on which it is \frac{1}{1461}. So the probability is \frac{1460}{1461} \times \frac{4}{1461} + \frac{1}{1461} \times \frac{1}{1461} \approx \frac{1}{365.44}. That's far enough from 365.25 that you can't really say "I just made a first order approximation".

To fully understand this error, let’s say there is one extra day in n years, instead of 4. Then the number, instead of 365.25 or 365.44, will be (\frac{365n^2}{(365n+1)^2} + \frac{1}{(365n+1)^2})^{-1}. After taking Taylor series expansions, we get 365 + \frac{2}{n} - \frac{364}{365n^2} + O(n^{-3}), or 365 + \frac{2}{n} + O(n^{-2}), instead of the 365 + \frac{1}{n} that the author had guessed.
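
As a quick sanity check of both expressions (my own verification, not from the book), plugging in n = 4:

let () =
  let n = 4.0 in
  let exact = (365. *. n +. 1.) ** 2. /. (365. *. n *. n +. 1.) in
  let approx = 365. +. 2. /. n -. 364. /. (365. *. n *. n) in
  (* both print 365.44 *)
  Printf.printf "exact = %.2f, approx = %.2f\n" exact approx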

Let's spend a little time to gain some intuition on why it's 365 + \frac{2}{n} instead of 365 + \frac{1}{n}. Consider Alice and Bob, and suppose a year is exactly 365 days. Then the chance of sharing a birthday is 1 in 365. Now say we add x days to Bob's calendar only, so Bob's birthday has 365+x possible choices while Alice still has 365. Then, the probability that they have the same birthday is 1 in 365+x. At this point, it is clear that if we also add x days to Alice's calendar, the chance of sharing a birthday goes down further, so we know that the author's estimate of the probability is too high. Now add x to Alice's calendar. If x is small, we can ignore the probability that their shared birthday falls on one of the x added days (that probability is second order). Then, approximately, the probability of sharing a birthday is \frac{365}{(365+x)^2}, which is close to \frac{1}{365 + 2x}, again ignoring the second order term. Substituting x = \frac{1}{n}, we arrive at the desired result. The factor of 2 comes from the fact that we added a leap day not only to Bob's calendar, but also to Alice's.

Anyway, on a higher level, the lesson is that you should fully justify your simplifying assumptions, instead of jumping to conclusions.

Wittgenstein’s Ruler

This idea had never explicitly come to my mind, so I thought it was interesting. It goes something like this: if you use an unreliable ruler to measure a table, you might as well be measuring your ruler with the table. One example he mentioned was that some people in finance claimed a ten sigma event had happened. Applying the principle – if you measured a ten sigma event, your ruler (the mathematical model) is probably what's seriously flawed.

One takeaway from this is that statistics is merely a language to simplify and describe the real world; the world does not actually run according to those rules. It would be ridiculous to fit a bell curve over your data points and then declare the world wrong when a new data point doesn't fit under it.

Another way of saying the same thing is conditional probability. Relevant xkcd: https://www.xkcd.com/1132/

One way I've seen it play out in real life is the current political situation in Hong Kong. Say there's a certain probability that one citizen goes nuts and riots in the street, and a certain probability that the government has done something terribly wrong. If you have very few people rioting, then the ruler tells you that those few are probably at fault. But if you have a majority of citizens supporting the riots or rioting, then those citizens become the ruler, and you're measuring the government.

Think about All Possibilities

One very valid point in the book is that you should think of the world as taking one sample path among infinitely many possibilities. When you evaluate an outcome, you should think of all the things that could have happened. For example, if your friend did something and made a huge success of it, it doesn't mean he made a good decision, or that you should have done the same, or even that you should follow suit. You have only one data point, and you don't know what the probability distribution looks like. Maybe he could have lost it all. When you think about everything that could have happened, you will have less jealousy toward the lucky and more sympathy for the unfortunate.

Happiness is Relative

This is a tangential point to randomness, but still important to keep in mind. Given that your basic human needs are fulfilled, your happiness often doesn't depend on how much you have, but on how much more you have compared to those around you. More generally, it's not absolute well-being that matters, but the changes. So to be happy, don't be a medium fish in a big pond; go to a small pond and be king. If you start out at the top, tough luck, because chances are your status will revert to the mean over time.

Limit Your Loss

If there's one actionable item from the book, it's to always remember to limit your worst case scenario. Between a steady increase in personal well-being with no risk of going bankrupt, and more income but also a chance of losing everything, you should prefer the former, because eventually the unfortunate thing will happen. That's the point about ergodicity – any event with a nonzero probability will eventually happen, mathematically.

The Author’s Conspicuous Faults

I believe most readers will often find the author’s comments controversial and provocative, if not arrogant and overgeneralizing. There’s a bunch of stuff he said that is just plain wrong.

He said in the beginning of the book that he didn't rewrite it according to his editor's suggestions, because he didn't want to hide his personal shortcomings. But the point of a nonfiction book that is not an autobiography is not to convey who you are, but to give readers inspiration and positive influence. If you say a bad thing in the book that you believe in, you're not "being true", you're being a bad influence! I don't know what exactly he was referring to, but I suspect it includes my following points.

He’s exceptionally arrogant, way off the charts. You’ll see him saying things like “I know nothing about this, despite having read a lot into it” and “I know nothing, but I am the person that knows the most about knowing nothing”. He just couldn’t write one sentence that ends in a defeated tone. Before he puts a period down, he must add another clause to the sentence to remind the readers that he’s just being humble, he didn’t mean it. It’s quite funny when you look for it.

He also loves stereotyping people to the extreme. He would say things like “journalists are born to be fooled by randomness”, “MBAs don’t know what they’re doing”, “company executives don’t have visible skills” and “economists don’t understand this whatever concept”. One thing he said in the beginning of the book was that he didn’t need data to back up his claims, because he’s only doing “thought experiments”. I think he mistook that for “unfounded personal opinions”. When you make claims about journalists and economists being dumb, that’s hardly a thought experiment. You absolutely need to back up your claims.

Overall, this book has some good ideas, but not that many. If you already have a decent background in math, maybe you can skip this book without harm.

Fast RNG in an Interval

https://arxiv.org/abs/1805.10941 – Fast Random Integer Generation in an Interval

Just read this interesting little paper recently. The original paper is already quite readable, but perhaps I can give a more readable write-up.

Problem Statement

Say you want to generate a random number in the range [0, s), but you only have a random number generator that gives you a random number in the range [0, 2^L) (inclusive, exclusive). One simple thing you can do is to first generate a number x, divide it by s and take the remainder. Another thing you can do is to scale the range down: something like x * (s / 2^L), with some floating point math, casting, whatever works. Both ways will give you a resulting integer in the specified range.

But these are not "correct", in the sense that they don't generate random integers with a uniform distribution. Say s = 3 and 2^L = 4; then you will always end up with one number being generated with probability 1/2, and the other two numbers with probability 1/4 each. Given 4 equally likely inputs, you just cannot convert that to 3 cases with equal probability. More generally, these simple approaches cannot work when s is not a power of 2.

First Attempt at Fixing Statistical Biases

To fix that, you will need to reject some numbers and try again. Like in the above example, when you get the number 3, you shuffle again, until you get any number from 0 to 2. Then, all outcomes are equally likely.

More generally, you need to throw away 2^L mod s numbers, so that the rest will be divisible by s. Let’s call that number r, for remainder. So you can throw away the first r numbers and use the first approach of taking remainder, as shown in this first attempt (pseudocode):

r = (2^L - s) mod s   // can't compute 2^L mod s directly, since 2^L doesn't fit in L bits
x = rand()
while x < r do
    x = rand()
return x mod s

That’s a perfectly fine solution, and in fact it has been used in some popular standard libraries (e.g. GNU C++). However, division is a slow operation compared to others like multiplication, addition and branching, and in this function we are always doing two divisions (mod). If we can somehow cut down on our divisions, our function may run a lot faster.

Reducing number of divisions

It turns out we can do just that, with just a simple twist. Instead of getting rid of the first r numbers, we get rid of the last r numbers. And we can verify whether x is in the last r numbers like so:

x = rand ()
x_mod_s = x mod s
while x - x_mod_s > 2^L - s do
    x = rand ()
    x_mod_s = x mod s
return x_mod_s

The greater-than comparison on line 3 is a little tricky. It's mathematically the same as comparing x - x_mod_s + s with 2^L, but we do it this way because you can't represent 2^L in L bits. So basically, the check says: if the next multiple of s after x would exceed 2^L, then x is in the last r numbers and must be thrown away. We never actually calculate r, but with a little cleverness we manage to do the same check.

How many divisions are we doing here? Well, at least one on line 2, and possibly 0 or many more, depending on how many times the loop is run. Since we’re rejecting less than half of the possible outcomes (we’re at least keeping s and at most rejecting s - 1), we have at least 1/2 chance of breaking out of the loop each time, which means the expected number of loops is at most 1 (0 * 1/2 + 1 * 1/4 + 2 * 1/8 … = 1). So we know that the expected number of divisions is at worst 2, equal to that of the previous attempt. But most of the time, the expected number is a lot closer to 1 (e.g. when s is small), so this can theoretically be almost a 2x speed up.

So that’s pretty cool. But can we do even better?

Finally, Fast Random Integer

Remember other than taking remainders, there’s also the scaling approach x * (s / 2^L)? It turns out if you rewrite that as (x * s) / 2^L, it becomes quite efficient to compute, because computers can “divide” by a power of two by just chopping off bits from the right. Plus, a lot of hardware has support for getting the full multiplication results, so we don’t have to worry about x * s overflowing. In the approach using mod, we inevitably need one expensive division, but here we don’t anymore, due to quirks of having a denominator of power of 2. So this direction seems promising, but again we have to fix the statistical biases.

So let’s investigate how to do that with our toy example of s = 3, 2^L = 4. Let’s look at what happens to all possible values of x.

x | s * x | (s * x) / 2^L | (s * x) mod 2^L
0 |   0   |       0       |        0
1 |   3   |       0       |        3
2 |   6   |       1       |        2
3 |   9   |       2       |        1

Essentially we have s intervals of size 2^L, and each interval maps to one single unique outcome. In this case, [0,4) maps to 0, [4, 8) maps to 1, and [8, 12) maps to 2. From the third column, we have two cases mapping to 0, and we’d like to get rid of one of them.

Note that the fundamental reason behind this uneven distribution is that 2^L is not divisible by s, so any contiguous range of 2^L numbers will contain a variable number of multiples of s. That means we can fix it by rejecting r numbers in each interval! More specifically, if we reject the first r numbers in each interval, then each interval will contain the same number of multiples of s. In the above example, the mapping becomes: [1, 4) maps to 0, [5, 8) maps to 1, and [9, 12) maps to 2. Fair and square!

Let’s put that in pseudocode:

r = (2^L - s) mod s
x = rand ()
x_s = x * s
x_s_mod = lowest_n_bits x_s L   // equivalent to x_s mod 2^L
while x_s_mod < r do
    x = rand ()
    x_s = x * s
    x_s_mod = lowest_n_bits x_s L
return shift_right x_s L   // equivalent to x_s / 2^L

Now that would work, and it would take exactly one expensive division (on line 1, to compute r) every single time. That already beats both of the algorithms above! But wait, we can do even better: since r < s, we can first check x_s_mod against s, and only compute r when x_s_mod < s. This is the algorithm proposed in the paper. It looks something like this:

x = rand ()
x_s = x * s
x_s_mod = lowest_n_bits x_s L
if x_s_mod < s then
    r = (2^L - s) mod s
    while x_s_mod < r do
        x = rand ()
        x_s = x * s
        x_s_mod = lowest_n_bits x_s L
return shift_right x_s L

Now the number of expensive divisions is either 0 or 1, with a probability depending on s and 2^L. This is clearly faster than the other algorithms, and experiments in the paper confirmed that. But as is often the case, performance comes at the cost of less readable code. Also, in this case we're relying on hardware support for full multiplication results, so the code is less portable and in reality looks pretty low level and messy. Go and Swift have adopted this, deciding the tradeoff worthwhile, and according to the author's blog (https://lemire.me/blog/2019/09/28/doubling-the-speed-of-stduniform_int_distribution-in-the-gnu-c-library/), C++ may also use this soon.
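
For concreteness, here is the same idea as real code: a sketch in OCaml using the stdlib's Random.bits (which returns 30 uniform random bits, so L = 30 here); native ints are wide enough that x * s never overflows for 0 < s <= 2^30. This is my own illustration, not the paper's reference implementation.

let uniform_int s =
  let l = 30 in
  let two_l = 1 lsl l in
  let draw () = Random.bits () * s in            (* x * s, the scaled sample *)
  let x_s = ref (draw ()) in
  if !x_s land (two_l - 1) < s then begin
    (* only now pay for an expensive mod to get the exact rejection threshold *)
    let r = (two_l - s) mod s in
    while !x_s land (two_l - 1) < r do
      x_s := draw ()
    done
  end;
  !x_s lsr l                                      (* (x * s) / 2^L, in [0, s) *)

For example, uniform_int 6 returns an unbiased number in [0, 6).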

How Many Divisions Exactly?

There's still one last part we haven't figured out – we know the expected number of divisions is between 0 and 1, but what exactly is it? In other words, how many multiples of s in the range [0, s * 2^L) have a remainder less than s when divided by 2^L? To people with more number theory background, this is probably obvious. But starting from scratch, it can take quite a lot of work to prove, so I'll just sketch the intuitions.

It’s a well known fact that if p and q are co-prime (no common factors other than 1), then the numbers { 0, p mod q, 2p mod q, 3p mod q ... (q-1) p mod q } will be exactly 0 to q-1. This is because if there is any repeated number, then we have a * p mod q = b * p mod q (assuming a > b), which indicates (a - b) * p mod q = 0. But we know that 0 < a - b < q, and p has no common factor with q, so if we multiply those two together, it cannot be a multiple of q. So it’s impossible to have duplicates, and multiples of p will evenly distribute among [0, q) when taken mod q.

Now if s and 2^L are co-prime, there will be exactly s number of multiples of s that has a remainder ranging from 0 to s - 1. That means the expected number of divisions in this case is s / 2^L.

If they aren't co-prime, that means s is divisible by some power of 2. Say s = s' * 2^k, where s' is odd. Then s * 2^(L-k) = s' * 2^L, which is 0 mod 2^L. So the multiples of s taken mod 2^L will cycle back to 0 after 2^(L-k) steps, and you get 2^k repetitions of that cycle. So if you go through the final counts by remainder, they go 2^k, followed by 2^k - 1 zeros, rinse and repeat. How many remainders below s have nonzero counts? There are s' of them, each with count 2^k – again, unsurprisingly, s in total. So the expected number of divisions is still indeed s / 2^L.

Final Thoughts

Earlier I said you need to throw away 2^L mod s numbers each time to get an even distribution, but that's not strictly necessary. For example, if s = 5 and 2^L = 8, you don't have to fully reject the 3 extra cases. In fact, you can save up that leftover randomness for the next iteration. In the next iteration, say you land in 1 of the 3 cases again. Then, combined with the 3 cases you saved up last time, you are now in one of 9 equally likely events. If you are in the first 5, you can safely return that value without introducing bias. However, this is only useful when generating the random bits is really expensive, which is not at all the case in non-cryptographic use cases.

One last note – we have established that the expected number of divisions is s / 2^L. As s gets close to 2^L, it seems like our code can become slower. But I think that’s not necessarily the case, because the time division takes is probably variable as well, if the hardware component uses any sort of short-circuiting at all. When s is close to 2^L, 2^L mod s is essentially one or two subtractions plus some branching, which can theoretically be done really fast. So, given my educated guess/pure speculation, s / 2^L growing isn’t a real concern.

Catenary Inversion: Curves of Sagrada Familia

Sagrada Familia is stunning and beautiful. If you ever go visit, don’t miss out on the bottom level: there are exhibitions about the constructions and history of this masterpiece by Gaudi. When I visited a while ago, I was surprised to find a model that explained how the curves of the arches of the church were designed, and it was really cool.

To start out, imagine you're building the roof of a house. Usually they look like this: /\. Modern houses also look like this: Π. The point is, a flat roof is hard to support, so the older houses are all angled. If you hold a dumbbell horizontally to your side, you'll feel tired a lot faster than if you hold it angled upwards. This is because materials in general are a lot better at handling compression than bending forces. By holding your arm at an angle, you are supporting part of the weight by compressing your arm along its own direction, reducing the amount of force perpendicular to that direction. Back to the roof: a flat one is fine if made of concrete and steel, but if we use a long piece of wood, maybe not.

Anyway, using the same materials, an arch-shaped building will last much longer than a flat-topped one, simply because the bricks are subject to bending forces to a lesser extent. The problem then becomes: how can we find the shape that minimizes the bending force at every point on the arch (to zero, actually)?

If you remember high school physics, we can dive into it. Say we draw an arch like this: ∩, and we pick any brick on the arch (say on the left half). Let's pretend this is the ideal curve, such that there is no bending force anywhere. This little brick you picked has a tiny bit of mass, and the slope of the arch changes a little bit before and after it. Then we have three forces acting on this brick: gravity, which points down; the force from the brick on its left supporting it; and the force toward the brick on its right. The latter two forces have slightly different slopes, and the three add up to 0 (otherwise the arch would collapse; note that we don't have forces perpendicular to the arch between bricks, which is the whole point). Oh no, we have a differential equation! It's been 2 days since I took that exam, I have forgotten everything! What do I do?

The Catenary Curve

Consider this seemingly unrelated physical problem: take a string with beads on it at regular intervals, and hold the two ends loosely so that it forms a U shape. What properties does this shape satisfy? Similarly, consider a single bead (again on the left side, but as they say: WLOG). There are three forces acting on this bead: gravity pulling it down, the force from the bead on its left pulling it up, and the force toward the bead on its right. You probably saw this coming: these three forces are the same as those we just talked about on the bricks, but pointing in exactly the opposite directions! (I am too lazy to draw a picture, you can try to get the idea.) What's more, these three forces also add up to 0, since the beads are not moving. This means that we can take the shape of the string, turn it upside down, and make a perfect arch out of it.

To see why this is true, imagine you draw the perfect string curve on paper, drawing out the forces. Now you rotate the paper 180 degrees and negate all three forces on the beads. (1) The three forces still sum to 0; (2) gravity still points down with the right magnitude; (3) the remaining two forces still balance out with the forces on the neighboring beads, since those are also negated. Hence, this curve satisfies all of our requirements. We have found the answer! If I recall correctly, architects actually used this method to draw the curves for blueprints. This curve of beads on a string is called the catenary curve, and the solution has the form of the hyperbolic cosine, which is essentially a sum of two exponential functions with equal but opposite exponents.
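
For reference, this is the standard textbook form of the curve (not something derived in the exhibition): with a a constant set by the weight per unit length and the horizontal tension,

y(x) = a \cosh\left(\frac{x}{a}\right) = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right)

and the ideal arch is the same curve flipped upside down, y(x) = c - a \cosh\left(\frac{x}{a}\right).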

As a personal anecdote, a physics professor of mine once called a problem “physically solved” after he wrote out the equations which uniquely determine the answer, because the rest can be solved by mathematics, either analytically or numerically. This can lead to very uninteresting tests, for example simply writing out the Maxwell equations for every single EM problem. In our case, the arch problem is not only “physically solved” because we can derive the curve using the same differential equation as catenary curves, but also that we can “physically solve” it using beads, a string and actual physics.

Theoretically Or Practically Fast: Two Types of Hash Tables

When people say a program is fast, they usually mean it takes a short time to execute; but when computer scientists say an algorithm is efficient, they mean the growth rate of the run-time with respect to the input size is slow. It is often the case that these two notions approximately mean the same thing, that a more efficient algorithm yields a faster program, even though it is not always true. I recently learned about two types of hash tables that show this difference: Cuckoo hashing and Robin Hood hashing. Even though in theory Cuckoo is more efficient, Robin Hood is in practice a lot faster.

Vanilla Hash Tables

Before going into details, let’s revisit how a hash table works. Simple thing, you have a hash function that takes a key and yields a random (but deterministic) number as the “hash”, and you use this hash to figure out where to put this item in the table. If you want to take something out, just compute the hash again and you can find exactly where the item is stored.

That's not the whole story though. What if you want to insert item A into spot 1, but item B is already there? You can't just throw away item B – what are you, a savage? There are multiple ways to deal with this problem, which is called a collision. The easiest way is chaining: make each slot a list of items, and inserting is just appending to the corresponding list. Sure, that would work, but now your code is a lot slower, because each operation takes one lookup to get the list, and another lookup to get the contents of the list. This is really bad because the table and the lists will not align nicely in cache, and cache misses are BAD BAD things.

Linear Probing

To improve cache performance, people started to throw out bad ideas. One bad idea that kind of worked was: if this slot is occupied, just take the next one! If the next one is taken, just take the next next one! This is called linear probing, and it is pretty commonly used. Now we have solved the caching problem, because we're only accessing nearby memory slots, and our programs run a lot faster. Why is this bad? Well, it turns out that as the table fills up, the probe count can get really high. Probe count is how many spots need to be checked before an item is found, or how far away the item is from where it should be. Imagine that out of 5 spots, the first 3 are occupied. Then the next insertion has a 60% chance of collision, and if it does collide, the probe count is on average 2 (probe count = 3 if the hashed index is the first slot, 2 if the second, and 1 if the third). And after a collision, the size of the occupied segment grows by one, making the next insertion even worse.

Robin Hood to the Rescue

This brings us to Robin Hood hashing. It is actually very similar to linear probing, but with one simple trick. Say in the example above, we have item A sitting at slot 1, B at 2, C at 3 like this: [A B C _ _] and we have another collision trying to insert D at slot 1. Instead of placing D like [A B C D _], we arrange them as [A D B C _], then the probe count would be [0 1 1 1 _] instead of [0 0 0 3 _]. Now there are two things to understand: (1) What exactly happened? (2) Why is this better?

To answer (1), imagine we're inserting D at slot 1. We see that A is there, and just like in linear probing, we move on. Now B is there with probe count 0, while D already has probe count 1. That's not fair! How dare you have a lower probe count than I do! So D kicks out B, and now B has to find a place to live. It looks at C, yells, and kicks C out; now C has to find a place, which happens to be slot 4. In other words, during probing, if the current resident has a lower probe count than the item being inserted, it is swapped out and probing continues.
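
Here is a toy sketch of that insertion procedure in OCaml, with slots holding (key, probe count) pairs; hash is any hash function, and we assume the table never fills up completely.

let insert (table : (int * int) option array) hash key =
  let n = Array.length table in
  let rec place key probe i =
    match table.(i) with
    | None -> table.(i) <- Some (key, probe)
    | Some (resident, resident_probe) ->
      if resident_probe < probe then begin
        (* the resident is closer to its home slot than we are:
           take its spot and find a new home for it instead *)
        table.(i) <- Some (key, probe);
        place resident (resident_probe + 1) ((i + 1) mod n)
      end
      else place key (probe + 1) ((i + 1) mod n)
  in
  place key 0 (hash key mod n)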

But (2) why is this better? If you think about it, the total probe count (which is 3) is unchanged, so the average probe count is the same as with plain linear probing; it should have exactly the same performance! This approach is superior because the highest probe count is much lower – in this case, it drops from 3 to 1. If we can keep the max probe count small, then all probing can be done with almost no cache misses. This makes Robin Hood hash tables really fast. Having a low maximum probe count also solves a really painful problem in linear probing: when you remove an item from the hash table, you leave a gap. When we look for items, we can't just declare them missing the first time we probe an empty spot, because maybe our item is stored further ahead and this spot was only emptied by a later removal. With the maximum probe count really low in Robin Hood hashing, we can instead say "I'm just going to look at these n spots; if you can't find it, it ain't there." Which is nice and simple. (It also enables shift-back deletion instead of tombstones, which is another boost to performance, but I don't want to get too deep.)

A Theoretically Better Solution

Even though the highest probe count is small, it still grows as we have more items in the hash table (probe count goes up when load factor goes up). What if I tell you I don’t want it to grow? Meet Cuckoo hashing, which has a maximum probe count of 1.

The algorithm is slightly more complicated. Instead of one table, we have two tables, and one hash function each, chosen randomly among the space of all hash functions (not practical, but close enough). When we look up an item, we just need to compute the hash for the first and the second table, and look at those two spots. Done!

I mean, sure, but that's easy – how do you insert? What if both spots are full? That's where the name cuckoo comes from. You pick one of the two spots, kick that guy out, and put your item there. Then, for the evicted item, you find its other available spot, kick that one out, and so on and so forth. If you end up in a loop (or spend too long and decide to give up), you pick another two hash functions and rebuild the hash table. As long as at least half of the slots are empty, it is highly likely that the whole operation takes constant time in expectation. I don't have any simple intuition for why this is true; if you do, please let me know.
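
A toy sketch of cuckoo lookup and insertion in OCaml, with two tables t1 and t2 and hash functions h1 and h2 (all hypothetical names); instead of picking new hash functions and rebuilding after too many evictions, this sketch simply gives up.

let lookup t1 t2 h1 h2 key =
  t1.(h1 key) = Some key || t2.(h2 key) = Some key

let insert t1 t2 h1 h2 key =
  let rec kick key table_id tries =
    if tries = 0 then failwith "too many evictions: rehash everything"
    else begin
      let table, h = if table_id = 0 then (t1, h1) else (t2, h2) in
      let i = h key in
      match table.(i) with
      | None -> table.(i) <- Some key
      | Some evicted ->
        (* take the spot and send the evicted key to its slot in the other table *)
        table.(i) <- Some key;
        kick evicted (1 - table_id) (tries - 1)
    end
  in
  kick key 0 (2 * Array.length t1)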

Theoretically, Cuckoo hashing beats Robin Hood because the worst-case lookup time is constant, unlike in Robin Hood. Why is it still slower than Robin Hood in practice? As it turns out, the constant is both a blessing and a curse. When looking up missing items, there always have to be two lookups; even when looking up existing items, on average there are still 1.5 lookups. To make matters worse, since the two spots are spatially uncorrelated, they usually cause two cache misses. Once again, cache performance makes a big difference in the overall speed.

Having said all that, performance in real life is a very complicated matter. Cache performance is just one of the unpredictable components. There are also various compiler optimizations, chip architectures, threading/synchronization issues or language designs that can affect performance, even given the same algorithm. Writing a fast program is all about profiling, fine tuning, and finding the balance*.

*My coworker once said, “at the end of almost any meeting, you can say ‘it’s all about the balance’, and everyone would agree with you.”

Variants of the Optimal Coding Problem

A Variant of the Optimal Coding Problem
Prerequisites: binary prefix code trees, Huffman coding

Here's a problem: say you are given n positive numbers, and each time you are allowed to pick two numbers (a and b) from the list, add them together, put the sum back into the list, and pay me an amount of dollars equal to the sum (a+b). Obviously your optimal strategy would be not to play this stupid game. Here's the twist: what if I point a gun at your head and force you to play until we end up with only one number – then what would be your optimal strategy, and what is the minimum amount of money you would have to pay me?

At first glance this seems quite easy: of course you would just pick the two smallest numbers each time and add them, because that's the best option you have in each round. It minimizes your cost every round, so I should end up with the minimum amount possible. However, if you paid attention in your greedy algorithm lecture in college, you will notice the fallacy of this statement – picking the best option each time does not necessarily lead to the best overall outcome.

I claim that it is not obvious that this algorithm is right, not only because it is greedy, but also because this problem is equivalent to the classic optimal coding problem, which is not that easy. For those who haven't heard of it, the optimal coding problem is the problem where you're given a bunch of symbols and the average frequency of each of them, and you try to design a set of binary prefix codes for these symbols such that, under this coding scheme, an average sequence of symbols is represented by the shortest expected number of bits.

What? Yeah, this number summing and merging problem is equivalent to the optimal coding problem. I’ll spare you and myself a formal proof, but here’s an example to convince you. Say we have 3 numbers a, b and c, and we first sum a+b, then sum (a+b)+c. Your total cost will be 2a+2b+c. Now if we draw out the summation as a tree:

      a+b+c
       / \
     a+b  c
     / \
    a   b

(pardon my ascii art) you will see that a and b sit at depth=2, and c sits at depth=1. This gives another way to come up with the total cost: multiply each number with its tree depth, and sum these products together. This is always the same as the total cost, because for each number, as you go up a level in the tree, you add that number to the total cost once.

Now if you paid attention in your algorithm lecture on binary prefix code trees, you will see this looks exactly like a binary prefix code tree. To complete the analogy, say we are given three symbols A, B and C, and they have frequencies a, b and c. Had we given them the same binary code tree as above, the average code length weighted by frequencies will be 2a+2b+c. If you don’t see why, you should pause here and think until you get it (or not, you decide).

Now that we established these two problems are the same (same objective function, same solution space), it follows that the optimal algorithms for both are the same. The solution to the optimal coding problem is Huffman Coding, as spoiled from the very beginning of this post. Now, this algorithm is exactly the greedy algorithm above that we weren’t sure was right or not. So – I just spent this much time convincing you that algorithm might not be right, and then convincing you it is indeed correct. Yay progress!
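
For completeness, here is the greedy strategy (i.e. Huffman's algorithm applied to the merging game) as a short OCaml sketch, using a sorted list as a stand-in for a priority queue; a real implementation would use a heap to get O(n log n) overall.

let greedy_cost numbers =
  let rec go acc = function
    | [] | [ _ ] -> acc
    | a :: b :: rest ->
      let merged = a + b in
      (* pay a + b, put the sum back, and keep the list sorted *)
      go (acc + merged) (List.sort compare (merged :: rest))
  in
  go 0 (List.sort compare numbers)

For example, greedy_cost [1; 1; 1; 1] is 8: we pay 2, 2 and then 4.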

Variants of that Variant

(Technically, that wasn’t really a variant because that was exactly the same problem.) After talking to a few people in real life about this, I found it hard to convince people that this innocent looking algorithm could possibly be wrong, and it certainly didn’t help that the algorithm ended up being right. One line I tried was: “even Shannon tried this problem and came up with a suboptimal Shannon-Fano coding scheme,” but people didn’t seem to appreciate that. In fact, it is quite stunning that when we turn the optimal coding problem into the summing problem, the solution seems so much more obvious.

So I came up with a second attempt: what if we tweak the problem a bit so that greedy wouldn’t work? In the original problem, when we merge two numbers, it costs us the sum of the two, and we reinsert the sum to the list of numbers. What if the cost function or the merge function is not an addition? For example, we can imagine changing the cost function to take minimum, or the merge function to return the square of the sum, etc. Then, would the greedy algorithm still work? Now people are much less sure – which kind of proves the point that they shouldn’t have been so confident in the first place.

But would it? It turns out, perhaps unsurprisingly, that in most cases it won't work anymore. Just to raise a few counterexamples: (1 1 1 1) yields an optimal merging of 1 + (1 + (1 + 1)) if the cost function is taking the minimum. (1 1 1 2 3) yields an optimal merging of ((1 + 1) + 2) + (1 + 3) if we take the maximum instead. And if we take the cost to be the square of the sum, then for the case (1 1 2 2) the optimal way is (1 + 2) + (1 + 2). In all these cases, the greedy way gives a sub-optimal total cost. It happens that if we make both the cost and the merging functions min (or both max), the greedy approach still works, but those cases are a lot more trivial than the others.

At this point, it seems like we really lucked out that the optimal coding problem corresponds to a case where greedy works. Otherwise, computers would either spend so many more cycles computing the optimal codes for compressing files, or we would have ended up with larger file sizes for various kinds of compressed files.

To end this post with a trivia: did you know that the Huffman tree for a gzip file is also encoded using a Huffman tree? Since every file has a different frequency count of symbols, each file has a different Huffman tree that needs to be saved in the compressed file as well. This tree is then encoded, again, using a standard Huffman tree that is the same for all compressed files, so the decoder knows how to decode, which is pretty meta.

Stars Falling From the Sky (and how to capture them)

This post is about how I made this photo.

Things I used

Hardware: Pixel 2 XL, Ubuntu desktop

Software: Camera FV-5, Snapseed (both free from Play Store), python (with libraries of course)

Taking the pictures

Go somewhere dark where you can clearly see the stars, bring a tripod, and point your phone's camera at the stars. Open the Camera FV-5 app, set ISO to 3200 (max) and shutter speed to 4.0″ (max). Shooting utilities -> Intervalometer -> mode: Interval + shooting duration, Every 4 seconds, 20:00 shooting time. Press Start Now and wait for 20 minutes…

One thing to note though, is that there is something I don’t understand in this process. I always end up getting half of the number of photos I’m supposed to get. And between each successive photo, there seems to be a time gap. I suspect the reason to be the processing time of the camera, but it could also be the case that I don’t know what I’m doing.

Stacking

I've tried to find other people's tools for this before, but they mostly didn't work. Well, there is only one solution – write my own:


import glob
import argparse
import cv2
import numpy as np
from tqdm import tqdm

# all four arguments are positional: the directory of frames, the output
# path, whether to reverse the frame order ('True' or anything else), and
# the per-frame dimming factor
parser = argparse.ArgumentParser()
parser.add_argument('img_dir')
parser.add_argument('output_file')
parser.add_argument('reversed', help='reverse default image order')
parser.add_argument('attenuation', type=float, help='dimming images')

args = parser.parse_args()

# collect the frames in filename order (img_dir should end with a '/')
images = sorted(glob.glob(args.img_dir + '*.png'))
n = len(images)
print('{} images found.'.format(n))

if (args.reversed == 'True'):
    images.reverse()

# start from the first frame; floats avoid rounding during attenuation
buf = cv2.imread(images[0]).astype(np.float64)

# for each later frame, dim what we have so far, then keep the brighter
# of the two values at every pixel and channel
for i in tqdm(range(n - 1)):
    new_image = cv2.imread(images[i + 1]).astype(np.float64)
    buf = np.maximum(buf * args.attenuation, new_image)

cv2.imwrite(args.output_file, buf)

print('Finished.')

pip install whatever you don't have (cv2 is installed as "pip install opencv-python"). The idea of image stacking is very simple. In the most basic case, you only want to take the maximum brightness of every pixel and every color channel. In that case, once a star has brightened up a pixel, it will stay bright in the final picture. However, to achieve the "falling stars" effect, you need to dim the brightness of stars at their earlier positions in the trail; that is what the "attenuation" parameter in the above code is for. In this case, I used 0.99 to produce the final image.

Before stacking, the photos always look very noisy. But it’s ok because the noise will be averaged out in the stacking process.

Fine tuning

My current favorite way to tune photos is the Snapseed app. Send back the photo to your phone, open it up in Snapseed. I usually play with the Curves, White Balance, Crop, Selective and Vignette. Obviously if you are slightly less amateur, you would use a more professional software. But I find this app sufficient for what I do.

If You Really Want A Professional Photo Though

You obviously need a legit camera and a nice view on the ground to complement the sky. And probably use Photoshop for post processing as well. But it’s ok you can still use this python script to stack photos.

Longest Sequence Without Duplicate Substring of Constant Length

Around 9 years ago I was working on a multiple choice exam and this problem came up. Recently I thought about it again and it’s actually very simple.

The problem is: say you are writing a sequence of letters A, B, C, D, with one constraint. Whatever substring of length 3 that already appeared in your sequence cannot appear again. What is the longest sequence length you can possibly come up with?

As an example, the sequence “ABCDACBB” is valid, but the sequence “ABCDABCB” is not, since “ABC” appears twice. By the same token, “AAAAB” is also invalid, since “AAA” appears twice.

There is an easy generalization of this problem to k choices of letters instead of 4, and m as the length of substring instead of 3. Quick maths: there are k^m substring combinations, so there can be k^m unique starting points, and the answer is upper bounded by k^m + m – 1.

9 years ago I didn’t know anything so I thought it was hard. But now it just looks like a graph traversal problem, which is either trivial or intractable (depending on whether it’s Eulerian or Hamiltonian). With this in mind, the problem is trivial. You can either read my solution below or spend a few minutes to figure it out.

Construct a directed graph where each node represents a string of length m - 1 made up of the k letters. Each node has k outgoing edges and k incoming edges. The k outgoing edges represent the k different ways you can append a letter to the current last m - 1 letters. For example, from the node “AB”, the outgoing edge “C” points to the node “BC”, since after appending the letter “C”, the last two letters become “BC”. Each edge then corresponds to one unique substring of length m, and the problem becomes finding an Eulerian path in this graph. Since this graph is always connected and each node has equal in-degree and out-degree, the path always exists and must be a cycle, regardless of m and k. There are many possible cycles as well. Now the problem is theoretically solved, and also programmatically solved (a code sketch follows below).
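
To make this concrete, here is a minimal OCaml sketch of the construction, assuming m >= 2 and letters A, B, C, …; the function name de_bruijn_sequence and the Hashtbl-based, Hierholzer-style traversal are my own illustration, not part of the original problem.

let de_bruijn_sequence k m =
  let letters = List.init k (fun i -> Char.chr (Char.code 'A' + i)) in
  (* For each node (a string of length m - 1), the outgoing letters not used yet. *)
  let unused : (string, char list) Hashtbl.t = Hashtbl.create 97 in
  let outgoing node =
    match Hashtbl.find_opt unused node with Some l -> l | None -> letters
  in
  let start = String.make (m - 1) 'A' in
  (* Iterative Hierholzer: walk until stuck, then backtrack, collecting nodes. *)
  let stack = ref [ start ] in
  let path = ref [] in
  while !stack <> [] do
    let node = List.hd !stack in
    (match outgoing node with
     | [] ->
         path := node :: !path;
         stack := List.tl !stack
     | c :: rest ->
         Hashtbl.replace unused node rest;
         stack := (String.sub node 1 (m - 2) ^ String.make 1 c) :: !stack)
  done;
  (* The circuit visits all k^m edges; read the sequence off the node path:
     the first node in full, then the last letter of every following node. *)
  match !path with
  | [] -> ""
  | first :: rest ->
      first ^ String.concat "" (List.map (fun s -> String.sub s (m - 2) 1) rest)

For the original problem, de_bruijn_sequence 4 3 returns a string of length 4^3 + 2 = 66, matching the upper bound above.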

As I later found out from my roommate’s math homework, this graph construction is called a de Bruijn graph. Of course everything has to have a name.

I wish I could go back in time and explain this to the little me.

Having fun with queues

Recently I’ve been reading Okasaki’s Purely Functional Data Structures. In the first few chapters, the book discusses several really interesting ideas, which I will attempt to summarize and introduce below.

TLDR: persistence; lazy evaluation; amortization; scheduling.

Before getting into algorithms, there are some prerequisites to understanding the context of functional programming.

Immutable objects, persistence

Normally in imperative programming, we have variables, instances of classes and pointers that we can read or write. For example we can define an integer and change its value.

int x = 4;
x = x + 1;
cout << x << endl;

However in the functional version of the same code, even though it looks as if it does the same thing, variables are not modified.

let x = 4 in
let x = x + 1 in
printf "%d\n" x

What the above OCaml code does is first bind the name x to the number 4, then bind the name x to the number 5. The old value is not overwritten but shadowed, which means it is theoretically still there; we just cannot access it anymore, since we have lost the name that referred to it. In purely functional code, everything is immutable.

But if we use a new name for the new value, we can keep the old value, obviously:

let x = 4 in
let y = x + 1 in
printf "%d %d\n" x y

Now let’s look at a nontrivial example: modifying a list.

let x = [ 1; 2; 3 ] in
let y = 0 :: x in
print_list x;
print_list y
(* 1 2 3
   0 1 2 3 *)

Note that the double colon (cons) operator prepends a new value to the head of a singly linked list. What this means is that after adding a new element to the linked list, not only do we have a linked list of [ 0; 1; 2; 3 ] called y, but we also still have the old list of [ 1; 2; 3 ] called x. We did not modify the original linked list in any way; all the values and pointers in it are unchanged.

Now this, in itself, is already interesting: you can have y = 0 :: x and z = -1 :: x, creating 3 linked lists in total. Since they all share the same tail of length 3, only 5 cells (-1, 0, 1, 2, 3) are allocated, instead of 3 + 4 + 4 = 11 (see the small check below). It is also worth the time to go through common list operations (reverse, map, reduce, fold, iter, take, drop, head, tail…) and verify that you can implement them with the same time complexity without modifying the original list. You cannot apply the same techniques to random-access arrays, and that is why lists are the basic building blocks of functional programs the same way arrays are the basic building blocks of imperative programs.
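
As a small sanity check of the sharing claim (my own snippet, not from the book), OCaml’s physical-equality operator (==) shows that the tails really are the same cells, not copies:

let x = [ 1; 2; 3 ]
let y = 0 :: x
let z = (-1) :: x

let () =
  assert (List.tl y == x);  (* y's tail is literally x *)
  assert (List.tl z == x)   (* and so is z's: only 5 cells exist in total *)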

Lazy evaluation, memoization

One of the advantages of having immutable objects is this: for a function f that takes only a list as input, the output will always be the same as long as it is given the same list, since a list at a given physical address can never change. So we do not have to repeat the same work, thanks to memoization: after executing a pure function on an immutable input, we can record the input-output pair in a hash table and simply look it up the next time. Even better, this can be done at the language/compiler level, so memoization is hidden from the programmer. A nice consequence is that we can simply write a plain recursion and it automatically turns into a DP solution.
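
OCaml itself does not memoize calls automatically, but the idea is easy to sketch by hand; the memo_rec helper below is my own illustration of wiring a hash table into an open-recursive function.

(* memo_rec takes a function written with an explicit "self" argument and
   returns a version that caches every input-output pair in a hash table. *)
let memo_rec f =
  let table = Hashtbl.create 97 in
  let rec g x =
    match Hashtbl.find_opt table x with
    | Some y -> y
    | None ->
        let y = f g x in
        Hashtbl.add table x y;
        y
  in
  g

(* A naive exponential recursion that becomes a linear-time DP once memoized. *)
let fib = memo_rec (fun fib n -> if n < 2 then n else fib (n - 1) + fib (n - 2))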

Lazy evaluation is also built into many functional languages. The point of lazy evaluation is to delay a computation until we actually need its value. With this technique, you can define a sequence of infinite length, such as the sequence of all natural numbers. There are two basic operations here: creating a lazy computation (a suspension), and forcing it to get the output. This will be important later in our design of queues.
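
In OCaml the two operations are the lazy keyword and Lazy.force; here is a small sketch of my own, not taken from the book:

(* Creating a suspension and forcing it; the result is memoized after the
   first force, so the side effect happens only once. *)
let suspended = lazy (print_endline "computing..."; 40 + 2)

let () =
  let v = Lazy.force suspended in  (* prints "computing..." and yields 42 *)
  let w = Lazy.force suspended in  (* memoized: nothing is recomputed *)
  Printf.printf "%d %d\n" v w

(* Delayed computation also gives us infinite structures, e.g. all naturals. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

let rec nats n : int stream = Cons (n, lazy (nats (n + 1)))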

Queue: 1st gen

Now let’s get to the point: implementing purely functional queues. The first challenge: actually have a queue at all, regardless of time complexity. Given lists as building blocks, how do we implement a queue? Note that it is not desirable to build from arrays: they are intrinsically mutable, so using them will not help create an immutable data structure. Fortunately we have a classic algorithm for this: implement a queue with two lists, F and R. We pop from the front of F, push to the front of R, and whenever F becomes empty we do (F, R) = (List.rev R, []), i.e. reverse the list R and make it the new front. Here’s the obligatory leetcode link. A minimal sketch follows below.
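
This is my own rendering of the classic two-list queue in OCaml, with just push and pop:

type 'a queue = { front : 'a list; rear : 'a list }

let empty = { front = []; rear = [] }

(* Push onto the rear list in O(1). *)
let push q x = { q with rear = x :: q.rear }

(* Pop from the front list; when it runs out, reverse the rear list.
   The reversal is the occasional O(n) step that the analysis amortizes. *)
let rec pop q =
  match q.front, q.rear with
  | [], [] -> None
  | [], r -> pop { front = List.rev r; rear = [] }
  | x :: f, _ -> Some (x, { q with front = f })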

This implementation claims amortized O(1) cost for the push, pop and top operations. This is obvious for push and top, and pop takes a little bit of analysis. Sometimes a pop will reverse a linked list of n elements, which takes O(n) time. However, in order to trigger this O(n) operation, we need to have already done n O(1) operations. That means over any span of roughly n operations, we do at most n*O(1) + O(n) work, which is still O(n), so on average each operation costs O(1). This is a very brief description, assuming prior familiarity with amortized analysis.

Amortization breaking down

However, this amortized bound does not hold for our functional version of the queue. This is perhaps obvious to see: say we have one element in F and n elements in R. These two lists together form a functional queue instance, called x. Calling Queue.pop x would be O(n); then calling Queue.pop (Queue.push x 1) would be another O(n). In fact, we can create as many such O(n) operations as we like, so the cost of Queue.pop can be as bad as O(n), even in the amortized sense. Now that we can access the entire history of the data structure, we can force it to perform the expensive operation over and over again, breaking the amortized bound. Because the expensive operation can be repeated this way, it seems unlikely that persistent data structures in general can use amortization in their running-time analysis.

Queue: 2nd gen

The key idea to fixing the amortized cost is, spoiler alert, lazy evaluation. And this is supposedly the aha moment of this post as well. Instead of reversing R when F becomes empty, we “reverse the list R and append it to F” when R’s length becomes greater than F’s length. The operation F = F @ (List.rev R), where @ means concatenation, is written in quotes because it is computed lazily. The intuition is that if F and R both have length n, pushing one element onto R (or popping one element from F) creates a lazy computation that will reverse R in O(n) time. However, this reversal will not actually be carried out until O(n) more pops are executed, at which point the first element of (List.rev R) becomes exposed. The O(n) cost of the reversal can then be amortized over the O(n) pops that force it, hence all operations become O(1). A sketch of this lazy queue follows below.
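
Here is a sketch of this lazy queue in OCaml, along the lines of Okasaki’s banker’s queue; the lazy stream type and the names (check, ++, and so on) are my own rendering rather than the book’s exact code.

type 'a cell = Nil | Cons of 'a * 'a cell Lazy.t
type 'a stream = 'a cell Lazy.t

(* Incremental concatenation: forcing one cell of (s ++ t) does O(1) work. *)
let rec ( ++ ) (s : 'a stream) (t : 'a stream) : 'a stream = lazy (
  match Lazy.force s with
  | Nil -> Lazy.force t
  | Cons (x, s') -> Cons (x, s' ++ t))

(* Monolithic reversal: a single suspension that does all O(n) work when forced. *)
let reverse (s : 'a stream) : 'a stream =
  let rec go s acc =
    match Lazy.force s with
    | Nil -> acc
    | Cons (x, s') -> go s' (Cons (x, lazy acc))
  in
  lazy (go s Nil)

type 'a queue = { lenf : int; front : 'a stream; lenr : int; rear : 'a stream }

let empty = { lenf = 0; front = lazy Nil; lenr = 0; rear = lazy Nil }

(* Restore the invariant |R| <= |F| by suspending "F ++ reverse R". *)
let check q =
  if q.lenr <= q.lenf then q
  else { lenf = q.lenf + q.lenr; front = q.front ++ reverse q.rear;
         lenr = 0; rear = lazy Nil }

let push q x = check { q with lenr = q.lenr + 1; rear = lazy (Cons (x, q.rear)) }

let pop q =
  match Lazy.force q.front with
  | Nil -> None
  | Cons (x, f') -> Some (x, check { q with lenf = q.lenf - 1; front = f' })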

Let’s compare this amortization with the 1st gen amortization. How come this works and the 1st gen didn’t work? The key distinction is we need to amortize expensive operations into future operations, not those in the history. The old method of forcing an expensive operation to happen multiple times to break amortization does not work anymore, because we must call pop O(n) times before we force a lazy reversal of O(n) to happen, and once it is reversed, we memoize that result, so there is no way to force that O(n) to duplicate itself. Meanwhile, memoization of list reversal doesn’t help with the 1st gen design, because we can add any element to R to make it a different list, which would constitute a different input to the List.rev function.

Sketch of proof

Having a working data structure is great, but what’s better is a framework for rigorously proving the amortized cost, not only for queues but for other persistent data structures as well. Traditionally we have two methods of proving amortized cost: the banker’s method of gaining and spending credits, and the physicist’s method of assigning a potential to the data structure. Both are similar in that they calculate how much credit we have accumulated up to a point in the history, which we can then spend on expensive operations that follow.

To prove amortization into the future for persistent data structures, we instead accumulate debits and pay them off in the future, one step at a time. Here, debits are the suspended costs of lazy computations. Adding n debits is essentially saying, “hey, I need to do something O(n) here, please execute O(n) other operations before asking for my result.” Hence, you cannot ask for a particular result (like the reversed list R) while your debit account is positive; you must pay off all your debt first.

Then, proving these amortization bounds is simply a matter of choosing the right debits and payments for each operation in the banker’s method, or the right debit potential function in the physicist’s method. There are rigorous results given in the book, which will not be reproduced here. The curious reader can… just look them up in the book.

Scheduling

Lastly, another important point: we don’t really need amortization at all; instead we can make everything O(1) worst case. The idea is that instead of lazily delaying the work until it is actually needed, we actually complete it one step at a time. If reversing the list takes n steps, and we are amortizing that cost over n operations, why not simply do one step of the reversal per operation? Since those operations come after the lazy computation is created, we can certainly do that; it was not possible before, when the operations being amortized over came before the expensive computation. Of course, the code is a little trickier and has more overhead, but theoretically the queue with scheduling achieves the same worst-case time complexity as its ephemeral (non-persistent) counterpart: everything is O(1).

This is not always possible, as for some other data structures it is not easy to figure out which pieces of the expensive computation to pay off per operation. Fortunately, for queues it is not hard to imagine how to implement one (a sketch follows below).
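
Here is a sketch of such a scheduled queue in OCaml, along the lines of Okasaki’s real-time queue; again, the field names and helper names are my own rendering. The third component, the schedule, is the suffix of the front stream whose suspensions have not been forced yet; forcing one of its cells per operation does the rotation work incrementally.

type 'a cell = Nil | Cons of 'a * 'a cell Lazy.t
type 'a stream = 'a cell Lazy.t

type 'a queue = { front : 'a stream; rear : 'a list; sched : 'a stream }

(* rotate f r acc computes "f ++ List.rev r" one cell at a time.
   Invariant when called: length r = length f + 1. *)
let rec rotate (f : 'a stream) (r : 'a list) (acc : 'a stream) : 'a stream =
  lazy (
    match Lazy.force f, r with
    | Nil, [ y ] -> Cons (y, acc)
    | Cons (x, f'), y :: r' -> Cons (x, rotate f' r' (lazy (Cons (y, acc))))
    | _ -> assert false)

(* Force one cell of the schedule; when it is exhausted, start a new rotation. *)
let exec { front; rear; sched } =
  match Lazy.force sched with
  | Cons (_, s') -> { front; rear; sched = s' }
  | Nil ->
      let front' = rotate front rear (lazy Nil) in
      { front = front'; rear = []; sched = front' }

let empty = { front = lazy Nil; rear = []; sched = lazy Nil }

let push q x = exec { q with rear = x :: q.rear }

let pop q =
  match Lazy.force q.front with
  | Nil -> None
  | Cons (x, f') -> Some (x, exec { q with front = f' })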

What I left out

Of course, this is still a TLDR version of all the technical details of the implementation, which are explained at length in the book. The proofs are nontrivial to complete even given the ideas, and it takes some effort to make sure the scheduling is efficient. I also left out the actual ML implementations of the data structures given in the book, since explaining the syntax would be heavy and would not contribute to understanding the key ideas.

Anyway, this is about the amount I want to discuss in this post. It’s also interesting to think about implementing maps and sets (like the C++ STL ones) as functional data structures with the exact same asymptotic time complexity on common operations: add, remove and find for both, plus set and get for maps. Happy functional programming 🙂