Variants of the Optimal Coding Problem
A Variant of the Optimal Coding Problem
Prerequisites: binary prefix code trees, Huffman coding
Here’s a problem: say you are given a list of n positive numbers. Each turn, you must pick two numbers a and b from the list, remove them, put their sum a+b back into the list, and pay me an amount of dollars equal to that sum. Obviously your optimal strategy would be not to play this stupid game. Here’s the twist: what if I point a gun at your head and force you to play until we end up with only one number? What would your optimal strategy be then, and what is the minimum amount of money you would have to pay me?
At first glance this seems quite easy: of course you would just pick the two smallest numbers each time and add them, because that is the best option available to you each round. It minimizes your cost every round, so I should end up with the minimum amount possible. However, if you paid attention in your greedy algorithms lecture in college, you will notice the fallacy in this reasoning - picking the best option each time does not necessarily lead to the best overall outcome.
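If you want to play along at home, here is that greedy strategy in a few lines of Python (a quick sketch; the function name greedy_merge_cost is mine):

import heapq

def greedy_merge_cost(numbers):
    # Always merge the two smallest numbers, paying their sum each time.
    heap = list(numbers)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b               # pay (a+b) dollars
        heapq.heappush(heap, a + b)  # put the sum back into the list
    return total

print(greedy_merge_cost([1, 1, 2, 3]))  # pays 2, then 4, then 7: total 13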
I claim that it is not obvious that this algorithm is right, not only because it is greedy, but also because this problem is equivalent to the classic optimal coding problem, which is not that easy. For those who haven’t heard of it: in the optimal coding problem, you are given a set of symbols and the frequency of each, and you try to design a set of binary prefix codes for these symbols such that, under this coding scheme, an average sequence of symbols is represented by the fewest expected bits.
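(In symbols, using my own notation: given frequencies f_1, ..., f_n, pick codeword lengths l_1, ..., l_n minimizing f_1*l_1 + ... + f_n*l_n, subject to the codewords forming a prefix-free set, i.e. no codeword is a prefix of another.)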
What? Yeah, this number summing and merging problem is equivalent to the optimal coding problem. I’ll spare you and myself a formal proof, but here’s an example to convince you. Say we have 3 numbers a, b and c, and we first sum a+b, then sum (a+b)+c. Your total cost will be 2a+2b+c. Now if we draw out the summation as a tree:
      a+b+c
      /   \
    a+b    c
    / \
   a   b
You will see that a and b sit at depth 2, and c sits at depth 1. This gives another way to compute the total cost: multiply each number by its depth in the tree, and sum these products. The result always equals the total cost, because each number gets paid once for every merge it participates in, which is exactly once per level between it and the root.
Now if you paid attention in your algorithms lecture on binary prefix code trees, you will see this looks exactly like a binary prefix code tree. To complete the analogy, say we are given three symbols A, B and C with frequencies a, b and c. Had we given them the binary code tree above (reading left edges as 0 and right edges as 1: A = 00, B = 01, C = 1), the average code length weighted by frequency would be 2a+2b+c. If you don’t see why, you should pause here and think until you get it (or not, you decide).
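A quick numeric sanity check of both identities, with made-up values (5, 7 and 11 are arbitrary):

a, b, c = 5, 7, 11                     # arbitrary frequencies
merge_cost = (a + b) + ((a + b) + c)   # pay for each of the two merges
depth_cost = 2*a + 2*b + 1*c           # each number times its tree depth
code_cost = a*len('00') + b*len('01') + c*len('1')  # frequency-weighted code length
assert merge_cost == depth_cost == code_cost == 35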
Now that we have established these two problems are the same (same objective function, same solution space), it follows that the optimal algorithm for both is the same. The solution to the optimal coding problem is Huffman coding, as spoiled from the very beginning of this post. And that algorithm is exactly the greedy algorithm above that we weren’t sure was right. So - I just spent all this time convincing you that the algorithm might not be right, and then convincing you that it is indeed correct. Yay, progress!
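For completeness, here is a sketch of Huffman’s algorithm that produces the actual codes rather than just the total cost. It is the same greedy loop as before, except each heap entry also carries the partial codes of the symbols merged into it so far (the structure and names here are mine):

import heapq
from itertools import count

def huffman_codes(freqs):
    # freqs maps symbol -> frequency.
    tiebreak = count()  # avoids comparing dicts when frequencies tie
    heap = [(f, next(tiebreak), {sym: ''}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, ca = heapq.heappop(heap)
        fb, _, cb = heapq.heappop(heap)
        # Prepend one more bit: 0 for one subtree, 1 for the other.
        merged = {s: '0' + code for s, code in ca.items()}
        merged.update({s: '1' + code for s, code in cb.items()})
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]

print(huffman_codes({'A': 5, 'B': 7, 'C': 11}))
# {'C': '0', 'A': '10', 'B': '11'} - A and B at depth 2, C at depth 1,
# matching the tree above (up to swapping 0s and 1s)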
Variants of that Variant
(Technically, that wasn’t really a variant, because it was exactly the same problem.) After talking to a few people in real life about this, I found it hard to convince them that this innocent-looking algorithm could possibly be wrong, and it certainly didn’t help that the algorithm ended up being right. One line I tried was: “even Shannon tried this problem and came up with the suboptimal Shannon-Fano coding scheme,” but people didn’t seem to appreciate that. In fact, it is quite stunning that when we turn the optimal coding problem into the summing problem, the solution seems so much more obvious.
So I came up with a second attempt: what if we tweak the problem a bit so that greedy no longer works? In the original problem, merging two numbers costs us their sum, and we reinsert that sum into the list. What if the cost function or the merge function were not addition? For example, we could change the cost function to take the minimum of the two numbers, or the merge function to return the square of the sum, etc. Would the greedy algorithm still work then? Now people are much less sure - which kind of proves the point that they shouldn’t have been so confident in the first place.
But would it? It turns out, perhaps unsurprisingly, that in most cases it no longer works. To give a few counterexamples (verified by the brute-force sketch below): if the cost function is the minimum, (1 1 1 1) has the optimal merge order 1 + (1 + (1 + 1)), with total cost 3 versus greedy’s 4. If we take the maximum instead, (1 1 1 2 3) is best merged as ((1 + 1) + 2) + (1 + 3), costing 10 versus greedy’s 11. And if the cost is the square of the sum, the optimal way for (1 1 2 2) is (1 + 2) + (1 + 2), costing 54 versus greedy’s 56. In all these cases, the greedy order gives a suboptimal total cost. It happens that if we make both the cost and the merge function take the min (or the max), the greedy approach still works, but those cases are a lot more trivial than the others.
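Here is a small brute-force sketch that double-checks these numbers (the parameterization by cost and merge functions is mine, and it tries every merge order, so only feed it tiny lists):

from itertools import combinations

def best_cost(nums, cost, merge):
    # Minimum total cost over every possible merge order (exponential!).
    if len(nums) == 1:
        return 0
    best = float('inf')
    for i, j in combinations(range(len(nums)), 2):
        rest = [x for k, x in enumerate(nums) if k not in (i, j)]
        total = cost(nums[i], nums[j]) + best_cost(rest + [merge(nums[i], nums[j])], cost, merge)
        best = min(best, total)
    return best

def greedy_cost(nums, cost, merge):
    # Always merge the two smallest numbers, whatever the cost function.
    nums = sorted(nums)
    total = 0
    while len(nums) > 1:
        a, b = nums[0], nums[1]
        total += cost(a, b)
        nums = sorted(nums[2:] + [merge(a, b)])
    return total

add = lambda a, b: a + b
sq = lambda a, b: (a + b) ** 2
print(best_cost([1, 1, 1, 1], min, add), greedy_cost([1, 1, 1, 1], min, add))        # 3 4
print(best_cost([1, 1, 1, 2, 3], max, add), greedy_cost([1, 1, 1, 2, 3], max, add))  # 10 11
print(best_cost([1, 1, 2, 2], sq, add), greedy_cost([1, 1, 2, 2], sq, add))          # 54 56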
At this point, it seems like we really lucked out that the optimal coding problem corresponds to a case where greedy works. Otherwise, computers would either spend many more cycles computing the optimal codes when compressing files, or we would have ended up with larger compressed files of all kinds.
To end this post with a bit of trivia: did you know that the Huffman tree in a gzip file is itself encoded using Huffman coding? Since every file has a different frequency count of symbols, each file gets a different Huffman tree, which has to be stored in the compressed file as well. That tree is then encoded, again, with Huffman coding, so the decoder can reconstruct it before decoding the data, which is pretty meta.