“What does the logarithm do?”, a professor asked a bunch of us back in 1990. “It is the inverse of exponentiation”, I said. “No”, he replied, “I ask again, what does the logarithm do?”. A friend started to say something along the lines of the Wikipedia definition. “You tell me how it is calculated, but not what it stands for; why do we need it?”. He then proceeded to ask us the value of the logarithm of some numbers:
| x | log10(x) |
|---|---|
| 1 | 0 |
| 10 | 1 |
| 100 | 2 |
| 1000 | 3 |
| 10000 | 4 |
– Do you see a pattern now?
“The logarithm is the length of the number, minus one”, one of us observed.
– Exactly!
I was reminded of this discussion when stumbling on “Counting large numbers of events in small registers”, written by Robert Morris (father of Robert T. Morris). Basically, what Morris observes in this paper is that with n bits available, one can normally count only up to 2^n − 1. He had only 8-bit registers available, so this limited counting up to 255, which did not suit the application he had at hand. So he began by simply having the value v of a register represent a count of 2^v (that is, storing the logarithm of the count rather than the count itself), and reached a generalized result of

n_v = a((1 + 1/a)^v − 1),

where a controls both the maximum count that can be held in the register and the expected error.
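As a sketch of how such a counter works (this is not Morris's original code; the increment probability follows from the formula above, since the gap between consecutive representable counts is n_{v+1} − n_v = (1 + 1/a)^v, and the choice a = 30 is only an illustrative default):

```python
import random

def morris_count(events, a=30):
    """Approximate (Morris-style) counter.

    The register value v stands for an estimated count of
    n_v = a * ((1 + 1/a)**v - 1). On each event, v is incremented
    with probability (1 + 1/a)**(-v), the reciprocal of the gap
    between consecutive representable counts, so on average the
    estimate tracks the true count.
    """
    v = 0
    for _ in range(events):
        if random.random() < (1 + 1 / a) ** (-v):
            v += 1
    # Return the register value and the count it stands for.
    return v, a * ((1 + 1 / a) ** v - 1)
```

With a = 30, counting on the order of 10,000 events leaves v far below 255, so the whole thing still fits in a single 8-bit register while the estimate stays within the expected error band.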
The maximum count an 8-bit register can then hold is n_255 = a((1 + 1/a)^255 − 1). The expected error (standard deviation) of the estimate after n counts is approximately n/√(2a), i.e. a relative error of about 1/√(2a).
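To make the trade-off concrete (a = 30 is an arbitrary example value, not a figure from the text), plugging numbers into the two formulas for an 8-bit register gives:

```python
# Illustrative numbers for an 8-bit register; a = 30 is an
# example choice, not taken from Morris's paper.
a = 30
max_count = a * ((1 + 1 / a) ** 255 - 1)  # roughly 1.3e5
rel_error = (1 / (2 * a)) ** 0.5          # roughly 0.13, i.e. ~13%
```

So for the price of roughly 13% expected error, the same 8 bits that could count to 255 exactly can count to about 130,000 approximately; a larger a tightens the error but lowers the maximum count.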
This is a really nice hack if you need to count large numbers of events and accuracy is not a must. It may be helpful to people working with embedded systems or otherwise resource-constrained environments.

