# Computing Approximate Histograms in Parallel

Today I’m going to write a little about Approximate Histograms and how they can be used to gain more insight into streamed big data feeds. I also provide a simple Java implementation and explain some parts of it.

Most of the common aggregation operations, like counting and summing, can be performed in parallel, as long as there is a reduce phase where the results from each node are combined. However, this is not trivial for histograms, since we need all the data along one dimension before we can represent it in a histogram.

When the data is processed by multiple nodes, each node can only construct a histogram of the partial data it receives. Ben-Haim and Tom-Tov presented a solution that uses a heap-based data structure to represent the data, together with a merge algorithm that combines the data structures computed on different nodes into one that is an approximate histogram of the whole dataset.

This technique has been applied by MetaMarkets with good accuracy for most of what a histogram can tell us about the data distribution: estimating the average, the quartiles, and the total number of data points/events.

I took the liberty of writing a simple implementation of it, which has now been used in production for some months.

Internally, the histogram is represented by a set of points (count, centroid), ordered by their centroids. When a new point is added, if the centroid already exists we increase its count; otherwise we add the point with count 1 to the list.
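The insertion step can be sketched with a sorted map from centroid to count. This is a minimal illustration, not the actual production code; the class and field names are made up for the example.

```java
import java.util.TreeMap;

// Hypothetical sketch: the histogram is a sorted map from centroid to count,
// which keeps the points ordered by centroid automatically.
class HistogramSketch {
    final TreeMap<Double, Long> points = new TreeMap<>();

    // Add a raw observation: if the centroid already exists, increase its
    // count; otherwise insert a new point with count 1.
    void add(double value) {
        points.merge(value, 1L, Long::sum);
    }
}
```

Adding `1.0` twice and `2.0` once, for example, leaves two points: `(2, 1.0)` and `(1, 2.0)`.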

Each histogram has a limit on the number of points it keeps, and when a new insert exceeds this limit, a compression takes place. The compression consists of merging the two consecutive points whose centroids are closest. The two are replaced by a single point whose centroid lies nearer to the point with the higher count; if both counts are equal, it lies exactly in the middle. The count of the new point is the sum of the two old ones.
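One compression step can be sketched as follows, again assuming the points live in a sorted map from centroid to count (the names are illustrative): scan for the adjacent pair with the smallest centroid gap and replace it with a single count-weighted point.

```java
import java.util.TreeMap;

class Compression {
    // Sketch of one compression step: merge the two adjacent points whose
    // centroids are closest into a single count-weighted point.
    static void compressOnce(TreeMap<Double, Long> points) {
        Double left = null, right = null;
        double bestGap = Double.POSITIVE_INFINITY;
        Double prev = null;
        for (Double c : points.keySet()) {
            if (prev != null && c - prev < bestGap) {
                bestGap = c - prev;
                left = prev;
                right = c;
            }
            prev = c;
        }
        if (left == null) return; // fewer than two points, nothing to merge
        long n1 = points.remove(left);
        long n2 = points.remove(right);
        // The weighted average pulls the merged centroid toward the heavier
        // point; with equal counts it lands exactly in the middle.
        double centroid = (left * n1 + right * n2) / (double) (n1 + n2);
        points.merge(centroid, n1 + n2, Long::sum);
    }
}
```

For instance, compressing the points `{1.0: 1, 2.0: 1, 10.0: 1}` merges the closest pair (1.0 and 2.0) into a single point `(2, 1.5)`.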

As Java doesn’t have unsigned numeric types, this implementation exploits the sign of the count field to flag whether a point originated from the compression of two other points or from raw observations. This helps answer questions like: how many values are below X? If every point whose centroid is below X has a positive count, we can count them exactly. If any count is negative, we know that point is an approximation, so we calculate the count using the trapezoidal estimation of Ben-Haim and Tom-Tov. This gives more accurate results than assuming every point might be an approximation, and it requires no extra space in Java-based data structures.
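The sign convention can be sketched like this (hypothetical helper names): raw points carry a positive count, compressed points a negative one, so a query can tell whether its answer is exact without any extra storage. The trapezoidal fallback itself is not shown.

```java
import java.util.TreeMap;

class SignedCounts {
    // Sum of counts for centroids strictly below x. The sign is only a flag,
    // so we always sum the magnitudes.
    static long countBelow(TreeMap<Double, Long> points, double x) {
        long total = 0;
        for (long c : points.headMap(x).values()) total += Math.abs(c);
        return total;
    }

    // True when every point below x comes from raw observations, i.e. the
    // count above is exact; otherwise the trapezoidal estimate of Ben-Haim
    // and Tom-Tov should be used instead (not shown in this sketch).
    static boolean isExactBelow(TreeMap<Double, Long> points, double x) {
        for (long c : points.headMap(x).values()) if (c < 0) return false;
        return true;
    }
}
```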

Merging multiple histograms, which happens when we want to combine results computed on different nodes, is done by creating a big heap with the combined points of the histograms and applying the compression described above until the heap is down to the maximum number of points.
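A sketch of the merge step, under the same illustrative sorted-map representation: combine the points of two histograms, then repeatedly apply the compression step described in the text until the point budget is met.

```java
import java.util.TreeMap;

class HistogramMerge {
    // Combine two histograms and compress back down to maxPoints.
    static TreeMap<Double, Long> merge(TreeMap<Double, Long> a,
                                       TreeMap<Double, Long> b,
                                       int maxPoints) {
        TreeMap<Double, Long> merged = new TreeMap<>(a);
        b.forEach((centroid, count) -> merged.merge(centroid, count, Long::sum));
        while (merged.size() > maxPoints) compressOnce(merged);
        return merged;
    }

    // Same compression as in the text: merge the two closest adjacent
    // points into one count-weighted point.
    static void compressOnce(TreeMap<Double, Long> points) {
        Double left = null, right = null;
        double bestGap = Double.POSITIVE_INFINITY;
        Double prev = null;
        for (Double c : points.keySet()) {
            if (prev != null && c - prev < bestGap) {
                bestGap = c - prev;
                left = prev;
                right = c;
            }
            prev = c;
        }
        if (left == null) return; // fewer than two points
        long n1 = points.remove(left);
        long n2 = points.remove(right);
        double centroid = (left * n1 + right * n2) / (double) (n1 + n2);
        points.merge(centroid, n1 + n2, Long::sum);
    }
}
```

Note that merging never loses counts: the total count of the result always equals the sum of the two inputs, only the centroids become coarser.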

For very dispersed data, this data structure may yield bad approximations if the number of points is not high enough. Still, it is flexible: it is easy to adapt it to streams with different distributions just by tuning the number of centroids we keep.