kuniga.me > NP-Incompleteness > Paper Reading - FlumeJava
18 May 2022
In this post we’ll discuss the paper FlumeJava: Easy, Efficient, Data-Parallel Pipelines by Chambers et al. In a nutshell, FlumeJava is a Java library, developed at Google, that simplifies the execution of multi-stage MapReduce jobs.
Before we start, let’s revisit the MapReduce concept, since it’s core to the premise of the FlumeJava library.
MapReduce is a distributed systems paradigm popularized by Google. It can be used to perform parallel computations by dividing the work across multiple machines.
A MapReduce job consists of 2 sets of machines: mappers and reducers. We provide a function to each of the mappers and another function to each of the reducers.
Each mapper receives a list of inputs and applies the map function to each of them. Each result then gets hashed into a key, and depending on the key it goes to a particular reducer machine.
Each reducer receives the outputs from every mapper that correspond to its keys. This distribution of the mappers’ outputs to the reducers’ inputs is called shuffling. The reducer applies the reduce function (often an aggregation) and writes the result to some persistent storage.
Suppose we want to count how many words start with each letter in a group of files, say a.txt and b.txt.
One way to solve this using MapReduce is to provide a map function that returns the first letter of each word, and a reduce function that counts how many entries exist for each letter.
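As an illustration, these two functions could be sketched in plain Java (a toy sketch with hypothetical signatures; the real MapReduce interfaces are more general):

```java
import java.util.List;

public class FirstLetterCount {
    // Map function: emit the first letter of a word as its key.
    static char mapFn(String word) {
        return word.charAt(0);
    }

    // Reduce function: count how many entries arrived for a given key.
    static int reduceFn(List<Character> entriesForKey) {
        return entriesForKey.size();
    }
}
```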
In our example, suppose we have 2 mappers and 2 reducers: a.txt gets processed by the first mapper, and b.txt by the second. Let’s also assume keys 'a'...'m' are processed by the first reducer and 'n'...'z' by the second.
The mappers will output the first letter of each word; the shuffling process will route these letters to the reducers by key range; and each reducer will return a count per letter.
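Putting it together, here is a toy, single-process simulation of this example (hypothetical helper code, not the actual distributed framework): two mappers each process one file, keys are routed to the two reducers by letter range, and each reducer counts the entries it received per letter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniMapReduce {
    // Mapper: apply the map function (first letter) to each word in its input.
    static List<Character> mapper(List<String> words) {
        List<Character> out = new ArrayList<>();
        for (String w : words) out.add(w.charAt(0));
        return out;
    }

    // Shuffle routing: keys 'a'..'m' go to reducer 0, 'n'..'z' to reducer 1.
    static int route(char key) {
        return key <= 'm' ? 0 : 1;
    }

    // Reducer: count how many entries it received per key.
    static Map<Character, Integer> reducer(List<Character> keys) {
        Map<Character, Integer> counts = new TreeMap<>();
        for (char k : keys) counts.merge(k, 1, Integer::sum);
        return counts;
    }

    // End-to-end: 2 mappers (one per file), shuffle, then 2 reducers.
    static List<Map<Character, Integer>> run(List<String> fileA, List<String> fileB) {
        List<List<Character>> shuffled = List.of(new ArrayList<>(), new ArrayList<>());
        for (List<Character> mapped : List.of(mapper(fileA), mapper(fileB))) {
            for (char key : mapped) {
                shuffled.get(route(key)).add(key);
            }
        }
        List<Map<Character, Integer>> results = new ArrayList<>();
        for (List<Character> partition : shuffled) {
            results.add(reducer(partition));
        }
        return results;
    }
}
```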
The major value MapReduce provides is abstracting away the complexities of setting up and orchestrating a distributed system, including scaling and recovering from faults. In its most basic form, the user only needs to provide the map and reduce functions, though it can get more complex.
A single MapReduce job provides a primitive that on its own can only express a limited set of computations. However, due to its simplicity it’s also highly modular and thus can be chained with other MapReduce jobs to form a directed acyclic graph (DAG).
This graph can become very complex and difficult to write and maintain. Moreover, it’s easy to construct inefficient graphs that perform redundant computations. The proposal from FlumeJava is an abstraction that makes expressing these graphs simpler and that performs optimizations behind the scenes.
I see an analogy here between MapReduce + FlumeJava and assembly + high-level languages: assembly is harder to maintain, while high-level languages provide better abstractions, and the compiler can optimize the code in a way that outperforms naive manual assembly optimizations.
The first part of interfacing with FlumeJava is to construct a graph by providing the functions to be computed and their dependencies.
Let’s consider a basic example. First we invoke a function readTextFileCollection() that reads strings from a file stored in a distributed file system (GFS). Note that this does not execute the actual read; it just adds a node to the graph. The P prefix in the returned PCollection type stands for parallel.
For each line we read, we split it into words by passing a corresponding lambda function to parallelDo(). The second argument to parallelDo() is collectionOf(strings()), which indicates the output type (a parallel collection of strings).
We can then turn that list into a frequency map where, to start, each entry maps to 1. Note that we use a list of pairs as opposed to a dictionary because it needs to store duplicate keys.
wordsWithOnes should really be computed in the same pass, though. This could be a good example of how a naive implementation ends up using two MapReduce jobs when one would suffice.
Finally we can group the pairs by key and then sum their values. The aggregation function SUM_INTS is part of the library, but we could provide our own. combineValues() could be replaced with parallelDo(), but because the former assumes an aggregation, it can be optimized by also being executed as a MapReduce combiner, a pre-aggregation on the mapper side that reduces the amount of data shuffled.
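Putting the steps above together, the pipeline looks roughly like the word-count example from the paper (paraphrased; this won’t compile outside Google, since FlumeJava is an internal library):

```java
PCollection<String> lines =
    readTextFileCollection("/gfs/data/shakes/hamlet.txt");

// Split each line into words (deferred; this just builds the graph).
PCollection<String> words = lines.parallelDo(new DoFn<String, String>() {
  void process(String line, EmitFn<String> emitFn) {
    for (String word : splitIntoWords(line)) {
      emitFn.emit(word);
    }
  }
}, collectionOf(strings()));

// Pair each word with 1 (a table of pairs, so duplicate keys are allowed).
PTable<String, Integer> wordsWithOnes = words.parallelDo(
    new DoFn<String, Pair<String, Integer>>() {
      void process(String word, EmitFn<Pair<String, Integer>> emitFn) {
        emitFn.emit(Pair.of(word, 1));
      }
    }, tableOf(strings(), ints()));

// Group by key, then sum the per-key values.
PTable<String, Collection<Integer>> groupedWordsWithOnes =
    wordsWithOnes.groupByKey();
PTable<String, Integer> wordCounts =
    groupedWordsWithOnes.combineValues(SUM_INTS);

FlumeJava.run();  // only now are the MapReduce jobs created and executed
```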
These basic primitives can be used to implement more complex operations common to SQL engines, like count(), join() and top().
Then we call FlumeJava.run(), which optimizes the graph and then creates and runs the MapReduce jobs. Afterwards it’s possible to consume the output as if the computation had happened locally.
Once the graph is built, there are a few performance optimizations that happen. The goal is to minimize the number of MapReduce jobs that get generated at the end.
One such optimization consists of combining adjacent parallelDo() nodes. This is straightforward: if we have functions $f()$ followed by $g()$, we can combine them into their composition $(g \circ f)()$.
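In Java terms this fusion is just function composition; a trivial illustration (toy code, not FlumeJava’s actual graph-rewriting machinery):

```java
import java.util.function.Function;

public class ParallelDoFusion {
    // Two adjacent "parallelDo" stages, each a pure per-element function.
    static final Function<Integer, Integer> f = n -> n + 1;
    static final Function<Integer, Integer> g = n -> n * 2;

    // Fused stage: one pass computing (g ∘ f), so no intermediate
    // collection needs to be materialized between the two stages.
    static final Function<Integer, Integer> fused = f.andThen(g);
}
```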
Since the reducer of the final MapReduce job allows multiple heterogeneous outputs, we can also combine a parent and its children into one node, as in Figure 1.
If the outputs of two or more parallelDo() nodes, say A and B, go through a flatten operation and are then fed into another parallelDo() D, as in Figure 2, then we can duplicate D and “push it up” through the flatten so that each copy is adjacent to A and B. This allows A + D and B + D to be fused, so we turned 3 functions into 2.
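This rewrite is valid because mapping a function over a flattened collection gives the same result as flattening the individually mapped collections. A toy Java check of that equivalence (hypothetical names):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SiblingFusion {
    // Original shape: flatten the outputs of A and B, then apply D.
    static List<Integer> flattenThenD(List<Integer> outA, List<Integer> outB,
                                      Function<Integer, Integer> d) {
        return Stream.concat(outA.stream(), outB.stream())
                     .map(d)
                     .collect(Collectors.toList());
    }

    // Rewritten shape: "push D up" into each branch (now fusable with A
    // and B respectively), then flatten. Produces the same result.
    static List<Integer> dThenFlatten(List<Integer> outA, List<Integer> outB,
                                      Function<Integer, Integer> d) {
        return Stream.concat(outA.stream().map(d), outB.stream().map(d))
                     .collect(Collectors.toList());
    }
}
```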
The paper defines the MapShuffleCombineReduce (MSCR) abstraction, which is a 1:1 mapping with an actual MapReduce job. As the name suggests it encompasses a map, shuffle, combine and reduce sub-units.
Each groupBy() node will form its own MSCR, and the parallelDo() nodes adjacent to it will be fused into that MSCR. Any nodes remaining after this process are parallelDo() ones, which go into trivial MSCRs with only a mapper.
For each MSCR the library will first determine whether to run the computation locally or to schedule a MapReduce job.
It stores the output of each MapReduce job in a temporary file (which I assume lives in distributed storage like GFS) but deletes the file after the process is complete. These intermediate files can also be used as a cache when the code is re-executed.
It also routes the stdout/stderr and exceptions of the user-provided functions from the distributed machines back to the host machine.
The authors compare FlumeJava with a naive MapReduce implementation, a hand-optimized MapReduce implementation and Sawzall, a DSL used for the same purposes as FlumeJava.
The experiments show that FlumeJava programs are smaller than their MapReduce counterparts but larger than Sawzall ones. It also shows that FlumeJava generates a number of MapReduce jobs comparable to hand-optimized MapReduce implementations and a fraction of those from naive MapReduce and Sawzall implementations.
The paper mentions a project called Lumberjack, a DSL that was also aimed at managing graphs of MapReduce jobs but was later scrapped in favor of the Java-based FlumeJava.
The interesting thing is that this paper was written in 2010, before Hive became popular. Hive is essentially a DSL that generates MapReduce jobs behind the scenes, but it uses SQL as its API.
It turns out that the FlumeJava API can be naturally expressed in SQL, and this is a familiar paradigm, especially for non-programmers. This explains the explosion in popularity of SQL-based distributed systems like Presto and Spark over the years.
To summarize, we learned that FlumeJava is a Java library that builds a deferred graph of parallel operations, optimizes that graph via fusion into MSCR (MapReduce) jobs, and ends up with a number of jobs comparable to hand-optimized pipelines while being much simpler to write and maintain.
One thing the paper doesn’t seem to mention explicitly is how it handles fault tolerance. I think it’s possible to defer it entirely to the MapReduce framework, which is fault-tolerant: if each step of the execution is fault-tolerant, the whole system should be fault-tolerant as well.