You didn't even try to make it through the abstract...
> An exception to this is the Sort workload, for which MapReduce is 2x faster than Spark. We show that MapReduce's execution model is more efficient for shuffling data than Spark, thus making Sort run faster on MapReduce.
A lot of the differences between the systems arise from the implementation choice of how to do aggregation in Hadoop 2.4.0 and Spark 1.3. There's nothing inherent in the RDD model, for example, that says the aggregation has to be done eagerly at the mapper; nor in the MapReduce model that says it has to be done at the reducer. Either system could support the other aggregation mechanism, and the only challenge would be in choosing which one to use.
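To make the distinction concrete, here is a minimal sketch in plain Python (not actual Hadoop or Spark code; the word-count data and function names are made up) contrasting reducer-side aggregation with eager map-side aggregation. The interesting difference is how many key/value pairs have to cross the shuffle:

```python
from collections import Counter

# Hypothetical word-count records emitted by two "mapper" partitions.
partitions = [
    ["a", "b", "a", "c"],
    ["b", "b", "a"],
]

def shuffle_then_reduce(parts):
    """Reducer-side aggregation (classic MapReduce without a combiner):
    every (key, 1) pair is shuffled before any merging happens."""
    shuffled = [(word, 1) for part in parts for word in part]  # 7 pairs cross the shuffle
    totals = Counter()
    for word, count in shuffled:
        totals[word] += count
    return totals, len(shuffled)

def combine_then_reduce(parts):
    """Eager map-side aggregation (combiner / map-side combine):
    each partition pre-aggregates locally, so fewer pairs are shuffled."""
    local = [Counter(part) for part in parts]            # aggregate at the "mapper"
    shuffled = [kv for c in local for kv in c.items()]   # only 5 pairs cross the shuffle
    totals = Counter()
    for word, count in shuffled:
        totals[word] += count
    return totals, len(shuffled)
```

Both paths produce identical totals; the only difference is where the merging work happens and how much data moves, which is exactly the kind of implementation choice the paper is measuring.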
Some former colleagues wrote a nice paper about the performance trade-offs for different styles of distributed aggregation in DryadLINQ (a MapReduce-style system), and evaluated it at scale:
This is a very valid concern. There are a lot of ways to make algorithms and distributed systems scale poorly as the number of compute nodes grows significantly (it took Hadoop half a decade to figure out how to improve this via YARN).
I guess they did not analyze joining two large datasets (I assume because M/R would win hands down). Can someone more experienced tell me whether I'm thinking about this correctly?
What does it mean to "join two large datasets"? I can think of many meanings. What part of the method description in the abstract did you consider yourself too inexperienced to understand?
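One common reading of "joining two large datasets" is a reduce-side (shuffle) join: both inputs are partitioned by key, and matching rows are paired up at the reducer. A minimal sketch in plain Python (the datasets and names are invented for illustration; this is not code from either system):

```python
from collections import defaultdict

# Two hypothetical datasets keyed by user id.
users  = [(1, "alice"), (2, "bob"), (3, "carol")]
orders = [(1, "book"), (1, "pen"), (3, "mug")]

def reduce_side_join(left, right):
    """Shuffle both inputs by key, then pair matching rows at the 'reducer'."""
    buckets = defaultdict(lambda: ([], []))
    for k, v in left:
        buckets[k][0].append(v)   # shuffle phase: group left rows by key
    for k, v in right:
        buckets[k][1].append(v)   # shuffle phase: group right rows by key
    # Reduce phase: emit the cross product of matching rows for each key.
    return [(k, lv, rv)
            for k, (ls, rs) in buckets.items()
            for lv in ls for rv in rs]
```

Because both sides have to be shuffled in full, this is exactly the kind of workload where shuffle efficiency, the thing the paper credits for MapReduce's Sort advantage, would dominate.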
Should I ever use traditional MR over Spark?