I don't understand the claim that adding new data forces you to recompute everything. What requires re-computation depends on the algorithm, but for most of the simple cases I can think of, at least the map phase won't require recomputation if you record its results. I believe that's how CouchDB views work, for example.
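To make the point concrete, here's a minimal sketch (with hypothetical names, not CouchDB's actual implementation): cache each document's map output keyed by document id, and on new data only run the map function over documents you haven't seen yet.

```python
def incremental_map(map_fn, docs, cache):
    """docs: dict of doc_id -> document; cache: doc_id -> recorded map results."""
    for doc_id, doc in docs.items():
        if doc_id not in cache:        # only new/unseen docs get mapped
            cache[doc_id] = map_fn(doc)
    # return all map results, old and new, without re-running map on old docs
    return [kv for results in cache.values() for kv in results]

# Toy word-count map function
word_map = lambda text: [(w, 1) for w in text.split()]

cache = {}
docs = {1: "a b", 2: "b c"}
incremental_map(word_map, docs, cache)   # maps docs 1 and 2

docs[3] = "c d"                          # new data arrives
results = incremental_map(word_map, docs, cache)  # only doc 3 is mapped
```

The reduce side is the part this trick doesn't cover, which is exactly the distinction the replies below draw.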
Hadoop is a fairly low-level framework, so it requires the programmer to write the logic to compute the map incrementally (i.e., input_data_may -> map_results_may, then repeat for June).
Similarly, it's up to the programmer to write a reducer that allows incremental additions of data. And even then, you still need to make a pass over all of the data.
The limitation on the map end is only a limitation of Hadoop.
But the limit on the reducer is fundamental. Some reduce functions are not associative, and some don't even have a type like [T x U] -> T x U, where the output could be fed back in as one of the inputs. In those cases, there is nothing to be done but redo the reduce.
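A small illustration of the difference (my own example, not from the thread): summation is associative, so the old result can be combined with a reduce over just the new data; the median is not, and reducing a previous median together with new values gives the wrong answer, so the whole dataset must be re-reduced.

```python
import statistics

# Sum: associative, and the output has the same type as the inputs,
# so the old result can absorb a reduce over only the new data.
old = sum([1, 2, 3])              # result from the previous run
incremental = old + sum([4, 5])   # fold in the new data only
full = sum([1, 2, 3, 4, 5])
assert incremental == full        # both are 15

# Median: the previous result is not a reusable summary.
old_m = statistics.median([1, 2, 3])         # 2
wrong = statistics.median([old_m, 4, 5])     # 4
right = statistics.median([1, 2, 3, 4, 5])   # 3 -- requires re-reducing everything
```

This is why combiners in MapReduce only work for operations like sum, count, min, and max.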
Indeed, reduce is the difficult part. OTOH, I think this limitation shows up in many algorithms at a fairly fundamental level, and isn't just an artefact of MR. The only alternative framework I can think of for dealing with really large datasets in a distributed manner is sampling-based methods with one-pass (or mostly one-pass) algorithms.
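A classic one-pass, sampling-based technique of the kind mentioned above is reservoir sampling (Vitter's Algorithm R): it keeps a uniform random sample of k items from a stream in a single pass, without ever knowing the stream's length in advance. A minimal sketch:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Uniform sample of k items from an iterable, in one pass (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)            # fill the reservoir first
        else:
            j = rng.randrange(i + 1)       # replace with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

sample = reservoir_sample(range(1_000_000), 5)
```

Because each worker needs only one pass and a constant-size state, this style of algorithm distributes and incrementalizes far more easily than a general reduce.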