To get a basic understanding of MapReduce, I recommend playing around with smaller data sets using built-in functionality such as `map` and `reduce` in a language like Python.
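For example, a small word count (the canonical MapReduce exercise) can be sketched in plain Python with `map` and `functools.reduce`; the sample lines below are made up:

```python
from functools import reduce
from collections import Counter

lines = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog",
]

# Map phase: turn each line into a Counter of word -> count.
mapped = map(lambda line: Counter(line.split()), lines)

# Reduce phase: merge the per-line counts into one total.
word_counts = reduce(lambda a, b: a + b, mapped, Counter())

print(word_counts.most_common(3))  # e.g. [('the', 3), ('quick', 2), ('dog', 2)]
```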
I've had quite a bit of experience running MapReduce jobs on Hadoop. You can write them from scratch in Java, or use a higher-level tool like Pig. Pig can simplify your code tremendously for basic operations like filtering, grouping, or exporting into something like Hive or HBase.
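Keeping with the Python theme of this thread, Hadoop also lets you write the map and reduce steps as plain scripts via Hadoop Streaming instead of Java (Streaming is my addition here, not something the answer above mentions). A rough sketch of a word-count pair, with `mapper.py` and `reducer.py` as placeholder names:

```python
#!/usr/bin/env python3
# mapper.py: emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py: Hadoop sorts map output by key, so all counts for a
# given word arrive as consecutive lines on stdin.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.strip().rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

You would submit these through the hadoop-streaming jar that ships with your distribution, passing the scripts via `-mapper` and `-reducer`.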
If you have sufficient memory and are not working on too large a dataset, you may try Spark (https://spark.apache.org/), which provides in-memory operations for faster iterative MapReduce-style jobs.
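A minimal PySpark sketch of the same word count; that `pyspark` is installed and that a local file `sample.txt` exists are both my assumptions:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")

lines = sc.textFile("sample.txt")  # assumed input file
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# cache() keeps the RDD in memory, which is what makes repeated
# (iterative) passes over the same data fast in Spark.
counts.cache()
print(counts.take(5))

sc.stop()
```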
You may also find a survey of various big data analytics platforms in this paper: http://www.journalofbigdata.com/content/2/1/8