Hadoop is a parallel data processing framework that has traditionally been used to run map/reduce jobs. These are long-running batch jobs that take minutes or hours to complete. Spark is an alternative to the traditional batch map/reduce model that can be used for real-time stream data processing and fast interactive queries that finish within seconds. It can also run on top of the Hadoop stack (for example, on YARN, reading data from HDFS). So Hadoop is evolving into a general-purpose framework that supports multiple models, such as traditional map/reduce and Spark.
MapReduce is a programming paradigm, whereas Spark is a concrete piece of software with its own advantages, such as in-memory processing. You cannot compare the two directly; however, you can compare Hadoop and Spark.
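To make the distinction concrete, the map/reduce paradigm can be sketched in plain Python with no framework at all. This is a hypothetical word count (the canonical map/reduce example), not Spark or Hadoop code:

```python
from functools import reduce
from collections import Counter

# Hypothetical input: each string stands in for one line of a large dataset.
lines = ["spark is fast", "hadoop runs map reduce", "spark uses memory"]

# Map phase: emit a (word, 1) pair for every word in every line.
pairs = [(word, 1) for line in lines for word in line.split()]

# Reduce phase: group by key and sum the counts.
counts = reduce(lambda acc, kv: acc.update({kv[0]: kv[1]}) or acc,
                pairs, Counter())

print(counts["spark"])  # → 2
```

Hadoop's MapReduce engine and Spark both execute this same paradigm; the frameworks differ in how the phases are scheduled and where the intermediate data lives (disk for MapReduce, memory where possible for Spark).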
Apache Spark is an execution engine that broadens the types of computing workloads Hadoop can handle, while also tuning the performance of the big data framework.
Apache Spark has numerous advantages over Hadoop's MapReduce execution engine, in both the speed with which it carries out batch processing jobs and the wider range of computing workloads it can handle.
Rather than just processing a batch of stored data after the fact, as is the case with MapReduce, Spark can also manipulate data in real time using Spark Streaming.
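The batch-versus-streaming distinction can be illustrated with a small pure-Python simulation. This is not the Spark Streaming API, just a sketch of the micro-batch idea behind it, with made-up input data:

```python
from collections import Counter

def process_batch(records):
    """One micro-batch: count words in the records that arrived this interval."""
    return Counter(word for line in records for word in line.split())

# Simulated stream: data arrives in small micro-batches over time,
# instead of being processed after the fact as one stored batch.
stream = [
    ["spark streams data"],
    ["spark counts words", "more words arrive"],
]

running = Counter()
for batch in stream:
    # Results are updated incrementally as each interval's data arrives.
    running += process_batch(batch)

print(running["spark"])  # → 2
print(running["words"])  # → 2
```

Spark Streaming works on this principle: the live stream is chopped into small batches that are processed with the same engine used for stored data, so results stay continuously up to date.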
According to Cloudera, Spark is able to execute batch-processing jobs 10 to 100 times faster than the MapReduce engine, primarily by reducing the number of reads from and writes to disk.