That depends. RHadoop is one technology that relies on Hadoop Streaming: it runs your R code as the mapper or the reducer (the reducer is usually where R adds the most value). It cannot help you with interactive analysis, only batch offline analysis.

SparkR is a beautiful technology, but it is still in its infancy. It has the potential to become an interactive, scalable solution, especially if integrated with notebook-style tools such as Zeppelin.

ScaleR is another technology, from Microsoft. They have made extensive changes so that R really uses parallel computation internally while keeping things simple on the outside; this I gather from their documentation.
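Hadoop Streaming, which RHadoop builds on, has a very simple contract: the framework pipes input lines to any executable's stdin and reads tab-separated key/value pairs back from its stdout, sorting by key between the map and reduce phases. RHadoop's R mappers and reducers follow that same contract. Here is a minimal sketch of the contract in Python (the word-count task and script layout are illustrative, not part of RHadoop itself):

```python
import sys
from itertools import groupby

def mapper(lines):
    """Emit one tab-separated (word, 1) pair per word, as Hadoop Streaming expects."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(pairs):
    """Sum counts per word; the framework delivers pairs sorted by key."""
    keyed = (p.split("\t") for p in pairs)
    for word, group in groupby(keyed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Invoked by Hadoop Streaming as the -mapper or -reducer executable,
    # e.g. with an illustrative argument choosing the stage.
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    stream = mapper if stage == "map" else reducer
    for out in stream(line.rstrip("\n") for line in sys.stdin):
        print(out)
```

Because the contract is just "stdin in, stdout out", the same structure works for an R script, which is exactly why this model is batch-only: each job is a fire-and-forget pipeline, with nothing interactive about it.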
Spark is an emerging technology for improving big-data analysis. Apache Spark offers better performance than Hadoop: up to 100 times faster when the distributed data are cached in main memory (RAM), and about 10 times faster when they are stored on disk. The caching mechanism means Spark does not have to re-read from disk on every pass, as Hadoop does. Spark also includes a machine-learning library (ML or MLlib, depending on the version).
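The effect of that cache can be sketched without Spark itself: if every pass over the data triggers a disk read (the Hadoop-style pattern), N passes cost N reads, while caching the first read in RAM leaves the remaining passes free of disk I/O. A toy illustration in Python, where the `DiskBackedDataset` class is hypothetical and exists only to count simulated disk reads:

```python
class DiskBackedDataset:
    """Toy dataset that counts how often it is 'read from disk' (hypothetical class)."""
    def __init__(self, records):
        self._records = records
        self.disk_reads = 0
        self._cache = None

    def scan(self):
        """Hadoop-style access: every pass re-reads from disk."""
        self.disk_reads += 1
        return list(self._records)

    def cached_scan(self):
        """Spark-style access: the first pass reads from disk, later passes hit RAM."""
        if self._cache is None:
            self.disk_reads += 1
            self._cache = list(self._records)
        return self._cache

# Three passes without caching -> three disk reads.
data = DiskBackedDataset(range(5))
for _ in range(3):
    data.scan()
uncached_reads = data.disk_reads

# Three passes with caching -> one disk read.
data = DiskBackedDataset(range(5))
for _ in range(3):
    data.cached_scan()
cached_reads = data.disk_reads
```

Iterative workloads such as machine learning make many passes over the same data, which is why this caching behaviour (and MLlib on top of it) is where Spark's speedup over Hadoop shows most clearly.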