Hadoop seems to be popular for ETL tooling and big data, and it can be considered for historical databases and data warehouse systems. But how does it fare when OLTP transactions are required?
Hive (http://hive.apache.org/) is used as a data warehouse on Hadoop.
HBase (http://hbase.apache.org/) runs on top of Hadoop as a NoSQL database, providing the real-time processing side of big data needs.
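To make "real-time" concrete, here is a minimal sketch of a single-row write and read with the HBase Java client; the table name ("users"), column family ("info"), and row key are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseRealtimeSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("users"))) { // hypothetical table
            // Low-latency single-row write, keyed by user id
            Put put = new Put(Bytes.toBytes("user42"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Immediate read-back of the same row
            Get get = new Get(Bytes.toBytes("user42"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        }
    }
}
```

Single-row gets and puts like this are low-latency, which is why HBase is the usual answer for real-time access on a Hadoop cluster. Note that HBase only guarantees atomicity per row; it does not give you multi-row ACID transactions.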
OLTP can be used with Hadoop, but only indirectly: you run a separate OLTP system alongside Hadoop and periodically dump its data into Hadoop, or something similar (a rough export sketch follows). It's also worth mentioning that combining Hadoop with other technologies has produced a lot of start-ups.
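In practice a tool such as Apache Sqoop handles this kind of export. Purely as an illustration of the pattern, here is a hand-rolled sketch that copies rows from a hypothetical MySQL OLTP source into a CSV file on HDFS (the JDBC URL, credentials, table, and target path are all made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OltpToHdfsExport {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration());
             // Hypothetical OLTP source database
             Connection db = DriverManager.getConnection(
                     "jdbc:mysql://oltp-host/shop", "reader", "secret");
             Statement stmt = db.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, total FROM orders");
             // Hypothetical target file in the Hadoop warehouse directory
             FSDataOutputStream out = fs.create(new Path("/warehouse/orders.csv"))) {
            while (rs.next()) {
                // Write each OLTP row as a CSV line for later batch analytics
                out.writeBytes(rs.getLong("id") + "," + rs.getBigDecimal("total") + "\n");
            }
        }
    }
}
```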
Speaking as someone working in a Hadoop-based startup, if you want serious OLTP, go get a distributed in-memory database.
The fact is, the main memory of a few machines is easily big enough even for the largest OLTP databases; most OLTP databases fit comfortably in memory on a single machine. For example, 100 million rows at roughly 1 KB each is only about 100 GB, well within the RAM of one commodity server.
The Hadoop distributed file system, HDFS, is inherently unsuited for OLTP use. Even implementing a transaction log on HDFS is unsatisfactory because of the very limited rate at which you can commit changes (see the sketch below).
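To see why, consider what a write-ahead log would look like on HDFS: every commit has to be flushed through the whole DataNode replication pipeline before it counts as durable, and that round trip caps the achievable commits per second. A minimal sketch, with an invented log path and record format:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWalSketch {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration());
             // Hypothetical write-ahead-log file
             FSDataOutputStream log = fs.create(new Path("/wal/txn.log"))) {
            for (int txn = 0; txn < 1000; txn++) {
                log.writeBytes("COMMIT " + txn + "\n");
                // Durability point: block until every DataNode in the pipeline
                // has received the bytes. Paying this network round trip on
                // every commit is what limits the transaction rate on HDFS.
                log.hflush();
            }
        }
    }
}
```

Batching many transactions per hflush() raises throughput, but only by trading away commit latency, which is exactly what OLTP workloads cannot afford.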
The MapR distribution of Hadoop includes an alternative file system that could support such a use case.