Big data is a major challenge for the Semantic Web: we need to store trillions of triples/quads. This goes beyond a data warehouse, because the data are connected to each other and form a graph — hence the term "semantic data lake". Querying and inferencing over such a store while staying scalable is the real challenge. Hadoop is a good fit for achieving scalability on commodity hardware, but my question is about the concrete techniques for implementing a semantic data lake on top of Hadoop.
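To make the question concrete, here is the most naive scheme I can picture, sketched in plain Python (all names are hypothetical and the toy in-memory list stands in for an HDFS split): triples stored as N-Triples text lines in HDFS files, with a SPARQL-style triple pattern answered by a map-side filter over each split.

```python
def parse_ntriple(line):
    """Split one N-Triples line into a (subject, predicate, object) tuple."""
    s, p, o = line.rstrip(" .\n").split(" ", 2)
    return s, p, o

def match(triple, pattern):
    """A pattern term of None acts like a SPARQL variable (matches anything)."""
    return all(p is None or p == t for p, t in zip(pattern, triple))

def scan(lines, pattern):
    """Map phase: each worker filters its own split of the data."""
    return [t for t in map(parse_ntriple, lines) if match(t, pattern)]

# Toy data standing in for one HDFS block of N-Triples.
data = [
    "<http://ex.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://ex.org/bob> .",
    "<http://ex.org/bob> <http://xmlns.com/foaf/0.1/knows> <http://ex.org/carol> .",
]

# Query: whom does alice know? (object left unbound)
print(scan(data, ("<http://ex.org/alice>", "<http://xmlns.com/foaf/0.1/knows>", None)))
```

Obviously a full scan like this cannot scale to trillions of triples, which is exactly why I am asking what partitioning, indexing, and join strategies real Hadoop-based implementations use instead.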