There are many other solutions, depending on how you combine appropriate databases (like MongoDB or SciDB), data structures (like a Bloom filter: http://en.wikipedia.org/wiki/Bloom_filter), and algorithms (like the Count–min sketch: http://en.wikipedia.org/wiki/Count–min_sketch). The problem you are trying to solve will determine which combination works.
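To make the Count–min sketch concrete: it answers "how often has this item appeared?" over a fast stream using fixed, sub-linear memory, which is exactly the kind of trade-off high-velocity data forces. Below is a minimal Python sketch of the idea; the class name, the width/depth values, and the salted-hash scheme are all illustrative choices, not from any particular library.

```python
import hashlib


class CountMinSketch:
    """Minimal Count-min sketch: approximate frequency counts over a
    stream using a fixed-size table instead of one counter per item.
    Width and depth here are illustrative, not tuned values."""

    def __init__(self, width=1000, depth=4):
        self.width = width
        self.depth = depth
        # depth rows of width counters, all starting at zero
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, item):
        # Derive one column index per row by salting a single hash
        # function with the row number (a common simple trick).
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item, count=1):
        # Increment one counter in every row.
        for row, col in self._indexes(item):
            self.table[row][col] += count

    def estimate(self, item):
        # Take the minimum across rows: hash collisions can only
        # inflate counters, so the estimate never undercounts.
        return min(self.table[row][col] for row, col in self._indexes(item))


sketch = CountMinSketch()
for word in ["tweet", "tweet", "status", "tweet"]:
    sketch.add(word)
print(sketch.estimate("tweet"))  # at least 3; exact unless a collision occurred
```

The key property is one-sided error: the estimate is always greater than or equal to the true count, and making the table wider or deeper tightens the bound at the cost of memory.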
Update: I found a long list of tools/libraries for big data: https://github.com/onurakpolat/awesome-bigdata. Many parts of it are relevant to the velocity dimension of big data.
On social media, messages that are only a few seconds old (a tweet, a status update, etc.) are often no longer of interest to users. They discard old messages and pay attention to recent updates. Data movement is now almost real time, and the update window has shrunk to fractions of a second. This high-velocity data represents Big Data.
To gain the right insights, big data is typically broken down into three characteristics:
"Velocity" means that data is generated very fast: within a few seconds, large amounts of data are produced by different websites (e-commerce, social, etc.) and networks (wired, wireless, mobile).