I am not very familiar with Hadoop technologies, but this might help you:
The mechanism that sends specific key-value pairs to specific reducers is called partitioning. In Hadoop, the default partitioner is HashPartitioner, which hashes a record’s key to determine which partition (and thus which reducer) the record belongs in. The number of partitions is equal to the number of reduce tasks for the job.
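As a rough sketch of the idea in Python (the real HashPartitioner is Java and effectively does `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`; the function name below is just for illustration):

```python
def choose_partition(key, num_reduce_tasks):
    # Mask the hash to a non-negative value, then take the modulo so every
    # key maps deterministically to exactly one of the reduce tasks.
    return (hash(key) & 0x7FFFFFFF) % num_reduce_tasks

# All records sharing the same key land in the same partition / reducer:
assert choose_partition("apple", 4) == choose_partition("apple", 4)
```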
The pairs are first sorted and grouped by key. While processing key/value pairs in your reducer (the data is passed to it sequentially), you have to implement the "splitting" of the incoming data stream into per-key groups yourself. Here is an example (it's in Python, but can be used with Hadoop Streaming) - https://gist.github.com/anonymous/f28f051bb2a7b1532d03
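For illustration, here is a minimal sketch of such a streaming reducer (my own, not the code from the linked gist): it reads tab-separated `key\tvalue` lines from stdin, which Hadoop Streaming has already sorted by key, and groups consecutive lines with the same key. The word-count-style aggregation is just an example; replace it with whatever per-key processing you need.

```python
#!/usr/bin/env python
import sys
from itertools import groupby

def parse(stdin):
    # Each input line is "key<TAB>value"; lines arrive already sorted by key.
    for line in stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        yield key, value

def main():
    # groupby yields one group per run of consecutive identical keys,
    # which is exactly the per-key "split" you have to do yourself.
    for key, group in groupby(parse(sys.stdin), key=lambda kv: kv[0]):
        count = sum(1 for _ in group)  # example aggregation: count values
        print("%s\t%d" % (key, count))

if __name__ == "__main__":
    main()
```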