You could use an approach that looks like an active torrent (P2P): an application monitors a directory for time-stamp changes (a directory change listener), and when a file changes, the synchronization program splits the file into chunks and sends them to the server over a few parallel threads.
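Something along these lines (a rough sketch, not a finished implementation: `send_chunk` here is a stand-in for whatever transport you actually use, and in a real version the `sync_file` call would be triggered by a directory watcher such as the third-party watchdog package):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk

def send_chunk(path: Path, index: int, data: bytes) -> None:
    # Stand-in for the real transport (an HTTP PUT, a raw socket, ...);
    # here it only reports what would be sent.
    print(f"sending {path.name} chunk {index} ({len(data)} bytes)")

def sync_file(path: Path) -> None:
    # Split the changed file into fixed-size chunks and push them to the
    # server from a small pool of worker threads; the pool is drained
    # (all uploads finished) when the with-block exits.
    with open(path, "rb") as f, ThreadPoolExecutor(max_workers=4) as pool:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            pool.submit(send_chunk, path, index, data)
            index += 1

sync_file(Path(__file__))  # demo: "sync" this script itself
```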
The changes should not be applied on the server immediately, but written to some form of temporary cache; only once the whole document has been transferred should it replace the original. This guards against the case where another change to the source file is detected while a transfer is still in progress.
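On the server side this can be as simple as assembling the upload into a staging file and swapping it into place in one step once the transfer completes (a sketch; `os.replace` is atomic when source and destination are on the same filesystem):

```python
import os
import tempfile

def apply_upload(final_path: str, chunks) -> None:
    # Assemble the incoming chunks in a temporary file in the target
    # directory, then atomically swap it over the old document.  Readers
    # never see a half-written file, and if another change arrives while
    # this transfer is still running, it simply gets its own staging file.
    directory = os.path.dirname(final_path) or "."
    fd, staging_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as staging:
            for chunk in chunks:
                staging.write(chunk)
        os.replace(staging_path, final_path)  # the atomic swap
    except BaseException:
        os.remove(staging_path)
        raise

apply_upload("document.txt", [b"hello ", b"world\n"])  # demo
```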
You can also have the synchronization service compute a hash for each chunk, so that only the chunks that actually changed are transferred.
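For example (a simplified fixed-block version: the rsync-style delta algorithm this resembles additionally uses rolling checksums so it can cope with insertions that shift data between chunks, which this sketch does not handle):

```python
import hashlib

CHUNK_SIZE = 1024 * 1024

def chunk_hashes(path: str) -> list[str]:
    # Hash the file in fixed-size chunks; the server keeps the list from
    # the previous sync, so only chunks whose digest changed are re-sent.
    hashes = []
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            hashes.append(hashlib.sha256(data).hexdigest())
    return hashes

def changed_chunks(old: list[str], new: list[str]) -> list[int]:
    # Indices whose hash differs, plus any chunks appended since last sync.
    return [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]
```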
I do not remember the name of this algorithm (this way of syncing files across the network), but I believe the implementation I saw was in Python, and it was accomplished without repeatedly polling through the file. There was some other way of accessing the file data's locations on disk directly, but I do not remember the rest of the process.
Lsyncd (the Live Syncing Daemon) is a good solution, but I am unsure whether you can configure delays and priorities the way Mr Kundan's diagram shows.
If those delays are not actually required (and the diagram is only a sketch), lsyncd is a fine way of achieving mirroring across the network.
Another possible alternative is csync2. It is more of a backup tool, but it works well in combination with lsyncd; the two together make for a really fast and reliable backup (though I am unsure if that is what you want, i.e. one-way mirroring)...
If you really do need one-way mirroring, rdiff-backup is a good solution as well.