If you can divide your file into smaller ones, you may get some speed-up in the opening process. Also, keep in mind that, depending on your architecture, any communication between processors may have to pass through the bus, slowing down your computation significantly.
It depends on what you plan on doing with that data, but generally speaking every MPI process should not open the same file using the serial file tools provided by your Fortran/C/C++ compiler.
Either
- open the file on the master process and distribute the data to the other processes as appropriate (a minimal sketch of this follows the list)
- use the collective MPI routines for opening and reading the file
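For the first option, a rough C sketch could look like the one below. The file name input.dat and the choice of MPI_Bcast (so every rank ends up with a full copy) are only assumptions for illustration; also note that MPI counts are plain ints, so a 10Gb file would in practice have to be broadcast in chunks.

```c
/* Sketch, not a definitive implementation: rank 0 reads the whole file
   with plain C I/O and broadcasts a full copy to every rank.
   "input.dat" is a hypothetical file name. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long nbytes = 0;
    char *buf = NULL;

    if (rank == 0) {
        FILE *fp = fopen("input.dat", "rb");   /* only the master opens the file */
        fseek(fp, 0, SEEK_END);
        nbytes = ftell(fp);
        rewind(fp);
        buf = malloc(nbytes);
        fread(buf, 1, nbytes, fp);
        fclose(fp);
    }

    /* Tell every rank the size, then give each a full copy.
       MPI counts are int, so a very large file would need chunked broadcasts. */
    MPI_Bcast(&nbytes, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0)
        buf = malloc(nbytes);
    MPI_Bcast(buf, (int)nbytes, MPI_BYTE, 0, MPI_COMM_WORLD);

    /* ... each rank now works on its copy of the data ... */

    free(buf);
    MPI_Finalize();
    return 0;
}
```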
I would say that which approach is more efficient depends on your underlying infrastructure. In a lot of cases, each MPI process reading the file for itself will be more efficient, because all processes can work in parallel instead of first one process reading the file and then sharing it with the others through communication (these two steps are serial). Only if access to the file is slowed down significantly because all processes are hitting the same disc might the other approach be faster.
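A sketch of the "each process reads for itself" variant, under the assumptions that the file is named input.dat, that it splits into contiguous byte slices, and that the shared file system copes with the concurrent reads:

```c
/* Sketch: every rank opens the file with plain C I/O and reads only
   its own contiguous slice, so the reads happen in parallel. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    FILE *fp = fopen("input.dat", "rb");       /* every rank opens the file */
    fseek(fp, 0, SEEK_END);
    long nbytes = ftell(fp);

    /* One contiguous slice per rank; the last rank takes any remainder. */
    long chunk  = nbytes / nprocs;
    long offset = (long)rank * chunk;
    long mine   = (rank == nprocs - 1) ? nbytes - offset : chunk;

    char *buf = malloc(mine);
    fseek(fp, offset, SEEK_SET);
    fread(buf, 1, mine, fp);
    fclose(fp);

    /* ... process the local slice ... */

    free(buf);
    MPI_Finalize();
    return 0;
}
```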
Note that there is also a function MPI_File_open. I have not used it myself yet, but in general I would expect it to be the right way to do this. Does anyone have experience with MPI_File_open? How does it work internally?
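For reference, a minimal MPI-IO sketch of what a call sequence with MPI_File_open might look like: every rank reads its own slice through a collective read, which lets the MPI library merge the requests internally. The file name and the even byte split are assumptions for illustration.

```c
/* Sketch of the MPI-IO route: collective open plus MPI_File_read_at_all. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "input.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    MPI_Offset nbytes;
    MPI_File_get_size(fh, &nbytes);

    /* Contiguous slice per rank; the last rank takes the remainder. */
    MPI_Offset chunk  = nbytes / nprocs;
    MPI_Offset offset = rank * chunk;
    int mine = (int)((rank == nprocs - 1) ? nbytes - offset : chunk);

    char *buf = malloc(mine);
    /* Collective read at an explicit offset; all ranks call it together. */
    MPI_File_read_at_all(fh, offset, buf, mine, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* ... process the local slice ... */

    free(buf);
    MPI_Finalize();
    return 0;
}
```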
Arto and Simon pointed out two issues that must be considered when planning data input. The underlying architecture can make things better or worse depending on how data can be transferred among the parallel nodes. If the transfer is relatively slow, one option is to replicate the data on all nodes.
Another issue is how the 10Gb is manipulated. If the data is a set of independent clusters, you can use a parallel approach more freely. If there is some dependency, then I believe you are restricted to a single reader, which will distribute the data accordingly.
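For that single-reader case, a possible sketch is rank 0 reading the file and handing a different slice to every rank with MPI_Scatterv; the file name, the even split into slices, and the use of int counts (which would overflow for a 10Gb file and need chunking in practice) are all assumptions here.

```c
/* Sketch: a single reader (rank 0) reads the file and scatters
   a different contiguous slice to every rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    char *all = NULL;
    long  nbytes = 0;

    if (rank == 0) {
        FILE *fp = fopen("input.dat", "rb");   /* only the reader opens it */
        fseek(fp, 0, SEEK_END);
        nbytes = ftell(fp);
        rewind(fp);
        all = malloc(nbytes);
        fread(all, 1, nbytes, fp);
        fclose(fp);
    }
    MPI_Bcast(&nbytes, 1, MPI_LONG, 0, MPI_COMM_WORLD);

    /* One contiguous slice per rank; the last rank takes the remainder.
       Counts are int here, so very large files would need chunking. */
    int *counts = malloc(nprocs * sizeof(int));
    int *displs = malloc(nprocs * sizeof(int));
    for (int i = 0; i < nprocs; i++) {
        displs[i] = (int)(i * (nbytes / nprocs));
        counts[i] = (i == nprocs - 1) ? (int)(nbytes - displs[i])
                                      : (int)(nbytes / nprocs);
    }

    char *mine = malloc(counts[rank]);
    MPI_Scatterv(all, counts, displs, MPI_BYTE,
                 mine, counts[rank], MPI_BYTE, 0, MPI_COMM_WORLD);

    /* ... process the local slice ... */

    free(mine); free(counts); free(displs); free(all);
    MPI_Finalize();
    return 0;
}
```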