I don't know how to prepare input and script files for GROMACS. I would appreciate it if you could share an example for a small job. I have learned that I have to convert the .pdb file to .gro, but I don't know how.
Just copy the content into a text editor and save the file as "run.sh" (or any name you wish).
To run it, make the file executable and launch it with "./name_of_file.sh", of course from within that particular folder.
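As a minimal sketch of the "make executable, then run" step (the file name run.sh and its contents are placeholders, not the actual job script):

```shell
# Create a stand-in run.sh (your real script would hold the GROMACS commands)
printf '#!/bin/sh\necho "GROMACS commands would go here"\n' > run.sh
chmod +x run.sh   # make the script executable
./run.sh          # run it from this folder
```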
Before you submit the .sh file to the supercomputer, perform all the necessary preparatory steps: pdb2gmx, editconf, solvation, ion addition, and energy minimization. After these steps, the NVT equilibration, NPT equilibration, and MD run commands can run in sequence, and you don't have to manually add parameters or any values.
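The preparatory steps listed above might look like the following sketch; the file names (protein.pdb, the .mdp files, the ion names) are assumptions to adapt to your own system:

```shell
# Interactive: generate the topology (you will be prompted for a force field)
gmx pdb2gmx -f protein.pdb -o processed.gro -p topol.top

# Define the simulation box
gmx editconf -f processed.gro -o boxed.gro -c -d 1.0 -bt cubic

# Solvate the box
gmx solvate -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top

# Add ions (interactive: you will be asked which molecules to replace)
gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
gmx genion -s ions.tpr -o ionized.gro -p topol.top -pname NA -nname CL -neutral

# Energy minimization
gmx grompp -f minim.mdp -c ionized.gro -p topol.top -o em.tpr
gmx mdrun -deffnm em
```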
Also, it is not necessary to convert the .pdb file to .gro format, as GROMACS can read and write .pdb files; you just have to specify it with the flag -c name_of_file.pdb.
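For example (protein.pdb and the other file names here are hypothetical), grompp accepts a .pdb structure directly:

```shell
# The -c structure input can be a .pdb file instead of a .gro file
gmx grompp -f minim.mdp -c protein.pdb -p topol.top -o em.tpr
```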
It is the same as submitting the jobs on a local machine, assuming that GROMACS is installed on the server. You need exactly the same files, and the command sequence is exactly the same. After logging into the server, you shall probably need to ssh to a compute node and then cd to the directory where the pdb file is located (or create a directory with mkdir and scp the pdb file there). As for preparing the protein and the system, I would suggest you follow the excellent tutorial by Dr. Lemkul (http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/index.html).
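A sketch of the transfer and login steps; the username, host name, and directory are placeholders for your own cluster details:

```shell
# From your local machine: create a working directory on the server
# and copy the structure file there (names are placeholders)
ssh username@cluster.example.edu 'mkdir -p ~/md_job'
scp protein.pdb username@cluster.example.edu:~/md_job/

# Log in and move to the working directory
ssh username@cluster.example.edu
cd ~/md_job
```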
All the .tpr, .top, .cpt etc. files shall be created by the GROMACS commands themselves. However, you shall need three .mdp files. The tutorial I suggested above shall also provide you with those three .mdp files.
Finally, in order to utilize the full potential of the supercomputer, you need to assign the number of CPUs for the mdrun job. So the last command, as suggested by Tanuj Sharma, needs an additional flag, -np X, where X is the number of processors, up to the maximum number available on that compute node.
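As a hedged sketch: with an MPI-enabled build, the processor count is usually passed to the MPI launcher rather than to mdrun itself, while a thread-MPI build takes thread counts directly on mdrun. The file name md and the counts here are assumptions:

```shell
# MPI build: the launcher gets the process count
mpirun -np 16 gmx_mpi mdrun -deffnm md

# Thread-MPI build: mdrun itself takes the thread counts
gmx mdrun -ntmpi 4 -ntomp 4 -deffnm md
```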
Are you using the supercomputing facility from NCI? If so, you can use their guide to get along on your way. If you still face any issues, just drop me a message and I will be happy to help you.
Regarding the queue submission, your computer administrator can help. You might as well need to load the GROMACS module in your local environment. However, the initial pre-processing steps are interactive. For example, while running pdb2gmx you shall be asked to select a force field; while adding ions you shall be asked to select which molecules to replace. So you actually need to pass those commands one by one in the ssh window (alternatively, you can do the pre-processing on your local computer and then transfer the files to the server). The final six commands (as mentioned by Tanuj Sharma) can be put into a script and submitted to the queue. The queue submission or batch submission processes differ between clusters. On our server, we use qlogin to log into a free node (or check the node status with qstat -f and then ssh to a free node) and then run the commands as we do on the local computer. However, that is not the optimal use of resources; you should use qsub, as you do with Gaussian.
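A minimal PBS submission script for the non-interactive equilibration and production steps might look like the following; the job name, module name, resource requests, and file names are all assumptions to adapt to your cluster:

```shell
#!/bin/bash
#PBS -N gromacs_md
#PBS -l nodes=1:ppn=16
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
module load gromacs            # module name varies by cluster

# NVT equilibration
gmx grompp -f nvt.mdp -c em.gro -r em.gro -p topol.top -o nvt.tpr
gmx mdrun -deffnm nvt

# NPT equilibration
gmx grompp -f npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p topol.top -o npt.tpr
gmx mdrun -deffnm npt

# Production MD
gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr
gmx mdrun -deffnm md
```

Submit it with qsub run.sh and monitor it with qstat.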
I have been struggling to figure out how to load the GROMACS executable. Could you help with how to load GROMACS? Then I can start with the pdb2gmx command.
Is it not already in the path? Have you tried gmx? Check what is written in those GMXRC files (vi GMXRC.bash); most probably, environment variables are defined there. If so, you need to source the file rather than execute it, so that the variables persist in your current shell: source GMXRC.bash in a bash shell, or source GMXRC.csh in a C shell. Or simply try source GMXRC, which detects your shell for you. Then try gmx.
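For example (the installation path here is a placeholder for wherever GROMACS is installed on your system):

```shell
# Source, don't execute: running ./GMXRC in a subshell would set the
# variables in that subshell only, not in your current session
source /usr/local/gromacs/bin/GMXRC
gmx --version   # check that the gmx command is now found
```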