As an example: a molecular orbital calculation takes one day (24 hours) on a four-core i7 desktop. How long will it take on a specific supercomputer?
The answer to such a question depends on a lot of things:
1) Is the code that calculates the molecular orbitals parallel? If not, it will take roughly 24 h on any machine (desktop or supercomputer), depending mainly on the clock speed.
2) If the code runs in parallel, how well does it scale on that specific supercomputer? Is it OpenMP-parallel only (then you have to stay within a single node), or MPI-parallel (then you may run on multiple nodes)? With perfect scaling the time is 24 h / (number of CPU cores); the example of Sayyed assumes perfect scaling. However, most (actually all) codes do not do this well, although some may get rather close (parallelisation over bands and especially over k-points is very beneficial). A small numerical sketch of both the ideal and the more realistic case is given below this list.
3) What is the size of your problem? E.g. an ab-initio calculation of a system with 100 atoms versus a system with 5 atoms. In the latter case you may see speedup up to, say, 8 cores, and beyond that the calculation gets slower in real time because of parallel overhead, while the 100-atom job may show speedup up to 64 cores.
4) Related to problem size is the memory requirement. If the job needs far more memory than fits in cache, but using more cores brings the per-core working set down below the cache size, the speed-up can even be super-linear (I once saw a colleague get a 10x speedup from using 2x as many cores).
Although the question is quite simple, the answer is not. However, dividing by the number of cores is always a good zeroth-order approximation to start from.
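To make that zeroth-order estimate concrete, here is a minimal Python sketch (mine, not taken from any particular code) that compares the perfect-scaling estimate with Amdahl's law for an imperfectly parallel code. The single-core time and the 5 % serial fraction are illustrative assumptions, not measured values.

```python
# Rough run-time estimate for a parallel job, illustrating point 2) above.
# All numbers are illustrative assumptions, not measurements.

def perfect_scaling_time(t_one_core_hours, n_cores):
    """Zeroth-order estimate: perfect scaling, time = T / number of cores."""
    return t_one_core_hours / n_cores

def amdahl_time(t_one_core_hours, n_cores, serial_fraction):
    """Amdahl's law: only the parallel part (1 - serial_fraction) speeds up."""
    return t_one_core_hours * (serial_fraction + (1.0 - serial_fraction) / n_cores)

t_desktop = 24.0             # hours on the 4-core i7 (from the question)
t_one_core = t_desktop * 4   # crude guess of the single-core time, assuming the
                             # desktop run already scaled perfectly on 4 cores

for cores in (4, 8, 16, 64, 256):
    ideal = perfect_scaling_time(t_one_core, cores)
    real = amdahl_time(t_one_core, cores, serial_fraction=0.05)  # assumed 5% serial
    print(f"{cores:4d} cores: ideal {ideal:6.2f} h, with 5% serial part {real:6.2f} h")
```

Even a modest 5 % serial fraction makes the 64-core time roughly four times the ideal value, which is why the simple division by the number of cores should only be treated as a starting point.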
It can also help to work harder on code optimization. A classical example is replacing an expensive function evaluation with a table lookup plus good interpolation (if you have plenty of memory). In some quantum problems the matrices contain many zeros, and sparse-matrix techniques can then be very helpful.
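As a minimal illustration of both ideas (a sketch under assumed inputs, not from any particular quantum-chemistry code), the Python snippet below replaces repeated evaluations of an expensive function with a precomputed table plus linear interpolation, and multiplies a mostly-zero matrix with a vector using scipy.sparse; the stand-in function, matrix size and sparsity are made-up examples.

```python
import numpy as np
from scipy import sparse

# --- 1) Table lookup + interpolation instead of direct function evaluation ---
def expensive_function(x):
    """Stand-in for a costly function that is called many times."""
    return np.exp(-x) * np.sin(10 * x)

# Precompute a table once (costs memory, saves time on later calls).
x_table = np.linspace(0.0, 5.0, 10_000)
y_table = expensive_function(x_table)

def fast_lookup(x):
    """Approximate the function by linear interpolation in the table."""
    return np.interp(x, x_table, y_table)

x = np.random.uniform(0.0, 5.0, size=1_000_000)
exact = expensive_function(x)          # direct evaluation
approx = fast_lookup(x)                # table lookup + interpolation
print("max interpolation error:", np.max(np.abs(exact - approx)))

# --- 2) Sparse-matrix techniques for matrices with many zeros ---
n = 20_000
dense_entries = n * n                                # 4e8 numbers if stored densely
h = sparse.random(n, n, density=1e-4, format="csr")  # only 0.01% of entries nonzero
v = np.random.rand(n)
w = h @ v                                            # sparse matrix-vector product
print("stored nonzeros:", h.nnz, "instead of", dense_entries)
```

Whether the table lookup pays off depends on how expensive the original function really is and how much accuracy you can afford to lose to interpolation; the sparse approach pays off as soon as the fraction of nonzeros is small enough that storage and matrix-vector products become the bottleneck.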
I Googled "Gaussian 09 speed" and found detailed information from the supplier on the effect of multicore & network parallelization on execution speed. I wonder why you didn't do that before asking a question here :-).