08 February 2022

Hi, I am running transition-state calculations in Gaussian on a moderately sized system (~50 atoms, including one d-block metal). My route section is:

#p opt=(calcall,ts,noeigen) wb97xd/def2svp
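For completeness, the full input header looks like this (a sketch combining the route line with the Link 0 settings described below):

%mem=450GB
%nprocshared=64
#p opt=(calcall,ts,noeigen) wb97xd/def2svp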

I am trying to speed things up on a shared-memory parallel cluster by using up to 64 CPUs (%nprocshared=64). The thing is, I set %mem=450GB, which, going by the Gaussian manual's estimate that a large SCF can need up to 3N^2 words per processor, should be sufficient for ~45 CPUs. Indeed, the PBS output shows that the job used only mem=76 GB, but vmem=480 GB. Also, in the job log file the number of CPUs is reduced from 64 to 41 (the message mentions ecpmxn). I see that this is due to insufficient virtual memory on the node for the amount I allocated.
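To make the arithmetic explicit (my own back-of-the-envelope division of the numbers above, not figures taken from the manual):

450 GB / 45 CPUs ~ 10 GB per CPU  (the per-core requirement implied by the manual's estimate)
450 GB / 64 CPUs ~  7 GB per CPU  (below that requirement)
450 GB / 41 CPUs ~ 11 GB per CPU  (consistent with the reduction Gaussian applied)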

My question is: is there a way to lower the amount of memory Gaussian allocates if it doesn't actually use it? This is becoming very limiting for me, as it restricts how many CPUs I can use for larger systems.

Thanks for any hints and help.
