I have recently gained access to a research cluster that has Gaussian 16, and I am relatively new to both Gaussian and cluster computing. I am currently optimizing small first-row metal-chalcogenide clusters combined with ligands, using the TPSSh functional and the TZVP basis set. I have access to several nodes, each with 48 cores and at least 8 GB of RAM per core.

In the .gjf file I have requested some scratch space, 32 cores, and 64 GB of memory:

%LindaWorkers=str-c23
%NProcShared=32
%rwf=a1,20GB,a2,20GB
%NoSave
%mem=64000MB
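
For reference, below these Link 0 lines the rest of the input follows the usual route/title/geometry layout; the lines here are representative of the TPSSh/TZVP optimizations described above rather than copied from a specific job, with the coordinates omitted:

# opt TPSSh/TZVP

single ligand optimization

0 1
(Cartesian coordinates here)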

In the .slurm script I have allocated 1 node, 32 cores, and 2 GB of memory per core:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=48:00:00
#SBATCH --mem-per-cpu=2GB
#SBATCH --output=%x_%j.out
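
The remainder of the submission script is essentially the standard load-and-run boilerplate; the module name, scratch path, and file names below are placeholders rather than my exact lines:

module load gaussian/g16                            # cluster-specific module name
export GAUSS_SCRDIR=/scratch/$USER/$SLURM_JOB_ID    # directory Gaussian uses for its scratch files
mkdir -p $GAUSS_SCRDIR
g16 < input.gjf > output.log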

Yet when I run my optimizations, the amount of memory actually utilized varies dramatically.

For a run on a single ligand under these conditions, it used 31.4 of the 64 GB given.

For a larger ligand with the same conditions, it used 20 GB.

For the small ligand with a larger basis set (def2TZVP), it used only 12.7 GB.

For a metal cluster with four ligands (note: this run failed to converge), it used only 2-3 GB.
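
(For what it's worth, I have been reading these usage figures from SLURM's accounting output with something like the command below; <jobid> is a placeholder for the actual job ID, and this assumes sacct is enabled on the cluster.)

sacct -j <jobid> --format=JobName,ReqMem,MaxRSS,Elapsed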

Is there something I can do to better utilize the memory I have available?
