I followed the instructions below to install GROMACS 5.1.4 patched with PLUMED, but running GROMACS-PLUMED simulations ends with the following errors. I did not see any errors during installation. I also ran on a GPU node and got the same error.

Could you please point out where I went wrong?

Thank you.

Note: md.log file attached.

Error while running on HPC

[srp106@hpc1 TOPO]$ gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

:-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

GROMACS is written by:

Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar

Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch

Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen

Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner

Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff

Erik Marklund Teemu Murtola Szilard Pall Sander Pronk

Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers

Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf

and the project leaders:

Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.

Copyright (c) 2001-2015, The GROMACS development team at

Uppsala University, Stockholm University and

the Royal Institute of Technology, Sweden.

check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it

under the terms of the GNU Lesser General Public License

as published by the Free Software Foundation; either version 2.1

of the License, or (at your option) any later version.

GROMACS: gmx mdrun, VERSION 5.1.4

Executable: /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx_mpi

Data prefix: /home/srp106/software/gromacs-5.1.4/gmxbuild

Command line:

gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

Back Off! I just backed up md.log to ./#md.log.2#

NOTE: Error occurred during GPU detection:

CUDA driver version is insufficient for CUDA runtime version

Can not use GPU acceleration, will fall back to CPU kernels.

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs

Hardware detected on host hpc1 (the node of MPI rank 0):

CPU info:

Vendor: GenuineIntel

Brand: Intel(R) Xeon(R) CPU X5660 @ 2.80GHz

SIMD instructions most likely to fit this hardware: SSE4.1

SIMD instructions selected at GROMACS compile time: SSE4.1

Reading file topolA.tpr, VERSION 4.6.7 (single precision)

Note: file tpx version 83, software tpx version 103

Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process

NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be

removed in a future release when 'verlet' supports all interaction forms.

Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.5#

Back Off! I just backed up ener.edr to ./#ener.edr.5#

starting mdrun 'alanine dipeptide in vacuum'

10000 steps, 20.0 ps.

[hpc1:27984] *** Process received signal ***

[hpc1:27984] Signal: Segmentation fault (11)

[hpc1:27984] Signal code: Address not mapped (1)

[hpc1:27984] Failing at address: (nil)

[hpc1:27984] [ 0] /lib64/libpthread.so.0[0x393a60f7e0]

[hpc1:27984] *** End of error message ***

Segmentation fault (core dumped)

md.log output file while running on HPC

Log file opened on Thu Aug 3 16:13:51 2017

Host: hpc1 pid: 18034 rank ID: 0 number of ranks: 1

:-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

GROMACS is written by:

Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar

Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch

Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen

Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner

Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff

Erik Marklund Teemu Murtola Szilard Pall Sander Pronk

Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers

Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf

and the project leaders:

Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.

Copyright (c) 2001-2015, The GROMACS development team at

Uppsala University, Stockholm University and

the Royal Institute of Technology, Sweden.

check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it

under the terms of the GNU Lesser General Public License

as published by the Free Software Foundation; either version 2.1

of the License, or (at your option) any later version.

GROMACS: gmx mdrun, VERSION 5.1.4

Executable: /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx_mpi

Data prefix: /home/srp106/software/gromacs-5.1.4/gmxbuild

Command line:

gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

GROMACS version: VERSION 5.1.4

Precision: single

Memory model: 64 bit

MPI library: MPI

OpenMP support: disabled

GPU support: enabled

OpenCL support: disabled

invsqrt routine: gmx_software_invsqrt(x)

SIMD instructions: SSE4.1

FFT library: fftw-3.3.6-pl2-fma-sse2-avx-avx2-avx2_128

RDTSCP usage: enabled

C++11 compilation: disabled

TNG support: enabled

Tracing support: disabled

Built on: Thu Jul 27 14:09:08 EDT 2017

Built by: srp106@hpc1 [CMAKE]

Build OS/arch: Linux 2.6.32-696.3.1.el6.x86_64 x86_64

Build CPU vendor: GenuineIntel

Build CPU brand: Intel(R) Xeon(R) CPU X5660 @ 2.80GHz

Build CPU family: 6 Model: 44 Stepping: 2

Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3

C compiler: /usr/local/openmpi/1.8.8/bin/mpicc Intel 15.0.3.20150407

C compiler flags: -msse4.1 -std=gnu99 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias

C++ compiler: /usr/local/openmpi/1.8.8/bin/mpic++ Intel 15.0.3.20150407

C++ compiler flags: -msse4.1 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd1782 -wd2282 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias

Boost version: 1.58.0 (external)

CUDA compiler: /usr/local/cuda-7.5/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17

CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-Xcompiler;-gcc-version=450; ;-msse4.1;-w3;-wd177;-wd271;-wd304;-wd383;-wd424;-wd444;-wd522;-wd593;-wd869;-wd981;-wd1418;-wd1419;-wd1572;-wd1599;-wd2259;-wd2415;-wd2547;-wd2557;-wd3280;-wd3346;-wd11074;-wd11076;-wd1782;-wd2282;-wd3180;-O3;-DNDEBUG;-ip;-funroll-all-loops;-alias-const;-ansi-alias;

CUDA driver: 0.0

CUDA runtime: 0.0

NOTE: Error occurred during GPU detection:

CUDA driver version is insufficient for CUDA runtime version

Can not use GPU acceleration, will fall back to CPU kernels.

Running on 1 node with total 12 cores, 12 logical cores, 0 compatible GPUs

Hardware detected on host hpc1 (the node of MPI rank 0):

CPU info:

Vendor: GenuineIntel

Brand: Intel(R) Xeon(R) CPU X5660 @ 2.80GHz

Family: 6 model: 44 stepping: 2

CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3

SIMD instructions most likely to fit this hardware: SSE4.1

SIMD instructions selected at GROMACS compile time: SSE4.1

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.

Lindahl

GROMACS: High performance molecular simulations through multi-level

parallelism from laptops to supercomputers

SoftwareX 1 (2015) pp. 19-25

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl

Tackling Exascale Software Challenges in Molecular Dynamics Simulations with

GROMACS

In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.

Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl

GROMACS 4.5: a high-throughput and highly parallel open source molecular

simulation toolkit

Bioinformatics 29 (2013) pp. 845-54

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl

GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable

molecular simulation

J. Chem. Theory Comput. 4 (2008) pp. 435-447

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.

Berendsen

GROMACS: Fast, Flexible and Free

J. Comp. Chem. 26 (2005) pp. 1701-1719

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

E. Lindahl and B. Hess and D. van der Spoel

GROMACS 3.0: A package for molecular simulation and trajectory analysis

J. Mol. Mod. 7 (2001) pp. 306-317

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

H. J. C. Berendsen, D. van der Spoel and R. van Drunen

GROMACS: A message-passing parallel molecular dynamics implementation

Comp. Phys. Comm. 91 (1995) pp. 43-56

-------- -------- --- Thank You --- -------- --------

Input Parameters:

integrator = md

tinit = 0

dt = 0.002

nsteps = 1000

init-step = 0

simulation-part = 1

comm-mode = Angular

nstcomm = 100

bd-fric = 0

ld-seed = 1993

emtol = 10

emstep = 0.01

niter = 20

fcstep = 0

nstcgsteep = 1000

nbfgscorr = 10

rtpi = 0.05

nstxout = 0

nstvout = 0

nstfout = 0

nstlog = 100

nstcalcenergy = 100

nstenergy = 100

nstxout-compressed = 100

compressed-x-precision = 1000

cutoff-scheme = Group

nstlist = 10

ns-type = Grid

pbc = no

periodic-molecules = FALSE

verlet-buffer-tolerance = 0.005

rlist = 1.2

rlistlong = 1.2

nstcalclr = 0

coulombtype = Cut-off

coulomb-modifier = None

rcoulomb-switch = 0

rcoulomb = 1.2

epsilon-r = 1

epsilon-rf = inf

vdw-type = Cut-off

vdw-modifier = None

rvdw-switch = 0

rvdw = 1.2

DispCorr = No

table-extension = 1

fourierspacing = 0.12

fourier-nx = 0

fourier-ny = 0

fourier-nz = 0

pme-order = 4

ewald-rtol = 1e-05

ewald-rtol-lj = 1e-05

lj-pme-comb-rule = Geometric

ewald-geometry = 0

epsilon-surface = 0

implicit-solvent = No

gb-algorithm = Still

nstgbradii = 1

rgbradii = 1

gb-epsilon-solvent = 80

gb-saltconc = 0

gb-obc-alpha = 1

gb-obc-beta = 0.8

gb-obc-gamma = 4.85

gb-dielectric-offset = 0.009

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05016

tcoupl = V-rescale

nsttcouple = 10

nh-chain-length = 0

print-nose-hoover-chain-variables = FALSE

pcoupl = No

pcoupltype = Isotropic

nstpcouple = -1

tau-p = 1

compressibility (3x3):

compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p (3x3):

ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

refcoord-scaling = No

posres-com (3):

posres-com[0]= 0.00000e+00

posres-com[1]= 0.00000e+00

posres-com[2]= 0.00000e+00

posres-comB (3):

posres-comB[0]= 0.00000e+00

posres-comB[1]= 0.00000e+00

posres-comB[2]= 0.00000e+00

QMMM = FALSE

QMconstraints = 0

QMMMscheme = 0

MMChargeScaleFactor = 1

qm-opts:

ngQM = 0

constraint-algorithm = Lincs

continuation = FALSE

Shake-SOR = FALSE

shake-tol = 0.0001

lincs-order = 4

lincs-iter = 1

lincs-warnangle = 30

nwall = 0

wall-type = 9-3

wall-r-linpot = -1

wall-atomtype[0] = -1

wall-atomtype[1] = -1

wall-density[0] = 0

wall-density[1] = 0

wall-ewald-zfac = 3

pull = FALSE

rotation = FALSE

interactiveMD = FALSE

disre = No

disre-weighting = Conservative

disre-mixed = FALSE

dr-fc = 1000

dr-tau = 0

nstdisreout = 100

orire-fc = 0

orire-tau = 0

nstorireout = 100

free-energy = no

cos-acceleration = 0

deform (3x3):

deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

simulated-tempering = FALSE

E-x:

n = 0

E-xt:

n = 0

E-y:

n = 0

E-yt:

n = 0

E-z:

n = 0

E-zt:

n = 0

swapcoords = no

adress = FALSE

userint1 = 0

userint2 = 0

userint3 = 0

userint4 = 0

userreal1 = 0

userreal2 = 0

userreal3 = 0

userreal4 = 0

grpopts:

nrdf: 39

ref-t: 300

tau-t: 0.1

annealing: No

annealing-npoints: 0

acc: 0 0 0

nfreeze: N N N

energygrp-flags[ 0]: 0

Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process

NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be

removed in a future release when 'verlet' supports all interaction forms.

Table routines are used for coulomb: FALSE

Table routines are used for vdw: FALSE

Cut-off's: NS: 1.2 Coulomb: 1.2 LJ: 1.2

System total charge: -0.000

Generated table with 1100 data points for 1-4 COUL.

Tabscale = 500 points/nm

Generated table with 1100 data points for 1-4 LJ6.

Tabscale = 500 points/nm

Generated table with 1100 data points for 1-4 LJ12.

Tabscale = 500 points/nm

Potential shift: LJ r^-12: 0.000e+00 r^-6: 0.000e+00, Coulomb -0e+00

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije

LINCS: A Linear Constraint Solver for molecular simulations

J. Comp. Chem. 18 (1997) pp. 1463-1472

-------- -------- --- Thank You --- -------- --------

The number of constraints is 21

Center of mass motion removal mode is Angular

We have the following groups for center of mass motion removal:

0: rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

G. Bussi, D. Donadio and M. Parrinello

Canonical sampling through velocity rescaling

J. Chem. Phys. 126 (2007) pp. 014101

-------- -------- --- Thank You --- -------- --------

GROMACS-PLUMED installation

[srp106@hpc1 TOPO]$

./configure --prefix=/home/srp106/software/plumed2

make -j 16

make install

cd gromacs-5.1.4

plumed patch -p --runtime -e

module load base gcc libmatheval gsl xdrfile boost fftw/3.3.6-pl2 lapack/3.7.0

module load cuda/7.5

CXX=mpic++ CC=mpicc FC=mpifort LDFLAGS=-lmpi_cxx cmake -DCMAKE_BUILD_TYPE=RELEASE -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_THREAD_MPI=OFF -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/srp106/software/gromacs-5.1.4/gmxbuild -DFFTWF_INCLUDE_DIR=/usr/local/fftw/3.3.6-pl2/include -DBoost_INCLUDE_DIR=/usr/local/boost/1_58_0/include -DBoost_DIR=/usr/local/boost/1_58_0 -DZLIB_INCLUDE_DIR=/usr/local/base/8.0/include -DZLIB_LIBRARY_RELEASE=/usr/local/base/8.0/lib/libz.so -DFFTWF_LIBRARY=/usr/local/fftw/3.3.6-pl2/lib/libfftw3f.so

make -j 16

make install
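Note: with `plumed patch -p --runtime`, the PLUMED kernel library is resolved at run time through the `PLUMED_KERNEL` environment variable rather than being linked into gmx_mpi. Below is a minimal sketch of the environment setup that mode expects before calling mdrun; the library path is an assumption based on the `--prefix` used above, not something I have verified on this cluster.

```shell
#!/bin/sh
# Runtime-patched GROMACS looks up the PLUMED kernel via this
# environment variable. The exact path is an assumption derived
# from the ./configure --prefix used during the PLUMED build.
export PLUMED_KERNEL=/home/srp106/software/plumed2/lib/libplumedKernel.so

# Sanity check before launching mdrun: if the kernel library is
# missing or unreadable, mdrun with -plumed can fail at startup.
if [ ! -f "$PLUMED_KERNEL" ]; then
    echo "PLUMED kernel not found: $PLUMED_KERNEL" >&2
    exit 1
fi

echo "Using PLUMED kernel: $PLUMED_KERNEL"
```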

md.log output file while running on GPU

Log file opened on Thu Aug 3 16:30:51 2017

Host: gpu013t pid: 29484 rank ID: 0 number of ranks: 1

:-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

GROMACS is written by:

Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar

Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch

Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen

Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner

Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff

Erik Marklund Teemu Murtola Szilard Pall Sander Pronk

Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers

Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf

and the project leaders:

Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.

Copyright (c) 2001-2015, The GROMACS development team at

Uppsala University, Stockholm University and

the Royal Institute of Technology, Sweden.

check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it

under the terms of the GNU Lesser General Public License

as published by the Free Software Foundation; either version 2.1

of the License, or (at your option) any later version.

GROMACS: gmx mdrun, VERSION 5.1.4

Executable: /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx_mpi

Data prefix: /home/srp106/software/gromacs-5.1.4/gmxbuild

Command line:

gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

GROMACS version: VERSION 5.1.4

Precision: single

Memory model: 64 bit

MPI library: MPI

OpenMP support: disabled

GPU support: enabled

OpenCL support: disabled

invsqrt routine: gmx_software_invsqrt(x)

SIMD instructions: SSE4.1

FFT library: fftw-3.3.6-pl2-fma-sse2-avx-avx2-avx2_128

RDTSCP usage: enabled

C++11 compilation: disabled

TNG support: enabled

Tracing support: disabled

Built on: Thu Jul 27 14:09:08 EDT 2017

Built by: srp106@hpc1 [CMAKE]

Build OS/arch: Linux 2.6.32-696.3.1.el6.x86_64 x86_64

Build CPU vendor: GenuineIntel

Build CPU brand: Intel(R) Xeon(R) CPU X5660 @ 2.80GHz

Build CPU family: 6 Model: 44 Stepping: 2

Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3

C compiler: /usr/local/openmpi/1.8.8/bin/mpicc Intel 15.0.3.20150407

C compiler flags: -msse4.1 -std=gnu99 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias

C++ compiler: /usr/local/openmpi/1.8.8/bin/mpic++ Intel 15.0.3.20150407

C++ compiler flags: -msse4.1 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd1782 -wd2282 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias

Boost version: 1.58.0 (external)

CUDA compiler: /usr/local/cuda-7.5/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17

CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-Xcompiler;-gcc-version=450; ;-msse4.1;-w3;-wd177;-wd271;-wd304;-wd383;-wd424;-wd444;-wd522;-wd593;-wd869;-wd981;-wd1418;-wd1419;-wd1572;-wd1599;-wd2259;-wd2415;-wd2547;-wd2557;-wd3280;-wd3346;-wd11074;-wd11076;-wd1782;-wd2282;-wd3180;-O3;-DNDEBUG;-ip;-funroll-all-loops;-alias-const;-ansi-alias;

CUDA driver: 7.50

CUDA runtime: 7.50

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs

Hardware detected on host gpu013t (the node of MPI rank 0):

CPU info:

Vendor: GenuineIntel

Brand: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz

Family: 6 model: 44 stepping: 2

CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3

SIMD instructions most likely to fit this hardware: SSE4.1

SIMD instructions selected at GROMACS compile time: SSE4.1

GPU info:

Number of GPUs detected: 2

#0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible

#1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.

Lindahl

GROMACS: High performance molecular simulations through multi-level

parallelism from laptops to supercomputers

SoftwareX 1 (2015) pp. 19-25

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl

Tackling Exascale Software Challenges in Molecular Dynamics Simulations with

GROMACS

In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.

Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl

GROMACS 4.5: a high-throughput and highly parallel open source molecular

simulation toolkit

Bioinformatics 29 (2013) pp. 845-54

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl

GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable

molecular simulation

J. Chem. Theory Comput. 4 (2008) pp. 435-447

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.

Berendsen

GROMACS: Fast, Flexible and Free

J. Comp. Chem. 26 (2005) pp. 1701-1719

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

E. Lindahl and B. Hess and D. van der Spoel

GROMACS 3.0: A package for molecular simulation and trajectory analysis

J. Mol. Mod. 7 (2001) pp. 306-317

-------- -------- --- Thank You --- -------- --------

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

H. J. C. Berendsen, D. van der Spoel and R. van Drunen

GROMACS: A message-passing parallel molecular dynamics implementation

Comp. Phys. Comm. 91 (1995) pp. 43-56

-------- -------- --- Thank You --- -------- --------

NOTE: GPU(s) found, but the current simulation can not use GPUs

To use a GPU, set the mdp option: cutoff-scheme = Verlet

Input Parameters:

integrator = md

tinit = 0

dt = 0.002

nsteps = 1000

init-step = 0

simulation-part = 1

comm-mode = Angular

nstcomm = 100

bd-fric = 0

ld-seed = 1993

emtol = 10

emstep = 0.01

niter = 20

fcstep = 0

nstcgsteep = 1000

nbfgscorr = 10

rtpi = 0.05

nstxout = 0

nstvout = 0

nstfout = 0

nstlog = 100

nstcalcenergy = 100

nstenergy = 100

nstxout-compressed = 100

compressed-x-precision = 1000

cutoff-scheme = Group

nstlist = 10

ns-type = Grid

pbc = no

periodic-molecules = FALSE

verlet-buffer-tolerance = 0.005

rlist = 1.2

rlistlong = 1.2

nstcalclr = 0

coulombtype = Cut-off

coulomb-modifier = None

rcoulomb-switch = 0

rcoulomb = 1.2

epsilon-r = 1

epsilon-rf = inf

vdw-type = Cut-off

vdw-modifier = None

rvdw-switch = 0

rvdw = 1.2

DispCorr = No

table-extension = 1

fourierspacing = 0.12

fourier-nx = 0

fourier-ny = 0

fourier-nz = 0

pme-order = 4

ewald-rtol = 1e-05

ewald-rtol-lj = 1e-05

lj-pme-comb-rule = Geometric

ewald-geometry = 0

epsilon-surface = 0

implicit-solvent = No

gb-algorithm = Still

nstgbradii = 1

rgbradii = 1

gb-epsilon-solvent = 80

gb-saltconc = 0

gb-obc-alpha = 1

gb-obc-beta = 0.8

gb-obc-gamma = 4.85

gb-dielectric-offset = 0.009

sa-algorithm = Ace-approximation

sa-surface-tension = 2.05016

tcoupl = V-rescale

nsttcouple = 10

nh-chain-length = 0

print-nose-hoover-chain-variables = FALSE

pcoupl = No

pcoupltype = Isotropic

nstpcouple = -1

tau-p = 1

compressibility (3x3):

compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p (3x3):

ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

refcoord-scaling = No

posres-com (3):

posres-com[0]= 0.00000e+00

posres-com[1]= 0.00000e+00

posres-com[2]= 0.00000e+00

posres-comB (3):

posres-comB[0]= 0.00000e+00

posres-comB[1]= 0.00000e+00

posres-comB[2]= 0.00000e+00

QMMM = FALSE

QMconstraints = 0

QMMMscheme = 0

MMChargeScaleFactor = 1

qm-opts:

ngQM = 0

constraint-algorithm = Lincs

continuation = FALSE

Shake-SOR = FALSE

shake-tol = 0.0001

lincs-order = 4

lincs-iter = 1

lincs-warnangle = 30

nwall = 0

wall-type = 9-3

wall-r-linpot = -1

wall-atomtype[0] = -1

wall-atomtype[1] = -1

wall-density[0] = 0

wall-density[1] = 0

wall-ewald-zfac = 3

pull = FALSE

rotation = FALSE

interactiveMD = FALSE

disre = No

disre-weighting = Conservative

disre-mixed = FALSE

dr-fc = 1000

dr-tau = 0

nstdisreout = 100

orire-fc = 0

orire-tau = 0

nstorireout = 100

free-energy = no

cos-acceleration = 0

deform (3x3):

deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}

simulated-tempering = FALSE

E-x:

n = 0

E-xt:

n = 0

E-y:

n = 0

E-yt:

n = 0

E-z:

n = 0

E-zt:

n = 0

swapcoords = no

adress = FALSE

userint1 = 0

userint2 = 0

userint3 = 0

userint4 = 0

userreal1 = 0

userreal2 = 0

userreal3 = 0

userreal4 = 0

grpopts:

nrdf: 39

ref-t: 300

tau-t: 0.1

annealing: No

annealing-npoints: 0

acc: 0 0 0

nfreeze: N N N

energygrp-flags[ 0]: 0

Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process

2 compatible GPUs detected in the system, but none will be used.

Consider trying GPU acceleration with the Verlet scheme!

NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be

removed in a future release when 'verlet' supports all interaction forms.

Table routines are used for coulomb: FALSE

Table routines are used for vdw: FALSE

Cut-off's: NS: 1.2 Coulomb: 1.2 LJ: 1.2

System total charge: -0.000

Generated table with 1100 data points for 1-4 COUL.

Tabscale = 500 points/nm

Generated table with 1100 data points for 1-4 LJ6.

Tabscale = 500 points/nm

Generated table with 1100 data points for 1-4 LJ12.

Tabscale = 500 points/nm

Potential shift: LJ r^-12: 0.000e+00 r^-6: 0.000e+00, Coulomb -0e+00

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije

LINCS: A Linear Constraint Solver for molecular simulations

J. Comp. Chem. 18 (1997) pp. 1463-1472

-------- -------- --- Thank You --- -------- --------

The number of constraints is 21

Center of mass motion removal mode is Angular

We have the following groups for center of mass motion removal:

0: rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++

G. Bussi, D. Donadio and M. Parrinello

Canonical sampling through velocity rescaling

J. Chem. Phys. 126 (2007) pp. 014101

-------- -------- --- Thank You --- -------- --------
