What are the minimum computational resources, in terms of processor speed, storage capacity, and number of processors, required for molecular dynamics (typically a few thousand atoms), continuum, and crystal plasticity simulations?
The advice to engage a professional HPC architect is wise. Selecting and building an HPC environment is something often undertaken within a research group, and frequently assigned to a graduate student to spec out, build, and run. Unfortunately, that student often ends up with a stronger understanding of computational science but no significant work in their original field of interest. Worse, the hardware and software selections can suffer, and the research suffers with them, because the choices were not based on solid premises and information.
In general, today, HPC can readily be accomplished with nodes of 12-16 cores each (a node is a single computer), connected by a high-performance interconnect fabric such as InfiniBand. In distributed-memory systems, the general consensus is that 2 GB or 4 GB per core is usually adequate, but profiling your jobs to determine their actual requirements is usually a better guide, unless you are already very familiar with their performance and demands.
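To make "profile your jobs" concrete, here is a minimal sketch (Python, for illustration only) of measuring the peak resident memory of a representative test run before deciding between 2 GB/core and 4 GB/core nodes. The placeholder workload is an assumption to be replaced with a slice of your own simulation, and the kilobyte unit for ru_maxrss applies to Linux.

```python
# Minimal sketch: measure the peak resident memory of a representative test
# run, to decide whether 2 GB/core or 4 GB/core nodes are adequate.
# Assumes a Unix-like system; on Linux, ru_maxrss is reported in kilobytes.
import resource

def peak_memory_gb() -> float:
    """Peak resident set size of this process, in gigabytes (Linux units)."""
    kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kb / 1024**2

if __name__ == "__main__":
    # Placeholder workload -- substitute a short, representative slice of
    # your own simulation here.
    data = [float(i) for i in range(5_000_000)]
    print(f"Peak memory used: {peak_memory_gb():.2f} GB")
```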
Note that I restricted the number of computational cores per node, even though there are chips out there with far more cores than the top of that range. Why? Depending on the I/O traffic, it is possible to overwhelm the interconnect and slow down operations significantly.
One must also consider mass storage: how to manage it and how to connect it to the compute nodes so that all the hardware is used to best effect.
Finally, there is the selection of operating system. For some applications it is fine to choose Windows, but to the best of my knowledge Windows has never penetrated HPC significantly; there are a few larger HPC installations with Windows as the primary OS, but not many. Most HPC software is better suited to Linux or Unix. That said, I have run across some MD applications that are designed for, and only run on, Windows; these are generally better suited to a single large, well-provisioned workstation than to an HPC cluster.
And, we've not even entered into the discussion of shared vs distributed memory.
The computing power, the software's support for parallel or distributed computing, the memory requirements, the I/O speed of the bus, the spindle speed of the HDD, and the graphics card and its memory for visualizing the results are all important. I studied these parameters, their importance, and their interactions some years ago.
Whether you intend to do rigid-body dynamics or flexible-body dynamics also determines the computing needs.
You did mention plasticity, which further increases the computing demands because of the material non-linearity.
However, please be confident that however big the problem is, it is solvable, and you will produce wonderful research output.
Good luck; if you need any further input, let me know.
It's quite hard to answer such questions without more details. But to give you an order of magnitude: in large-scale simulations (several million atoms), I typically use a few thousand atoms per core. That means that if you limit your system to fewer than 10,000 atoms, it may be possible to run the simulations on a personal workstation (which generally has 2 or 4 cores).
By the way, keep in mind that the interatomic potential you use has a critical influence on the computational time.
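To put that rule of thumb into numbers, here is a hedged back-of-the-envelope sketch; the figure of 4,000 atoms per core is an illustrative assumption drawn from the "few thousand atoms per core" guideline above, not a benchmark.

```python
# Back-of-the-envelope sketch: how many cores a classical MD run might need,
# using the "few thousand atoms per core" rule of thumb from the answer above.
# The 4,000 atoms/core figure is an illustrative assumption, not a benchmark.
import math

def cores_needed(n_atoms: int, atoms_per_core: int = 4_000) -> int:
    """Estimate the core count for a given system size."""
    return max(1, math.ceil(n_atoms / atoms_per_core))

for n in (10_000, 100_000, 5_000_000):
    print(f"{n:>9,} atoms -> ~{cores_needed(n)} cores")
```

For a system of fewer than 10,000 atoms this lands at 2-3 cores, which is consistent with running on a personal workstation.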
Thanks for your response and for helping me understand the number of cores required for a particular simulation. But I have the following questions in the context of HPC:
1. What should the typical processor speed be?
2. What should the memory size be?
3. What should the memory speed be to ensure compatibility between the memory and the processor?
4. How do I ensure compatibility between a high-speed processor and the motherboard?
I would also be interested to know about the other important factors involved in computer hardware for HPC.
On hardware and software: pay attention to whether your software can run on a GPU. In general, you can get results much faster and more cheaply if you have GPU hardware and your software can utilize it.
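As a quick, hedged illustration, the sketch below only checks whether an NVIDIA GPU and driver are visible on a machine via the standard nvidia-smi utility; whether your particular MD or FEM package can actually exploit the GPU depends on how it was built.

```python
# Quick sketch: check whether an NVIDIA GPU and driver are visible on this
# machine by looking for the standard nvidia-smi utility. This only confirms
# that the hardware and driver are present; whether your simulation package
# can use the GPU depends on how it was built (e.g. a CUDA-enabled build).
import shutil
import subprocess

def nvidia_gpu_visible() -> bool:
    """Return True if nvidia-smi is installed and exits successfully."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    return subprocess.run([exe], capture_output=True).returncode == 0

if __name__ == "__main__":
    print("NVIDIA GPU visible:", nvidia_gpu_visible())
```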
For mesoscale FEM simulations, the scale is on the order of millions of degrees of freedom. The peak floating-point performance of the CPU should reach 100 GFLOPS to 1 TFLOPS, storage should reach 10-100 TB, and memory should be about 100 GB to 1 TB. A cluster is the best choice; an SMP machine can also do this work, but it is too expensive.
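As a rough, hedged illustration of where memory figures like these come from, the sketch below estimates the storage for a sparse stiffness matrix in CSR format; the 80 nonzeros-per-row figure is an assumption chosen for illustration.

```python
# Rough sketch: memory needed just to store a sparse FEM stiffness matrix in
# CSR format (8-byte values, 4-byte column indices, 4-byte row pointers).
# The 80 nonzeros per row is an illustrative assumption for 3D solid elements;
# real counts depend on the element type and mesh connectivity.
def csr_memory_gb(n_dof: int, nnz_per_row: int = 80) -> float:
    """Estimate CSR storage for an n_dof x n_dof stiffness matrix, in GB."""
    nnz = n_dof * nnz_per_row
    bytes_total = nnz * (8 + 4) + (n_dof + 1) * 4
    return bytes_total / 1024**3

for dof in (1_000_000, 10_000_000, 100_000_000):
    print(f"{dof:>11,} DOF -> ~{csr_memory_gb(dof):.1f} GB (matrix only)")
```

A direct solver will typically need several times this amount during factorization, which is why the memory requirements for very large problems climb toward the hundreds of gigabytes.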