Your system specifications are good. I would recommend doubling the RAM capacity. Have you also considered adding some Tesla cards for GPU computing? You could buy two Tesla K80s fairly cheaply, and if you wanted to go full power you could even consider a Tesla V100, though in that case the price goes up a lot. In fact, with GPU computing you can swap those expensive CPUs for something less powerful and save your lab money.
These specifications may or may not be sufficient depending on your needs and your code. The structure and size of your code determine the resources you need, along with how many parallel computations you have to run and how much data you will write to disk for later analysis. The HDD may be small for most cases, but you can manage if you periodically move your results and other files off that computer to make room for new calculations. M. Alvioli: you do not need an entirely different set of coding techniques; you just need to tweak your code so it can run on a GPU, plus some libraries that make that possible. CUDA is easy to use from FORTRAN or C, and if tuned properly it can greatly reduce computation time. (I have no experience with AMD GPUs; since Nvidia is used so widely in research, I prefer to stick with its GPUs.)
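To give a sense of how small the "tweak" can be: below is a minimal CUDA C sketch of offloading a simple array loop (SAXPY, a standard illustrative kernel, not anything from your code) to the GPU. It assumes an NVIDIA card and the nvcc compiler (`nvcc saxpy.cu -o saxpy`); the unified-memory calls keep the host code almost identical to a plain CPU version.

```cuda
#include <cstdio>

// GPU kernel: each thread handles one array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory is visible to both CPU and GPU, so the
    // surrounding host loops need almost no changes.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch with 256 threads per block, enough blocks to cover n.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);  // y = 2*x + y elementwise
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same kernel can be called from Fortran through CUDA Fortran or ISO C bindings; the point is that only the hot loop moves to the device, not the whole program.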
Hi, as others have said, it depends on your software and purpose. Running 2 servers in parallel requires taking the software's latency requirements into account; for some applications, Gigabit Ethernet does not cut it. In our lab we got 2 servers in this price range for climate experiments (about 10k each). BTW, for physics I don't think disk speed will be a bottleneck: high-speed SAS or even SATA drives are fast enough and a lot cheaper. Beware of GPUs unless your lab writes the code itself, has people experienced with them, or the software is designed for them.