What I mean is: in general CAE programs such as ANSYS, Abaqus, MATLAB, etc., does having more cores make the computation faster, or does a higher maximum frequency per core make it faster?
The clock rate of current CPU cores is not a particularly important feature nowadays, given their baseline performance and the prevalence of parallelization. The number of cores matters in this regard, but the efficiency of parallelization also comes largely from the software implementation and will depend greatly on your target application.
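As a rough rule of thumb (Amdahl's law): if a fraction p of a run can be parallelized, the best achievable speedup on N cores is S(N) = 1 / ((1 - p) + p/N). With p = 0.9, for instance, even 8 cores give at most about a 4.7x speedup, which is why the software side matters at least as much as the core count.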
If you look closely at current CPU performance, finite element software spends more time on data access than on actual computation. Critical parameters therefore also include the cache sizes and the memory transfer speed of your CPU (the time needed to transfer data between the different cache levels and the execution units). You should also check the clock rate of your RAM, for the same memory-access-time reasons.
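To make the memory-bound point concrete, here is a minimal, illustrative C++ sketch of a STREAM-style triad (the array size and the std::chrono timing are my own assumptions, not taken from any particular FE code). On arrays much larger than the cache, its runtime is set by memory bandwidth rather than by the two floating-point operations per element:

    #include <chrono>
    #include <iostream>
    #include <vector>

    int main() {
        // Arrays far larger than the last-level cache, so every pass streams from RAM.
        const std::size_t n = 20'000'000;
        std::vector<double> a(n, 0.0), b(n, 1.0), c(n, 2.0);
        const double scalar = 3.0;

        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)
            a[i] = b[i] + scalar * c[i];   // 2 flops, but 3 doubles (24 bytes) of memory traffic
        auto t1 = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(t1 - t0).count();
        double gbytes  = 3.0 * n * sizeof(double) / 1e9;
        std::cout << "Effective bandwidth: " << gbytes / seconds << " GB/s\n";
        return 0;
    }

Compiled with optimizations (e.g. g++ -O2), the reported figure is close to the machine's sustainable memory bandwidth, which is the resource the sparse kernels of an FE solver compete for.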
Depending on your application, multithreading capabilities (several hardware threads per physical core, as on recent Intel i7 processors) allow memory sharing that can accelerate some operations such as matrix-vector products. Dual-channel RAM speeds up memory access and can also greatly improve performance in some cases. These capabilities are worth checking.
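For example, here is a minimal sketch of a shared-memory (OpenMP) dense matrix-vector product in C++; the matrix and vectors live once in RAM and are shared by all threads rather than copied. The dense row-major storage and the sizes are illustrative assumptions, not how any particular CAE code stores its matrices:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // y = A * x for a dense row-major matrix A (n x n).
    void matvec(const std::vector<double>& A, const std::vector<double>& x,
                std::vector<double>& y, std::size_t n) {
        #pragma omp parallel for            // each thread handles a block of rows
        for (long long i = 0; i < static_cast<long long>(n); ++i) {
            double sum = 0.0;
            for (std::size_t j = 0; j < n; ++j)
                sum += A[i * n + j] * x[j]; // A and x are shared between threads, not copied
            y[i] = sum;
        }
    }

    int main() {
        const std::size_t n = 2000;
        std::vector<double> A(n * n, 1.0), x(n, 1.0), y(n, 0.0);
        matvec(A, x, y, n);
        std::cout << "y[0] = " << y[0] << "\n";  // expect 2000
        return 0;
    }

Build with an OpenMP-capable compiler, e.g. g++ -O2 -fopenmp.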
Commercial codes have made a lot of progress in exploiting parallelization, but their generality makes it impossible to deliver the best performance everywhere. Since every CPU architecture has its pros and cons regarding final performance, it is very difficult to get the best performance across several applications.
You will nevertheless get good performance with the most recent workstation hardware setups, which usually provide 4 to 8 cores (provided you have enough RAM not to swap). Paying attention to the cache sizes and memory transfer speeds will definitely help you make a relevant choice.
It depends on whether your analysis is parallel or serial. If you perform a parallel analysis, then both are important for the overall computational efficiency. If you solve your problem with a serial approach (one core), then the computational strength of a single core is the factor that determines the speed of your calculations.
Can I configure the CAE programs I mentioned above to run the analysis in parallel, or should I use C++, Fortran, or another programming language to perform parallel computations?
I tested this in MATLAB with a simple heat transfer analysis and found that it used only one of my CPU cores at run time.
It depends on whether the software you are using is written to use multiple cores or a single core; your intervention is not going to make a difference. If you write a program in C++ or any other language, then whether it runs in parallel depends on the compiler and on how you write and build the code. So if you are using single-core software, the processor clock rate is what helps.
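To illustrate the compiler dependence with OpenMP (just one common option for C++, not the only one): the same source runs serially or in parallel depending on the build flags. A minimal sketch, where the loop body is purely illustrative:

    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 10'000'000;
        std::vector<double> v(n);

        // Compiled with:  g++ -O2 file.cpp            -> the pragma is ignored, one core is used.
        // Compiled with:  g++ -O2 -fopenmp file.cpp   -> the loop is split across all cores.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            v[i] = std::sin(i * 1e-6);

        std::cout << v[n - 1] << "\n";
        return 0;
    }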
From my own experience, if you write your own code there is always a way to use parallel processing. ANSYS also has the ability to use multiple processors, and in MATLAB you can use the Parallel Computing Toolbox (matlabpool, now parpool). Therefore the number of cores is very important in computational work. But the CPU frequency is also important, especially when you do not want to parallelize, so I think you should look for a balance.
Dear Moradkhani, the RAM (Random Access Memory) is the important workspace that allows you to run your program. Some old PCs were unable to allocate large problem sizes in the first place.
Arithmetic and logic operations are performed by the processor (CPU), which can have more than one core: this is equivalent to parallel CPUs, but it is not the same physical design as a workstation with more than one processor. Registers within the CPU are designed to handle the corresponding information. Add doubled RAM on top of this, and the overall picture becomes quite complicated.
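As a quick check of how many logical processors (cores, counting hyper-threaded ones) your operating system reports, a small standard C++ snippet:

    #include <iostream>
    #include <thread>

    int main() {
        // Logical processors visible to the OS; with hyper-threading this is
        // typically twice the number of physical cores.
        unsigned n = std::thread::hardware_concurrency();
        std::cout << "Hardware threads reported: " << n << "\n";
        return 0;
    }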