Meshing complex geometries is difficult on a low- or medium-end workstation. Which has the major share in meshing - CPU, RAM, or a combination of both?
It depends on the algorithm and the structure of your hardware (CPU, RAM, mainboard). Usually the speed of the RAM is the restricting factor: if you do few computations but access the RAM often, the RAM will limit you. Sometimes it is a good idea not to store intermediate results for later re-use, but to recompute them every time, provided the recomputation does not need much RAM access. In that case the CPU has more to do and your algorithm might be faster, but it might also be slower if recomputing the results takes longer than fetching them from main memory.
There is also the possibility of parallel algorithms on multi-core CPUs. Here it depends on the configuration of your hardware: with modern processor architectures, different RAM modules can be accessed by different CPU cores at the same time, but whether this is possible depends on the algorithm and the operating system. If multiple cores are assigned memory in the same RAM module, parallel algorithms do not gain much.
One final point: because the RAM can be the restricting factor, CPUs have several layers of caches. There is a fast, small cache close to the core and slower but larger caches 'further away'; there is always a trade-off between size and speed. Here your algorithm is really important: caches are only useful if your algorithm operates locally in memory, i.e. successive accesses to RAM hit neighbouring addresses. Even two implementations of the same algorithm can show different behaviour, because an algorithm always leaves some freedom to the implementation. I would say that for every memory-bound implementation of an algorithm, I can write a worse implementation of the same algorithm that is CPU-bound.
This all means that there is no general answer to your question. The answer depends heavily on the implementation of the algorithm; the algorithm itself can only give you a hint as to whether it is expected to be CPU-bound or memory-bound. To find out whether your implementation is CPU-bound or memory-bound, you need to profile it.
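As an illustration of the cache-locality point (this sketch is mine, not part of the answer above): the minimal Python/NumPy example below sums the same matrix twice, once over contiguous rows and once over strided columns. The arithmetic work is identical, so any timing difference comes purely from the memory access pattern; the array size is an arbitrary choice, just large enough not to fit in the CPU caches.

```python
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)  # C-ordered: each row is contiguous in memory (~128 MB total)

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.3f} s")

# Identical arithmetic, different memory access pattern:
timed("row-wise sums (contiguous access)", lambda: [float(a[i, :].sum()) for i in range(n)])
timed("column-wise sums (strided access)", lambda: [float(a[:, j].sum()) for j in range(n)])
```

On most machines the strided version is several times slower, which is exactly the kind of behaviour a profiler will reveal for a memory-bound implementation.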
From my experience both are important. The CPU dictates how fast the processing is done: for complex structures, good processing capacity is extremely useful for getting fast and reliable output, especially with a small mesh size; if it is lacking, the computer tends to hang. RAM dictates the amount of volatile memory accessible to the process, and RAM usage depends on the algorithm being followed. It is crucial that the minimum RAM requirement be met, otherwise the meshing will simply fail with an insufficient-memory error. Even though both are very important, I would say that the RAM should be as high as possible (depending on the need, of course).
Yes, Krishan, both factors are important. Increasing either resource has a positive impact on the efficiency of mesh generation. However, in each case, which one to favour will depend on the complexity of the object. It is a multiparametric process which, it seems to me, cannot be reduced to an unambiguous rule.
Both are equally important, but RAM plays a bigger role than CPU speed: if you have a fast CPU but not enough RAM, the CPU speed is of no use, so RAM is more important.
What was said above is all correct. Let me give another example from which you might also take some guidance. In the early days of, for example, heat conduction, explicit methods for solving a set of differential equations were almost the standard approach because of the little RAM available. The speed of calculation was also low, but the limiting factor was without doubt RAM. The speed when using, for example, Gauss-Seidel was 'one iteration = one node', i.e. the speed at which boundary conditions were transferred into the domain. As memory capacity increased, fully implicit methods gained ground, i.e. the solution of large sets of equations became possible.
So the answer is case-dependent. If you don't care how long it takes to get a solution (weeks?), RAM is what matters for large jobs. For simple jobs (simple geometry, single-phase flow, etc.), a small amount of RAM might be sufficient.
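To make the explicit-versus-implicit contrast concrete, here is a hedged Python sketch of a single 1D heat-conduction time step (the parameters are arbitrary and the boundary handling is deliberately simplified; none of it is taken from the answer above). The explicit update needs nothing beyond the temperature array itself, while the fully implicit step requires assembling and solving an n-by-n sparse system, which is exactly what only became affordable as RAM capacities grew.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n, alpha, dx, dt = 1000, 1.0, 0.01, 2e-5
r = alpha * dt / dx**2           # must stay <= 0.5 for the explicit scheme to be stable
T = np.zeros(n)
T[0] = 100.0                     # fixed temperature imposed at the left boundary

# Explicit (FTCS) step: only the temperature vector is needed in memory,
# and boundary information propagates roughly one node per time step.
T_explicit = T.copy()
T_explicit[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])

# Fully implicit step: an n-by-n (here sparse, tridiagonal) system must be
# stored and solved; boundary rows are kept simple because this only
# illustrates the memory/solve cost, not a production scheme.
A = diags([-r, 1 + 2 * r, -r], [-1, 0, 1], shape=(n, n), format="csc")
T_implicit = spsolve(A, T)
```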
I have found that the biggest factor is having enough RAM to keep all the solver data in core during the iteration process. This is most common in structural and dynamic analyses using iterative solver techniques applied to large stiffness-matrix problems. If the RAM is insufficient, the read/write speed to disk storage will cripple execution times (i.e. disk swap time). If you have sufficient RAM for the size of problems you intend to solve (often 40 to 60 GB for larger problems), then the processor speed becomes the relevant factor for the overall job solution time.
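As a rough cross-check of figures like the 40-60 GB quoted above (that number is from the answer; the DOF count and nonzeros per row below are purely hypothetical), one can estimate the memory needed just to hold a sparse stiffness matrix in CSR storage. The solver's workspace and any factorisation typically need several times more, which is how such totals are reached.

```python
def csr_matrix_gb(n_dof, nnz_per_row, value_bytes=8, index_bytes=4):
    """Back-of-envelope memory for one sparse stiffness matrix in CSR format."""
    nnz = n_dof * nnz_per_row
    return (nnz * value_bytes            # float64 matrix entries
            + nnz * index_bytes          # int32 column indices
            + (n_dof + 1) * index_bytes  # row pointers
            ) / 1024**3

# Hypothetical 5 million DOF structural model with ~80 nonzeros per row:
print(f"matrix alone: {csr_matrix_gb(5_000_000, 80):.1f} GB")   # roughly 4.5 GB
```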
@Nathan: Having enough RAM to start with is surely useful. But even then there is the question of which is the limiting factor: the RAM or the CPU? RAM access is roughly a hundred times slower than the CPU. Thus, what the limiting factor is depends highly on your algorithm and your implementation. It can still be the RAM even if all your data can be kept in RAM; this depends on the access pattern and the number of RAM accesses versus the time spent on actual computation.
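A small sketch of that access-pattern point (the sizes are my own arbitrary choices): the array below fits comfortably in RAM, and both loops gather exactly the same elements, yet the random-order gather is typically several times slower because almost every access misses the caches and has to wait for main memory.

```python
import time
import numpy as np

data = np.random.rand(50_000_000)            # ~400 MB, easily held in RAM
seq_idx = np.arange(data.size)               # sequential access pattern
rnd_idx = np.random.permutation(data.size)   # random access pattern, same elements

for label, idx in [("sequential", seq_idx), ("random", rnd_idx)]:
    t0 = time.perf_counter()
    total = data[idx].sum()                  # identical work, different access order
    print(f"{label} access: {time.perf_counter() - t0:.2f} s")
```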
Recently I came to know about RAM disks (RAM drives): if we have enough RAM (say about 96-120 GB), we can turn half or more of it into a drive and install our computational software on the RAM disk.
People doing high-performance animation, film making, 3D game development and high-quality picture editing are using this feature nowadays.
Does anybody have experience of using this technique in computational modelling with software like ANSYS, COMSOL, etc.?
I don't think a RAM disk will help a lot (though again it depends on your algorithm). You will not get a huge performance gain from installing your software on the RAM disk: the operating system is smart enough to keep the executable in memory anyway. In addition, level-1 caches are usually separate for instructions and data. Because of this, there is no real benefit in installing software on a RAM disk.
The reason creative people use this is that their software is written so as not to load too much information into RAM, using the pre-existing files as a fallback for data not in memory. Especially with time-based files (video and audio), new frames or samples have to be loaded quickly. But I suspect they don't put the software on the RAM disk, rather the data they are working on. I am not sure about your computational task, but in most cases the relevant data is loaded into memory only once before the simulation starts; then there is no real use in having this data on a RAM disk. You might see a slight gain if your log files/result files are written to the RAM disk, but then again the operating system will do some clever caching in memory, so the gain will not be very high. You also need to consider that data on a RAM disk is not permanent, so you will have to copy the files back to your hard drive after the simulation.
So, whether a RAM disk helps is highly dependent on the software. I would say it only helps if your simulation software writes out intermediate results and then reads them back in, which is very unlikely: most software is written to keep everything in RAM because that yields the best performance. Therefore, the better choice is not to restrict the memory available to your application by splitting off part of your RAM for a RAM disk.
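If you still want to test it, a small sketch like the one below (the mount points /mnt/ramdisk and /scratch are assumptions; replace them with your own RAM disk and scratch disk) measures directly whether writing result-sized files to a RAM disk is noticeably faster than writing them to disk. If the difference is small compared with your meshing or solver time, the RAM disk is not worth the memory it takes away from the application.

```python
import os
import time

payload = os.urandom(512 * 1024 * 1024)      # 512 MB of dummy "result" data

# Both paths are assumptions; adjust them to your own system.
for target in ["/mnt/ramdisk/out.bin", "/scratch/out.bin"]:
    try:
        t0 = time.perf_counter()
        with open(target, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())             # push the data past the OS page cache
        print(f"{target}: {time.perf_counter() - t0:.2f} s")
        os.remove(target)
    except OSError as e:
        print(f"{target}: skipped ({e})")
```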
From my experience with CFD modelling and analysis I can tell you, in short, that both are equally important. But to some extent I have also found that RAM contributes more as far as modelling and meshing of a complex geometry are concerned.
From the analysis point of view, there are also several other ways to speed things up, such as using GPUs, parallel computing, etc.
I've got the impression that a few of the above answers are about the solution with FEM, not about meshing (the question). They are completely different animals.
Meshing is usually done with only one core, so for complex geometry you need a lot of RAM, but you also want as fast a single core as possible.
A second remark: if you are in trouble meshing your geometry, you will probably get into more serious trouble with the computation. And finally, if you do manage to finish the simulation, you will hit a wall when you try to display/interpret/analyse the huge amount of information requested as results.
From my practice: if you have trouble with meshing, you shouldn't try to force your way through it, but rather use a coarser mesh with local refinements.
From my experience with Gambit, only one full core (out of 4) of the processor is utilised, plus the available RAM. Extra memory required beyond the RAM is managed with the page file (the user can place it on an external drive or the local hard disk). If the RAM is small, extra time is consumed swapping data between the page file on disk and the RAM. Here the speed is limited by the RAM or the FSB (whichever is slower), since the CPU is much faster than RAM.
From your question, I think it should have been 'How to reduce meshing time for large complex geometries'. If possible, try multi-blocking for such cases, or assemble the different parts in the solver.
The second thing is that meshing software will take its own time to learn to utilise all cores of a CPU as efficiently as CFD solvers do.
In my experience, RAM is the most important feature in a workstation for meshing. I use Star-CCM+ and I found it convenient to generate the mesh with only one CPU (parallel meshing gave some anomalies, like an apparent and uncontrollable reduction in the number of cells, but I think this is an issue of that particular software).
I also found that I need at least 1 GB of RAM per million polyhedral cells (I always deal with 3D meshes). So for a mesh of about 10 million polyhedra, 16 GB of RAM will avoid access to file paging. Of course, a good multi-core CPU does reduce the computational time, but if you do not have enough RAM (and this depends on the algorithm of the mesh generator), your workstation will become very slow or even get stuck, due to continual access to the swap partition.
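Turning that rule of thumb into a quick check (the 1 GB per million polyhedral cells comes from the observation above and will vary with the mesher and cell type; the 50% headroom factor is my own assumption to leave room for the operating system and the rest of the session):

```python
def meshing_ram_gb(n_cells_million, gb_per_million=1.0, headroom=1.5):
    """Rule-of-thumb RAM estimate for polyhedral meshing, based on ~1 GB per million cells."""
    return n_cells_million * gb_per_million * headroom

print(f"{meshing_ram_gb(10):.0f} GB recommended for a ~10 million cell mesh")
```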
The optimal solution, of course, is both. The limitations on the complexity of the geometry are threefold:
a) The design of the software which you are using.
b) CPU: the demand increases with the complexity of the geometry. Again, it depends on the software and its particular installation whether - and how efficiently - multiple cores can be used. The result of this limitation is WAITING.
c) RAM: This is a hard limit. Beyond it, your software will simply no longer work.
This makes the answer easy: if you have to decide between two options, choose RAM.
I would say you will need enough RAM, especially if you are going to use other applications while the analysis is running, but the processor speed is more crucial. At the same time, don't forget the complexity and size of the software you are using. All of that changes if you move to parallel programming and machines with more than one processor, and it also depends on how those processors are linked.
I used to use the Fluent package for my symmetrical and non-symmetrical fluid-dynamics geometries, and it took a lot of time even when I used my own program. Later, when I ported my own program to parallel programming and ran it on a supercomputer, the same complex geometry took just a couple of minutes using finite-volume and finite-element meshes.
Hi all, I'm working on meshing with VisCART from ESI Group. I have this problem: 32 GB of RAM is not enough. I set the page file size to 80 GB on the SSD drive, so the system now has almost 110 GB of memory available. Nevertheless, VisCART crashes without reaching all the available memory: watching the 'Commit' value in the memory tab of the Windows 10 resource monitor, it never reaches 80 GB; it crashes when it reaches 55 GB. Does this depend on VisCART, or on Windows? Is this approach even useful for very complex meshes? And is it true that this kind of operation is harmful to SSD drives? Thanks
It depends on the algorithm and the computer's architecture. The majority of CFD computations on normal machines are restricted mainly by RAM, especially when very large data files are created at each time step and need to be transferred to the HDD. Your hard disk drive can also affect the computational time - e.g., SSD drives are much quicker and save you a lot of time.