I recently became interested in using BigDFT, but I still have a few questions.

1. What is the computational performance gain of BigDFT compared to more traditional DFT packages (such as VASP, Quantum ESPRESSO, or GPAW)?

I’ve read that BigDFT uses wavelets and is said to be efficient for large systems. But in practice, does the simulation time really decrease significantly? In which cases is the computational gain most noticeable?

2. I’ve seen some runs with around 2000 atoms using 6000 cores that completed in just a few hours. Are these “cores” usually CPU threads or GPU units in the context of BigDFT?

I’m asking because I’d like to understand what this means in terms of more accessible hardware. For example, if I have an RTX 3060 GPU, would it be possible to perform similar simulations, or are supercomputers required?

3. Is there any practical equivalence between a consumer GPU (such as an RTX 3060 or 3090) and a certain number of CPU cores for running BigDFT?

Just to get a rough idea: how many CPU cores, on average, would match the performance of a GPU like the RTX 3060 in BigDFT simulations? Is there an approximate comparison?

4. Does BigDFT scale well for large simulations? For example, if I double the number of atoms, does the simulation time also double? Or does it scale more efficiently?

I’m trying to understand whether the method is truly linear for large systems, as sometimes mentioned. Does the scaling behavior change depending on the type of material (e.g., organic vs inorganic)?
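To make the scaling question concrete, here is a toy cost model of the difference between conventional cubic-scaling DFT (where diagonalization dominates) and a linear-scaling method. The prefactors are arbitrary illustration values, not BigDFT measurements; the point is only the growth rate: doubling the atom count multiplies an O(N³) cost by eight but an O(N) cost by just two.

```python
# Toy model: cubic vs. linear scaling of DFT runtime with system size.
# The prefactors c are arbitrary and NOT real BigDFT timings; linear-scaling
# methods typically carry a larger prefactor, so they only win beyond some
# crossover size.

def cubic_time(n_atoms: int, c: float = 1e-6) -> float:
    """Conventional KS-DFT: diagonalization dominates, cost ~ O(N^3)."""
    return c * n_atoms ** 3

def linear_time(n_atoms: int, c: float = 1e-2) -> float:
    """Linear-scaling mode: cost ~ O(N), with a larger prefactor."""
    return c * n_atoms

for n in (500, 1000, 2000, 4000):
    print(f"N={n:5d}  cubic={cubic_time(n):10.1f}  linear={linear_time(n):8.1f}")
```

With these made-up constants the crossover sits around N = 100 atoms; in practice the crossover depends on the system, basis, and implementation, which is exactly what I am asking about.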

5. Do I need a specific compilation to run BigDFT with GPU acceleration? Or does it work directly with CUDA, OpenCL, etc.?
