Hello everyone,
I have observed significant variations in model performance when training on different GPUs. For instance, training the same model on an RTX 3060 versus an RTX 4090, I saw noticeable differences not only in training speed but also in the final model's outputs. I'm also curious about the performance impact of stepping up to even more advanced hardware such as an A100, or of using multiple GPUs.
Has anyone else observed similar trends? I would greatly appreciate any insights, or pointers to research articles/documentation, on how different hardware configurations (RTX 3060, RTX 4090, A100, or multi-GPU setups) affect model training, inference times, and overall performance.
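For context, here's a minimal sketch of the kind of timing comparison I have in mind (assuming PyTorch with CUDA; the model, batch size, and step counts below are just placeholders, not my actual setup). The same script can be run unchanged on each card and the steps/sec compared directly:

```python
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
name = torch.cuda.get_device_name(device) if device.type == "cuda" else "CPU"
print(f"Benchmarking on: {name}")

# Placeholder model and synthetic data -- swap in your own workload.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(256, 1024, device=device)
y = torch.randint(0, 10, (256,), device=device)

# Warm-up steps so one-time CUDA kernel setup doesn't skew the timing.
for _ in range(10):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

if device.type == "cuda":
    torch.cuda.synchronize()  # ensure warm-up work has actually finished
start = time.perf_counter()
steps = 100
for _ in range(steps):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
elapsed = time.perf_counter() - start
print(f"{steps / elapsed:.1f} steps/sec ({elapsed / steps * 1000:.2f} ms/step)")
```

The warm-up loop and the torch.cuda.synchronize() calls matter because CUDA operations are asynchronous; without them the timer mostly measures kernel launches rather than actual compute, which makes cross-GPU comparisons misleading.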
Thank you in advance for your attention and help!