It depends. In fact, it depends on so many things. The processor (e.g. a 10th Gen Core i7 or some other variant) determines the CPU speed and the speed and capacity of its built-in cache. RAM size (say 16GB) provides room for both the heap and the stack of your program. Heap size matters for dynamic memory allocation, i.e. allocating memory on demand. Stack size matters for your program to run at all: it holds the saved context during context switches and your local variables when you nest into methods or functions while the program is running. DL models and CV tasks dealing with images and videos also require a lot of physical storage, i.e. your HDD or virtual memory. All of these components need to work together, and a well-optimized algorithm implementation can make good use of them.
You may refer to our article Technical Report gem5 System Simulator on N-Queens Program
to get an idea of how one algorithm implementation behaves with different types and capacities of resources.
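As a very rough sketch (assuming a Python environment with the third-party psutil package installed, which is not mentioned in the original answer), you can check how much RAM and disk space your machine actually has free before launching a heavy training job:

```python
# Minimal sketch; assumes the third-party `psutil` package is installed.
import shutil
import psutil

# Physical RAM available for your program's heap allocations (tensors, caches, ...)
ram = psutil.virtual_memory()
print(f"Total RAM: {ram.total / 1e9:.1f} GB, available: {ram.available / 1e9:.1f} GB")

# Disk space left for datasets, checkpoints and swap/virtual memory
disk = shutil.disk_usage("/")
print(f"Disk total: {disk.total / 1e9:.1f} GB, free: {disk.free / 1e9:.1f} GB")
```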
This configuration is enough for most Deep Learning and Computer Vision tasks. If you're unable to run a particular Deep Learning / Computer Vision task smoothly on a low-end system, you can use online platforms such as Google Colab.
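For example, after switching the Colab runtime to a GPU (Runtime → Change runtime type), a quick sanity check like the following (a sketch using PyTorch, which comes pre-installed on Colab) confirms the free GPU is actually visible:

```python
import torch

# True if Colab (or your local machine) exposes a CUDA-capable GPU
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected - check the Colab runtime type.")
```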
The configuration that you mentioned is enough for almost all deep learning and computer vision projects. The most important component here is the GPU, because the GPU will do the hard work. The RTX 3060 Ti is a modern GPU with a compute capability of 8.6
https://developer.nvidia.com/cuda-gpus
and that is enough for deep learning. I actually have a GTX 1050 Ti, and I work satisfactorily with this GPU on all my deep learning models and computer vision projects.
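If you want to verify the compute capability of whatever card you end up with, instead of looking it up on the page above, PyTorch can report it directly; a small sketch:

```python
import torch

# Reports (major, minor) compute capability, e.g. (8, 6) for an RTX 3060 Ti
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
```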
...what is the GPU memory size? A dual-GPU setup with above 8 GB of RAM would be a match made in heaven.
Just make sure to test the CUDA version against your TensorFlow and PyTorch installs, with the appropriate Python version, which in this case must be 3.8. Also, you've got to upgrade your old code if you wish to get the maximum utility out of your GPUs.
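A quick way to see which CUDA build your frameworks were compiled against (a sketch; the exact versions you need depend on the TensorFlow/PyTorch releases you install):

```python
import sys
import torch
import tensorflow as tf

print("Python:", sys.version.split()[0])                          # e.g. 3.8.x
print("PyTorch:", torch.__version__, "built for CUDA:", torch.version.cuda)
print("TensorFlow:", tf.__version__)
print("TF sees GPUs:", tf.config.list_physical_devices("GPU"))
```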
It depends. First of all, and this is of course just my opinion, the only thing that really matters is the GPU. No matter how fast and powerful your CPU is, it is not going to affect your work much; you only need a base system that won't bottleneck your GPU, and your current setup is well above that. If you are building a desktop PC, one option is to choose a Ryzen CPU and motherboard. You could also pick a weaker CPU than the one you've chosen right now, because as I said you don't need that much power, and the weaker the CPU, the cheaper it is. The same goes for every other part of your setup except the GPU, so I suggest you read up on this a little.

Now for the GPU: you have chosen an RTX 3060 Ti which, if I am not mistaken, has 8 GB of GDDR6 VRAM. In deep learning, especially computer vision, the data is large most of the time, and as the complexity of your model increases, the VRAM required to store the information being processed increases with it. 8 GB is the bare minimum for a decent computer vision setup and will solve almost all the problems you have, sooner or later, but if you can raise your budget a little, cut unnecessary expenses as I said earlier, and acquire a GPU setup with at least 12 GB of VRAM (e.g. 1x 8GB + 1x 4GB), that could boost your work a lot.

One last thing: VRAM is not the only thing you should look for in a GPU. You should also pay attention to the number of CUDA cores the GPU has and the work it was designed to do. I would therefore recommend a Quadro RTX series GPU; they have more CUDA cores and, as far as I know, they are designed for computational tasks, which is exactly what you need.
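To see how much VRAM each installed card actually exposes (useful whether you end up with a single 8 GB card or a multi-GPU setup), here is a short sketch using PyTorch:

```python
import torch

# Enumerate every visible CUDA device and its total VRAM
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM, "
          f"{props.multi_processor_count} SMs")
```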
It depends on the dataset type and size, as well as the model complexity and number of iterations. However, your configuration is good enough.
But for those who just want to start, a basic computer with a good internet connection is also enough; there is a well-known platform called "Google Colab" where you can get GPU and TPU power for free.
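If you pick a TPU runtime in Colab instead of a GPU one, TensorFlow (pre-installed there) can attach to it roughly like this (a sketch; the exact setup can vary between Colab and TensorFlow versions):

```python
import tensorflow as tf

# Connect to the Colab-provided TPU, if the runtime type is set to TPU
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    print("TPU cores:", tf.config.list_logical_devices("TPU"))
except (ValueError, tf.errors.NotFoundError):
    print("No TPU found - GPUs:", tf.config.list_physical_devices("GPU"))
```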
I think you can upgrade the RAM to 32 GB and go for a variant of the Nvidia GeForce RTX 3060 with 8 GB of video memory instead of 6 GB. 6 GB is the minimum VRAM usually considered for CUDA compatibility, which is why 8 GB or higher VRAM is better. Also, you can opt for the RTX 3070 with its 8 GB VRAM configuration if you would like to.