If you are referring to hardware implementations of deep learning for particular applications or devices, GPU cards (e.g., AMD or Nvidia) can greatly simplify your design process, depending on your computational architecture.
For basic software prototypes, an ordinary CPU will also work fine.
I cannot say you can only use GPUs for DL. However, GPU performance is far beyond what CPUs can reach, thanks to their DL-friendly architecture, and they are easy to use for DL applications. For example, you can readily find DL libraries (e.g., TensorFlow, Caffe, Theano) together with example data and code for an initial application (see https://relinklabs.com/gpu-vs-cpu-in-convolutional-neural-networks-using-tensorflow), not to mention the performance factor.
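To see the gap for yourself, here is a minimal benchmark sketch in the spirit of the linked article, assuming TensorFlow 2.x is installed; the batch shape, layer size, and step count are arbitrary choices for illustration:

```python
import time
import tensorflow as tf

def bench(device, steps=20):
    """Average seconds per forward pass of one conv layer on the given device."""
    with tf.device(device):
        x = tf.random.normal([32, 224, 224, 3])           # one batch of fake images
        conv = tf.keras.layers.Conv2D(64, 3, padding="same")
        conv(x)                                           # warm-up, builds the weights
        start = time.time()
        for _ in range(steps):
            y = conv(x)
        _ = y.numpy()                                     # force execution to finish
    return (time.time() - start) / steps

print("CPU s/step:", bench("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU s/step:", bench("/GPU:0"))
```

On typical hardware the GPU figure comes out an order of magnitude or more lower, which is the whole argument in miniature.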
You ask a good question. To get started and go through a few simple examples using someone else's models, you do not need a fancy GPU.
As a next step, you might look at an NVIDIA video card: nothing too fancy, but something with many cores. See https://corpocrat.com/2015/07/03/running-word2vec-in-nvidia-gpu/
I have tested an NVIDIA GTX 1050 Ti and it seems to produce word embeddings OK.
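For a sense of what that looks like in code, here is a minimal skip-gram-style embedding sketch with Keras; the vocabulary size, dimensions, and training pairs are all hypothetical stand-ins (the linked article uses word2vec proper, and real pairs would be generated from a corpus):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size = 1000   # hypothetical toy vocabulary
embed_dim = 64      # embedding dimensionality

# Stand-in (target, context) pairs with binary labels; in practice,
# positives are co-occurring words and negatives are random samples.
targets = np.random.randint(0, vocab_size, size=(10000,))
contexts = np.random.randint(0, vocab_size, size=(10000,))
labels = np.random.randint(0, 2, size=(10000,)).astype("float32")

target_in = layers.Input(shape=(), dtype="int32")
context_in = layers.Input(shape=(), dtype="int32")
embed = layers.Embedding(vocab_size, embed_dim)

# The dot product of the two embeddings scores how likely the pair is.
score = layers.Dot(axes=1)([embed(target_in), embed(context_in)])
prob = layers.Activation("sigmoid")(score)

model = Model([target_in, context_in], prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([targets, contexts], labels, batch_size=256, epochs=2)

# The learned word vectors live in the embedding matrix.
embeddings = embed.get_weights()[0]   # shape: (vocab_size, embed_dim)
```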
Oh, this will take a while to sort out, so please be patient: the right drivers and intermediate libraries (the NVIDIA driver, plus CUDA and cuDNN) are needed.
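Once the drivers and libraries are in place, a quick sanity check (assuming a TensorFlow 2.x build) confirms the card is actually visible:

```python
import tensorflow as tf

# An empty list here usually means a driver/CUDA/cuDNN version mismatch
# rather than a hardware problem.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```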
In my original message I wrote that AMD GPUs were almost useless in DL. Things have changed quite a bit. There is now an active ROCm TensorFlow fork, making ROCm-compatible AMD GPUs a viable option for TensorFlow/Keras users. Moreover, Keras itself now supports the PlaidML backend, making any OpenCL-compatible GPU capable of accelerating DL tasks, though on most platforms you won't get nearly as much performance out of OpenCL as out of ROCm and, especially, CUDA. The performance disadvantage is a bit less significant on Macs, because PlaidML supports Metal. Nevertheless, Nvidia is still the best option when it comes to raw performance and industry support, but you are no longer forced to buy their cards to get decent DL training acceleration.
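For reference, switching standalone Keras to the PlaidML backend is a one-line environment setting, assuming `plaidml-keras` is installed and `plaidml-setup` has been run once to pick the OpenCL/Metal device:

```python
import os

# Must be set before Keras is imported; note this applies to standalone
# Keras, not tf.keras.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras
print(keras.backend.backend())  # should report the PlaidML backend
```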
Original message
It depends. If you only want to use a trained model for inference, then a CPU will work just fine. All major deep learning frameworks (Theano, TensorFlow) can work on both CPUs and GPUs. If you want to train a relatively lightweight model (a couple of convolutional layers with a shallow MLP), then a 10-20 core Xeon CPU (physical cores, not hyperthreads) will work fine on a small dataset, but that won't get you far with modern models comprising multiple computational graphs and many convolutional and recurrent layers. You'll have to wait for weeks to train a modern model without a decent GPU. Take note that AMD GPUs are almost useless in DL, so you'll have to buy an Nvidia. A GTX 1060 or 1070 will be quite enough for your personal usage. Any card with less memory will be problematic, given how much memory modern models require.
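As an illustration of the inference-only case, here is a minimal sketch that pins a pre-trained network to the CPU (assuming a recent TensorFlow; the random input is a stand-in for a real, preprocessed image):

```python
import numpy as np
import tensorflow as tf

# Force everything onto the CPU to show no GPU is needed for inference.
with tf.device("/CPU:0"):
    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    image = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in input
    preds = model.predict(image)

print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3))
```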
A GTX 1050 Ti is only about $100. If it works, great; if it doesn't, not much is lost. I never ran into out-of-memory problems during my work, and I checked closely. A 20-core Xeon is something like $2K, so we are talking apples and oranges.
If we are funding this ourselves, a GTX 1050 Ti is a good choice. An alternative is to find an online site that lets you run GPU code for free (e.g., Google Colab), though you will need some patience to use such a site.