The term "artificial neural network" most commonly refers to a network with fully connected layers (each neuron in one layer is connected to every neuron in the following layer). Such networks consist of at least three layers, namely the input layer (which receives the input data), one or more hidden layers (which process the data), and the output layer (which produces the output).
Artificial neural networks with several hidden layers (say, more than two) are called "deep neural networks"; the field that builds and trains them is deep learning. They are computationally expensive, but they generally provide better performance.
So, basically, the terms "Deep learning" and "Artificial neural networks" can refer to the same object, in some cases.
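To make the "fully connected" idea concrete, here is a minimal NumPy sketch of a three-layer network of the kind described above. The layer sizes (4 inputs, 5 hidden neurons, 3 outputs) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 4 inputs -> 5 hidden neurons -> 3 outputs
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)   # input -> hidden weights
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)   # hidden -> output weights

def forward(x):
    # Fully connected: every input feeds every hidden neuron, and so on
    hidden = np.tanh(x @ W1 + b1)   # hidden layer processes the data
    return hidden @ W2 + b2         # output layer produces the result

output = forward(rng.normal(size=(1, 4)))
print(output.shape)  # (1, 3)
```

Each matrix multiplication connects every neuron in one layer to every neuron in the next, which is exactly what "fully connected" means.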
Luigi Borzí described it well; here is a nice overview article if you want more detail: https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks
Deep learning is a subset of machine learning that uses deep neural networks. Artificial neural networks are a broader class of computational models inspired by biological neural networks.
A good link is provided by Prof. Christian Schmidt. I would like to add a few points on the query.
The ANN is the base model on which the deep neural network (DNN) is built. The field of deep learning is much broader than DNNs alone; in fact, embedding a deep architecture into an ANN yields a DNN.
For an excellent explanation of deep learning, consult the monograph by Prof. Yoshua Bengio published in the series Foundations and Trends in Machine Learning (2009).
Deep learning and artificial neural networks (ANNs) are related concepts, but they are not exactly the same thing. Let me explain the difference between them:
Artificial Neural Networks (ANNs):
Artificial Neural Networks are a computational model inspired by the structure and function of biological neural networks in the human brain. ANNs consist of interconnected nodes called artificial neurons or perceptrons. These neurons are organized in layers, typically an input layer, one or more hidden layers, and an output layer. Each neuron takes inputs, performs a computation on them, and produces an output that is passed to the next layer. The connections between neurons are associated with weights that determine the strength of the connection.

ANNs are designed to learn and generalize from examples by adjusting the weights through a process called training. The training is typically done using techniques like backpropagation and gradient descent.
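The training process mentioned above can be sketched end to end on a toy problem. This is a minimal, illustrative NumPy implementation of backpropagation with gradient descent, assuming a tiny 2-4-1 sigmoid network and the XOR dataset (the layer sizes, learning rate, and iteration count are all arbitrary choices for demonstration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: XOR, a classic problem that needs a hidden layer to solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights of an illustrative 2 -> 4 -> 1 network
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5          # learning rate (illustrative)
losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Backpropagation: mean-squared-error gradients, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: adjust each weight against its gradient
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

After training, the loss has dropped and the outputs move toward the XOR targets, which is the weight-adjustment process the paragraph describes.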
Deep Learning:
Deep learning is a subfield of machine learning that focuses on algorithms and models inspired by the structure and function of the human brain, particularly artificial neural networks with multiple hidden layers. The term "deep" in deep learning refers to the presence of multiple layers in the neural network architecture. Deep learning models are characterized by their ability to automatically learn hierarchical representations of data by sequentially processing information through multiple layers.

These models have shown exceptional performance in various tasks such as image and speech recognition, natural language processing, and many others. Deep learning models often require a large amount of labeled data for training and rely on powerful computational resources, such as graphics processing units (GPUs), due to their complexity.
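The "deep" stacking of layers can be illustrated with a generic forward pass over an arbitrary number of hidden layers. This NumPy sketch uses illustrative layer sizes; each layer re-represents the previous layer's output, which is the hierarchical-representation idea described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "deep" architecture: several hidden layers between input and output
layer_sizes = [8, 16, 16, 16, 4]   # illustrative: 8 inputs, 3 hidden layers, 4 outputs
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each hidden layer builds a new representation of the previous one
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)    # ReLU hidden layer
    return x @ weights[-1] + biases[-1]   # linear output layer

batch_out = forward(rng.normal(size=(2, 8)))
print(batch_out.shape)  # (2, 4)
```

Adding more entries to `layer_sizes` makes the network deeper without changing any other code, which is why depth is an architectural choice rather than a different kind of model.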