In short, deep learning is a collection of techniques to train deep neural networks, whereas Q-Learning is a reinforcement learning formalism.
The goal of deep learning is to use neural networks to approximate functions, i.e., mappings from input data (e.g., images, EEG signals, whatever you like) onto output values (e.g., categories for image classification). Many different network architectures (feedforward or recurrent, convolutional or fully connected) and learning setups (supervised or unsupervised; the most popular training algorithm is backpropagation with gradient descent) are used, depending on the task to be solved by the network. Good introductory overviews of these topics are widely available.
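As a concrete (if toy) illustration of this function-approximation view, here is a minimal sketch of a single supervised training step in PyTorch; the layer sizes, data, and hyperparameters are arbitrary placeholders rather than anything from a specific application:

```python
import torch
import torch.nn as nn

# A small feedforward network: maps 64-dimensional inputs to 10 class scores.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()                      # supervised classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 32 random inputs with random class labels (placeholder data).
x = torch.randn(32, 64)
y = torch.randint(0, 10, (32,))

# One training step: forward pass, loss, backpropagation, parameter update.
logits = model(x)
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```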
Q-Learning, on the other hand, is one way to do reinforcement learning - to learn to select actions that maximize the cumulative reward. In particular, it specifies that the reinforcement learner tries to infer an action-value function (the Q-function), i.e., a function which predicts the value (in terms of the reward that will be achieved) of each of the actions an agent could take in a given state. Thus, if the approximation is good, the agent can choose the best action. Q-Learning tells you how to update your approximation of the Q-function after taking an action and observing the reward (specifically, to move the old estimate for the state-action pair just chosen towards the reward obtained plus the discounted predicted future reward).
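Written out, the update described above is the standard Q-Learning rule, with learning rate $\alpha$ and discount factor $\gamma$:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$$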
As Q-learning specifies how to update the value estimate for a state-action pair, one can implement it using large tables with all possible state-action pairs. For realistic problems, this is often intractable and one has to resort to function approximation - which can be done using deep learning techniques. In fact, some very successful recent reinforcement learning approaches use deep networks to approximate the Q-function (so-called deep Q-networks, e.g. https://arxiv.org/pdf/1312.5602.pdf). Thus, deep learning and Q-learning are not opposites, but can complement each other.
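For illustration, here is a minimal sketch of the tabular variant in Python. The environment interface (`env.reset()`, `env.step()`, `env.action_space.n`) is assumed to follow the older Gym convention, and states are assumed to be hashable; none of this comes from the paper linked above:

```python
import random
from collections import defaultdict

def train_tabular_q(env, n_episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: one table entry per (state, action) pair."""
    q_table = defaultdict(float)            # Q(s, a), default value 0
    n_actions = env.action_space.n          # assumed Gym-style discrete action space

    for _ in range(n_episodes):
        state = env.reset()                 # assumed to return a hashable state
        done = False
        while not done:
            # Epsilon-greedy action selection from the current Q estimates.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q_table[(state, a)])

            next_state, reward, done, _ = env.step(action)

            # Q-learning update: move the estimate towards the TD target.
            best_next = max(q_table[(next_state, a)] for a in range(n_actions))
            td_target = reward + (0.0 if done else gamma * best_next)
            q_table[(state, action)] += alpha * (td_target - q_table[(state, action)])

            state = next_state

    return q_table
```

When the state space is too large for such a table, the table lookup is replaced by a parameterized function, which is exactly where deep networks come in.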
Deep learning is a general framework for learning functions that map input data to outputs using neural networks. Although neural networks have been studied widely for decades, many layers of neurons are required to approximate the target function well in practical use cases (like image classification, speech recognition, etc.), which turned out to be a computationally challenging problem. But with the advent of high-powered computing systems and massive datasets, we are now able to train neural networks with a large number of layers (a recent deep net from Microsoft has 150+ layers). This approach of using neural networks with many ("deep") layers for machine learning is commonly referred to as "deep learning".
Even though neural networks were first introduced as tools for machine learning on fixed datasets, recent developments explore the idea of using them for reinforcement learning as well, and the results obtained there are also astonishing.
Q-learning is one of the reinforcement learning techniques for learning how to act in a multi-state system. Two common approaches to Q-learning are to maintain a Q-table over all state-action pairs or to learn a function which approximates the Q-values over all states. Deep Q-learning exploits the idea of using deep neural networks for this function approximation in Q-learning.
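As a rough sketch of that idea (not the full DQN training procedure, which also needs experience replay and a target network), a Q-network simply maps a state vector to one Q-value per action. The sizes below are arbitrary placeholders:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action (the deep Q-network parametrization)."""
    def __init__(self, state_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork(state_dim=4, n_actions=2)

# Greedy action selection: pick the action with the highest predicted Q-value.
state = torch.randn(1, 4)            # placeholder state vector
with torch.no_grad():
    action = q_net(state).argmax(dim=1).item()
```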