The number of layers in a Neural Network (NN) determines what kinds of patterns the network is able to learn. With more layers the NN not only becomes more complex but also requires additional resources.
A NN with a single active layer* can only learn to solve linearly separable problems. With two active layers, however, a NN can form convex regions in the data space, meaning it can separate the data patterns with multiple lines that together form shapes (like rectangles, squares, triangles, etc.). A NN with three active layers can form arbitrary, including non-convex, regions, which means that a Multi-Layer Perceptron (MLP) with three active layers can, in principle, represent any decision boundary.
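To make the "single active layer" limitation concrete: XOR is the classic non-linearly-separable problem, yet one hidden (active) layer plus an output layer handles it. A minimal numpy sketch with hand-picked weights (the weights are illustrative, not learned):

```python
import numpy as np

# XOR inputs and targets: not linearly separable,
# so a single active layer cannot solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

step = lambda z: (z > 0).astype(int)

# Hand-picked weights for one hidden (active) layer of two units:
# h1 fires for OR(x1, x2), h2 fires for AND(x1, x2).
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output unit fires for "OR and not AND", i.e. XOR.
W2 = np.array([1.0, -2.0])
b2 = -0.5

h = step(X @ W1 + b1)
out = step(h @ W2 + b2)
print(out)  # [0 1 1 0]
```

The hidden layer maps the four points into a space where a single line *can* separate them, which is exactly what the extra active layer buys you.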
NNs with more than three layers are usually considered deep neural networks and require more computational resources to train. In addition, extra techniques must be used to ensure that the NN does not overfit (memorize the training data, which makes it less effective, or even useless, on new data patterns) and that it is able to train effectively.
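One common such technique is early stopping: monitor the loss on a held-out validation set and stop training once it stops improving. A minimal sketch of just the stopping logic (the loss values below are illustrative, not from a real training run):

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch to roll back to: the last epoch at which the
    validation loss improved, once it has failed to improve for
    `patience` consecutive epochs."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# Validation loss drops, then rises as the net starts overfitting.
losses = [0.9, 0.6, 0.4, 0.35, 0.37, 0.41, 0.48]
print(early_stopping(losses))  # 3
```

In practice you would also snapshot the weights at the best epoch; frameworks ship this as a callback, but the idea is no more than the loop above.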
That being said, using an MLP with three layers (2 hidden layers + 1 output layer) in deep learning gives the network the ability to separate the filtered data using more complex shapes than a single fully connected layer can. Which one is better, however, depends on the data you are using, and the better practice is to try both techniques and compare the results. For instance, in Convolutional Neural Networks you can either increase the number of convolutional layers or the number of fully connected layers at the end, depending on what you are trying to achieve. In my experience, the first option seems more popular.
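For reference, the "2 hidden + 1 output" head discussed above is just three matrix multiplies with non-linearities in between. A numpy forward-pass sketch (the layer sizes 16/16 and the ReLU/sigmoid choices are arbitrary illustrations, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, params):
    """Forward pass: 2 hidden layers + 1 output layer."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(x @ W1 + b1)        # first hidden layer
    h2 = relu(h1 @ W2 + b2)       # second hidden layer
    return sigmoid(h2 @ W3 + b3)  # output layer

# 2 inputs -> 16 -> 16 -> 1 output; sizes are illustrative.
sizes = [2, 16, 16, 1]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((4, 2))     # a batch of 4 input points
probs = mlp_forward(x, params)
print(probs.shape)  # (4, 1)
```

Swapping this head in after a convolutional feature extractor, versus adding more convolutional layers instead, is exactly the trade-off described above.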
* Please note that active layers are usually hidden layers or output layers. Input layers are not considered active, as they just forward the input data into the network without performing any calculations.
A single-layer neural network is the simplest form of neural network: a single layer of input nodes sends weighted inputs to a subsequent layer of receiving nodes, or in some cases a single receiving node. This single-layer design was part of the foundation for systems that have since become much more complex. A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN).
A Multi-Layer Perceptron (MLP) contains one or more hidden layers, in addition to one input layer and one output layer. While a single-layer perceptron can only learn linear functions, a multi-layer perceptron can also learn non-linear functions. The input layer typically also includes a bias node whose value is fixed at 1.
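To see the single-layer case actually learn a linear function, here is the classic perceptron learning rule on AND (which, unlike XOR, is linearly separable), with the bias handled as an extra input fixed at 1 as mentioned above:

```python
import numpy as np

# Linearly separable target: AND. A single-layer perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

# Append a bias input fixed at 1.
Xb = np.hstack([X, np.ones((4, 1))])
w = np.zeros(3)

# Classic perceptron learning rule; guaranteed to converge
# when the data is linearly separable.
for _ in range(20):
    for xi, ti in zip(Xb, y):
        pred = int(xi @ w > 0)
        w += (ti - pred) * xi

preds = [int(xi @ w > 0) for xi in Xb]
print(preds)  # [0, 0, 0, 1]
```

Run the same loop on the XOR targets and it never converges, which is precisely the limitation the hidden layers remove.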
Essentially, a single-layer NN is usually considered a machine-learning model, while a NN with a depth greater than 3 is considered a deep-learning model.
Additionally, as the depth of a NN model increases, so does the number of layers of abstraction it learns. Thus, the greater the depth of the NN, the more computational resources are required.