26 January 2021

How is the depth and width of a neural network in machine learning chosen for optimal processing?

Given that neural networks are tending to be deeper now (thanks to more efficient code and faster machines), there is greater variability in the number of parameters to choose from. In spite of this 'spoilt for choice' scenario, there must be general guidelines on the number of layers and the widths of those layers, so that networks can be trained more rapidly, data can be evaluated more quickly, and computing resources are not wasted.

Input parameters are normally fixed, such as the number of input nodes and the dynamic range of the data. Output parameters are also fixed, such as the number of output nodes and the required quality of the output. Furthermore, the complexity (parameter count) of a fully connected network grows linearly with its depth but quadratically with its width.
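To make the depth-versus-width trade-off concrete, here is a small sketch (the function name `mlp_param_count` is illustrative, not from any library) that counts the weights and biases of a plain fully connected network. It shows why the parameter budget grows linearly with depth but quadratically with width:

```python
def mlp_param_count(n_in: int, width: int, depth: int, n_out: int) -> int:
    """Weights + biases of an MLP with `depth` hidden layers, all of `width`."""
    # input -> first hidden layer
    count = n_in * width + width
    # each additional hidden -> hidden connection costs width*width weights,
    # which is the quadratic-in-width term; there are (depth - 1) of them,
    # which is the linear-in-depth term
    count += (depth - 1) * (width * width + width)
    # last hidden layer -> output
    count += width * n_out + n_out
    return count

# Doubling depth roughly doubles the count; doubling width roughly quadruples it:
base = mlp_param_count(10, 100, 4, 1)
deeper = mlp_param_count(10, 100, 8, 1)
wider = mlp_param_count(10, 200, 4, 1)
```

Under the 'boundary conditions' described above, `n_in` and `n_out` are fixed by the application, so the designer's free choices are `width` and `depth`, and this asymmetry in cost is one reason depth is often the cheaper way to add capacity.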

Since the above 'boundary conditions' are generally known for an application of neural networks, how does one go about choosing the width and depth of a neural network?

many thanks,

neil
