There are two main reasons for using the back-propagation neural network as the intelligent system in most intelligent applications: 1) Fast processing: at each training iteration the algorithm computes the gradient of the error function, i.e. the partial derivatives of the error with respect to every weight and bias of the network, which measure how quickly the error changes as the weights and biases change; the gradient is largest in the first iterations, so the error decreases rapidly toward its minimum within a few iterations. 2) High training accuracy: the network computes two error terms, the first at the output layer, which is used to update the weights and biases of the connections entering the output layer, and the second obtained by propagating the output-layer error back through the hidden layer, which is used to update the weights and biases of the connections entering the hidden layer; as a result, the back-propagation neural network can reach a minimum of the error function in few iterations.
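To make the two error terms and the gradient-descent updates concrete, the following is a minimal sketch of back-propagation for a network with one hidden layer. The sigmoid activation, squared-error loss, XOR training data, learning rate, and layer sizes are all illustrative assumptions, not details taken from the text above.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs (assumed XOR data)
T = np.array([[0], [1], [1], [0]], dtype=float)              # example targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # weights/biases of connections into the hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # weights/biases of connections into the output layer
lr = 0.5                                       # learning rate (assumed)

for epoch in range(5000):
    # forward pass
    H = sigmoid(X @ W1 + b1)   # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)   # network output

    # first error term: computed at the output layer
    delta_out = (Y - T) * Y * (1.0 - Y)

    # second error term: the output-layer error propagated back through W2,
    # scaled by the derivative of the hidden activations
    delta_hid = (delta_out @ W2.T) * H * (1.0 - H)

    # gradient-descent updates: the output-layer error updates the connections
    # entering the output layer, the back-propagated error updates the
    # connections entering the hidden layer
    W2 -= lr * H.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(Y, 3))  # outputs should approach [0, 1, 1, 0] for the XOR targets

The update rules correspond to one gradient-descent step per iteration, with the partial derivatives of the error with respect to each weight and bias computed layer by layer from the two error terms described above.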