When tuning the weights and biases of a neural network, the total number of tuning parameters should be calculated from the network structure. Each GA individual then encodes one candidate solution: a vector whose length equals the number of tuning parameters. At each iteration, the ANN computes its output using the parameters specified by the GA. A separate cost function must also be defined, which measures the deviation between the network output and the real target values. The GA minimizes this cost function over several iterations until no further improvement can be made, at which point the optimization terminates.
The optimized values are then placed into the final neural network structure, and your beloved network is ready to use!
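The workflow described above can be sketched in Python roughly as follows. The network size (2-4-1), the XOR data, and the GA operators (tournament selection, blend crossover, Gaussian mutation, elitism) are all illustrative choices of mine, not prescribed by the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (assumption: XOR, chosen only for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID, N_OUT = 2, 4, 1
# Total number of tuning parameters, calculated from the network structure.
N_PARAMS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # = 17

def forward(params, X):
    """Unpack one flat parameter vector into weights/biases and run the net."""
    i = 0
    W1 = params[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = params[i:i + N_HID]; i += N_HID
    W2 = params[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = params[i:i + N_OUT]
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def cost(params):
    """Cost function: deviation (MSE) between output and real target values."""
    return np.mean((forward(params, X) - y) ** 2)

POP, GENS = 60, 300
pop = rng.normal(0.0, 1.0, (POP, N_PARAMS))   # each row = one individual
for _ in range(GENS):
    fitness = np.array([cost(ind) for ind in pop])

    def tournament():
        idx = rng.integers(POP, size=3)        # 3-way tournament selection
        return pop[idx[fitness[idx].argmin()]]

    new_pop = [pop[fitness.argmin()].copy()]   # elitism: keep the best as-is
    while len(new_pop) < POP:
        a, b = tournament(), tournament()
        alpha = rng.random()
        child = alpha * a + (1.0 - alpha) * b  # blend crossover
        mask = rng.random(N_PARAMS) < 0.2      # mutate roughly 20% of genes
        child = child + mask * rng.normal(0.0, 0.1, N_PARAMS)
        new_pop.append(child)
    pop = np.array(new_pop)

best = min(pop, key=cost)
print("final cost:", cost(best))
```

The best evolved parameter vector is then unpacked into the final network exactly as in `forward`, which is the "replacement" step described above.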
Thank you for your valuable answer. I'd add to your explanation the hint that you can also include the activation function parameter(s) (e.g., the sigmoid slope) in the individual with ease.
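A minimal sketch of this hint, under the assumption that the slope is simply appended as the last gene of the genome (the genome layout and numbers here are purely illustrative):

```python
import numpy as np

def sigmoid(x, slope):
    # Parameterized sigmoid: the slope is part of the genome and is
    # evolved by the GA alongside the weights and biases.
    return 1.0 / (1.0 + np.exp(-slope * x))

# Hypothetical genome layout: [weights..., biases..., sigmoid slope]
genome = np.array([0.5, -1.2, 0.3, 2.0])  # last gene = sigmoid slope
slope = genome[-1]
print(sigmoid(np.array([0.0, 1.0]), slope))  # larger slope -> steeper transition
```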
The simplest and most widely used solution is to predefine the ANN architecture in a way that makes sense to you and evolve an array of edge weights and neuron threshold (bias) values. You can use a standard GA for that or, since these form a real-valued array, you can try Evolution Strategies (ES).
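As a rough sketch of the ES route, here is a (1+1)-ES with the classic 1/5th success rule operating on a flat real-valued parameter array. The cost function is a stand-in quadratic so the sketch is self-contained; in practice it would score the network's predictions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in cost over a flat array of edge weights / thresholds
# (assumption: a quadratic bowl, used only to make the sketch runnable).
target = np.array([0.3, -1.1, 0.7, 2.0, -0.5])
def cost(w):
    return np.sum((w - target) ** 2)

# (1+1)-ES: one parent, one Gaussian-mutated child per step,
# with the 1/5th success rule adapting the mutation step size.
w = rng.normal(0.0, 1.0, target.size)
sigma, successes = 0.5, 0
for t in range(1, 1001):
    child = w + rng.normal(0.0, sigma, w.size)
    if cost(child) < cost(w):          # keep the child only if it improves
        w, successes = child, successes + 1
    if t % 20 == 0:                    # adapt sigma every 20 mutations
        rate = successes / 20
        sigma *= 1.22 if rate > 0.2 else 0.82
        successes = 0

print(cost(w))  # converges toward zero on this toy cost
```

The self-adapting step size is the main practical advantage of ES over a plain GA on real-valued arrays like these.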
If you want to do something more clever, like evolving the network architecture, you'll probably want a tool to help you with that. I would recommend Kenneth Stanley's NEAT. It's a powerful tool with many features, and I know a lot of people are using it (I used it myself in the past). If you want to look into something even more clever and powerful, there's also the more advanced version called HyperNEAT (but be advised it may be more difficult to understand and operate effectively).
Hossain Safari's solution covers the basic idea of a static NN being trained by a GA. Remember to do everything you normally would when training a NN, such as preprocessing your input/output data.
Note that GA training of a NN is far less efficient than the usual backpropagation method. There needs to be a good reason to use a GA (or any other evolutionary method) instead of backprop to train your NN, such as a situation in which input/output pairings are not readily available while a fitness function evaluation is simpler to construct.
I have tried the code posted by Mouhammd Alkasassbeh and it works. I tried it in MATLAB R2010b. To make it work, you have to cut the last "function" portion and save it as an m-file named mse_test; then you can run the code. As far as I can tell, the 2016 versions do not need this step.
For me, although the code runs, the challenge remains where exactly to insert it in order to demonstrate the training ability.
If someone has tried it in something like the bodyfat code example of MATLAB, I would be grateful to see it.
I have a MATLAB code that integrates a GA into an ANN or BRANN. But is it necessarily true that the GA-ANN is better than the ANN or BRANN, especially with small input data? My answer is 'no'. With smaller data, we sometimes need many more training runs to obtain an integrated model that is only slightly more accurate; sometimes we just end up with a model of the same accuracy. That is not cost-efficient. Does anyone agree with my idea? Perhaps the advantage of GA-ANN only becomes obvious as the amount of training data increases. Thanks.
I prepared a complete tutorial about optimizing artificial neural networks using a genetic algorithm, with a Python implementation. It is titled "Artificial Neural Networks Optimization using Genetic Algorithm with Python". It is available here:
You can read more in my 2018 book, which covers GA in one of its chapters. The book is cited as “Ahmed Fawzy Gad ‘Practical Computer Vision Applications Using Deep Learning with CNNs’. Dec. 2018, Apress, 978–1–4842–4167–7” and is available here at Springer: https://www.springer.com/us/book/9781484241660
Also here at Amazon: https://www.amazon.com/Practical-Computer-Vision-Applications-Learning/dp/1484241665/
Thanks to all of you for following up on the question and presenting me with some resources to get the answers. I believe I am not the only one benefiting from your expertise. I am grateful.
Python is quite new to me, so I guess I will start by reading up and trying my hand at it, and then I can utilise it for the task.
During my reading, I have observed that, in addition to genetic algorithms, there are other algorithms which can be coupled with ANN for accurate predictions.
1. Can anyone expand on these?
2. Compared to genetic algorithms, do they use the same principles and/or modes of operation?
3. Can these algorithms be implemented in Python?