Artificial neural networks have rather rigid designs. They are, of course, loosely inspired by biology and attempt to build a mathematical model of real neural networks, but our understanding of real neural networks is insufficient for building exact models. Consequently, we cannot construct exact models, or anything that comes even "near" real neural networks.
As far as I know, all artificial neural networks are far removed from real neural networks. Standard, classic fully-connected MLPs have no counterpart in biology. Recurrent neural networks lack real neuroplasticity: every neuron of an RNN has the same "feedback architecture", while real neurons store and share their information rather individually (the sketch below makes this concrete). Convolutional neural networks are effective and popular, but (for example) image processing in the human brain involves only a few convolution-like stages, while modern solutions (like GoogLeNet) already use tens of layers. And although they produce great results for computers, they are not even close to human performance, especially if we consider the "per-layer performance", since we need a fairly large number of layers and a lot of data reduction compared to real neural networks.
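To make the "same feedback architecture" point concrete, here is a minimal sketch of a vanilla (Elman-style) RNN step in Python/NumPy. All names and sizes are my own illustrative choices, not from any particular library. The point is that every hidden unit is updated by one shared recurrence, so there is no per-neuron memory or individually routed feedback:

```python
import numpy as np

# Minimal vanilla RNN cell: every hidden unit is updated by the SAME
# shared recurrence h_t = tanh(W_x x_t + W_h h_{t-1} + b).
# The "feedback architecture" is one weight matrix applied uniformly
# to the whole hidden state; no neuron has its own feedback wiring.

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 8                                   # illustrative sizes
W_x = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (shared feedback)
b = np.zeros(n_hidden)

def rnn_step(x, h_prev):
    """One time step: the identical update rule for all hidden units."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

h = np.zeros(n_hidden)
for t in range(5):                       # unroll over a short input sequence
    x_t = rng.normal(size=n_in)
    h = rnn_step(x_t, h)

print(h.shape)  # (8,): one shared state vector, not per-neuron memories
```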
Additionally, to my knowledge, even modular, self-extending / self-restructuring artificial neural networks are rather "fixed and static" compared to the enormous adaptability of real neural networks. A biological neuron typically has thousands of dendrites connecting it to a huge variety of different areas and other neurons, whereas artificial neural networks are far more "straightforward".
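As a toy illustration of how coarse this structural adaptation is (again, the function and names are hypothetical, not from a specific library): "growing" a dense layer just appends a fully connected row to a weight matrix, so the new unit is wired to everything in the adjacent layer in exactly the same all-to-all pattern, with no analogue of a neuron growing individual dendrites to selectively chosen targets:

```python
import numpy as np

# Even a "self-extending" dense layer only grows a uniform weight matrix:
# adding a unit appends one row, and the new unit is connected to every
# input indiscriminately, in the same all-to-all pattern as all the others.

rng = np.random.default_rng(1)

W = rng.normal(scale=0.1, size=(8, 4))   # hidden(8) x input(4) weights

def add_hidden_unit(W):
    """Structural 'plasticity' in an MLP: append one fully connected row."""
    new_row = rng.normal(scale=0.1, size=(1, W.shape[1]))
    return np.vstack([W, new_row])

W = add_hidden_unit(W)
print(W.shape)  # (9, 4): the new unit sees every input, indiscriminately
```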
So, is there anything we can learn about the human brain / real neural networks from artificial neural networks? Or are ANNs merely an attempt to create software that performs better than classic, static algorithms (or even does things where such algorithms fail)?
Can someone supply (preferably scientific) sources on this topic?