The past few weeks I've been analyzing the particular solutions generated by my ANN. I've seen that even for simple tasks like a delayed XOR function, the solutions can be highly convoluted and distributed across many different neurons, making it really difficult for the human eye to discern what specific computations are happening across the network. (See the weight matrix below, which solves a delayed XOR task. The solution happens to be beautiful, but it is very complex and highly distributed for such a simple function.)
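For readers unfamiliar with the task, here is a minimal sketch of one way a delayed XOR task can be defined. The single binary input stream, the delay of 3 steps, and the function name `delayed_xor_sequence` are my own illustrative assumptions, not necessarily the exact formulation used to train the network whose weight matrix is shown below.

```python
import numpy as np

def delayed_xor_sequence(length=200, delay=3, seed=0):
    """Generate one (input, target) pair for a delayed XOR task.

    target[t] = input[t] XOR input[t - delay]; the first `delay` targets
    are undefined and simply set to 0 here.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=length)    # random binary input stream
    y = np.zeros(length, dtype=int)
    y[delay:] = x[delay:] ^ x[:-delay]     # XOR of the input with its delayed copy
    return x, y

x, y = delayed_xor_sequence()
print(x[:10], y[:10])
```

The point is that the target at each step depends on input from several steps in the past, so any network solving it has to carry information forward in time, which is exactly where the distributed, hard-to-read solutions show up.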
These observations signal to me just how daunting the task of reverse engineering real neuronal networks is, where:
- I can't control every aspect of my experiment
- There are stochastic elements
- The complexity is orders of magnitude beyond that of simple ANNs
- The specific computational function isn't known
- The physiological details aren't fully known
- etc.
Hence, it is clear to me that work needs to be done on developing a theory for making sense of distributed computation in dynamical networks. Has there been work on this? Am I the first one to ask this question (or is this Dunning-Kruger in motion)?