I'm trying to find a sensible embedding of neural networks in Euclidean R^3 such that the distances between nodes in the embedding preserve the nodes' graph-theoretic distances, and such that the embedding is continuous.

This will enable me (once I have the embedding) to find the manifold on which neural networks live, then to find field equations (in the form of a potential energy defined at each point on the manifold) to understand the flow of firing frequencies through this field, and finally to develop a "neurophysics" from the dynamics of particles at different levels of scale on this field. I hope to see things like a Gauss's-law equivalent emerge in my neural surfaces, etc. I'm a physicist-meets-neuroscientist.

Now, I'm only a first-year undergrad and I don't have much formal background, only intuition.

My question is about the practical difference between two methods of embedding:

1. Arbitrarily project the nodes into Euclidean space (something like: find a random projection into R^n, do PCA on it, and then take the R^3 eigenprojection). With this projection into Euclidean space in hand, preserve graph-theoretic distances by fitting a metric to the projection.

2. Use Kamada-Kawai or equivalent spring model for the embedding.
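For concreteness, here is a minimal sketch of the comparison I have in mind, using `networkx` and `scikit-learn` on a toy graph (the graph and the dimension 10 for the random projection are just placeholder choices, not part of my actual setup). It measures how well each embedding preserves graph-theoretic distance via a scale-invariant stress score:

```python
import networkx as nx
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA

# Hypothetical stand-in for a real neural network graph.
G = nx.connected_watts_strogatz_graph(30, 4, 0.1, seed=0)
D = np.asarray(nx.floyd_warshall_numpy(G))  # graph-theoretic distances

def stress(emb, D):
    """Mismatch between Euclidean and graph distances, after the best
    uniform rescaling of the embedding (so units don't matter)."""
    E = squareform(pdist(emb))
    s = (E * D).sum() / (E * E).sum()
    return np.sqrt(((s * E - D) ** 2).sum() / (D ** 2).sum())

# Method 1: random coordinates in R^10, then PCA down to R^3.
rng = np.random.default_rng(0)
X = rng.normal(size=(G.number_of_nodes(), 10))
emb1 = PCA(n_components=3).fit_transform(X)

# Method 2: Kamada-Kawai spring-model layout in 3D. It directly
# minimizes the mismatch with shortest-path distances.
pos = nx.kamada_kawai_layout(G, dim=3)
emb2 = np.array([pos[v] for v in G.nodes()])

# Kamada-Kawai should score much lower, since that mismatch is
# exactly its objective; the random projection ignores the graph.
print(stress(emb1, D), stress(emb2, D))
```

My understanding is that method 2 optimizes the distance-preservation criterion directly, while method 1 only gets it after a separate metric-fitting step, but I'd like to know what this means in practice.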

What will the two approaches do for me differently? 
