22 October 2024

Could someone please help me find a proof? Contributions will be referenced.

Given a spiking neural network (SNN) with n sparsely connected nodes, what is the best way to make new connections and break existing ones so that the mapping the network computes reaches the global minimum of its loss function?

My SNN's nodes are arranged in layers indexed by depth, and their labels cycle in a modular manner. Specifically, they are grouped in 12's like a musical keyboard, and each node is labeled with the key it would correspond to on that keyboard.
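For concreteness, here is a minimal sketch of that labeling scheme in Python (the function name `node_label` and the `@L<depth>` label format are my own invention, not from any library): node indices map to pitch classes modulo 12, so every layer repeats the keyboard layout.

```python
# Hypothetical illustration of the modular, keyboard-style labeling.
# Node i within a layer gets the pitch class i % 12, so labels cycle
# C, C#, D, ..., B, C, C#, ... like keys on a keyboard.

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def node_label(depth: int, index: int) -> str:
    """Label of the node at position `index` within layer `depth`."""
    return f"{PITCH_CLASSES[index % 12]}@L{depth}"

# Example: the first 14 nodes of layer 0 wrap around after B.
print([node_label(0, i) for i in range(14)])
# ['C@L0', 'C#@L0', ..., 'B@L0', 'C@L0', 'C#@L0']
```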

When nodes in the spiking network fire, together they form one big chord. The goal is to maximize the consonance of this chord, with a reward proportional to the consonance. (This should teach the network to form activations that represent meaningful chords from meaningful scales, and in principle the whole of music theory.)
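To make "consonance of the chord" concrete, here is one possible scoring sketch. The interval-class weights in `IC_CONSONANCE` are illustrative values I chose, not taken from any particular psychoacoustic model; the reward would be proportional to this score.

```python
# Toy consonance score for the chord formed by the firing nodes:
# every pairwise interval class gets an illustrative weight, and the
# chord's score is the mean over all pairs.
from itertools import combinations

# Hypothetical weights: unison/octave highest, fourth/fifth high,
# semitone and tritone lowest.
IC_CONSONANCE = {0: 1.0, 1: 0.0, 2: 0.3, 3: 0.6, 4: 0.7, 5: 0.8, 6: 0.1}

def interval_class(a: int, b: int) -> int:
    """Pitch-class interval folded into the range 0..6."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

def chord_consonance(pitch_classes: list[int]) -> float:
    """Mean pairwise consonance of the chord's pitch classes."""
    pairs = list(combinations(pitch_classes, 2))
    if not pairs:
        return 0.0
    return sum(IC_CONSONANCE[interval_class(a, b)] for a, b in pairs) / len(pairs)

print(chord_consonance([0, 4, 7]))  # C major triad: relatively high (0.7)
print(chord_consonance([0, 1, 2]))  # semitone cluster: relatively low (0.1)
```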

Now suppose we add a rule: nodes from two consecutive states of the network (i.e. two chords) that follow the circle of fifths or fourths become connected. Conversely, if two nodes are already connected but the transition between them goes against either circle, the connection is weakened until it falls below a threshold and is then dropped.
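A minimal sketch of that rule, assuming connection weights live in a plain dict and using hypothetical parameters `strengthen`, `weaken`, and `drop_threshold` (a step of +7 or +5 semitones mod 12 is treated as following the circle of fifths or fourths):

```python
# Toy structural-plasticity rule over two consecutive network states.
# Pairs that follow the circle of fifths/fourths are connected or
# strengthened; all other existing connections are weakened and
# removed once they fall below a threshold.

def circle_of_fifths_update(weights: dict[tuple[int, int], float],
                            prev_chord: list[int],
                            next_chord: list[int],
                            strengthen: float = 0.1,
                            weaken: float = 0.1,
                            drop_threshold: float = 0.05) -> None:
    for a in prev_chord:
        for b in next_chord:
            follows_circle = (b - a) % 12 in (7, 5)  # fifth up or fourth up
            if follows_circle:
                # Connect, or strengthen an existing connection.
                weights[(a, b)] = weights.get((a, b), 0.0) + strengthen
            elif (a, b) in weights:
                # Weaken connections that go against both circles...
                weights[(a, b)] -= weaken
                # ...and drop them once below the threshold.
                if weights[(a, b)] < drop_threshold:
                    del weights[(a, b)]

# Usage: (0, 7) is a fifth up, so it gets created; the existing
# against-circle edge (0, 1) is weakened below the threshold and removed.
W: dict[tuple[int, int], float] = {(0, 1): 0.08}
circle_of_fifths_update(W, prev_chord=[0, 4], next_chord=[7, 1])
print(W)  # {(0, 7): 0.1}
```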

  • Would this optimize consonance even further?
  • Would it lead us to the global minimum of the spiking network's loss function?
  • Here is a bit on why we might think this would work. Suppose we used English words to map hypotheses to proofs. Then there would be a reasoning process needed to produce each proof; we can imagine each data point as a reasoning process indexed by a hypothesis and a proof.

    Were we to follow the grammar of the language the reasoning process was written in more and more closely (read: optimized consonance), we would get better and better proofs, up to a limit. Is this limit the global minimum, over the number of words/neurons in the language, for the particular grammar I chose?
