I have a sort of "new" configuration for the hardware implementation of an SNN or ANN, which I'm going to detail here. I don't know if it has any worth or not, but it's just a concept in my mind:
We are trying to mimic a neuron cell. Like many other cell types, a neuron has two pumping mechanisms, Na and K, which balance the cell voltage and drive its spiking behavior. The most similar hardware design to that concept (as far as I know) is the 2T2R configuration, which gives us better controllability over our nodes (I guess it is mostly useful for PCM devices, since they have different set and reset behaviors).

Now, I have another suggestion for this configuration: instead of an R-T-|-T-R arrangement, take one R-T as the main node's output and connect the output of a second R-T to the main FET's back-gate (with either the drain or the source tied to the main R-T). Fabricating this design is harder than the previous one, but here we may gain something I would call a learning rate: in each cycle, the effect of the reducing pump differs from its previous state, so as we converge toward the answer we can reduce or increase the effect of backpropagation. (I've put a rough behavioral sketch of what I mean below.)

This design may (or may not) have a drawback, namely that the back-gate bias changes the state of the main FET (its depletion region), but I don't think that becomes very significant. What is your opinion? Does it have any point at all?
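To make the "learning rate" idea a bit more concrete, here is a minimal behavioral sketch in Python (not a SPICE or device-level model). The function names and parameters here (`effective_learning_rate`, `base_eta`, `k`, the linear mapping from back-gate voltage to update strength, and the per-cycle drift of V_bg) are placeholders I made up for illustration, not measured device behavior; the point is only that the second R-T branch, through the back-gate, scales how much each programming cycle moves the main branch's conductance.

```python
# Rough behavioral sketch: the main R-T branch stores a weight as a conductance
# G_main; the auxiliary R-T branch sets a back-gate voltage V_bg on the main FET,
# which scales the strength of each update -- an effective learning rate.
# All numbers and mappings below are assumptions for illustration only.

def effective_learning_rate(v_bg, base_eta=0.1, k=0.5):
    """Toy mapping from back-gate voltage to update strength.
    A more negative V_bg (wider depletion region) throttles the update."""
    return base_eta * max(0.0, 1.0 + k * v_bg)

def train_step(g_main, v_bg, error):
    """One programming cycle: the weight update is gated by the back-gate branch."""
    eta = effective_learning_rate(v_bg)
    return g_main - eta * error  # simple gradient-like update on the conductance

# Example: as the auxiliary R-T drifts more negative each cycle (assumed),
# the same error produces progressively smaller weight updates.
g, v_bg = 1.0, 0.0
for cycle in range(5):
    g = train_step(g, v_bg, error=0.2)
    v_bg -= 0.3  # assumed per-cycle drift of the auxiliary branch
    print(f"cycle {cycle}: G_main = {g:.3f}, V_bg = {v_bg:.2f}, "
          f"eta = {effective_learning_rate(v_bg):.3f}")
```

Running this shows the update shrinking cycle by cycle, which is the "decaying learning rate" effect I am hoping the back-gated R-T pair could give in hardware.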