Memory components in memory-augmented neural networks (MANNs) tend to be fairly basic. I know of models with multiple attention components, but are there deep learning (DL) architectures that employ a multiple-component approach to memory, as in Baddeley & Hitch (1974) and Baddeley (2000)?
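For context, the multiple-component idea I have in mind can be sketched outside any particular DL framework: a capacity-limited short-term buffer, a content-addressable long-term store, and an episodic buffer that binds the two, loosely mirroring Baddeley's model. This is a toy illustration of the concept, not an implementation from any published MANN; all class and method names are invented for the example.

```python
from collections import deque
import numpy as np

class MultiComponentMemory:
    """Toy analogue of a multi-component working memory:
    a capacity-limited short-term buffer, a long-term key-value store,
    and an episodic buffer that binds recent items with retrievals."""

    def __init__(self, dim, stm_capacity=4):
        self.dim = dim
        # Short-term store: oldest item is evicted when capacity is exceeded.
        self.stm = deque(maxlen=stm_capacity)
        # Long-term store: content-addressable key-value memory.
        self.ltm_keys = []
        self.ltm_vals = []

    def write_stm(self, item):
        self.stm.append(np.asarray(item, dtype=float))

    def consolidate(self, key, value):
        """Move information into the long-term store."""
        self.ltm_keys.append(np.asarray(key, dtype=float))
        self.ltm_vals.append(np.asarray(value, dtype=float))

    def read_ltm(self, query):
        """Soft content-based read: softmax over dot-product scores."""
        keys = np.stack(self.ltm_keys)
        scores = keys @ np.asarray(query, dtype=float)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ np.stack(self.ltm_vals)

    def episodic_buffer(self, query):
        """Bind current short-term contents with a long-term retrieval."""
        stm_summary = (np.mean(self.stm, axis=0)
                       if self.stm else np.zeros(self.dim))
        ltm_read = (self.read_ltm(query)
                    if self.ltm_keys else np.zeros(self.dim))
        return np.concatenate([stm_summary, ltm_read])

mem = MultiComponentMemory(dim=3, stm_capacity=2)
mem.write_stm([1, 0, 0])
mem.write_stm([0, 1, 0])
mem.write_stm([0, 0, 1])          # evicts the oldest item
mem.consolidate([1, 0, 0], [5, 5, 5])
out = mem.episodic_buffer([1, 0, 0])
print(out)                         # 6-dim binding of STM summary and LTM read
```

The question, then, is whether any published architectures give each such store its own learned read/write mechanism, rather than a single uniform memory matrix.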
