Even as "Explainable AI" is all the rage, coding/transformation fidelity remains a critical success factor. Whether you are using frequency bands from Fourier transforms, statistical features from wavelet decomposition, or the various filters in convolutional networks, researchers must be able to perform the reverse coding/transformation to check whether they have retained sufficient information for classification. Without this, they are only guessing via trial and error on network architecture. These tools are sorely lacking in Keras on TensorFlow in Python, so I wrote my own. I would like to see them generalized and made into public libraries.
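To illustrate the kind of fidelity check I mean (this is a minimal sketch of the general idea, not code from my notebook), the snippet below forward-transforms a signal with an FFT, keeps only selected frequency bands, inverts the transform, and reports how much of the original signal survives. The function name `band_fidelity` and the example bands are my own illustrative choices.

```python
import numpy as np

def band_fidelity(signal, keep_bands, sample_rate=1.0):
    """Reconstruct `signal` from selected frequency bands and report fidelity.

    keep_bands: list of (low_hz, high_hz) tuples to retain.
    Returns (reconstruction, relative_error).
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    # Zero out every frequency bin outside the retained bands.
    mask = np.zeros_like(freqs, dtype=bool)
    for low, high in keep_bands:
        mask |= (freqs >= low) & (freqs <= high)

    # Inverse transform of the masked spectrum = "reverse coding".
    reconstruction = np.fft.irfft(spectrum * mask, n=len(signal))
    rel_error = np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
    return reconstruction, rel_error

# Example: a 5 Hz + 40 Hz mixture sampled at 200 Hz. Keeping only 0-10 Hz
# discards the 40 Hz component, and the relative error quantifies that loss.
t = np.arange(0, 1, 1 / 200)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
_, err = band_fidelity(x, keep_bands=[(0, 10)], sample_rate=200)
print(f"relative reconstruction error: {err:.3f}")
```

If the reconstruction error is low but classification accuracy is still poor, the problem is the network, not the feature coding; if the error is high, no architecture search will recover the lost information.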
Question: Who can point me to current work in this area, or give advice on next steps in my effort?
Please see this public Jupyter Notebook for an example:
https://www.kaggle.com/pnussbaum/grapheme-mind-reader-panv12-nogpu