What are the best techniques for geospatial datasets? Also, are there techniques that are better suited to stacks of models than to a single model?
So far I have only heard of activation maximization and Network Dissection.
https://arxiv.org/pdf/1804.11191.pdf
This was also a good paper to follow, though honestly I only understood about half of it. Are there more good resources (preferably practical ones) on the same topic?
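For context, my rough understanding of activation maximization is gradient ascent on the input to maximize a chosen unit's response. A minimal Keras sketch of that idea (the model, layer name, and filter index below are placeholders I picked, not from the paper):

```python
import tensorflow as tf

def activation_maximization(model, layer_name, filter_index, steps=100, lr=10.0):
    """Gradient ascent on the input to maximize one conv filter's mean activation."""
    # Sub-model exposing the activations of the chosen layer.
    feature_extractor = tf.keras.Model(model.inputs, model.get_layer(layer_name).output)
    # Start from low-contrast random noise.
    image = tf.Variable(tf.random.uniform((1, 224, 224, 3)) * 0.2 + 0.4)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activations = feature_extractor(image)
            # Objective: mean activation of the target filter.
            loss = tf.reduce_mean(activations[..., filter_index])
        grads = tape.gradient(loss, image)
        # Normalize the gradient for stable ascent steps.
        image.assign_add(lr * grads / (tf.norm(grads) + 1e-8))
    return image.numpy()[0]

# Hypothetical usage with a stock ImageNet model:
# model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
# img = activation_maximization(model, "block5_conv3", filter_index=42)
```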
Here are some examples using MATLAB (click an image on that page to see its code example): https://au.mathworks.com/help/deeplearning/examples.html?category=deep-learning-tuning-and-visualization
For model stacking, you can check AutoGluon (see the preprint "AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data"); many AutoML methods use stacking to improve the final results. A minimal usage sketch follows below. In terms of feature visualization, if you mean interpretability for computer vision tasks, this blog post is worth reading: https://bair.berkeley.edu/blog/2020/04/23/decisions/
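If it helps, here is a hedged sketch of stacking with AutoGluon (the file name and label column are placeholders; check the current docs for the exact fit arguments):

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# 'train.csv' and the 'target' label column are placeholders for your data.
train_data = TabularDataset("train.csv")

# num_bag_folds / num_stack_levels enable bagging and multi-layer stacking,
# so the final predictor is a stacked ensemble rather than a single model.
predictor = TabularPredictor(label="target").fit(
    train_data,
    num_bag_folds=5,
    num_stack_levels=1,
)

# The leaderboard lists the base models alongside the stacked ensemble.
print(predictor.leaderboard())
```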
I have been digging into CNN explainability for the past months. Most projects are suited to only one purpose, or are damn slow per image. I found an implementation of Grad-CAM, Grad-CAM++ and Score-CAM from a Japanese developer on GitHub. It is awesome: no dependencies, very easy to integrate into your Python stack, and these methods can be run at scale over many images in a reasonable amount of time.
But it is for Python/TensorFlow (Keras); I can't help with other frameworks or programming languages.
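For anyone who wants to see the core idea without hunting down that repo, here is a from-scratch Grad-CAM sketch in TensorFlow/Keras. This is my own minimal version, not the implementation mentioned above; the layer name and input shape are assumptions:

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Return an (H, W) heatmap in [0, 1] for one image of shape (h, w, 3)."""
    # Model mapping the input to (conv feature maps, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(score, conv_out)
    # Global-average-pool the gradients to get per-channel weights.
    weights = tf.reduce_mean(grads, axis=(1, 2))
    # Weighted sum of feature maps, then ReLU, as in the Grad-CAM paper.
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Hypothetical usage: heatmap = grad_cam(model, img, "block5_conv3")
```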
Other notes:
- SHAP's APIs are not straightforward to use, and the method is damn slow, so it is not very useful for quick iterations on your work (see the sketch after this list).
- Backprop-based techniques have drawn criticism for not actually working as claimed.
- LIME has drawn similar criticism.
- Saliency maps I did not find useful at all.
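To make the SHAP point concrete: the model-agnostic KernelExplainer re-evaluates the model on many perturbed inputs for every row it explains, which is exactly where the slowness comes from. A self-contained toy sketch (the data and model are made up for illustration):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data and model, just to make the example runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer needs a background sample and a prediction callable.
explainer = shap.KernelExplainer(model.predict, X[:100])

# Each explained row costs `nsamples` model evaluations, hence the slowness.
shap_values = explainer.shap_values(X[:5], nsamples=200)
print(np.asarray(shap_values).shape)  # (5, 8): one attribution per feature
```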
Most other methods are very much works in progress: either there is no public software, or you have to implement them yourself. The CAM-based methods mentioned above are easy to integrate, work more or less as expected, and are relatively quick per image.