Hello everyone,

I have been working in the field of adversarial robustness for a few months now. I have been reading a lot of the literature on adversarial robustness, and a few questions have come up that I feel have not been satisfactorily answered:

1. Are we able to properly frame adversarial robustness? (For reference, the most common formal framing is reproduced after question 3.)

2. It seems to me that the real world (take, for example, a traffic scene) is very high-dimensional, while the images we capture of it are comparatively low-dimensional. If that is true, might it be that in converting the high-dimensional scene into a low-dimensional representation we are losing critical information, and that this loss is part of what causes adversarial issues in DL models? (A small numpy sketch of this intuition follows question 3.)

3. Why are we not trying to address adversarial robustness from a cognitive approach? It seems that nature, and the human brain in particular, is an adversarially robust system. If that is so, then I think we need to investigate whether artificial models trained on principles from cognitive science are more or less robust than standard DNNs.
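
On question 1: for reference, the framing most commonly used in the literature is the robust optimization objective of Madry et al. (2018), which asks for parameters that minimize the worst-case loss over a small norm ball around each input:

\min_\theta \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big( f_\theta(x + \delta),\, y \big) \Big]

Here f_\theta is the classifier, \mathcal{L} the loss, and the inner maximum ranges over perturbations \delta with p-norm at most \epsilon. Part of what I am asking is whether this norm-bounded threat model is really the right framing, or only the mathematically convenient one.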
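
On question 2: here is a minimal numpy sketch of the information-loss intuition. The names "camera" and "scene" are purely illustrative assumptions: a fixed linear projection stands in for the imaging process, and any change to the scene lying in the projection's null space is a large change in "reality" that is completely invisible in the captured low-dimensional representation.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption): "reality" lives in 10,000 dimensions,
# and the "camera" is a fixed random linear projection down to 100 dimensions.
D_high, D_low = 10_000, 100
camera = rng.standard_normal((D_low, D_high)) / np.sqrt(D_high)

scene = rng.standard_normal(D_high)

# Construct a perturbation in the camera's null space: a large change in
# "reality" that the low-dimensional capture cannot see at all.
direction = rng.standard_normal(D_high)
visible_coords = np.linalg.lstsq(camera.T, direction, rcond=None)[0]
direction = direction - camera.T @ visible_coords   # strip the visible part
direction /= np.linalg.norm(direction)

perturbed_scene = scene + 5.0 * direction

print(np.linalg.norm(perturbed_scene - scene))                    # 5.0: big change in "reality"
print(np.linalg.norm(camera @ perturbed_scene - camera @ scene))  # ~0: invisible in the image

This only shows one direction of the loss, but it makes concrete how much structure a fixed low-dimensional capture discards.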

Sometimes it looks as if everything in this universe has a fundamental geometric configuration. Adversarial attacks damage only the outer configuration, which is enough to make models misclassify, but the fundamental geometric configuration, the underlying manifold structure of the data, is not hampered by the attack.
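
To make the "damaging the outer configuration" point concrete, here is a minimal PyTorch sketch of the simplest such attack, FGSM (Goodfellow et al., 2015). The model, inputs, and labels in the usage comment are assumptions for illustration; the point is that a perturbation bounded by a tiny eps, imperceptible to a human, is often enough to flip the prediction even though nothing about the underlying scene has changed.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # One-step Fast Gradient Sign Method: move every pixel by +/- eps
    # in whichever direction increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` is a trained classifier and (x, y) a labelled
# batch of images scaled to [0, 1]; eps = 8/255 is a common budget.
# x_adv = fgsm_attack(model, x, y, eps=8/255)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions often disagree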

Are we fundamentally missing something?
