I am trying to implement a fault detection method discussed by Ryoya et al. in this paper, "https://ieeexplore.ieee.org/document/8566578", and integrate it with a solid-state circuit breaker design I have developed to control a DC microgrid distribution system.
Now I have a question that may sound a little naive. The paper uses the current differential, i.e. the change of current over time (di/dt), to identify faults. My question is: how do I differentiate a simple current change due to load switching or fluctuations from a fault-driven change in current?
Is the difference going to be obvious because the rate of change due to a fault is dramatically larger in magnitude? And if so, roughly how many times larger should I expect the fault di/dt to be compared with a normal load change?
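For reference, this is roughly the check I am planning to run on the sampled line current. It is only a minimal sketch under my own assumptions: the sample period, the threshold value, and the function name are mine, not taken from the paper.

```python
# Minimal sketch of a di/dt threshold check on sampled current.
# SAMPLE_PERIOD and FAULT_DIDT_THRESHOLD are placeholders I would
# have to tune for my own system; they are not from the paper.

SAMPLE_PERIOD = 10e-6          # assumed 10 us current sampling interval
FAULT_DIDT_THRESHOLD = 5e5     # A/s, placeholder trip level

def detect_fault(i_prev: float, i_now: float) -> bool:
    """Return True if the estimated di/dt exceeds the trip threshold."""
    didt = (i_now - i_prev) / SAMPLE_PERIOD
    return abs(didt) > FAULT_DIDT_THRESHOLD

# Example: a 0.2 A step between samples gives di/dt = 2e4 A/s,
# which stays below the placeholder fault threshold.
print(detect_fault(10.0, 10.2))
```

So essentially I am asking how far apart the two cases (load change vs. fault) typically are, so I know whether a single threshold like this is even a sensible starting point.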
Thanks a lot in advance.