Yes. Fine-tuning on the new dataset should work (and not lead to catastrophic forgetting), provided that the new dataset has only slightly drifted with respect to the original dataset used to train the network.
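As a minimal sketch of what that typically looks like (this is one common recipe, not the only one): start from the pretrained weights, optionally freeze the early layers, and train on the new data with a learning rate much smaller than the one used originally. The synthetic `new_loader` and `num_classes` below are placeholders for your actual drifted dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Stand-in for the new, slightly drifted dataset (replace with real data).
num_classes = 10
new_data = TensorDataset(torch.randn(64, 3, 224, 224),
                         torch.randint(0, num_classes, (64,)))
new_loader = DataLoader(new_data, batch_size=16, shuffle=True)

# Start from the pretrained weights rather than a random initialization.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze everything except the last block and the classifier,
# so most of the previously learned features stay untouched.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the classification head for the (possibly changed) label set.
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
# A learning rate much smaller than the original one keeps updates small
# and helps preserve what the network has already learned.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4, momentum=0.9)

model.train()
for epoch in range(3):  # a few epochs are often enough for a small drift
    for inputs, labels in new_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```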
It is possible to incrementally train deep neural networks, and other models as well (SVMs among them). That does not mean the approaches carry over identically from one model type to another, though, which is why drawing a parallel between them is not necessarily useful.
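To illustrate the non-deep-learning side with a hedged example: scikit-learn's `SGDClassifier` with hinge loss behaves like a linear SVM and exposes `partial_fit` for incremental updates on successive batches. The data here is synthetic and only meant to show the mechanics.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])

clf = SGDClassifier(loss="hinge")  # linear SVM trained by SGD

# First batch: the full set of classes must be declared on the first call.
X0 = rng.normal(size=(200, 5))
y0 = (X0[:, 0] > 0).astype(int)
clf.partial_fit(X0, y0, classes=classes)

# Later batch from a slightly drifted distribution: update without retraining.
X1 = rng.normal(loc=0.2, size=(200, 5))
y1 = (X1[:, 0] > 0.2).astype(int)
clf.partial_fit(X1, y1)

print("accuracy on new batch:", clf.score(X1, y1))
```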
In particular, what the changes to the training set entail plays an obvious role.