In my experience, I have used the transfer learning paradigm in the biomedical imaging domain. In most cases, transfer learning increases the error either because of a failed fine-tuning phase (giving rise to overfitting), which strongly depends on parameter tuning (e.g., stride, the output function of the overall architecture), or because the image samples are not balanced across the fine-tuning classes of the application domain.
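To make the unbalanced-classes point a bit more concrete, here is a minimal sketch (my own illustration, not from a real project: the class names, counts, and ResNet50/ImageNet backbone are assumptions) of fine-tuning with inverse-frequency class weights in Keras:

```python
# Hedged sketch: compensating for unbalanced fine-tuning classes with
# inverse-frequency class weights. Class names and counts are hypothetical.
import numpy as np
import tensorflow as tf

num_classes = 2                       # e.g. "lesion" vs "healthy"
class_counts = np.array([900, 100])   # strongly imbalanced fine-tuning set
total = class_counts.sum()
class_weight = {i: total / (num_classes * n) for i, n in enumerate(class_counts)}

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # new output head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) batches (not shown)
# model.fit(train_ds, epochs=5, class_weight=class_weight)
```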
Could you please tell me some further details?
I am sorry if my answer looks quite general; I can only give results from my own experience in my application domain.
I think transfer learning fails (increases the error) in two cases:
When the new domain is very different from the original domain (the domain where the original model was trained). In this case, it is better to train a model from scratch instead of performing transfer learning.
When the new domain does not provide enough samples for fine-tuning, which will produce overfitting/underfitting (as previously mentioned by Bruno); see the sketch of both cases below.
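As a rough illustration of these two cases (my own sketch, assuming a Keras ResNet50 backbone; the sample-count threshold and helper name are only illustrative), one could decide between training from scratch and freezing the pretrained backbone like this:

```python
# Hedged sketch of the two failure cases above: a very different target
# domain -> start from random weights; a similar but small target set ->
# keep pretrained weights and freeze the backbone to limit overfitting.
import tensorflow as tf

def build_model(num_classes, domain_is_similar, num_target_samples):
    # Case 1: very different domain -> train from scratch (no pretrained weights)
    weights = "imagenet" if domain_is_similar else None

    base = tf.keras.applications.ResNet50(
        include_top=False, weights=weights,
        input_shape=(224, 224, 3), pooling="avg")

    # Case 2: few target samples -> freeze the pretrained backbone and
    # train only the new classification head to reduce overfitting.
    if weights is not None and num_target_samples < 1000:
        base.trainable = False

    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
```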
Negative transfer also decreases performance. For example, if the source you pick does not cover all the criteria that belong to the target, miscalculations or false predictions may occur. You can observe this in the confusion matrix.
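For instance, a quick sketch (with made-up predictions, just to show the idea) of reading per-class recall from the confusion matrix:

```python
# Hedged sketch: using a confusion matrix to spot target classes that a
# poorly matched source model gets systematically wrong. y_true / y_pred
# are hypothetical predictions on a held-out target set.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 0, 0, 2, 0, 0, 2])  # classes 1 and 2 often misread as 0

cm = confusion_matrix(y_true, y_pred)
per_class_recall = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(per_class_recall)  # low recall on a class hints at negative transfer
```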