Covariance between the errors of two (or more) indicators of the same construct suggests that those indicators are related in some way beyond the common factor. In typical applications of CFA, the indicators are reflective and their errors are assumed to be independent, so such a covariance can seem questionable.
Although it is not ideal, my understanding is that such a covariance may be included only if it substantially helps with model specification and/or model fit. IMHO, as long as such covariances are kept to a minimum and you can address their implications, they should be acceptable.
In short: best avoided. But if you can't avoid it, keep it to a minimum (and make sure it is theoretically reasonable).
Generally, an error covariance suggests that the two indicators share systematic variance that is not explained by the common factor.
A negative covariance between error variables may indicate that the single-factor model overestimates the relationship between the two indicators: the empirical covariance is then smaller than the model-implied covariance. One reason could be that there are actually two correlated factors rather than one. Another could be a method effect that affects the first indicator positively and the second negatively.
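The arithmetic behind that point can be sketched in a few lines. Under a standardized one-factor model, the model-implied covariance between two indicators is the product of their loadings; a negative error covariance, if it truly exists in the population, pulls the empirical covariance below that product. All numbers here are hypothetical, chosen purely for illustration:

```python
# Why a negative error covariance implies empirical < model-implied covariance
# in a one-factor model (hypothetical standardized values).

lam1, lam2 = 0.8, 0.7   # factor loadings of the two indicators
theta12 = -0.15         # (negative) covariance between their errors

# A plain one-factor model implies a covariance of lam1 * lam2.
implied = lam1 * lam2

# If the errors truly covary, the population covariance also carries theta12.
empirical = implied + theta12

# The single factor overestimates the relationship between the indicators.
assert empirical < implied
```

Freeing the error covariance parameter lets the model absorb exactly this gap, which is why it improves fit but should be justified substantively (e.g., a second factor or a method effect) rather than added mechanically.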
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569. https://doi.org/10.1146/annurev-psych-120710-100452