If you are asking whether it is legitimate to ignore "impossible" values given in response to a query (e.g., Q: "How much do you weigh?" A: "3 kilos"), the answer is yes. I would keep a count of such instances, however, as this can be instructive regarding: (a) whether respondents understood the queries; (b) whether respondents were being forthright in their responses; and (c) whether clerical errors may have been present in the responses.
David's if clause is important. For example, I work with response times and occasionally get a negative one, or one that suggests the person left the computer on and went to recess (I often work with kids). On the other side of David Morse's if clause, if you are asking whether it is legitimate to ignore "values that don't seem good given my theory", the answer is no.
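To make this concrete, here is a minimal sketch in Python with pandas of the practice described in the two answers above. Everything in it is hypothetical: the column names, respondent IDs, and plausibility bounds are made up for illustration, and the bounds should be fixed in advance from the instrument and the population, never from your theory. The point is that "impossible" values are blanked rather than silently dropped, and a record of what was flagged is kept, as David suggests.

```python
# Hypothetical sketch of screening "impossible" values: blank them rather
# than silently dropping them, and keep a record of everything flagged.
import pandas as pd

# Hypothetical survey data: self-reported weight (kg) and a response time (s).
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5],
    "weight_kg":  [72.0, 3.0, 88.5, 64.2, 410.0],
    "rt_seconds": [12.4, -0.8, 9.1, 3600.0, 10.2],
})

# Plausibility bounds are assumptions for illustration; they should come
# from the instrument and the population, not from the hypothesis.
BOUNDS = {"weight_kg": (20.0, 300.0), "rt_seconds": (0.2, 600.0)}

audit_log = []  # one (respondent, variable, value) entry per flagged value
for col, (lo, hi) in BOUNDS.items():
    impossible = ~df[col].between(lo, hi)
    for rid, val in zip(df.loc[impossible, "respondent"], df.loc[impossible, col]):
        audit_log.append((rid, col, val))
    # Blank only the offending value, not the whole row, so the
    # respondent's other answers remain usable.
    df.loc[impossible, col] = float("nan")

print(len(audit_log), "values flagged")  # 4 values flagged
for entry in audit_log:
    print(entry)  # e.g. (2, 'weight_kg', 3.0)
```

The audit log then speaks directly to points (a) through (c) above: a cluster of flags on one variable may signal a misunderstood question, while a cluster on one respondent may signal carelessness or a clerical error.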
Gabriel Sanchez, you have received some good advice up above, but I confess that your question confuses me for three reasons. First, you mention content analysis, which usually refers to qualitative research, but you also mention analyses using statistics, which are part of quantitative research. To me, therefore, you are mixing up two different methodologies. Maybe I'm just being a bit narrow-minded about what content analysis can refer to, however, and perhaps there isn't a conflict.
Second, I'm not sure what you mean by "filter out". If it means removing data that would compromise the validity of your statistical analyses, then yes, removing those data is a good idea.
However, as some have pointed out above, getting rid of "inconvenient" data because they don't suit your predetermined notions is a very undesirable practice, though I'm sure I'm not the only one who has seen honours and postgraduate students attempt it when conducting content analyses, and, I suspect, they often get away with it.
I'd add that, as Heather Douglas has suggested above, keeping track of which data are useful, and therefore of which are problematic or anomalous, can be desirable. In my view, doing so can help you, and others if you make them aware of it, to conduct better research in future.
My third reason for confusion is that you seem to be asking about both data cleaning and data acquisition (collection). They are different kettles of fish, so I'm not sure what your "target" is.
Despite my confusion, I'd repeat that I think you've been given some good advice up above.