The classic way to demonstrate the quality of the coding process is through inter-rater reliability. But that is mostly about showing that your procedures can be used "objectively" by different coders. As the name implies, this addresses reliability rather than validity.
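Inter-rater reliability is typically reported as a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch in pure Python, with invented code labels purely for illustration:

```python
# A minimal sketch of Cohen's kappa for two coders.
# The segments and code labels below are invented for illustration;
# in practice each list holds one code per transcript segment.
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders."""
    n = len(codes_a)
    # Observed agreement: fraction of segments coded identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement by chance, from each coder's marginal frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

coder_a = ["cope", "support", "cope", "loss", "support", "cope"]
coder_b = ["cope", "support", "support", "loss", "support", "cope"]
print(round(cohen_kappa(coder_a, coder_b), 3))  # 0.739
```

Values above roughly 0.6 to 0.8 are conventionally read as substantial agreement, but note that kappa only speaks to reliability, not to any of the forms of validity discussed below.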
Beyond that, you'll need to clarify what kind of validity you are seeking. Within survey research, validity usually refers to the measurement process: did the respondent understand the question in the same way that you meant it? If not, you have an invalid answer. This is also known as construct validity.
Alternatively, in experimental research, the typical issue is whether you have made an appropriate interpretation of the results. In particular, are there competing alternatives that could explain your results just as well? This is also known as internal validity.
If you are going to assess trustworthiness, you have to plan for the whole process: the aim, the research questions, the design, the sample, the methods for gathering and analyzing the interviews, and the presentation of your results. The most important thing is to have a coherent plan for handling that overall process. The coding itself usually succeeds better when the codes are not too short.