Unless you are doing open coding, it is advisable to start with a clear coding tree (themes > codes > sub-codes) and to develop a coding policy for creating new codes. As you work through your first two or three transcripts, your initial code tree will be tested against the data. Do annotate sections where you feel a new code or a code change is needed. If one is indeed needed, create the new code or make the change, and write down the rationale in a memo. This will help you keep track of the development of your codes and themes, structure your thought process (coding is already part of the analysis), and save you time in your write-up.
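If it helps to picture the bookkeeping, here is a minimal sketch of such a coding tree with a memo log, in Python. All names (themes, codes, the `add_code` helper) are hypothetical illustrations, not part of any particular software package:

```python
from datetime import date

# Hypothetical codebook: themes > codes > sub-codes
codebook = {
    "access to care": {                                   # theme
        "financial barriers": ["insurance", "out-of-pocket cost"],  # code > sub-codes
        "geographic barriers": ["travel distance"],
    },
}
memos = []  # rationale for every change to the coding tree

def add_code(theme, code, rationale, sub_codes=None):
    """Register a new code under a theme and memo the rationale for the change."""
    codebook.setdefault(theme, {})[code] = sub_codes or []
    memos.append({
        "date": date.today().isoformat(),
        "change": f"added code '{code}' under theme '{theme}'",
        "rationale": rationale,
    })

# While coding transcript 2, a recurring pattern has no home in the tree:
add_code("access to care", "administrative barriers",
         "Transcript 2 repeatedly mentions paperwork delays "
         "not covered by the existing codes.")

print(len(memos))  # the memo log now documents the change
```

The point is only that every change to the tree leaves a dated, reasoned trace you can quote directly in your methods write-up.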
If you are coding as a team, then reaching an agreement on coding policy is crucial: you and your teammates need to be on the same page regarding the meaning of all predefined codes and the modus operandi for creating new codes collegially and registering the changes in memos. Additionally, it is highly advisable to assess your inter-rater reliability, i.e. whether you and your co-coders are actually aligned while coding or, on the contrary, are interpreting codes and subsequently coding data differently. This requires you and your teammates to code the same sample of your transcripts and compare how you coded. Inter-rater reliability can be tested in most qualitative analysis software packages. It is measured with Cohen's kappa coefficient (κ). You can cite this coefficient in your methods section: it showcases the consistency of the coding among different coders/researchers and thus supports the validity of your coding structure.
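For the curious, Cohen's kappa is simple enough to compute by hand. It is κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance given each coder's code frequencies. A minimal sketch in Python (the code labels and data are invented for illustration; in practice your QDA software computes this for you):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who each assigned one code per segment.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each coder's marginals.
    """
    assert len(coder_a) == len(coder_b), "both coders must code the same segments"
    n = len(coder_a)
    # Observed proportion of segments coded identically
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders code the same 10 transcript segments (toy data)
a = ["barrier", "barrier", "coping", "coping", "support",
     "support", "barrier", "coping", "support", "barrier"]
b = ["barrier", "coping", "coping", "coping", "support",
     "support", "barrier", "barrier", "support", "barrier"]
print(round(cohens_kappa(a, b), 2))
```

Values above roughly 0.6 are conventionally read as substantial agreement, though the thresholds are debated; report the value alongside how disagreements were resolved.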
Regards,
Leonard
Here is an interesting article on inter-rater reliability:
The classic source on establishing validity in qualitative research is Lincoln & Guba (1985), Naturalistic Inquiry. They suggest several techniques, such as various forms of triangulation as well as member checking.
Fereday, J. and Muir-Cochrane, E. (2006) Demonstrating Rigor Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development, International Journal of Qualitative Methods, 5, 1, pp. 80-92.
Krefting, L. (1991) Rigor in qualitative research: The assessment of trustworthiness, American journal of occupational therapy, 45, 3, pp. 214-222.
Nowell, L. S., Norris, J. M., White, D. E. and Moules, N. J. (2017) Thematic Analysis: Striving to Meet the Trustworthiness Criteria, International Journal of Qualitative Methods, 16, 1, pp. 1-13.
Ryan, G. W. and Bernard, H. R. (2003) Techniques to Identify Themes, Field Methods, 15, 1, pp. 85-109.
Can a thematic analysis maintain validity and reliability if there is only one researcher?: https://www.researchgate.net/post/Can_a_thematic_analysis_maintain_validity_and_reliability_if_there_is_only_one_researcher
1. Going back to the majority of your interviewees to let them see their responses "or themselves" in the themes. Once they agree, validity is established. If there are disagreements, go back to the codes and transcripts so you can re-read and re-interpret. Once the modification is done, you can commence validation again. This process continues until you get majority agreement.
2. Presenting the themes to people with the same experience who were not included in the interviews. The validation here should focus on whether they recognize the experience captured in the themes. Note that the people selected should have the same experience as those included in the actual interviews.
3. Presenting the themes to expert qualitative researchers for verification. The verification necessitates the presentation of sample interpretations. If there are some gray areas, the experts can ask for more supplementary documents.
In synthesis, the validity schemes are called (in order of presentation):
1. verification by the same group
2. cross-validation
3. critical friend technique
This validation of themes follows Colaizzi's methodology, too.