Qualitative researchers rarely speak in terms of validity and reliability because these issues are largely framed in terms of quantitative research. Instead, the more likely criteria are credibility and trustworthiness, as developed by Lincoln & Guba (1985).
One specific issue in terms of reliability is the calculation of inter-rater reliabilities, which by definition requires more than one coding (i.e., "rating") of the data. But this approach primarily applies to content analysis, where the goal is to guarantee the systematic use of a code book. It is less relevant to the kind of interpretive process that is central to thematic analysis.
I conducted the thematic analysis in my thesis by myself. I asked a colleague who was not familiar with the content, but was familiar with the theoretical background, to read through the transcripts as an independent coder, and then I compared the two analyses to see whether there were any discrepancies.
If your analysis is a content analysis and it involves coding, then inter-coder reliability is recommended. You may look at other studies that used a similar method, observe how they coded the material, and cite those examples in your study.
Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13-22.
It depends. What do you mean when you say "thematic analysis", and also who the researcher is. Are you using qualitative descriptive analysis? Or are you certain of your operationalizations and taking a more quantitative route?
There is such a thing as "intra-coder" reliability testing. However, this is not usually recommended if it can be avoided. Ideally, you need two brains looking at the coding sheet and making decisions, and then you calculate an "inter-coder" reliability score. You can use Cohen's kappa, or my favourite, Scott's pi, as the test statistic on the reliability score (how many agreements may have been due to chance?).
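To make the chance-correction idea concrete, here is a minimal sketch of both statistics mentioned above. It assumes each coder's decisions are stored as a list of category labels, one per coding unit; the function name and the example labels are my own illustration, not from any particular study. Both statistics subtract the agreement expected by chance (pe) from the observed agreement (po): Cohen's kappa estimates pe from each coder's own marginal distribution, while Scott's pi pools the two coders' distributions first.

```python
from collections import Counter

def agreement_stats(coder1, coder2):
    """Percent agreement, Cohen's kappa, and Scott's pi for two coders'
    category labels (one label per coding unit, same order for both)."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)

    # Observed agreement: share of units where the two decisions match
    po = sum(a == b for a, b in zip(coder1, coder2)) / n

    # Cohen's kappa: chance agreement from each coder's own marginals
    c1, c2 = Counter(coder1), Counter(coder2)
    pe_kappa = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))

    # Scott's pi: chance agreement from the pooled (joint) marginals
    pooled = c1 + c2
    pe_pi = sum((v / (2 * n)) ** 2 for v in pooled.values())

    kappa = (po - pe_kappa) / (1 - pe_kappa)
    pi = (po - pe_pi) / (1 - pe_pi)
    return po, kappa, pi

# Hypothetical codes for six transcript segments from two coders
po, kappa, pi = agreement_stats(["A", "A", "B", "B", "A", "B"],
                                ["A", "B", "B", "B", "A", "A"])
```

Note that with two categories and balanced marginals, kappa and pi coincide; they diverge when the coders' label distributions differ.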
If you are a semiologist, though, or are conducting more inductive qualitative work, then it is chiefly by way of the clarity of your writing and thought that many readers will judge the 'validity and reliability' of what has been written. The more that you have observed, researched, and written, the greater the possibility that others will see the phenomena as they are being described by the researcher.
In your question you mentioned both thematic analysis and content analysis. They are actually two different analysis methods, with some similarities and some differences. I recommend reading the following article: Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15(3), 398-405.
Validity and reliability belong to the quantitative approach and are rarely used in qualitative studies. Instead, we may use terms like trustworthiness.
I often adopt Braun and Clarke's (2006, 2013) systematic guidelines for conducting thematic analysis when analysing semi-structured, in-depth interviews. The suggested framework consists of six phases:
1. Familiarisation with the data
2. Initial coding generation
3. Search for themes based on the initial coding
4. Review of the themes
5. Themes identification and labelling
6. Report writing
Phases 2 and 3 are critical for establishing the reliability of the resultant codes before identifying the final emergent themes. You can do so by relying on 'reliability as the researcher's interpretative awareness'. That is, you need to show the reader that you tried different approaches and techniques while coding the data before adopting a particular one. More precisely, in the coding process, you need to justify (based on the objectives of your study and your research questions) whether to code inductively (data-driven) or deductively (theory-driven). Additionally, the context of the research, the background of the researcher, the characteristics of the participants involved, and the theoretical and methodological underpinnings of the research should all be adequately described, in order to help the reader ascertain the contexts to which the research findings might be applicable.
I often use a 'coder reliability check' by sending the codes, along with different participants' transcripts, to another coder after obtaining my participants' permission. Notably, the percentage agreement between the coders before and after consultation should be above 75% to be considered reasonable. The qualitative data analysis software NVivo 10 can help you in this respect.
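The percentage-agreement check described above is simple arithmetic, and a quick sketch may help if you want to compute it outside NVivo. This is only an illustration under my own assumptions: the function name, the code labels, and the example data are hypothetical, and the 75% threshold is the rule of thumb stated above, not a universal standard.

```python
def percent_agreement(codes_a, codes_b):
    """Percentage of coding units on which two coders assigned the same code."""
    assert len(codes_a) == len(codes_b) and codes_a
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# Hypothetical codes assigned to eight transcript excerpts by two coders
mine = ["stress", "coping", "support", "stress", "coping", "support", "stress", "coping"]
peer = ["stress", "coping", "support", "coping", "coping", "support", "stress", "coping"]

score = percent_agreement(mine, peer)
print(f"{score:.1f}% agreement -> {'acceptable' if score > 75 else 'needs consultation'}")
```

If the score falls below the threshold, the usual next step is the consultation mentioned above: the coders discuss the disagreements, refine the code definitions, and re-check agreement.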
There are two main checks a researcher can use to test the quality of developing themes:
1. If one or more of the tentative themes have insufficient data extracts to support them, the researcher should either modify or disregard those themes.
2. If the collated data extracts associated with a tentative theme point towards a new theme, the developing theme should be split into separate themes.