I would appreciate researchers' views regarding this issue. It pertains to my research study, which focuses on qualitative data analysis.
Sheeraz - a good question - but a very broad one to answer.
There are many approaches to qualitative data analysis and rather than attempt to provide a ‘recipe book’ for qualitative data analysis, the following response is merely an overview of common methods.
Coding and categorising
A common strategy shared by many qualitative methodologies is coding and categorising, which is possibly the most important phase of the analysis process. In this style, the first step is to divide the data into small ‘bits’ and label them with abstract ‘codes’. This requires close interrogation of the data. If using written data, such as transcripts, this would mean reading and re-reading them to identify and label recurrent words, themes and concepts. With written forms of data, usually after having read the whole transcript at least once, there are two basic methods for coding:
The first is line-by-line coding: carefully examining words, phrases or sentences for data relevant to the overall research question. The second is scanning paragraphs for units of meaning relevant to answering the research question, which are then denoted (or abstracted) into descriptive codes. There may be several such denotations per paragraph or perhaps none. When found, the bit of data is denoted by a code, i.e. another word or words that are an interpretation of emerging insights. Sometimes the words in the data are abstract enough that they cannot be improved upon. The codes are then marked onto the transcripts or field notes and a separate list of the codes is created with a short definition: this is sometimes termed the ‘attributes’ of the code.
After line-by-line coding, or scanning of paragraphs, the abstracted codes are then grouped logically – ‘like with like’ – and a tentative label is allocated. This process is called categorisation. This may commence quite early or might only be done after all data is coded. The categories are labelled to signify the interpretation represented by the grouping of the codes. The categories may be temporary as there might be revision in the light of further analysis – the labels may be changed to enhance clarity, abandoned as the codes are re-categorised, and some might be ‘collapsed’ together.
The final step is to establish relationships conceptually by establishing a hierarchy of categories and sub-categories. A category will tend to have multiple sub-categories; sometimes there may be more than two levels in the hierarchy with the third level sometimes referred to as ‘properties’ of the sub-category. Thus, there may be categories, sub-categories and properties of sub-categories. It is possible that the processes of coding, categorising and conceptual ordering (establishing hierarchies) will be cyclical with the researcher moving back and forth from one to the other - rather than there being discrete steps. Overall, the relationships will be explored and reduced to the least number of categories possible. When the researcher’s carefully considered opinion is that data has been coded, categorised and conceptually ordered satisfactorily, data analysis will stop.
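If a concrete picture helps, here is a minimal sketch in Python of how such a hierarchy of categories, sub-categories and properties might be represented; all of the codes, categories and sources below are invented for illustration, not taken from any real study:

```python
# A sketch of a coding hierarchy: categories contain sub-categories, which
# may in turn carry 'properties'. Every name here is an invented example.
codebook = {
    "coping strategies": {                      # category
        "seeking support": {                    # sub-category
            "properties": ["family", "peers"],  # properties of the sub-category
            "codes": ["talks to sister", "asks colleagues for help"],
        },
        "avoidance": {
            "properties": ["distraction"],
            "codes": ["keeps busy", "changes the subject"],
        },
    },
}

# Each coded 'bit' keeps a pointer back to its source, so the researcher can
# return to the raw data while categories are revised, collapsed or relabelled.
coded_bits = [
    {"code": "talks to sister", "source": "transcript_03", "line": 42,
     "attribute": "turning to family when under stress"},
]

# Walking the hierarchy makes the conceptual ordering visible:
for category, subcats in codebook.items():
    for subcat, contents in subcats.items():
        print(f"{category} -> {subcat}: {contents['codes']}")
```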
Some research approaches specify what type of coding should be conducted and how this is to be done. For instance, in the classic grounded theory method, two forms of coding, ‘open’ and ‘theoretical’, are required. Open-coding (involving some coding and categorisation) is the process of looking for underlying meaning and uniform patterns in and across data. Theoretical coding – a type of conceptual ordering - involves connecting concepts that have arisen from open-coding. The connections between concepts are related to the core category as relationships between categories, sub-categories and their properties are analysed.
With grounded theory, other sets of guidelines might direct a different way to code. For instance, following open coding, axial coding may take place where relationships between the ‘open codes’ are identified. It is termed ‘axial’ because coding occurs around the axis of a category with categories and sub-categories being linked. Additionally, a form of theoretical coding is possible termed selective coding. Here a central, organising category is identified and data is selectively coded to this core variable.
It is possible to start data analysis with a predetermined list of codes and then search for examples of data that fit into these codes. This process is a bit like using a ‘pigeon hole’ in a mail room: the ‘pigeon hole’ is the pre-existing code into which relevant bits of data are slotted (coded). With some forms of ‘content analysis’ this style may be required, especially when there are many data sources and a number of people are needed to do the analysis (multiple coders). Usually the codes are developed from either a range of opinions, previously reported research or as suggested by experts. Possibly, the interview questions (or observation guidelines) are designed to focus on each separate code. The codes may also already be grouped into categories that pre-date analysis.
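To make the ‘pigeon hole’ idea concrete, here is a toy sketch in Python. The predetermined codes, keywords and responses are all invented, and the keyword matching is only a stand-in for the judgement a human coder (or team of coders) would exercise:

```python
# A toy sketch of 'pigeon hole' coding: bits of data are slotted into a
# predetermined code list. Codes, keywords and responses are all invented.
predetermined_codes = {
    "workload": ["busy", "overtime", "shifts"],
    "communication": ["handover", "briefing", "told"],
}

responses = [
    "We were never told about the change at handover.",
    "The overtime and extra shifts left everyone exhausted.",
]

for response in responses:
    matched = [code for code, keywords in predetermined_codes.items()
               if any(word in response.lower() for word in keywords)]
    print(matched or ["uncoded"], "-", response)
```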
Thematic analysis
This style of analysis treats the data set as a mass of information that is best understood without breaking it up into small abstracted sections. Rather, the analysis process is about understanding the overall themes in the data set. These themes can only be ascertained appropriately by having a ‘feel’ for the meaning of the whole set of data. A theme is broader than a category: it is a central idea that weaves throughout the entire data set.
Thematic analysis requires getting to know the data well through a method appropriate to the data, e.g. reading and re-reading if working with written words, and then it might require taking time to reflect on the insights being extracted from the data. The researcher then goes back to the data for further analysis and thinking and so on, until meaning-making occurs. Data analysis in this process is termed iterative, in that the researcher is moving back and forth over the data, rather than in linear steps. Although it can be applied to various qualitative research approaches, this style of data analysis is particularly useful for certain specific approaches, such as phenomenology. The researcher may use this style ‘free-form’ or follow directions suggested by others.
I hope that helps - but it can, as I started off saying, only be a 'surface' view of qualitative analysis. Specific aims and choice of methodology will play a large part in what style is adopted.
As computer software has progressed, increasing numbers of qualitative researchers have turned to using computer-assisted qualitative data analysis software (CAQDAS) packages. CAQDAS packages are now an accepted tool for data analysis as they reduce the time involved in managing data and allow the researcher to spend more time immersed in the actual analysis. A wide range of CAQDAS packages are now available, such as ATLAS.ti, MAXQDA and NVivo10. These packages assist the organisation and management of qualitative research data by storing data in multiple recorded forms (including audio, visual, video and written forms).
Not everyone has access to software packages though - and some may rely on, or prefer, manual methods to code their qualitative data.
Right you are, Dean: a good question, but a broad one to answer. So I will just try to add one piece to the jigsaw puzzle.
ATLAS.ti does indeed now offer an interesting possibility for analyzing semi-structured questionnaires (surveys). Generally it functions the following way: each questionnaire (case) becomes a primary document. The structured questions are used to create primary document families, so that you obtain a structured set of PDs. The open questions become variables, whereas the answers to the open questions become quotations. Technically it works by assigning codes to your survey answers, which you hold in e.g. an Excel file; this enables you to import survey data into ATLAS.ti.
This works, but obviously not in all scenarios. It depends on the number of cases, the number of variables and the proportion of structured to open questions. At least it is worth having a look at. It could be a helpful device for combining quantitative and qualitative analysis.
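To show the general shape of the data one would assemble before import, here is a minimal sketch in Python that writes such a table to a CSV file. The column names are hypothetical; the actual conventions the import function expects are ATLAS.ti's own and are explained in its documentation:

```python
# A hedged sketch of the kind of table assembled before importing survey
# data into a CAQDAS package: one row per case, structured answers in
# their own columns, open answers as free text. Column names are invented.
import csv

rows = [
    {"case_id": "R01", "gender": "f", "age_group": "30-39",
     "q_open_experience": "I found the induction confusing..."},
    {"case_id": "R02", "gender": "m", "age_group": "40-49",
     "q_open_experience": "Support from senior staff was excellent."},
]

with open("survey_for_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```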
To get a first idea of what this new function means just have a look at Susanne Friese's tutorial video:
I agree with David: knowing more about your work would be important before providing advice. Your research question and how your data are collected will guide the methods you are able to use to analyze the data. Best wishes.
How do I analyze data from semi-structured questionnaires?
Well, I follow this rule: quantify what is or can be quantified, and analyze qualitatively what refers to the inner world of the person and cannot, or must not, be quantified. This rule works!
Dear Dean Whitehead, Antonio Velasco, Maria Bicudo, Adria E Navarro, David L Morgan, Sarasuphadi Munusamy,
I thank you all for your valuable responses to the question. I have got the idea of how to go on with my work on wards. However, it would be good to have a practical example of such analysis.
Sheeraz - as David has already suggested - we would still need more detail to offer practical examples. There are potentially many different approaches. What clinical issue on your wards do you want to investigate? How and whom will you sample? How will you collect your data - interviews, observation etc.? What methodology would you adopt? Would you use a manual approach - or use software to analyse the data? Would you need to achieve 'saturation', etc.?
Here is my example Sheeraz... I am analyzing data from a small qualitative project which is looking to develop a screening tool for adult protective services personnel to detect situations involving undue influence of older adults. My state has a new definition of 'undue influence', so we are using this structure (theory) to analyze focus group data. Broad themes are available from the law, yet in coding the transcripts we identified many sub-categories (called child nodes in NVivo10) that add more depth to our understanding within this definition. We are determining the language APS workers use to describe various issues and where they are more likely to have gaps in their understanding that the screening tool can supplement. APS workers will pilot the screening tool and provide feedback, and an expert panel review will aid in the development of a reliable and valid tool. Hope this mini-example helps.
Perhaps there is some literature in your subject area that used qualitative methods that can help you in making some decisions? Also you will want to select a text or other literature on methods that you can cite. M. Patton (2002) has one text I have used as a reference.
Thanks for your valuable inputs. I am investigating reading discourse problems faced by learners at the intermediate level. It is a qualitative study followed by a quantitative one, using semi-structured questionnaires for teachers and students, classroom observation sheets, and a checklist for the evaluation of reading texts and tasks. I have applied cluster and stratified sampling strategies to represent the overall population of Sindh province. Following are my research questions:
1) What are learners’ hindrances in reading text comprehension?
2) What discourse problems make an English text difficult?
3) How is a text taught in a reading classroom?
4) Does the textbook address the language and academic needs of students?
Dear Sheeraz: I agree with Dean Whitehead, and this was the structure I used to analyze my data from fieldwork that I did in Cuba. I will go one further and advise that I used a standard set of codes from an anthropological resource (buried in my memory now as to the source) so that I could formalize the data structure. I also dated and then coded my participants, as well as numbering each page of data. I then put all of the various codes, line by line, into a spreadsheet by category so that I could sort based on the participant (I interviewed approximately 80 people over a five-month period) or the specific area of the data. If the data fit into more than one category, I had a primary code and a secondary code. It then made my 1400 hand-written pages very accessible! Cheers, Catherine
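P.S. For anyone wanting to reproduce that spreadsheet workflow digitally, here is a minimal sketch in Python (the participants, codes and excerpts are invented) of coding records that can be sorted by participant or filtered by category, with a primary and a secondary code per entry:

```python
# A sketch of the spreadsheet approach: one row per coded bit of data,
# with participant, date, page and primary/secondary codes. Rows invented.
entries = [
    {"participant": "P007", "date": "2004-03-12", "page": 31,
     "primary_code": "kinship", "secondary_code": "economy",
     "excerpt": "My cousin sends money every month..."},
    {"participant": "P012", "date": "2004-04-02", "page": 118,
     "primary_code": "economy", "secondary_code": "",
     "excerpt": "We trade vegetables with the neighbours."},
]

# Sort by participant to read one person's data together...
for row in sorted(entries, key=lambda r: r["participant"]):
    print(row["participant"], row["primary_code"], "-", row["excerpt"])

# ...or gather everything touching one category, checking both codes.
economy = [r for r in entries
           if "economy" in (r["primary_code"], r["secondary_code"])]
print(len(economy), "entries touch 'economy'")
```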
Dean suggested that you consider Thematic Analysis, and the most commonly cited work on that topic is:
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.
For the mixed methods study you have described, are you planning to analyze the qualitative data and use those results in designing your questionnaire, or will you do the survey without relying on the qualitative results?
Thanks Dean Whitehead, your answer to Sheeraz Ali helped me with the task of analyzing the staff consultation survey semi-structured questionnaire I am still working on at work. I must admit that I am now facing the problem of how to compile the report of the same.
I think it depends on the quantity of data that you are collecting. NVIVO is good for large amounts of data but there are quicker, more connected and easier approaches for small amounts of data/interviews that I often use in the field. Can you elaborate?
Is there any approach for extracting qualitative codes from a (quantitative) instrument? For example, respondents were asked to give answers to open-ended questions under pre-fixed topics or sub-topics in an instrument. Do you think these topics or sub-topics could be considered themes or codes?
Süleyman Davut Göker, the analysis of open-ended questions as qualitative data is quite common. The standard approaches to coding would apply, but whether your data is "rich" enough to generate themes depends on how much data there is. Otherwise, the alternative is to treat your coding as essentially descriptive, rather than interpretive.
Thank you David. My data is rich (I have to narrow it down) and I want to continue taking the topics or sub-topics as themes and interpreting the data (codes) under these themes. Is it possible? If yes, what do you call my approach? Is it still thematic coding using content analysis? Could you recommend a source to use as a reference?
The production of too many codes is a frequent problem for novice researchers. I personally prefer to keep reviewing and consolidating my coding framework as I go. That way, I get closer to my central themes during the coding process, rather than waiting until I have compiled a large set of "code categories" that need to be reduced into a set of interpretive results.
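As a toy illustration of that consolidation step, here is a short Python sketch in which a merge map collapses near-duplicate codes into fewer candidate themes; all of the code names are invented:

```python
# A toy sketch of consolidating a sprawling code list as you go: a merge
# map folds near-duplicate codes into fewer candidate themes. Names invented.
merge_map = {
    "asks colleagues for help": "seeking support",
    "talks to sister": "seeking support",
    "keeps busy": "avoidance",
    "changes the subject": "avoidance",
}

raw_codes = ["talks to sister", "keeps busy", "asks colleagues for help"]
consolidated = sorted({merge_map.get(code, code) for code in raw_codes})
print(consolidated)  # ['avoidance', 'seeking support']
```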
If you are in that situation, I have attached a "how to" article that might help. It is written from the perspective of an ATLAS.ti user, but the basic process would apply to any analysis program.