I have used Atlas.ti (so I haven't used NVivo 10), but Atlas.ti's analysis tools can give you simple tables showing all the codes versus the primary documents. It's easy to see whether all codes were used in the analysis of all documents. Does NVivo 10 have this kind of tool? Once, when we found big differences, we had to go back and recode, and found out that it was the data that differed, not the coding :)
I use NVivo as a single user, and when there are several coders on the same project we work separately on the Word document. Afterwards we meet together to reach agreement.
I know it is not the best way, but, as I said, I don't understand how to conduct this process directly in NVivo.
I mean, I would have to prepare two copies of the same project and we would code separately, but then what? How can I import the two databases into the same project?
With Atlas.ti, only one researcher can code at a time, and we had several researchers in multiple geographical locations. We agreed on a coding timetable. We had to discuss the timetables (a lot), but in the end we had one document (hermeneutic unit) with all the codes. But this is not a solution to your problem, where you already have several coded documents, sorry. Best regards, P J-P
I share Pirjo's problem in that I am an Atlas.ti user and don't know NVivo. But perhaps there is something comparable in NVivo that lets you do what can be done in Atlas.
Even with a single-user license, Atlas.ti lets you log in as an author (you can even set it so that it automatically logs you in as yourself every time you open Atlas.ti), and that then becomes part of the quotation metadata (in other words, every quotation you create will track, in addition to the usual line and character start and end, who created that quotation). The same is true for code metadata (you can track who created which code if each coder logs into Atlas.ti whenever they code). You can then run any query you want, sorted or filtered by author.
It sounds like you and your colleagues have already done some coding, though, on separate copies of the file in question. In Atlas you can merge hermeneutic units (the file that stores all the data and the codings). Atlas.ti gives you several options, such as simply appending files and/or codings from file A onto file B, or (and it sounds like this is what you might need) merging files and/or codings between files A and B (i.e., it will check file B to see what is different from file A and copy to file A only what is different -- handy, obviously, to avoid duplicating quotations, codes, files, etc.). And you can control exactly what you choose to append/merge.
Finally -- if what you want to do is calculate an index of inter-rater agreement (as opposed to manually comparing answers and then discussing to arrive at consensus), then you need something a little different. You first have to agree on the set of quotations to which you will assign codes. You give two independent coders the files with the quotations already created, and ask them to assign codes to them. You then export the primary documents and codes for each file, as Pirjo suggested; then you can use a statistics program to merge the two files together so that you can calculate an index of inter-rater agreement such as Kappa or r-wg (James, Demaree & Wolf, 1984).
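To make that last step concrete, here is a minimal sketch in Python of how the merge-and-calculate step could look. The file and column names are hypothetical (your actual export format will differ), and it assumes each quotation receives exactly one code per coder:

```python
# Minimal sketch: compute Cohen's Kappa for two coders' code assignments.
# Assumes each coder's export is a CSV with hypothetical columns
# "quotation_id" and "code" (adjust names to match your actual export).
from collections import Counter

import pandas as pd

coder_a = pd.read_csv("coder_a_export.csv")   # hypothetical file name
coder_b = pd.read_csv("coder_b_export.csv")   # hypothetical file name

# Merge on the shared quotation ID so each row pairs the two coders' codes.
merged = coder_a.merge(coder_b, on="quotation_id", suffixes=("_a", "_b"))
a = merged["code_a"].tolist()
b = merged["code_b"].tolist()

n = len(a)
# Observed agreement: proportion of quotations both coders coded identically.
p_o = sum(x == y for x, y in zip(a, b)) / n

# Expected chance agreement: from each coder's marginal code frequencies.
freq_a, freq_b = Counter(a), Counter(b)
p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

kappa = (p_o - p_e) / (1 - p_e)
print(f"Observed agreement: {p_o:.3f}")
print(f"Cohen's Kappa:      {kappa:.3f}")
```

The same result could be obtained with sklearn.metrics.cohen_kappa_score, but computing it by hand makes the chance-correction step visible.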
We have done a couple of projects with separate coding, and for the moment we haven't considered switching to another program.
I'm sure that with NVivo it is possible to do the coding with a two-user license and also to compute inter-rater agreement (the NVivo help mentions it). But I don't know how.
You're welcome! Absolutely, I wouldn't expect you to change programs just for this one little thing. My hope is that someone who reads the post can help you figure out which commands in NVivo do what I just described in Atlas. Good luck with your project!
When you use NVivo you need to set up different user profiles based on Windows accounts so that you can see who has done what in the programme. This does create an issue if people are using the same computer and login. You would need to make sure that each person checks that their user profile is active (i.e., they are logged in as themselves) when they enter the programme so this can be tracked. This help section explains how to do that. I'm not sure if you can both code at the same time and then compare, or if you would need to timetable it so you code one after the other.
If you do it this way you can then run a coding comparison query to see who has coded what and any differences between them. There is a help section on this too.
If you have more than one NVivo project, I think you can merge the projects, but then I am not sure how the data would be shown or how easy it would be to run a coding comparison query, so I think the first way is better.
As the others have detailed, you can keep two separate codings of the same data, but I'm not sure why you want to do that. You mention "meeting together for agreement," so I would think that you would just enter the version of the coding that reflects your agreement (sometimes called "resolution").
But, if your goal is to calculate a formal statistic, such as inter-rater reliability, then that would require storing separate records of the alternative codings on the same data.
Hello Sarah Drabble and David Morgan, thank you for the valuable tips!
I have looked at the NVivo help several times; for some questions it is helpful, but for this question I think it is not.
We don't want to buy the server option to share the project and the codings.
As Professor Morgan said, I would like:
1. at least two coders on the same project for independent coding (it doesn't matter if it isn't possible at the same moment);
2. to calculate inter-rater reliability (but I think I have understood how to do it).
Until now I have worked with the "old method": separate codings in a Word document, several meetings to reach agreement, then entering the final coding in NVivo.
I would like not only to improve my knowledge of the program and of the qualitative field, but also to find useful ways to save time without losing quality.
In the next few weeks I will begin coding a new project. I will try to use your advice!
You may wish to consider MAXQDA in the future for this type of problem. This program has an "intercoder agreement" function that would address your current issue.
Now that it is clear you are working on inter-rater reliability, I did a Google search on NVivo and reliability, which produced at least two YouTube listings, along with the following link to QSR's own site.
If your data is coded by different researchers (you can use the same file for that), you can compare the nodes using queries. This gives you the percentage agreement between the researchers. NVivo also allows researchers to calculate Cohen's Kappa coefficient, which is a statistical measure of inter-rater reliability.
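To illustrate the difference between the two measures with made-up numbers: if two coders agree on 80 of 100 coded segments, the percentage agreement is 80%. Cohen's Kappa corrects that figure for chance agreement; if, given each coder's code frequencies, they would be expected to agree on 50% of segments by chance alone, then Kappa = (0.80 - 0.50) / (1 - 0.50) = 0.60, a more conservative figure than the raw 80%.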