I used NVivo 10 for a qualitative study and also tried to use it for literature reviews. I have no experience of ATLAS.ti, but I use Mendeley as my literature repository. My NVivo 10 configuration was standalone, but there is a network version. NVivo was easy to install and reliable for coding. I also like the idea of having local and global codebooks, so the coding you do can be reused for other projects/manuscripts. The limitation I found with the PC version was that when I tried to use NVivo for my literature reviews, I hit performance issues with my laptop's hard disk, so I upgraded to an SSD, which helped a lot. NVivo has a lot of potential for supporting coding across groups of people with its network version, something that would also help in literature reviews, where academics could save a lot of time by reviewing pre-coded papers on set topics.
The only negative points are that NVivo requires a large screen, as there are many windows competing for space, and that performance matters, i.e. a fast SSD and a fast processor are required for large projects. I would not say I am an expert in NVivo, but I did create a PPT and a paper on how to improve NVivo for literature-review coding; they are attached. FYI, if NVivo solved this I would no longer need Mendeley.
I have used both packages before, but I find NVivo more user-friendly and advanced than ATLAS.ti. It lets you manage and analyze qualitative data with ease. However, its project files get very big and it sometimes runs slowly.
I use NVivo and have done for over a decade, and I probably stick with it because I'm used to it. I do find some aspects of it frustrating, e.g. the limited modelling function, and I have looked at other QDA software, but overall I find it supports and enhances my analytic process. Some of my colleagues prefer MAXQDA, and others choose Quirkos, particularly if they're doing small participatory projects. But I don't know of any comparative reviews, and your question makes me think it would be useful to produce one; I'll put out some feelers and see whether I can make that happen (don't hold your breath, though!).
I use ATLAS.ti: my data is in Hebrew, a right-to-left language, which is not supported by NVivo but is supported by ATLAS.ti (although very clumsily). Arabic would have the same issue.
Whatever you pick, check whether it is compatible with your devices before you order. We found out the hard way that some nice add-ons for NVivo (like NCapture) are not fully available to Apple users.
I find ATLAS.ti is not user-friendly. I don't have any experience with the other packages. In the end, it doesn't really matter which software is used, because it's just one of multiple instruments for reaching the same goal.
I looked into using ATLAS.ti, MAXQDA and NVivo 7. The choice of package was informed by several factors. I felt that ATLAS.ti seemed to have slightly less useful features and was slightly less user-friendly than the other two packages. MAXQDA was a neat, easy-to-learn package which fulfilled all the likely needs of this kind of project; its drawback, however, was that very few people use it, meaning there were fewer training courses available and fewer people to ask for advice. The package I decided upon as the most appropriate choice was NVivo. First, it has a very wide range of features and offers most things that are likely to be wanted from a CAQDAS package (in fact, it offers many more features than most researchers are ever likely to use). Second, and possibly most important, was its growing popularity. This meant that there was a wide range of support available (from formal sources and colleagues), and it looked likely to be the most useful package for team or collaborative work in the future.
The various CAQDAS packages all have very similar features, so the decision basically comes down to which one you personally find to be most user friendly -- and this judgement differs quite a bit from one person to the next.
Fortunately, all the major packages allow you to download trial versions, and they provide online tutorials to get you started. I think the tutorials can be just as important as the packages themselves because they help determine how quickly and easily you can get "up and running."
Besides availability in your research facility, it really depends on your research question, type of data and design. In the last few years the Big Three (NVivo, MAXQDA and ATLAS.ti) have again diverged in different directions. Some designs "fit" better with a certain package than others. There is plenty of information available here:
I myself don't see that much difference between NVivo, MAXQDA and ATLAS.ti. In general they compete with each other feature by feature, so whenever one of them introduces a major new feature, you can expect the others to incorporate it into their next round of upgrades.
Also, I've used ATLAS.ti for a number of years and I don't see any stronger connection to Grounded Theory than in any of the other programs. Yes, ATLAS.ti does incorporate some terminology from GT, but some of that is outright inappropriate (such as labeling the number of codings as "saturatedness").
I'm not sure if you'll be using a Mac or PC, but I found NVivo 10 for Mac to be slow and buggy, which was really disappointing. I looked at the alternatives and found MAXQDA and ATLAS.ti to be more stable and lag-free, but not quite as intuitive.
While looking for alternatives I came across Dedoose which has been working well for me. It's browser based which means I can log in from anywhere and have all my stuff there. The main downside is that it's Flash based, which means its user interface can be a little weird, but it's a small tradeoff and hasn't been particularly problematic.
@David - it is called "groundedness"; ATLAS.ti does not talk about saturation. It is no longer true that these three programs (MAXQDA, ATLAS.ti and NVivo) compete based on features, at least not if you look a step deeper. MAXQDA is focusing on mixed-methods analysis, for instance, and NVivo has built in a number of tools that allow more "quick and dirty" kinds of work, where you get results quickly and they look pretty. This may be beneficial for some studies, probably not for all. In terms of comparison, only ATLAS.ti has the quotation level (it always had it), and so far no other program has "competed" on this feature. If you look at the margin area, called coding stripes in NVivo and the area for codes in MAXQDA, there are substantial differences: the coding stripes are something you can look at, whereas in ATLAS.ti the margin is a working area. These are two different features, often compared as being the same. They are not. It also feels quite different to work with all three - it is not all the same.

So one way to figure out which one you want to choose is to test-drive: download the demo versions and play with each program; all offer a sample project or a quick tour. You will find out which program you find more intuitive for your way of working and the type of analysis you are aiming for. Another criterion is: who can help you? Look at training opportunities and at colleagues who use a program (which one?); are you happy to learn via video tutorials or webinars? Look at what is offered.
Sorry about the saturation versus groundedness -- you are correct. But the term "groundedness" does not appear anywhere in GT, and whatever you label it, it is still nothing more than a count of codings.
With regard to mixed methods, this is simply a label that means working in the equivalent of a spreadsheet format, with cases on the rows and codes in the columns. Versions of that format are widely available, but some programs choose to emphasize it under the mixed-methods label (both Dedoose and MAXQDA use this terminology).
Could you say more about "quotation level" since I do use ATLAS and have never thought of this as a unique feature.
As for coding stripes etc., I don't find anything that is all that different across programs, simply multiple ways of implementing the same thing. This goes back to my earlier post, where I argued that most of the differences are in the user interface, and to the importance of which interface feels most comfortable to any given user. (That earlier post also emphasized using the trial versions and tutorials etc. as the best way to find a personal match.)
Hence, my own conclusion is that the user interface is more important, because it is what differs the most across programs, not the feature sets.
I have found that NVivo is more attuned to interpretive discourse analysis, and ATLAS.ti somewhat more attuned to counting. A reference I have found useful (although it could now be getting a bit dated): Weitzman, E. A. (2003) "Software and Qualitative Research", in Norman K. Denzin & Yvonna S. Lincoln (eds.) Collecting and Interpreting Qualitative Materials, 2nd ed. Sage Publications: Thousand Oaks, California.
This is an interesting perspective, i.e. that ATLAS.ti is somewhat more attuned to counting as compared to NVivo. There are functions in ATLAS.ti that let you look at numbers, but generally the idea is that you do your qualitative work in ATLAS.ti and, if you want to count, you export your data in numerical form to other applications and do your number crunching elsewhere.

This brings me back to the question asked by David about the quotation level. ATLAS.ti is the only software that has the quotation level: in ATLAS.ti a selected piece of data can be turned into a "quotation", which is an object in itself, and there is no need to code or to start with coding. Thus, if you want, you can stay at the quotation level. Each quotation can be renamed, commented and also linked to other quotations. The links are named things like "discusses", "criticizes", "explains", "leads to", or basically whatever you need them to be; you can create any relation that suits your data. This is what is called hyperlinks in ATLAS.ti, and if you want to do discourse analysis, this would be the way to go. The same applies to narrative analysis and any approach that does not necessarily rely on coding data.

You can start with highlighting data segments and creating quotations. You open the Quotation Manager alongside; you can comment right where you are in the data, or you do it in the Quotation Manager. If you want to add descriptive labels, you can rename the quotation (the default is the first 30 characters of text data, or the data file name for multimedia data). This way you can use the quotation level for work "very close to the data" instead of immediately working with codes, avoiding the danger of creating too many codes (and walking into the code swamp, as I describe it in my book, Qualitative Data Analysis with ATLAS.ti). If you see data segments that are connected, you can link them using the hyperlink function.
There are linking functions in other programs as well, but they do not have the same quality as in ATLAS.ti. In MAXQDA you can create links, yes, but only to jump back and forth between two data passages or to link to external media (like the "see also" links in NVivo). This is not the same. Therefore one has to look a bit closer at where the differences are: almost all packages these days display codes alongside the data, but is it just a display, or is it an interactive area? You cannot simply say MAXQDA has a mapping tool, NVivo has a modeler, and ATLAS.ti has the network view function, so they are all the same. All three are visualization tools, but they work in quite different ways and are often used for quite different purposes. Then there are functions that are available in one package but not in others, like the word tree or clustering in NVivo (also available in QDA Miner), and you need to answer for yourself whether this is something you need for your analysis or not.

To come back to the quotation-level question: the quotation level is, in comparison, something unique to ATLAS.ti, and it therefore lends itself much better to interpretive approaches than other packages do. I would position Dedoose at the other end of the spectrum, along with QDA Miner: they start on a different level of analysis and are comparatively more suitable for deductive approaches, which does not mean this cannot be done in other packages. A very recent comparison can be found in: Using Software in Qualitative Research: A Step-by-Step Guide, Second Edition, by Christina Silver and Ann Lewins, 2014, SAGE. There you can see how the various functions can be implemented, instead of just comparing names in product feature lists.
I am beginning to explore qualitative software for my descriptive phenomenology study. I would appreciate feedback on experiences specific to this research approach.
Raven's Eye is a new online textual natural language analysis tool, built by researchers steeped in qualitative approaches to research in the social sciences (and, in the process, often exposed to Atlas.ti & NVivo). As Richardeanea Theodore might be particularly interested, Raven's Eye is based on Quantitative Phenomenology, which follows Amedeo Giorgi's steps in conducting a descriptive psychological phenomenology. It also facilitates research from Grounded Theory and other approaches.
Unlike other Computer-Assisted Qualitative Data Analysis Software (CAQDAS) programs, Raven's Eye actually analyzes data, and in mere moments. If you are a ResearchGate member who already owns a license to another CAQDAS program and would like to try Raven's Eye out for free for one month, just send us an email from our website's Contacts page between now and the end of 2015. Please write, "I own a license for [X brand of CAQDAS] and Tim said I can take the CAQDAS challenge," in the subject line. Upon receipt of your email, we'll send directions on how to sign up.
I have used NVivo and Leximancer software for analysis. I think the answer lies in what you intend to analyse. Is it in spreadsheet format (comments collected from a survey, perhaps?), or is it in the form of video, documents or interview transcripts? For the latter I would recommend NVivo over Leximancer, but for spreadsheet-based material (we had over 10,000 responses) I would recommend Leximancer. I have not used ATLAS.ti, so cannot comment on it.
@Tim Lower / Raven's Eye: interesting new software, but I would say of a very different kind than CAQDAS. As there is no free 30-day trial, I can only write about the impression I get from your website, and from testing other programs that offer "automated" insights. These insights are only as good as the algorithm behind them, and my assumption is that the algorithm will also determine the results. You state that it is primarily based on Quantitative Phenomenology (ok), but then you want it to be an all-in-one solution. A lot of Grounded Theory researchers still reject the use of CAQDAS, to my mind based on a misunderstanding of the term "coding" as it is primarily used in software, which is not the same as GT coding (see link). Suggesting that Raven's Eye might also support a GT analysis is stretching it quite a bit.
Having said this, I find the possibilities that become available through computerized analysis very interesting. I am not sure whether it is useful to integrate such features into existing CAQDAS, or whether it makes more sense to treat them as different and new forms of analysis, requiring a different set of tools. The commercial developers will offer what they think will be profitable. As a social scientist, I would like to see an intellectual debate about it. We need to talk to the digital humanists, who already use much more automated tools and often know little about the possibilities of CAQDAS.
For those who can read German, an interesting book is: Text Mining in den Sozialwissenschaften (Lemke / Wiedemann, 2016, Springer VS)
Thanks for the impressions--I think that reviewing the actual product would allay your concerns. I see that your 2013 book on working with ATLAS.ti is currently featured on that company's website. I can understand how the investment arising from such in-depth work with that one particular CAQDAS program might lead to your hesitancy to be open to a different kind of software program--at least it certainly helps to contextualize it.
You're right in that Raven's Eye does much more than most software programs in the CAQDAS category, and that sometimes such previous programs can lead to a false sense of accuracy. Indeed, that's why I was involved in the invention of Raven's Eye. It does more than simple CAQDAS programs (we call it a hybrid CAQDAS/natural language understanding software program), and allows for multiple research methods proceeding from multiple paradigms. Whether processing interviews, surveys, or books, Raven's Eye provides automated answers in moments flat.
Interesting response - how should my expertise in ATLAS.ti interfere with my knowledge of methodological approaches? One of my critical remarks was that I think it would be stretching it a bit much to promise that Raven's Eye can be used for Grounded Theory. How should a natural-language-understanding software replace the process of open coding? Open coding is not just about attaching a label to a data segment; it is much more than that. My paper on how I would translate Grounded Theory to a computer-assisted analysis is not yet translated into English (please find it attached – it is a working paper to be published in the working paper series of the Max Planck Institute). My translation uses ATLAS.ti – but that is not the only program I know something about :-)
CAQDAS today are quite complex; they are not simple little programs. I totally disagree with you on this, but we can leave it at that.
By the way, I have just been writing an article on the state of the art of CAQDAS, to be published in April in the KWALON journal, using Kahneman's framework of System 1 and System 2 thinking (fast and slow thinking) to evaluate available software features. NVivo is now the first to offer automated theme recognition and sentiment coding. I have been playing with these options, but they do not impress me (yet). Things may improve in future versions. The algorithm they currently use does not seem to be very good, and they do not allow for machine learning. On the marketing side, they claim that with "automated theme recognition" decisions can be made faster; in the manual they qualify that view somewhat, because the trade-off for speed is quality. I would not trust a decision made on the basis of automated "insights" without manual checking.
The algorithm used by Raven's Eye might be better; something however I would rather test myself than to believe a marketing slogan.
What I think is worthwhile to discuss is whether CAQDAS should go in that direction at all and, if so, whether this means they will turn into a completely different product. If we want more reliable computer-generated results, CAQDAS needs to be able to manage larger data sets; many are currently not capable of doing so. Maybe Raven's Eye is already one of the programs of the new genre, but is it a hybrid, or another type of software that needs to be discussed on its own terms and not in comparison with CAQDAS? (What I read between the lines was: "we are the better CAQDAS...".) Is it really so?
Thanks, Susanne--I would recommend giving Raven's Eye a try before passing judgment. I think we agree in our understanding of Grounded Theory and its processes. I prefer a phenomenological approach myself, and was part of developing the quantitative phenomenology on which Raven's Eye is primarily based. So, those approaches are integrated fluidly into Raven's Eye (see the methods section of our online technicals: https://ravens-eye.net/support/technicals/methods/Quantitative%20Phenomenology/index.html). Acknowledging, however, that many others have been trained in Grounded Theory approaches, we made it so that those wishing to conduct such an approach can utilize Raven's Eye to do so (though the process involved is somewhat different than in other CAQDAS programs--it involves creating user-generated columns in a .csv spreadsheet).
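For readers unfamiliar with that kind of spreadsheet-based workflow, a minimal sketch of what "user-generated columns in a .csv" might look like is below. This is not Raven's Eye's actual format or API; the column name, respondent IDs, and code labels are all invented for illustration, and the coding decisions themselves would of course be made by the researcher, not the script.

```python
import csv
import io

# Hypothetical sketch: a few survey responses in .csv form, with a
# user-generated column added to hold open codes assigned by hand.
raw = io.StringIO(
    "respondent,text\n"
    'P1,"I felt lost at first"\n'
    'P2,"The staff were very helpful"\n'
)
rows = list(csv.DictReader(raw))

# Codes a researcher might assign while reading each response (invented labels).
open_codes = {"P1": "disorientation", "P2": "perceived support"}
for row in rows:
    row["open_code"] = open_codes[row["respondent"]]

# Write the data back out with the extra coding column appended.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["respondent", "text", "open_code"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The point is simply that the coding lives in ordinary spreadsheet columns alongside the data, so it can be filtered, counted, or re-imported by any tool that reads .csv.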
Raven's Eye is indeed, as you point out, part of a new and advanced movement in natural language analysis software. Its processes and format are somewhat different from existing programs, but are easy to access and learn (it's a browser-based platform accessible on everything from desktops to smartphones, and is available on Software-as-a-Service pricing schedules). It can also handle very large data sets. Such a program may require a new conceptual category of software, or expanded definitional characteristics for CAQDAS, but I'll leave that up to others such as yourself to decide. In the meantime, I thought researchers utilizing such programs might be interested in something that would save them time and increase reliability/validity.
Thanks for the pdf and the notice on the journal article. I look forward to reading them.