Although there are many case studies proposing success factors for Agile adoption, fewer reports have been published on actual failure factors and their relative importance.
I agree with Diane: more empirical evidence would be great. Unfortunately, even the scientific literature on agile development occasionally seems biased (both in favor of it and against it).
I find all such publications suspect from the perspective of gauging the impact of agile. The problem is that it is difficult to qualify a product development as agile, and even more difficult to create any metric of degree-of-agileness. Combine this with the problem of separating the "is-agile" independent variable from the thousand other independent variables that matter, and one really can't draw any conclusions about the value of agile.
There is an even greater problem here for the Scrum framework, which I tell people is maybe 20% agile and 80% from TPS (commonly referred to, incorrectly, as "lean"). If you use adoption of or adherence to Scrum as your gauge of agile, you're already off the mark. The best operational definition of agile comes from Alistair Cockburn. If you understand his agile metric, you'll better appreciate why this is a difficult question. http://alistair.cockburn.us/Agile+machismo+points
Last, for every path to success there are a million paths to failure. The true causes are rarely discovered or, if discovered, published. I feel your pain. However, studying the failure modes will never, in my opinion, make a meaningful contribution to any body of knowledge that can contribute to success. Avoiding one failure mode doesn't help you with the other 999,999. As in elementary school pedagogy, focus on teaching the "how to" rather than the "how not to." The learning metaphor works particularly well if one views agile through the lens of learning organisations.
Yes, you are right. However, I also know of a very good study which not only distinguishes two groups of projects (agile versus non-agile), but also asks which practices have been used. The findings are very interesting: the authors were able to isolate practices which seem to be success factors, even in non-agile projects. As far as I know, however, the study is only available in German:
Stefan Toth, Uwe Vigenschow, Markus Wittwer: Einfluss klassischer und agiler Techniken auf den Erfolg von IT-Projekten [The Influence of Classical and Agile Techniques on the Success of IT Projects], 1. Ergebnisbericht [first results report], July 2009, http://www.oose.de/pm/pm-studie.html
Actually, to continue with empirical work here, we would need something like a common understanding of what we are talking about. I started on some theory a while ago but never continued it: "Identifying Common Characteristics in Fundamental, Integrated, and Agile Software Development Methodologies" (https://www.researchgate.net/publication/232639575_Identifying_Common_Characteristics_in_Fundamental_Integrated_and_Agile_Software_Development_Methodologies_%28PDF%29)
Some updates here would probably help.
To get an idea of the failure factors in Agile adoption, one could look at the Agile Manifesto: anything that impedes the application of the agile principles is a failure factor. As for their importance, in my experience the most critical ones all relate to the people in the development team; the primary agile-specific one is the motivation of individuals in the team to adopt agile principles. Even more critical, though, are (1) the availability of the necessary skills in the team and (2) the motivation of individuals to contribute to the given project; these two, however, are not specific to agile projects.
About 4 years ago, I had a graduate student do a thesis on a comparative study of students in a project class that you might find interesting. The technical report is:
Oyeyipo, E. O. and Mueller, C. J., "Requirements Management in an Agile-Scrum," Technical Report TXSTATE-CS-TR-1960-27, Texas State University-San Marcos, January 2011. http://ecommons.txstate.edu/cscitrep.
It is not a case study, but it does provide a comparison of the two methodologies in an academic setting. Also, I do not believe it is possible to say that a specific project fails for any one reason or technical decision. In the initial report on Agile Scrum, the author establishes the case that the development of software is a chaotic activity.
Unless it is paranormal software engineering (sounds a bit like debugging parallel programs), I am probably the wrong person to engage with this.
No, seriously: I do not think it is necessary to draw a strict line between scientific and non-scientific, but rather to decide what we want to find out and which method might fit. And thereby I think that approaching agile methods scientifically is perfectly possible.
Andrea, yes — even random data can be used for scientific research and scientists extract conclusions from them all the time. All you need is the right slant — er, interpretation, and to ask the right questions.
See Magne Jørgensen's paper, "Myths and over-simplifications in software engineering," Lecture Notes on Software Engineering, Vol. 1, No. 1, February 2013, which shows ("proves," for those who believe in such scientific evidence) exactly this.
Good science is much more than blind grinding of "any data." The conscionable selection of data is one of the highest callings of a morally responsible scientist. Given any data and the right questions, you can prove anything; I'll leave that to the politicians, though perhaps you have a calling there. I'll cast my lot with the longstanding ethics of conscionable science. Good research and publication should honour such principles.
James, I find myself in the strange position of agreeing with you, with just a small caveat. Most case studies documenting a project failure would have to come from industry, and most corporate executives and lawyers would never permit such a paper to see the light of day, so it is doubtful it would ever be published.
The next issue is what causes a project to fail. Most projects fail because of poor requirements or poor management, not the methodology; this has been reinforced by almost every study published over the last 50 years. That fact calls the use of case studies to evaluate the efficacy of development methodologies into question, even if such studies were available.
Now to the unstated question: how do you evaluate a methodology? That is difficult, because you are dependent on the most unpredictable of all things to measure: human beings. One method that seems to produce good results is to have two groups, one using Agile and the other using Waterfall or another methodology. Give both groups the same project with the same requirements, then measure the number of requirements implemented, the quality of the items implemented, and the time to implement them. To dampen the effects of human variability, repeat the experiment 5 to 10 times, as sketched below. Unfortunately, the time and cost would not fit within the constraints of tenure or funding.
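To make the analysis side of such an experiment concrete, here is a minimal sketch in Python. Every number and variable name below is invented purely for illustration; it compares the mean number of requirements implemented by each group and uses a simple permutation test, which stays honest with the small number of repetitions proposed above.

import random
import statistics

# Hypothetical data: one entry per repetition of the experiment, giving the
# number of requirements implemented by that group on the shared project.
agile_runs = [42, 39, 45, 41, 44, 40, 43]
waterfall_runs = [38, 40, 36, 39, 37, 41, 35]

observed = statistics.mean(agile_runs) - statistics.mean(waterfall_runs)

# Permutation test: if the methodology made no difference, shuffling the
# group labels should often produce a difference at least this large.
pooled = agile_runs + waterfall_runs
n = len(agile_runs)
rng = random.Random(0)  # fixed seed so the sketch is reproducible
TRIALS = 10_000
extreme = 0
for _ in range(TRIALS):
    rng.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"mean difference: {observed:+.2f} requirements implemented")
print(f"permutation p-value: {extreme / TRIALS:.3f}")

If the reported p-value is small, shuffling the group labels rarely reproduces a difference as large as the observed one, which is about as much as a handful of repetitions can support; the quality and time-to-implement measurements could be run through the same test.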
Carl: just a quick thought on "how do you evaluate a methodology?": By trying to figure out what effect different methodologies have on preventing poor requirements engineering or improper project management. Which, as you suggest, will hardly fit within typical research project frames.