It's hard to choose a quality metric for each phase of the software engineering life cycle. Is there a procedure for choosing among them? Also, are there quality metrics specific to mobile phone applications?
It is more or less the same as testing the software product to ensure that it is error-free; that is, we ensure the highest possible degree of efficiency. We can use all of the metrics that apply to software testing.
You could use "SonarQube" : A tool that is dedicated to gather many of the important software metrics for Continuous Inspection : http://www.sonarqube.org/. I highly recommend the books : "SonarQube in Action" by G. Ann Campbell and Patroklos P. Papapetrou and "Software in Zahlen, Die Vermessung von Applikationen" by Harry M. Sneed, Richard Seidl and Manfred Baumgartner (in German language).
This article, available via this RG link, might be useful: https://www.researchgate.net/publication/220280118_Choosing_software_metrics_for_defect_prediction_an_investigation_on_feature_selection_techniques
You can build a quality assurance plan, including metrics usage, by applying the GQM (Goal, Question, Metric) methodology. You have to define your quality assurance goals, then define questions that help you address those goals, and finally define metrics to collect data that lets you assess whether or not you are answering your questions and meeting your goals.
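To make the idea concrete, here is a small Python sketch of a GQM plan written down as a plain data structure; the goal, questions, and metric names are invented for this example, not prescribed by GQM itself:

# One GQM goal, refined into questions, each answered by one or more metrics.
gqm_plan = {
    "goal": "Improve the reliability of the mobile application before release",
    "questions": {
        "Are defects being found and fixed fast enough?": [
            "open_defect_count",
            "mean_time_to_fix_hours",
        ],
        "Is the code base stable between releases?": [
            "crash_rate_per_1000_sessions",
            "regression_test_pass_rate",
        ],
    },
}

# Walking the plan top-down shows which data must be collected and why.
for question, metrics in gqm_plan["questions"].items():
    print(question)
    for metric in metrics:
        print("  collect:", metric)

The point is that every metric is traceable back to a question and a goal, so nothing is collected just because a tool happens to report it.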
Vic Basili originated the Goal-Question-Metric concept, and among his publications you're sure to find a helpful tutorial introduction; see drum.lib.umd.edu/bitstream/1903/7538/1/Goal_Question_Metric.pdf.
G-Q-M is the gold standard for selecting metrics on almost any topic. It is taught in Software Engineering and is recommended for establishing metrics in academic assessment. It is not fast or automated, but it has worked in every case where I have known it to be applied. A search on Google Scholar for "Goal Question Metric" will yield 400,000+ papers, including Basili's 1992 paper on the topic.
Almost any methodological approach will be fraught with error and can likely give you bad data. Most good metrics should be custom-designed to solve some problem. (One exception to this is pure research, where you gather many "parameters of interest" and do regression analysis, but it doesn't sound like that's what you're talking about.) If a general-purpose method actually worked, you could use one to assess the quality of the answers here and obtain an objective answer. 'Nuff said.
Name me a methodological metric that can't be gamed. There are precious few metrics that can't be gamed, and those tend to correlate highly with things that matter: post-partum defects, revenues, staff turnover, and the like. Function points, velocity, and code mass are meaningless if you don't get these, and the latter can be, and unfortunately most often are, gamed.
Most methodologies miss the really important stuff, such as staff's integrated view of process health, product success, management soundness, staff competence, and corporate image. (Measuring any one of these in the absence of the others gives you knowledge about independent variables which, when modulated, will have no effect in isolation.) In Scrum we sometimes use a Happiness Metric for this (http://www.scruminc.com/happiness-metric-wave-of-future/).
In the end, measuring something draws attention to it and causes improvement in that area. The good news is that you can use metrics to improve some individual factor with that approach. The bad news is that too many people choose overly narrow metrics, so this practice leads to local optimisation and overall system degradation. A good metrics program isn't a result of a method or tool, but rather of good design by someone with psychometric skills and other good systems analysis skills. These people are out there and I have often worked with them in my own major undertakings.
"Figures don't lie, but lairs figure" unknown, but where the first words my instructor spoke in my first stats class.
In my opinion, if we state that you must use measure x or measure y, we defeat the purpose of research. By having the researcher declare a set of goals and questions, we gain insight into what the researcher finds important. That gives the rest of us the opportunity to conduct research to refute those claims.
Although my research into this topic is not exhaustive, there are some projects where Agile is a good choice and others where waterfall is best. There is a third category where a hybrid of the two is best. The more measures we publish and discuss, the more we learn about process.
BTW, my research says that not all developers like Agile; some actually prefer waterfall.
Carl, again, precision of terms is an issue. I love to provoke people by presenting them with a method that starts with one to six months of up-front analysis, followed by a review meeting and review artefacts. That is followed by high-level design and a review. That in turn is followed by implementation and test and, guess what, a review.
I ask them to tell me what I have described. Most say: "Waterfall." I respond: "It's Scrum." Get over it. Jeff Sutherland will back up the statement about the potential upper bound on the analysis interval. All the rest comes straight from the framework.
There are differences between waterfall (either as practiced or as described in the Royce paper that inspired most of its practices) and Scrum, but they are related to stovepipes and to project versus product semantics — things that are more subtle than most management cultures, and researching academics, appreciate.
Anyhow, Carl, I loved your post and add a hearty second.
— Jim Coplien, Certified Scrum Trainer, Scrum Alliance