For physical devices, the common metric of quality is the probability of failure. This metric is statistically estimated from a smaller sample and then extrapolated.
There are some caveats, though. First of all, the failure rate may depend on the period of use, as in the Weibull distribution. Also, for consumer devices the probability of failure may increase with the number of users, following the Pareto and other fat-tailed distributions, because users are not independent entities.
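To make the first caveat concrete, here is a small Python sketch (standard library only) of the Weibull hazard rate; the shape and scale parameters are illustrative assumptions, not fitted values.

# Weibull hazard (instantaneous failure) rate: h(t) = (k/lam) * (t/lam)**(k-1)
#   k < 1: failure rate falls with time (infant mortality)
#   k = 1: constant failure rate (exponential special case)
#   k > 1: failure rate rises with time (wear-out)
def weibull_hazard(t: float, k: float, lam: float) -> float:
    """Instantaneous failure rate of a Weibull(k, lam) lifetime at time t."""
    return (k / lam) * (t / lam) ** (k - 1)

for k in (0.5, 1.0, 2.0):  # illustrative shape parameters
    rates = [round(weibull_hazard(t, k, lam=1000.0), 6) for t in (100, 500, 2000)]
    print(f"k={k}: hazard at t=100, 500, 2000 hours -> {rates}")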
For software, the typical metric of quality is the number of defects per line of code. This metric can be statistically estimated from the history of product development. So yes, if we assume that the number of defects per line of manually written code is constant, smaller code is better in terms of quality. It is therefore generally better to choose a more expressive and productive language for software development, even if it means slightly worse performance and memory efficiency.
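The arithmetic behind that argument, as a minimal sketch with hypothetical counts:

# Defect density is usually reported per KLOC (thousand lines of code).
# The counts below are hypothetical, for illustration only.
defects_found = 42
lines_of_code = 18_000

density = defects_found / (lines_of_code / 1000)  # defects per KLOC
print(f"{density:.2f} defects/KLOC")

# Under the constant-density assumption above, a more expressive language
# that needs 3x fewer lines implies roughly 3x fewer expected defects:
expressive_loc = lines_of_code / 3
print(f"expected defects at the same density: {density * expressive_loc / 1000:.0f}")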
Of course, the number of defects per line of code depends on the language, the complexity of the algorithms, the qualifications of the programmers, and the development methodology (e.g., agile, TDD, OOA/OOD).
There is another caveat here. During maintenance, code quality constantly degrades due to requirements changing over time, a lack of resources leading to a "minimal changes" policy, and the rotation of developers. This effect is known as "code decay". As a result, the number of defects never gets close to 0 for most projects and even begins to grow at some point. That is usually a clear signal that the particular piece of code has rotted to death and needs to be rewritten or abandoned.
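One way to make that "begins to grow" signal concrete (the per-release figures below are hypothetical):

# Hypothetical defects-per-KLOC figures across successive releases.
history = [4.1, 3.2, 2.5, 2.1, 2.3, 2.9, 3.8]

# Flag the release where defect density stops falling and starts climbing:
# a crude proxy for the onset of code decay.
for release, (prev, curr) in enumerate(zip(history, history[1:]), start=2):
    if curr > prev:
        print(f"defect density turned upward at release {release}: "
              f"{prev} -> {curr} defects/KLOC")
        break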
A range of quality attribute measures has been introduced, from measures of code or design quality (e.g. complexity, completeness, cohesion, ...) to measures typically used for aspects of architectural quality (scalability, availability, reliability, security, ...). As an example of the formalization of some design measures, have a look at:
@article{Chidamber:1994:MSO:630808.631131,
  author  = {Chidamber, S. R. and Kemerer, C. F.},
  title   = {A Metrics Suite for Object Oriented Design},
  journal = {IEEE Transactions on Software Engineering},
  volume  = {20},
  number  = {6},
  pages   = {476--493},
  year    = {1994}
}
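To give a taste of what the suite formalizes, below is a rough Python sketch of one of its six metrics, WMC (Weighted Methods per Class), using the common simplification that every method has weight 1 (Chidamber and Kemerer deliberately leave the complexity weighting open):

import ast

def wmc_per_class(source: str) -> dict:
    """Weighted Methods per Class with unit weights, i.e. method count per class."""
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
                       for child in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

sample = """
class Order:
    def add_item(self, item): ...
    def total(self): ...
    def cancel(self): ...
"""
print(wmc_per_class(sample))  # {'Order': 3}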
Quality is hard. Quantity is easy. To measure quality, you first need to understand quantity. Quantifying quality is even harder. What metrics can we use other than numbers?
In my view, quality is a relative term. We always compare the quality of an object with that of another. Every product or service is of the best quality until a better product or service comes onto the market.
For example: the software we use on our computers, mobiles or other systems seems best to us until an update or new version comes onto the market. After using the updated software, in most cases we find that its quality is better than the previous version's.
Quality changes its meaning from customer to customer; the thing that is of high quality for you might be of poor quality for someone else. So I agree with the above discussion that it is a relative term. For some time this term was neglected, but nowadays, as the market is more customer oriented, the concept has gained momentum exponentially.

In the software industry, tools like QTP, Conformiq and others are striving hard to maintain and enhance the overall quality of software products. The notion of Model Based Testing (MBT) has also helped the software industry make software quality more effective. Practices such as checklists, effective formal code reviews and automatic testing tools also help to enhance this notion of QUALITY.
The basic units used to measure the quality of software, or of any product, are:
1. Correctness - development in accordance with the specification.
2. Maintainability - the effort spent on fixing and upgrading a system once it is delivered to the client.
3. Reliability - the ability to recover from failure, or to keep working even in case of failure (see the sketch after this list).
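These can be quantified. For the reliability item, for instance, a common operational form is steady-state availability computed from mean time between failures and mean time to repair (the figures below are hypothetical):

# Steady-state availability: the long-run fraction of time the system works.
# MTBF and MTTR values are hypothetical, for illustration only.
mtbf_hours = 720.0   # mean time between failures
mttr_hours = 2.0     # mean time to repair (the recovery effort)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"availability: {availability:.4%}")  # ~99.7230%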
Mathematics is the mother of engineering, so every concept in engineering can be modelled with the help of mathematical tools, including empirically based techniques.
Quality is treated in Requirements Engineering as a set of properties that the software system should possess. There is a distinction between functional requirements and quality requirements, often referred to as non-functional requirements (NFRs).
Whilst there is almost unanimous agreement that NFRs play a significant role in systems, there is, surprisingly, no strong definition of what constitutes an NFR, or of how to capture it, represent it and include it in requirements specifications. Glinz (2007) argues that there are serious problems in the definition, classification and representation of NFRs. Far from views converging over the years, a robust classification scheme still eludes researchers in the area of NFRs.
For a survey and a discussion on NFRs you may be interested in the work of (Mairiza, Zowghi and Nurmuliani 2010; Mairiza and Zowghi 2010a; Mairiza and Zowghi 2010b; Al-Balushi, Sampaio and Loucopoulos 2013; Loucopoulos, Sun, Zhao and Heidari 2013).
Al-Balushi, T. H., P. R. F. Sampaio and P. Loucopoulos (2013) Eliciting and Prioritizing Quality Requirements Supported by Ontologies: A Case Study Using the ElicitO Framework and Tool, Expert Systems 30(2): 129-151.
Glinz, M. (2007) On Non-Functional Requirements, 15th IEEE International Requirements Engineering Conference, IEEE: 21-26.
Loucopoulos, P., J. Sun, L. Zhao and F. Heidari (2013) A Systematic Classification and Analysis of NFRs. 19th Americas Conference on Information Systems, Chicago, USA.
Mairiza, D. and D. Zowghi (2010a) Constructing a Catalogue of Conflicts among Non-functional Requirements.
Mairiza, D. and D. Zowghi (2010b) An Ontological Framework to Manage the Relative Conflicts between Security and Usability Requirements.
Mairiza, D., D. Zowghi and N. Nurmuliani (2010) An Investigation into the Notion of Non-Functional Requirements.
Quality comprises a number of factors, and it really doesn't matter whether you are talking about physical things or logical things: quality is in the eye of the beholder. That said, an operational definition of quality is ALWAYS necessary for product design, test, and release, whether the product is a sports car, a software product, or a spaceship. I have posted several papers on this site that address the operational definition of quality for a wide variety of software products. My personal favourite is mean time to failure (MTTF), for which several models are available to predict the reliability of software, and there is an industry standard that can be used to implement the definition: the Institute of Electrical and Electronics Engineers (IEEE) Recommended Practice on Software Reliability, Std 1633-2008. This, however, is only ONE way to measure quality...
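As a flavour of such models, here is a sketch of the Goel-Okumoto NHPP reliability-growth model, one of the classic models of the kind the standard covers; the parameter values a and b below are hypothetical and are assumed to have been fitted already from the observed failure history:

import math

# Goel-Okumoto NHPP: expected cumulative failures m(t) = a * (1 - exp(-b*t)),
# where a = total expected failures and b = per-failure detection rate.
# Parameter values are hypothetical; in practice they are fitted
# (e.g., by maximum likelihood) to the project's failure data.
a, b = 120.0, 0.004  # failures, 1/hour

def expected_failures(t: float) -> float:
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t: float) -> float:
    """lambda(t) = m'(t); its reciprocal approximates the current MTTF."""
    return a * b * math.exp(-b * t)

t = 500.0  # hours of testing so far
print(f"failures expected by t={t:.0f} h: {expected_failures(t):.1f}")
print(f"remaining expected failures:     {a - expected_failures(t):.1f}")
print(f"approx. current MTTF:            {1.0 / failure_intensity(t):.1f} h")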
Many of you, as well as ISO 9000, have empirically identified a wide range of attributes of quality. However, quality is not simply a collection of those attributes, just as a car is not merely the set of its components.
The essence of quality is the function-hours (Fhr) that an entity or a system may provide. That is, quality is proportional both to the functions (F) that a system can provide and to the lasting period (T) over which the system performs those functions, whether it is a physical (hardware) or a virtual (software) system. A formal description of quality as an integral of F over [0, T] is given below [Wang (2007), Software Engineering Foundations].
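Since the original attachment is not reproduced here, a minimal LaTeX rendering of that definition, taking F(t) to be the function delivered at time t:

% Quality as accumulated function-hours (Fhr), per the description above:
Q \;=\; \int_{0}^{T} F(t)\,\mathrm{d}t \qquad [\mathrm{Fhr}]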
The mathematical model of quality indicates that, for any given software or hardware system, no quality is provided if either the functions performed or their lasting period is zero. It also fits the attributes identified in empirical quality engineering, where each attribute characterizes a certain facet of the integrated quality of a system.