It depends... Some (e.g., Weinberg, "Perfect Software") include reviewing and inspections as part of the testing process, and the efficacy varies according to the type of software (web, embedded real-time, etc.) and the type of testing.
Capers Jones has published numbers on the efficiency of several bug-removal techniques (all values are the mode of the distribution): design desk check comes in at 35%, design review at 40%, formal design inspection at 55%, code inspection at 60%, and prototyping at 65%, while unit test is only 25%, functional test 35%, integration test 45%, and field testing 50%. (Capers Jones, Programming Productivity, 1986, Table 3-25, p. 179.)
Jones doesn't say much about his experimental design, and these numbers are regularly called into question; on the other hand, they are industry-wide rather than company- or project-specific.
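For a sense of what figures like these imply in combination, here is a rough sketch (mine, not Jones's) that treats each activity's removal efficiency as independent and multiplies out the fraction of defects left after each stage, using the modal values quoted above.

```python
# Illustration only: assumes each removal activity acts independently,
# so efficiencies combine as  cumulative = 1 - product(1 - e_i).
# The figures are the modal values quoted above, not measurements of mine.

removal_efficiency = {
    "design review": 0.40,
    "code inspection": 0.60,
    "unit test": 0.25,
    "functional test": 0.35,
    "integration test": 0.45,
    "field testing": 0.50,
}

remaining = 1.0
for activity, eff in removal_efficiency.items():
    remaining *= 1 - eff  # fraction of defects still present after this stage
    print(f"after {activity:<18} {1 - remaining:.0%} of defects removed")

# Under this simplistic model the whole sequence removes roughly 97% of
# defects, with the static techniques doing much of the heavy lifting.
```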
In my opinion, this is not the right question to ask. The right questions are: what is the value proposition, how low an error rate must we achieve in a given development interval, and how do we get there? Running software to test other software is one of the least cost-effective paths to software quality that we know of.
Per, I disagree. Let's say for the sake of argument that an app was designed and implemented with no bugs. Testing would find no bugs and there would be no bugs in the field. Those results say nothing about testing efficiency.
I think your statement presumes that quality comes from testing, yet design and good coding are (measurably) larger contributors to it.
Do I misunderstand something in the message you're trying to convey?
It might be good to look at what establishes test efficiency. Cost is one possible measure, but is it a good measure of efficiency? When using cost as a measure, it is first necessary to examine its major drivers; in this case that is the cost of the staff to design, execute, and evaluate the tests. Reducing this cost has no effect on the quality of the product or on the time needed to deploy the software, so cost may not be a good measure of efficiency. What, then, makes a good measure? Typically, one that permits comparison and is built on quantities that hold constant.
By definition, adequate test coverage requires that there is at least one test for each stated requirement or backlog item. Therefore an efficient test program is one that uses the fewest possible tests to validate the requirements. In this case, the metric would be #_requirements / #_tests.
The closer this ratio is to 1, the more efficient the test design. It will also work with hours, but not with cost.
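To make the metric concrete, here is a minimal sketch; the function name and the numbers are made up for illustration, and the ratio is exactly the #_requirements / #_tests figure described above.

```python
def coverage_efficiency(num_requirements: int, num_tests: int) -> float:
    """Ratio of requirements to tests; closer to 1 suggests a leaner test design,
    assuming every requirement is covered by at least one test."""
    if num_tests == 0:
        raise ValueError("no tests: coverage is inadequate by definition")
    return num_requirements / num_tests

# Hypothetical example: 40 backlog items validated by 55 tests.
print(coverage_efficiency(40, 55))  # ~0.73 -- some requirements need several tests
```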