In software testing, fault injection is a technique for improving the coverage of a test by introducing faults to test code paths, in particular error handling code paths, that might otherwise rarely be followed.
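
For context, here is a minimal sketch of what fault injection can look like in practice. It assumes a hypothetical read_config function and an INJECT_IO_FAULT switch used to force its error path; the names are illustrative only, not from any particular tool or framework:

# Minimal fault-injection sketch (illustrative names, Python).
import os

def read_config(path):
    # Injected fault: pretend the file is unreadable when the
    # INJECT_IO_FAULT environment variable is set, so the error
    # handling branch in the caller gets exercised by the test.
    if os.environ.get("INJECT_IO_FAULT") == "1":
        raise IOError("injected fault: simulated read failure")
    with open(path) as f:
        return f.read()

def load_or_default(path, default=""):
    # Error-handling path that would rarely run without the injected fault.
    try:
        return read_config(path)
    except IOError:
        return default

# Test: with the fault injected, the fallback path is covered.
os.environ["INJECT_IO_FAULT"] = "1"
assert load_or_default("settings.ini", default="defaults") == "defaults"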

Let's take this scenario:

I inject 25 faults into the code and apply two techniques, A and B. Suppose A discovers 10 of the faults and B discovers 15. As a result, technique B is better than A, or the coverage of B is better than that of A.

So is there a mathematical term for this kind of coverage, or how can it be defined in a formal way?

Can I calculate it like this:

Coverage of A = (10/25) * 100 = 40%

Coverage of B = (15/25) * 100 = 60%

As a result, the coverage of B exceeds that of A by 20 percentage points.
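
As a sanity check on the arithmetic, a short sketch that computes this detection ratio for each technique (variable names are placeholders, not an established metric implementation):

# Fault-detection ratio: detected faults / injected faults, as a percentage.
def detection_ratio(detected, injected):
    return detected / injected * 100

injected = 25
coverage_a = detection_ratio(10, injected)   # 40.0
coverage_b = detection_ratio(15, injected)   # 60.0
print(coverage_a, coverage_b, coverage_b - coverage_a)  # 40.0 60.0 20.0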
