I know that when executing a complex boolean expression, short-circuit evaluation can leave some atomic conditions unvisited/untested/unexecuted for both the true and the false value. The compiler skips computing them because their values can no longer change the outcome. But if the compiler skips these atomic conditions in a boolean expression, what is the condition coverage at that point? Can we say that those skipped or ignored atomic conditions count towards the final condition coverage? And if they are counted, why has the test-case generator not produced test input values for their variables? If not all atomic conditions get visited, this strongly affects MC/DC, because the essential criterion for MC/DC is to invoke and check every atomic condition at least once for both the true and the false value - which is exactly what does not happen under short-circuit evaluation. Please post your thoughts with examples and references. I will appreciate this discussion. Thanks!!
Thanks for the suggestion, Omar. I did google this, but I am looking here for a specific answer from someone who has performed this experiment and has good hands-on experience.
Thanks for suggestions.
Still, I seek suggestions from others.
Hello,
even WITH short-circuit evaluation every test is executed (given there is no constant expression within the term) - just not every time. Some simple examples:
Given n independent inputs, you get 2^n input combinations. With short-circuit evaluation (and for simple "all-AND" resp. "all-OR" operations), only 2 of these 2^n combinations require testing all inputs - resulting in a significant reduction of execution time. This is the reason for short-circuit evaluation.
When testing for coverage, you can also make good use of this behavior: if you know exactly how the SC evaluation is implemented, you need a significantly lower number of test vectors (as compared to 2^n) to get the 100 % coverage done.
Does this help?
Hi U. Dreher,
Thanks for your efforts in answering my question. I understood the concept you explained, but I do not agree that "even WITH short circuit evaluation every test is executed". For simple examples it may well be that the compiler, when translating the high-level language to assembly code, removes the short-circuit operators and decomposes the boolean expression into small atomic conditions in a structure equivalent to the original expression. But if you look at my question again, you will see that I am targeting *complex* boolean expressions, where the compiler may not have sufficient instructions to evaluate the whole expression that way (in short, a less clever compiler). In that case we cannot take both the true and the false branch of every atomic condition in every boolean expression of the program.

I agree that short-circuiting is an optimization that evaluates the expression in less execution time. But that does not mean one may compromise on other essential goals. If I am required to achieve 100% condition coverage, could you say that with short-circuit evaluation you have covered all the atomic conditions? Could you say that you have tested each condition for all possible outcomes? If the compiler skips the conditions that got short-circuited, can you really claim they were covered and tested successfully? You cannot, because that is contradictory: the conditions the compiler skipped as unnecessary were never invoked, never reached, never visited, never tested. So how can you claim 100% condition coverage? It is essential for MC/DC - you cannot claim 100% MC/DC without 100% condition coverage. For easier understanding, let me explain via branch coverage.
Let us have 5 branches in an execution tree, {1,2,3,4,5}. If execution visits {1,2,5} but not {3,4}, what would you say? Would you say you have achieved 100% branch coverage? I must say no: you have achieved 3/5 * 100 = 60% branch coverage. It is well known that execution never visits uncovered branches (infeasible branches excepted). So, if a tree with uncovered or unreached branches does not have 100% branch coverage, how could you claim 100% condition coverage when there are unvisited/unreached atomic conditions? This is the core of the problem, and we need to think about it. Please correct me wherever you feel I am wrong or my answer needs polishing. I am sure it will help us have a more rigorous discussion.
Thanks,
Sangha
Hello Sangharatna Godboley,
I basically understand your concerns regarding the decomposition into "atomic" tests - testing single inputs. As far as I know, this is the standard case! You may find some compiler that tries to build the result of the combinational complex as a whole, but this is quite rare.
The "trick" is the following:
The combinational logic is "stored" in the sequence in which the tests are executed. AND in the branching following each test.
You cannot get 100 % coverage with a single test. This is why we have "test vectors" and execute the test for each element of the vector.
If decisions 1-5 are independent, you have to supply a test vector that "triggers" all branches to be executed. The "wisdom" is to design the test input (vector) such that a minimum of passes is required to achieve 100 % coverage. To do this, insight into the code is required.
A different problem arises if the sequential tests are NOT independent: if test 3 returns false whenever test 1 returns true, you may get all individual decisions covered, but not necessarily all combinations of decisions (paths). This could even be the case if you apply all 2^n input combinations. This is the case where you either have to say goodbye to 100 % decision combination coverage - or have to "interact" manually.
If you achieve 100 % decision coverage, you have also visited ALL atomic "members". What you do NOT achieve: some atoms are exercised comparably rarely, others every time. And if you have atoms with so-called "side effects" (e.g. performing some action - possibly depending on the test vector), you have a big problem: this will (most likely) not work!
Which is the reason why such "atoms" are explicitly prohibited in the MISRA rules: "operations with side effects may not be used within condition testing" (or something like that).
Summarizing:
With some experience in these topics: it is not THAT difficult. And some of my colleagues are working on this every day.
I hope the concept is clearer now.
Hi U. Dreher,
I appreciate your discussion. Sorry for writing my previous comment as one lengthy paragraph; I will take care from now on and write point-wise.
Well, this time I agree with you. You said, "100 % decision coverage can be reached easily via test vectors designed appropriately." This is true: with a single test input value you cannot execute all the possible branches, so you need a sufficient set of test input values, i.e. a test vector, to execute rigorously and try to achieve 100% decision coverage.
Regarding "100 % coverage of every possible decision combination is often impossible without interrupting the normal operation as there is often code that's only executed in case of "abnormal" hardware behavior (EMI, SEU and alike)": I guess you mean that covering every combination of decisions in a program is not always possible. Again I agree with you. It depends entirely on the constraint solver and how efficiently it manages the exploration of paths. Nowadays tools are clever enough to explore all paths except the infeasible ones. So here manual intervention, or some other automated technique, may be required.
Now, regarding your third point: I guess you understood my technical question, since you are very near to my intended point. I assume you are well acquainted with Branch Coverage, Condition Coverage, Decision Coverage, Condition/Decision Coverage, Modified Condition/Decision Coverage and Multiple Condition Coverage - all of these are white-box code coverage criteria. So, on the basis of your third point, I would like to ask:
1. Could you say that you have achieved 100% condition coverage if some short-circuit evaluation exists?
2. Do you not feel that, due to short-circuiting, MC/DC may be affected?
Now, I think I have phrased the question in a better way; you can see my research question at the very top of this page. What exactly do we need to answer?
This discussion is worthwhile, because once the problem is identified and understood, anyone who has a solution may share it with us.
Thanks,
Sangha
Hello again.
(Don't stick to the source code - look at the machine code generated! SC evaluation tends to create a decision tree with singular branches off the trunk. Each branch is a short circuit.)
Answering your question: YES.
Given there are no unfeasible conditions, 100 % decision coverage is possible.
Regarding the "visiting of the atomic conditions": as I already wrote, there is a reason for MISRA to prohibit this type of implementation. WITHIN the decision implementation, branch coverage does not yield 100 % when SC evaluation is active. Where necessary, you'll have to split such a complex condition into a number of smaller - sequential - conditions. OR call the atomic expressions, store their results in variables and apply the condition testing on the variables. Which is exactly the way MISRA recommends for such cases.
It is really not that difficult :)
Hi U. Dreher,
Thanks again. You are very close to clarifying the concept for me, with some important references such as the MISRA guidelines. But please still help me with a few questions below:
1. As you said, "100 % decision coverage is easily achieved - even WITH SC evaluation. (Don't stick to the source code - look at the machine code generated! SC evaluation tends to create some decision tree with singular branches off the trunk. Each branch is a short circuit.)". I know 100% decision coverage is possible with active SC. But, as you know, 100% decision coverage does not imply 100% condition coverage - that is the bitter truth. As I guess you know, 100% MC/DC requires both 100% decision coverage and 100% condition coverage; either one alone is not sufficient. 100% means every decision and every condition must be invoked at least once for both true and false.
----->> One more comment you made: "Don't stick to the source code - look at the machine code generated!" This may not be applicable to the MC/DC we are discussing. MC/DC must be assessed on the high-level language with the boolean operators && and ||. When the compiler converts the high-level language to assembly and then to machine code, every decision and condition, complex or simple, is reduced to a branch node with only two outcomes, true and false - which is not sufficient for MC/DC. In MC/DC you must consider the whole boolean expression.
2. I agree with your second point.
I appreciate the help and information you provided: "Regarding the "visiting of the atomic conditions": as I already wrote, there is a reason for MISRA to prohibit this type of implementation. WITHIN the decision implementation, branch coverage does not yield 100 % when SC evaluation is active. Where necessary, you'll have to split such a complex condition into a number of smaller - sequential - conditions. OR call the atomic expressions, store their results in variables and apply the condition testing on the variables. Which is exactly the way MISRA recommends for such cases." I learned this today. But, as I understand it, if the MISRA guidelines prohibit this type of implementation, what solution do they actually provide for computing MC/DC? Do the guidelines say anything about MC/DC? As far as I know, MISRA is a set of coding guidelines enforced via static analysis, which shows you the correct way of coding so that the code has no dead or infeasible code; it may also help you write error-free code.
---> One comment: does it mean that after following the MISRA guidelines you no longer need to perform testing? Are you confident enough to launch a product that way? Are users ready to use your program/application/software? My answer would be NO. What do you say?
Please share your views.
Looking forward to hearing from you.
You are expanding my knowledge - a special thanks to you, U. Dreher.
Thanks,
Sangha
Hello,
some short comments:
"Coverage" IS ALWAYS ABOUT THE INSTRUCTIONS EXECUTED!
(As you usually use the coverage feature of some debugger: it can only cover instructions - which may differ from what you entered at the source code level.)
If source code coverage is displayed, this is for your convenience, but it is likewise a "simplification" of coverage.
Insofar: MC/DC has little to do with the source code - it's all about machine code (aka "instructions")!
The MISRA rules are targeting code quality, stability, portability and predictability regarding the machine code generated. A side effect is that code written according to MISRA is - to some extent - easier to test.
The code I'm usually writing often violates MISRA rules to some extent - being "beyond MISRA" on the other hand.
Regarding your question about the need to test: I personally have code that was not tested further after executing correctly for the first time. Such things are possible but require a certain kind of software architecture. So: YES - I have quite some code running that was not formally tested after being put into operation (in other words: only "debugged"). And yes: one of my systems is considered part of a functional safety concept even though the code never underwent formal testing :) This may not be the normal way to do things, but it is possible.
Regards
Hi,
Your first point makes things a bit clearer to me, but I do not completely agree. You need to compute all the metrics in order to compare them. Sometimes the concepts we follow in theory fail in practice. It is really very difficult to handle and compute MC/DC, and it is not easy to develop an automated analyzer for it - but we tried, by implementing a prototype version, and during implementation and experimentation we observed and learned a lot. Also, it is still not clear from your first point whether 100% decision coverage gives 100% condition coverage. In my view it is not always true: due to short-circuiting, you may degrade condition coverage. The disadvantages of short-circuiting also illustrate the problem; please have a look at the brief info below:
Taken from {https://www.javabrahman.com/programming-principles/short-circuiting-or-short-circuits-in-boolean-evaluations-in-programming-and-java/}
Disadvantages of Short-Circuiting
Although Short-Circuits have the obvious advantage of efficient processing because it skips unnecessary condition evaluations, there are some disadvantages of this approach as well –
Disadvantage 1: Any logic which was expected to be executed as part of the conditions which are bypassed will not be executed. I.e. say we have Condition-1 OR Condition-2, where Condition-1 evaluates to TRUE. So evaluation is short-circuited and Condition-2 is not evaluated. But, in this case, Condition-2 was a method call which was supposed to complete a step in processing. By not evaluating Condition-2, we are not invoking the said method and are missing a part of the execution. Note - Java has two boolean operators, & (AND) and | (OR). These boolean operators are not short-circuited. Using such operators, Java programmers can avoid this disadvantage.
Disadvantage 2: Code execution becomes less efficient with short-circuited execution paths because in some compilers the new checks for short-circuits are extra execution cycles in themselves. Also, branch prediction becomes inefficient in some modern processors when short-circuiting is used.
To be honest, I must not comment on the MISRA points because I don't know much about them. I accept that testing may be skipped for a small system or piece of software, as in your experience. But this is not good for standard development processes. If you want to launch a product, you cannot launch it without certification. Please go through the RTCA standards DO-178B/DO-178C: they guide you to perform testing according to the criticality of the software. MC/DC is mandatory to achieve Level A certification for avionics software. Whatever coding guidelines you follow, you are not allowed to launch a product without these important reports and certification. I am sorry, but avoiding testing in such a general way is something I completely disagree with, so I disagree with the second point you made.
I appreciate your effort in answering my questions.
Please carry on posting your views. We are all learning here.
Thanks
Sangha
Hello again,
It sounds as if you were trying some MC/DC coverage analysis based on simulation, which might prove unfeasible. (My colleagues gathered some experience with such attempts: until now, comparably disappointing.) Especially if tried on the source code level, omitting the influence of "code generation" by the compiler. Be that as it may ...
I'm convinced that 100 % decision coverage can give 100 % condition coverage. I'm not sure whether it is possible to create "constructs" where this does not apply. Looking up MC/DC on en.wikipedia to "calibrate" my understanding of your term "condition": ok - in the MC/DC context it is rather the other way round: 100 % condition coverage should result in 100 % decision coverage! SC evaluation, btw, reduces the "condition landscape": seeing a logical expression as the product of the outcomes of the individual conditions, this landscape is significantly reduced due to SC evaluation. And it should be clear that SC evaluation always produces the correct outcome of the decision (aka the complex logical expression).
The reference given has some flaws regarding its "Disadvantages" - life can be so simple:
If not SCing, you need instructions to "concatenate" the results of the individual condition tests - requiring code as well, often more than that single conditional branch.
Branch prediction: see above.
I suggest getting a copy of the MISRA rules. Though they do not give the basic reasoning, they at least give the reasoning for every single rule.
I admit: I'm not working for aviation. But the requirements for automotive are only slightly lower. Nonetheless it is possible to write code that's "testing itself" - sporting a "simple" architecture where all paths are taken during normal operation. That's one of the tricks where formal testing does not add to quality.
I've had customers that still try to introduce software quality into their system by testing. Which provoked my proverb: "Quality is not created by testing. It is created by design!"
Even under DO178 etc. (or for railway control systems), it is best to create software quality by design - and then obtain formal assurance via formal testing.
This for today ...
Hi once again,
I am very thankful that you have rigorously discussed your experience here.
So, what we can conclude is: SCing focuses on code optimization, aiming to save execution time. But this optimization is not suitable for MC/DC. Optimizing the code (SCing) and reducing test cases (MC/DC) have different objectives, yet MC/DC depends on the short-circuit/boolean operators. This optimization may not record or save the input values of variables for the true/false branches that are skipped due to SCing - values which could have helped to achieve higher MC/DC. Let us agree from our discussion that the compiler evaluates atomic conditions but skips those on the right-hand operand of SC operators, and therefore does not store input values for their true/false branches. Due to this, MC/DC gets affected.
I agree with U. Dreher's comments and discussion.
Now, I would like to move forward. Can we discuss how to deal with this situation? If I want to enhance MC/DC, what might the solutions be? In my approach I tried to simplify the boolean expression: we avoid using short-circuit operators and write equivalent code instead, and hence achieve higher MC/DC. So please discuss this question and solution.
Sorry for the delay, but I was otherwise busy.
While SCing is targeting a reduction of execution time, it is not ruining the attempt to reduce test cases. It is just that you have to know how the compiler works. Given n binary conditions, SCing could end up with as little as n+1 branches, which is a significant reduction. Now it depends on the nature of each condition how many test cases you need to cover this condition "perfectly". In the case of a simple bit test, each branch would require a single test. A set of bit test patterns could look like 0000, 0001, 0011, 0111, 1111 (e.g. for a 4-condition AND, with the first condition in the least significant bit) and would cover all possible condition outcomes. Whether you want to speak of "reducing" test cases (as this set would perfectly match what's implemented) depends on whether you are referring to the original source code or to the implementation: with respect to the SC'ed implementation, further reduction is not possible, as it is already reduced to the absolute minimum.
I agree: SCing does not record intermediate results, other than that you can "see" (track) which branch has been taken. But recording intermediate results is overhead without any use, other than possibly analyzing decisions later.
Re optimizing MC/DC:
If the implementation is not SCing, you could implement exactly the SCing scheme for MC/DC. As indicated above, this results in the absolute minimum of test patterns (n + 1). To be kept in mind: it is somewhat asymmetric - testing only the "relevant" condition branches (those that make a difference). Thinking about that, "mirroring" the bit patterns (as given above) would invert the outcomes, testing the paths not taken by the first pattern set, ending up with
2n test cases (as the 0000 and the 1111 patterns coincide with their mirror images).
Does this help?