The proposal is that we want to develop a decision support system (DSS) based on a new decision-making paradigm. The objective is to design an assessment approach to evaluate the effectiveness of this DSS.
The attributes used to measure the effectiveness of a DSS will differ based on the domain it is used in; for example, a DSS for clinical analysis will be assessed against different attributes than a business enterprise DSS.
Some of the attributes of DSS effectiveness are:
user satisfaction;
decision performance, as a measure of DSS success;
avoiding decision errors, or reducing decision cost;
reducing decision regret (a very important attribute; see the sketch below). The regret factor arises when a decision is taken and later proves to be wrong, so the decision-making team regrets the decision that was made. A DSS is used to support the decision so that the regret factor is minimized, or in other words, so that the probability of success is increased.
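To make the regret factor concrete, here is a minimal sketch of the classical minimax-regret rule in Python; the alternatives, states, and payoff numbers are made-up assumptions for illustration, not part of the proposal:

```python
# Minimax regret, a minimal sketch. Regret is the payoff lost by not having
# chosen the best alternative for the state that actually occurred.
# All alternatives, states, and payoffs below are made-up assumptions.

payoffs = {                        # alternative -> payoff in each future state
    "adopt_tool_A": [50, 20, -20],
    "adopt_tool_B": [30, 30, 10],
    "do_nothing":   [0, 0, 0],
}

n_states = 3
best_per_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]

# Regret of an alternative in a state = best achievable payoff - its payoff.
max_regret = {
    alt: max(best_per_state[s] - p[s] for s in range(n_states))
    for alt, p in payoffs.items()
}

# Choose the alternative whose worst-case regret is smallest.
choice = min(max_regret, key=max_regret.get)
print(max_regret, "->", choice)    # -> adopt_tool_B
```

On this reading, a DSS that reduces decision regret is one that steers the team toward choices with a small worst-case regret.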
This approach seems plausible, but it is not a sound demonstration of a practical solution. The problem is to find real-world counterparts for the features of the presented solution, against which they can be assessed.
Correct, the basis for evaluation depends on the domain, but let me explain some other matters:
1- We are not looking to implement anything at this stage for empirical studies. The target is to find an evaluation basis for the decision-action process of software development.
2- Measurement of the regret factor or of decision errors is somewhat relative and subjective, and both need empirical studies. That said, the idea of the regret factor is appealing as an alternative measurement approach.
@Krishnendu and Anthony, thanks for your answers. I should mention that we face several impeding factors in conducting a concrete evaluation process, as the impact of the new paradigm seems to have a structure too complex for its effectiveness to be measured directly. The context itself, as Krishnendu mentioned, the software development process, carries many uncertainties. Several scholars have tried to model aspects of this process, but when it comes to a specific domain or implementation it remains uncertain. You can then imagine that the process of decision making is even more uncertain in such a context, and, more importantly, that developing a decision support basis for such a discipline is a complex undertaking.
When dealing with uncertainty reduction, one might look at information access/availability.
If one is dealing with risk or risk reduction, then it is about more than information: it is about proactive alternative/complementary plans (i.e. call options) and insurance/contingency (i.e. put options). If risk management is one of your performance criteria, you need real options (much like financial call and put options).
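To make the real-options analogy concrete, here is a minimal sketch (my own illustration, not part of the answer above) that prices a contingency plan as a one-step binomial put option; the function name and all numbers are assumptions:

```python
# One-step binomial valuation of a contingency (real put option): the right
# to abandon a project for a fixed salvage value. Illustrative numbers only.

def put_option_value(project_value, up, down, risk_free, salvage):
    """Value of the right to abandon the project for `salvage` after one step."""
    q = (1 + risk_free - down) / (up - down)       # risk-neutral up-probability
    payoff_up = max(salvage - project_value * up, 0.0)
    payoff_down = max(salvage - project_value * down, 0.0)
    return (q * payoff_up + (1 - q) * payoff_down) / (1 + risk_free)

# A project worth 100 that may rise 20% or fall 30%, with the option to
# abandon it for a salvage value of 90:
print(put_option_value(100.0, up=1.2, down=0.7, risk_free=0.05, salvage=90.0))
# ~5.71: what the contingency plan is worth as "insurance"
```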
@Krishnendu Mukherjee, could you give a more elaborate explanation of your approach? Your answer makes good points about analyzing the DSS framework, but it is very general. I need more details, possibly with an example, please.
Consider that the context is the software development process, and the purpose is to improve this process with a decision support system. What criteria are supposed to score this system? How can those criteria be developed so that they indicate the DSS's performance exhaustively?
There are ways to test the effectiveness of a decision support system without performing empirical testing. Consider the situation in an MCDA where every solution to a problem has a variability range. This means that every individual solution covers a certain area in the total solution space, e.g. imagine a Venn diagram. The total solution space is not completely filled by the individual solutions, and pairs of solutions can partly cover the same area. Ideally, one choice is the best solution to the problem. The open regions in the solution space may lead to no solution at all, and the overlaps may lead to two or more equally rated solutions. The following parameters will then help to evaluate the MCDA system:
1. Redundancy between solutions: analysis of the overlaps between pairs of solutions.
2. Uniqueness: the effectiveness of the system in identifying one and only one optimal solution. An a priori requirement on an MCDA could be the assumption that uniqueness is 100%.
3. Coverage of the solution space: with an increasing number of criteria, the coverage of the solutions in the total solution space will decrease exponentially. This parameter is worth analysing, but a coverage (far) less than 100% may be acceptable. (All three parameters are illustrated in the sketch below.)
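As a minimal sketch of these three parameters, assume each solution's variability range has been discretised into a set of points in the total solution space; the data and names below are illustrative assumptions, not taken from any particular MCDA tool:

```python
# Redundancy, uniqueness, and coverage over a discretised solution space.
# The space and the areas covered by each solution are made-up assumptions.
from itertools import combinations

solution_space = set(range(100))           # discretised total solution space
solutions = {                              # area covered by each solution
    "A": set(range(0, 40)),
    "B": set(range(30, 60)),
    "C": set(range(70, 90)),
}

# 1. Redundancy: overlap between each pair of solutions (Jaccard index).
for (n1, s1), (n2, s2) in combinations(solutions.items(), 2):
    print(f"redundancy {n1}-{n2}: {len(s1 & s2) / len(s1 | s2):.2f}")

# 2. Uniqueness: approximated here as "no point is claimed by two solutions",
# so exactly one solution can be rated best for any point.
unique = all(not (s1 & s2) for s1, s2 in combinations(solutions.values(), 2))
print("uniqueness 100%:", unique)

# 3. Coverage: fraction of the total space covered by at least one solution.
covered = set().union(*solutions.values())
print(f"coverage: {len(covered) / len(solution_space):.0%}")
```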
Besides redundancy between solutions, there may be redundancy between criteria as well. One could imagine that, in every pair of criteria with a correlation coefficient above e.g. +/- 0.8, one of the two criteria is removed; a sketch of this pruning follows below.
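A sketch of this criterion pruning, with made-up score data and the 0.8 threshold:

```python
# Drop one criterion from every pair whose absolute correlation exceeds 0.8.
# The score matrix (rows: alternatives, columns: criteria) is made up.
import numpy as np

scores = np.array([
    [0.9, 0.80, 0.1],
    [0.7, 0.75, 0.9],
    [0.2, 0.30, 0.4],
    [0.4, 0.35, 0.6],
])

corr = np.corrcoef(scores, rowvar=False)   # criterion-by-criterion correlations
keep = []
for j in range(scores.shape[1]):
    # Keep criterion j only if it is not highly correlated with a kept one.
    if all(abs(corr[j, k]) <= 0.8 for k in keep):
        keep.append(j)
print("criteria kept:", keep)              # here [0, 2]: criterion 1 is dropped
```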
There is a platform for DSS or MCDA in which these parameters are implemented: see www.determinator.wur.nl (switch to UK in the upper right corner of the screen). The Player is freely available; contact me for availability of the Developer. For an example, see: http://dx.doi.org/10.5772/51362 .