Dear Majid Baseer, Christian Ghiaus, Roxane Viala, Ninon Gauthier and Souleymane Daniel,

I read your paper

pELECTRE-Tri: Probabilistic ELECTRE-Tri Method—Application for the Energy Renovation of Buildings

My comments

1- I would like to highlight some important aspects that you mention and that are rarely found in papers using MCDM.

On page 2: “There are trade-offs between retrofitting project benefit categories due to the diverse interests and goals of stakeholders. For example, reducing life-cycle carbon emissions may be the priority of the local government but not the interest of the other stakeholders”

Very good assertion, which is in reality the essence of MCDM, since it is a procedure for making comparisons.

2-“As a result, when discussing building energy retrofits, it is necessary to consider the trade-offs between these objectives in terms of embodied emissions, operational emissions, occupant comfort, and investment”

This is fundamental

“The multi-criteria decision making (MCDM) approach is gaining popularity in energy retrofit decision processes because of its adaptability and potential to reach a trade-off among various conflicting criteria and to foster communication among stakeholder”

Very important. Stakeholders are fundamental in decision making. Unfortunately, many people consider MCDM to be the DM's field alone, with the DM determining criteria at will, when it is a job for the stakeholders, even when their opinions contradict each other.

“An important challenge is to deal with the probabilistic nature of much data considered in the decision process”

Yes, some of these data are normally subjective, but in general most information is highly reliable, such as investments, costs, manpower, and contamination.

3-“When different MCDM methods are applied to the same problem, they produce different results because they deal differently with performance measures and criteria weights”

I differ with you here. I don’t think that methods deal differently with performance measures. In my opinion, results are normally different because in 99% of the methods the DM subjectively modifies the performance values by applying invented weights, as well as home-made assumptions that have no mathematical support. There are many different examples of this in MCDM methods.

4- “Decisions made through MCDM are justifiable and clear because they are documented and traceable due to them being one of the widely used techniques to support sustainability assessment in the context of energy systems”

Unfortunately, there are only a few methods, like PROMETHEE, ELECTRE and TOPSIS, that apply reasoning. They may sometimes have erroneous appreciations of thresholds, but they use analysis, experience, reasoning and research. They cannot be compared with methods where the DM decides which is the most important criterion and applies weights from intuition.

5- On page 3 “The stakeholders are likely to turn to intuition in the event of a lack of well-defined practices in the use of decision-making tools”

In my opinion, and according to my experience, the stakeholders are normally people who know their field very well and do not give a solution based on intuition. However, also in my experience, I know that it can happen, especially in some government sectors where the decision is taken based on vested interests.

6- On page 6: “For instance, economic uncertainties may arise due to the fluctuation of raw material prices while environmental uncertainties may be caused by the imprecise measuring of environmental impacts”.

I agree regarding economic uncertainties but not environmental ones. The damage that, say, CO2 and CH4 may cause is very well known, as are the thresholds or limits for emissions. Where is the uncertainty here?

However, it is a must to introduce uncertainty, perhaps in the form of risk, by establishing minimum and maximum limits for each criterion, or by introducing correlation in some cases. Not everything is solved using fuzzy logic, because you need at least two values for it, and if these values are themselves uncertain, what is the gain? You can use fuzzy logic when these extremes are sufficiently known, for instance the minimum and maximum temperatures in a zone, known from statistics.
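The min/max-limits idea can be sketched numerically: instead of fuzzy membership functions, sample each criterion uniformly between its limits and propagate through the aggregation. The limits, criteria names and equal-style weights below are purely illustrative assumptions, not values from the paper, and the weighted sum stands in for whatever aggregation the chosen method uses.

```python
import random

# Hypothetical min/max limits for three criteria (illustrative, not from the paper).
limits = {"cost": (100.0, 140.0), "emissions": (20.0, 30.0), "comfort": (0.6, 0.9)}
# Illustrative weights; negative sign means the criterion is to be minimized.
weights = {"cost": -0.4, "emissions": -0.3, "comfort": 0.3}

def sample_score(rng):
    """Draw one performance vector within the limits and aggregate it."""
    return sum(weights[c] * rng.uniform(lo, hi) for c, (lo, hi) in limits.items())

rng = random.Random(42)
scores = [sample_score(rng) for _ in range(10_000)]
mean = sum(scores) / len(scores)
print(f"mean aggregated score: {mean:.2f}")
```

The spread of `scores` then gives a risk picture for each alternative without requiring the two fuzzy extremes to be known precisely.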

7- “In practice, decision-making problems are often complex and uncertain, making it impossible to comprehensively understand and consider all aspects of the problem thoroughly”

I disagree. The DM must have a thorough knowledge of the whole process (we are not talking about quantum physics!!!). What he cannot do is consider all the thousands of relationships between the different factors. It is supposed that this is the job of the MCDM method. Unfortunately, no method except Linear Programming considers those multiple relations and dependencies. They fall into the fallacy of dividing the project into sectors, solving each one, and then adding up.

8- On page 7, Figure 1: on what grounds do you assume that the data follow a Gaussian distribution?
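This assumption can at least be checked before it is made. A minimal sketch, using only the standard library: compute the sample skewness, which should be close to 0 for Gaussian data. The data below are artificial (drawn from a normal distribution for illustration); in practice `xs` would be the measured criterion values whose distribution is in question.

```python
import random
import statistics

def skewness(xs):
    """Sample skewness; near 0 if the data are Gaussian."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Illustrative data only: generated from a normal distribution here,
# but in a real check these would be the observed performance values.
rng = random.Random(0)
xs = [rng.gauss(10.0, 2.0) for _ in range(5_000)]
print(f"skewness: {skewness(xs):.3f}")
```

A strongly non-zero skewness would already argue against the Gaussian assumption; a formal test (e.g., Shapiro-Wilk) would be the next step.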

9- Page 8, “pessimistic ranking” and the “optimistic ranking”

This is the first time I have heard of this. To qualify a solution as optimistic or pessimistic, you need to have a reference. Which is the reference here? Which is the best ranking you can take as a yardstick?

10- Page 12: What are thermal bridges? You should define the term; the reader does not have the obligation to know it.

11- Page 14. I very much agree with your Table 10, especially when you say that each criterion will be evaluated using qualitative, quantitative, or binary methods, depending on the evaluation unit.

This is the first time, in hundreds of papers, that I have seen an author suggest using binary values (I have used them in many cases). The problem is that not many MCDM methods allow their use, even though in many cases they are the only way to represent a scenario. However, they are not methods, but representations of performance values.

If the DM is an architect or an engineer, yes, he may decide, but that may not be the case, so the DM must consult and investigate this.

12- Page 15 “This involves determining whether criterion “A” is more important to the decision-maker’s final choice than criterion “B” and quantifying the ratio of prevalence between them”

Why? This introduces a lot of subjectivity. You should use objective weights, not subjective ones. However, the DM can also input his own appreciations regarding weights. Remember that the first are absolute, while the second are relative, since they depend on the DM; another DM may think differently.
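One well-known way to obtain objective weights is the entropy method, which derives each criterion's weight from the dispersion of its performance values rather than from the DM's opinion. A minimal sketch follows; the performance matrix is hypothetical (three renovation alternatives by three criteria), invented purely for illustration.

```python
import math

def entropy_weights(matrix):
    """Objective criteria weights from the entropy method.

    `matrix`: one row per alternative, one column per criterion
    (all performance values assumed positive).
    """
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - entropy)  # higher divergence = more informative criterion
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical matrix: 3 alternatives x 3 criteria (illustrative values only).
scores = [[120.0, 25.0, 0.7],
          [100.0, 30.0, 0.6],
          [140.0, 20.0, 0.9]]
print(entropy_weights(scores))
```

Criteria on which the alternatives barely differ receive low weight automatically, with no subjective input from the DM.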

13- Page 21 “It is necessary to ensure that the results obtained are consistent in order to validate the new procedure”

There is no such thing as ‘validation’, because for that you would need to know the best ranking, which is precisely what you are looking for. Consistency? Of what?

I hope these comments may help you and your colleagues.

Nolberto Munier
