To capture the inherent variability of data (variables) in process or production systems, you can use Monte Carlo (MC) analysis. This tool simulates a probable range of outcomes given a set of variable conditions and can be applied within a risk assessment or Life Cycle Inventory framework to capture parameter variability. MC is thus a technique for quantifying variability and uncertainty using probability distributions. The effect of variations in the production data, e.g. on the carbon footprint, can be analysed with an MC analysis based on 5,000 or 10,000 iterations, in which the probability distribution of each parameter is estimated. Nevertheless, the MC analysis can also be run jointly with tools such as @Risk from Palisade Corporation and SimaPro 7.3. On the other hand, if you have little or no database information on the process or production system, a triangular distribution can be assumed for all parameters in the analysis.
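As a minimal sketch of the idea above: propagate triangular distributions through a simple footprint model over 10,000 iterations. The parameter values and emission factors below are purely illustrative placeholders, not real inventory data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000  # 5,000-10,000 iterations, as suggested above

# Hypothetical inventory parameters as (min, mode, max) of a triangular
# distribution; values are illustrative, not from any real dataset.
params = {
    "electricity_kwh": (0.8, 1.0, 1.3),
    "diesel_l":        (0.04, 0.05, 0.07),
}
# Illustrative emission factors, kg CO2-eq per unit
ef = {"electricity_kwh": 0.45, "diesel_l": 2.68}

# Draw all iterations at once and sum the contributions per iteration
samples = {k: rng.triangular(*v, size=n_iter) for k, v in params.items()}
footprint = sum(samples[k] * ef[k] for k in params)

print(f"mean = {footprint.mean():.3f} kg CO2-eq")
print(f"95% interval = [{np.quantile(footprint, 0.025):.3f}, "
      f"{np.quantile(footprint, 0.975):.3f}]")
```

The output is a full probability distribution of the carbon footprint rather than a single point value, from which you can read off the mean and any confidence interval.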
I think that Clandio's comment is completely correct per se, but I am not sure if it answered Kadambari's question.
"The Monte Carlo method of data validation requires large data sets (random numbers) for starter."
Not really. To do an uncertainty and/or sensitivity analysis (I guess that's what you want to use MC for) what you need is to have a probability distribution assigned to each of your original datapoints. That's not a large dataset: it's just three times the size of the original dataset if you use a triangular distribution.
The random numbers don't need to be stored. Most programming languages nowadays have a reasonable random number generator and so all you need to store are the 1000+ copies of the simulation results.
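To illustrate the two points above: a triangular distribution stores only three numbers per datapoint, and the random draws are generated on the fly, so only the simulation results need to be kept. The values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single datapoint plus its triangular spread: only three numbers stored.
low, mode, high = 9.0, 10.0, 12.0

# Draws are generated on the fly by the RNG; only the simulation
# results themselves (here 1,000 of them) need to be kept.
results = rng.triangular(low, mode, high, size=1000)
print(f"mean of draws: {results.mean():.2f}")  # near (low + mode + high) / 3
```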
There is a free open source add-in for Excel that does Monte Carlo simulation called YASAIw available at this link (includes triangular distribution and many other distributions also):
If you have access to Ecoinvent, most datasets are provided with uncertainty data. The uncertainty is typically extrapolated using the pedigree matrix approach.
If you also have access to SimaPro Analyst (or the PhD version), you can run extensive Monte Carlo simulations, including different options for data analysis.
I'm very interested in the uncertainty area. However, sometimes it seems to me that uncertainty is based less on quantitative assessment and more on qualitative approaches.
I'm searching for a publication that introduces a scientific framework for calculating uncertainty distributions starting from real observed data. Can anyone provide some suggestions?
My colleague has provided a structured procedure for systematic sensitivity/uncertainty analysis in waste LCA. We now follow it in most of our LCA studies. I think it is applicable in other types of LCA as well.
As Joao said, you need to create a probability distribution for your inventory entries. These distributions form the fields from which a program such as Excel or SimaPro can draw data to run the computation. And as Alessio states, databases such as Ecoinvent have uncertainties built into their inventory values using the pedigree approach (a scoring matrix based on data quality and accuracy). If you are interested to see what this looks like but don't have access to SimaPro or another software, the openLCA software is free and also has the pedigree approach built in. There, you can assign uncertainty values to your foreground data, which are then used in the MC simulation.
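The pedigree approach mentioned above is commonly translated into a lognormal distribution: each data-quality score maps to an uncertainty factor, and the factors are combined into a geometric standard deviation. The sketch below uses placeholder factors and a hypothetical flow amount; it is not the official Ecoinvent factor table, only the combining formula commonly cited for it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative pedigree-style uncertainty factors (placeholders, not the
# official Ecoinvent table): one factor per data-quality indicator.
factors = {"reliability": 1.05, "completeness": 1.02, "temporal": 1.03,
           "geographical": 1.01, "technological": 1.10}
basic_uncertainty = 1.05  # placeholder basic uncertainty factor

# Combine factors into the squared geometric standard deviation (GSD^2)
# using the usual pedigree formula: exp(sqrt(sum of ln(f)^2)).
ln2 = sum(np.log(f) ** 2 for f in list(factors.values()) + [basic_uncertainty])
gsd2 = float(np.exp(np.sqrt(ln2)))

# Sample the inventory value from a lognormal whose median equals the
# deterministic value; sigma of the underlying normal is ln(GSD) = ln(GSD^2)/2.
median = 2.5  # hypothetical flow amount
draws = rng.lognormal(mean=np.log(median), sigma=np.log(gsd2) / 2, size=10_000)
print(f"GSD^2 = {gsd2:.3f}, median draw = {np.median(draws):.3f}")
```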
We use our own waste-LCA model, called EASETECH (http://www.easetech.dk/), which is available for free for research purposes. However, you can also follow the procedure when using other commercial LCA software.
If you want to stick to a fully probabilistic approach with respect to uncertainty propagation, we have also released an updated procedure for sensitivity/uncertainty analysis, including a global sensitivity step for importance analysis. See the attached paper below.
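A very basic flavour of the global sensitivity (importance) step mentioned above: with MC samples already in hand, the squared correlation between each input and the output approximates that input's share of the output variance for a (near-)linear model. The inputs and coefficients below are hypothetical, and this is only one simple importance measure, not the full procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Two hypothetical inputs with very different spreads (illustrative values)
x1 = rng.triangular(0.8, 1.0, 1.3, n)        # wide distribution
x2 = rng.triangular(0.049, 0.05, 0.051, n)   # narrow distribution
y = 0.45 * x1 + 2.68 * x2                    # simple linear "model" output

# For a linear model, r^2 between an input and the output approximates
# that input's contribution to the output variance.
share = {}
for name, x in (("x1", x1), ("x2", x2)):
    r = np.corrcoef(x, y)[0, 1]
    share[name] = r ** 2
    print(f"{name}: variance contribution ~ {share[name]:.1%}")
```

Here the wide input dominates the output variance, which is exactly the kind of importance ranking a global sensitivity step is meant to surface.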
Alessio
Article A global approach for sparse representation of uncertainty i...