Do different uncertainty models such as Fuzzy, Rough, Grey, and Vague sets all play the same role? Some researchers call them data representation methods, others call them data analysis methods.
The roles of the different models for the determination of uncertainty are close: to determine the uncertainty interval around the “true result”, i.e., the risk (the output of uncertainty) of uncertain results (e.g., a high chance of being false). This risk should be evaluated, and it is acceptable when it doesn't compromise the decision taken on the uncertain results. For example, in my area (blood bank, cells and tissues), the decision is a clinical decision with an impact on post-transfusion safety. Currently the most widely accepted uncertainty model is the measurement uncertainty determined according to the principles of the "Guide to the expression of uncertainty in measurement" (GUM) http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf. GUM, also known as the "uncertainty bible", features the "law of propagation of uncertainty" model. This is a "top down" model, where the uncertainty is a combination of the major uncertainty components, following Pareto's principle. GUM is intended for chemistry and physics and exclusively for numerical values. When ordinal or nominal values are used, alternative methods for the determination of measurement uncertainty must be used. Eurachem published a document intended for chemistry featuring a set of empirical models fulfilling GUM principles: https://www.eurachem.org/images/stories/Guides/pdf/QUAM2012_P1.pdf.
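To make the "law of propagation of uncertainty" concrete, here is a minimal numerical sketch, assuming a simple measurement model y = a·b with purely illustrative estimates and standard uncertainties (the model and the numbers are my own assumptions, not taken from the GUM itself):

```python
import math

# Minimal sketch of the GUM "law of propagation of uncertainty" for an
# assumed measurement model y = f(a, b) = a * b (values are illustrative).
# For uncorrelated inputs: u_c(y)^2 = sum_i (df/dx_i)^2 * u(x_i)^2.

def combined_uncertainty(sensitivities, uncertainties):
    """Combine input standard uncertainties by first-order propagation."""
    return math.sqrt(sum((c * u) ** 2 for c, u in zip(sensitivities, uncertainties)))

a, u_a = 10.0, 0.2   # input estimate and its standard uncertainty
b, u_b = 5.0, 0.1

# For y = a * b the sensitivity coefficients are dy/da = b and dy/db = a.
u_c = combined_uncertainty(sensitivities=[b, a], uncertainties=[u_a, u_b])
U = 2 * u_c          # expanded uncertainty with coverage factor k = 2 (~95 %)

print(f"y = {a * b:.1f}, u_c = {u_c:.2f}, U (k=2) = {U:.2f}")
```

The same pattern applies to any measurement function once its sensitivity coefficients (the partial derivatives) are known.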
The term "model" is relevant to both representation and analysis. How you represent your data, affects how you do analysis on them. For example, if you use fuzzy sets to model uncertainty, it makes most sense to use fuzzy operations that model.
The four models you mention all offer their own view on how to deal with uncertainty; in that sense you can say they have the same role. If you consider that they all deal with uncertainty in terms of partial memberships, then they also all fulfill the same role.
However, if you look at how they deal with uncertainty, then perhaps you can't say that they have the same role. If we add one more uncertainty model, Bayesian probability, then the concepts are very different, and I wouldn't say that the two uncertainty models fulfill the same role.
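For contrast, here is a minimal sketch of the Bayesian view, where uncertainty is a degree of belief updated by evidence rather than a degree of membership; the prior and likelihoods are purely illustrative numbers of my own choosing:

```python
# Minimal sketch of the contrasting Bayesian view: uncertainty as a degree
# of belief updated by evidence through Bayes' rule. The prior and the
# likelihoods are purely illustrative (a screening test for a rare condition).

prior = 0.01           # P(condition)
sensitivity = 0.95     # P(positive result | condition)
false_positive = 0.05  # P(positive result | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive   # P(condition | positive result)

print(f"posterior belief = {posterior:.3f}")   # ~0.161: a revised belief, not a membership degree
```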
Hope this answers your question, otherwise feel free to elaborate :)
Think of the various uncertainty models as special kinds of devices for measuring uncertainty quantitatively. By analogy: a measuring tape is quite fine when you want to know how high your table is, but not so good when your goal is to measure a crystal lattice constant or to evaluate the distance from the Earth to the Sun. For still other purposes you will use a laser-based device (or a radar, or an ultrasound sensor) or a caliper. Simply speaking: some uncertainty models are better suited than others to any given problem. Choose the one which is most easily applicable and gives the tightest results. Oh, and there is also the question of whether you need guaranteed estimates or whether confidence intervals are sufficient.
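As a rough sketch of that last distinction, here is a guaranteed (worst-case) interval obtained by interval arithmetic next to an ordinary confidence interval computed from replicated readings; the input limits and readings are invented for illustration:

```python
import math
import statistics

# Guaranteed (worst-case) bounds via interval arithmetic on y = a + b,
# where each input is only known to lie within stated limits (invented values).
a_lo, a_hi = 9.8, 10.2
b_lo, b_hi = 4.9, 5.1
y_lo, y_hi = a_lo + b_lo, a_hi + b_hi        # y is guaranteed to lie in [14.7, 15.3]

# Statistical ~95 % confidence interval from replicated readings of the same
# quantity (invented values), using a large-sample normal approximation.
readings = [15.02, 14.97, 15.10, 14.95, 15.05]
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / math.sqrt(len(readings))
ci_lo, ci_hi = mean - 1.96 * sem, mean + 1.96 * sem

print(f"guaranteed interval:       [{y_lo:.1f}, {y_hi:.1f}]")
print(f"~95 % confidence interval: [{ci_lo:.3f}, {ci_hi:.3f}]")
```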
You start a study that will output uncertain data:
1) first, you should have a model of the phenomenon that is the object of your study. This means you will have one or more outcomes from it (experimental data, data from a survey, market data, ...); and you have a number of input factors that influence the value of the outcomes, whether quantitative (a number) or non-quantitative (an expert score). The model establishes a relationship between the input factors and the outcomes: it may be analytical (mathematical functions), functional, graphical, etc. Most of the parameters of the model are affected by an uncertainty; some are 'taken as' exact (e.g. from a handbook).
2) secondly, you have to make an 'experimental design', meaning that you must decide which data need to become available (from measurements, from enquiries, from data sources, ...) and how you can get them. At this point you must evaluate your target uncertainty for the outcomes, and derive from the model the target uncertainties that you need to have associated with those input data. This can easily also involve an evaluation of how many replicated data you need for each of the uncertain parameters (a small numerical sketch of this back-calculation is given at the end of this answer).
3) third, you build the resources of your study accordingly.
4) fourth, you get the data: in many cases you will not be able to replicate these data after a certain deadline, because their source will not be available anymore, or because they would no longer be 'repeated', i.e. they would pertain to a different, non-homogeneous population.
5) NOW, you can ask yourself how to *analyze* the data, which is probably your initial question. Here the *data modelling* comes in, which is a different issue from the previous modelling. It depends on the tool (method) you decide to use for that analysis.
You may want to use one kind of statistics or another (probability: classical frequentist, Bayesian, Fisher, ...; possibility; fuzzy; interval; ... many others, depending on your field of study and on the problem). Risk is a different category because, instead of the 'probability' (in a broad sense) of something happening, the object of the evaluation is the risk of a wrong decision.
NONE is universal. Neither the GUM, nor the several standards from ISO, EURACHEM, ILAC, or any other body, nor anything in the literature is specifically indicated for you. That is just the reason why there are so many. Starting and choosing is not simple for anybody, not only for you, and most people (including myself) have a personal preference and tend to recommend it in the case of a question like yours!
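As mentioned under point 2 above, here is a minimal numerical sketch of that back-calculation: given a target standard uncertainty for a mean result and an assumed repeatability of a single observation, estimate the number of replicates needed (since u(mean) = s/√n). The values are purely illustrative assumptions:

```python
import math

# Minimal sketch of the back-calculation in step 2 (illustrative values):
# given a target standard uncertainty of a mean result and the expected
# repeatability (standard deviation) of a single observation, estimate the
# number of replicates needed, since u(mean) = s / sqrt(n).

def replicates_needed(single_obs_sd, target_uncertainty):
    """Smallest n such that single_obs_sd / sqrt(n) <= target_uncertainty."""
    return math.ceil((single_obs_sd / target_uncertainty) ** 2)

s = 0.5         # assumed repeatability of one observation
u_target = 0.2  # target standard uncertainty of the reported mean

n = replicates_needed(s, u_target)
print(f"replicates needed: {n}")  # ceil((0.5/0.2)^2) = ceil(6.25) = 7
```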