I'm trying to do a study based on statistical data, and for that purpose I need to turn qualitative data into quantitative data; in this case, to turn normal values into money. How can I do it?
Quantitative data contains quantitative information. By definition, such information is NOT contained in qualitative data. Turning qualitative data into quantitative data would therefore mean a de-novo (out-of-nothing) creation of information. That's impossible.
The other way around is possible. If you had data on money (in some currency), you could always throw away information and reduce it to some kind of qualitative data, e.g. "no money" | "a bit of money" | "lots of money". But there is no way back, because the information about the quantity has essentially been lost.*
--
*not completely, because the new data still carries ordinal information. At least the quantity of "no money" would still be restorable, but it won't be possible to say, at least not precisely, how much money a person has who was categorized as having "a bit of money" or "lots of money". If additional information were provided about how these categories were built and what the frequency distribution of "money" looks like, it might be possible to substitute each category by some reasonable quantity derived from the classification rule and the distribution of the money.
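As a small sketch of that last point (all numbers and cut-offs here are invented for illustration): if the classification rule and a reference distribution of "money" are known, each category can be substituted by the mean of the reference values falling into its bin.

```python
from collections import defaultdict

# Hypothetical reference distribution of "money" and a hypothetical
# classification rule (cut-offs at 0 and 100).
reference = [0, 0, 5, 20, 80, 150, 850, 2000]

def categorize(x):
    if x <= 0:
        return "no money"
    if x <= 100:
        return "a bit of money"
    return "lots of money"

# Group the reference values by the category they fall into
groups = defaultdict(list)
for x in reference:
    groups[categorize(x)].append(x)

# Substitute each category by the mean of reference values in its bin
substitute = {cat: sum(v) / len(v) for cat, v in groups.items()}
# "no money" is recovered exactly (0); the other two are only estimates
```

Note that only "no money" is recovered exactly; the other substitutes are reasonable guesses that depend entirely on the assumed distribution.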
In my opinion, more information about your research would be useful, in particular about the universe you are studying. I am not sure, but I think you can do it by introducing some qualitative categories for the monetary data. You would need to ask your respondents about the value they assign to money: their satisfaction with the value of money, or the objects they can buy with it (depending on the use of the data), and the way they spend that money. If you have no possibility of conducting interviews to complement the quantitative data, it is quite difficult to go deeper into the issue.
The primary purpose of qualitative research is exploration, and accordingly the data collected will be quite different from quantitative data, and vice versa. Quantifying qualitative data isn't impossible, but it won't be accurate and won't serve the original purpose of your study.
I strongly recommend you read the paper by Edward C. Green entitled "Can Qualitative Research Produce Reliable Quantitative Findings?".
One last note: I think the explanation of your question doesn't match the question itself! You asked how to turn quantitative data into qualitative, but then you mentioned that you need to turn qualitative data into quantitative.
Qualitative data are nominal and ordinal data; quantitative data, on the other hand, are ratio and interval data.
Quantifying qualitative data is also possible if your data are converted to a rating scale (such as a Likert scale) and categorized along with a frequency distribution. Then multiway analyses such as the pattern-recognition technique PLS-DA may be applicable, together with inductive logic.
In my opinion this is possible, provided that the source can link each alternative to a value by using a scale such as Likert's (1932). It is also important to run a pilot with a reasonable sample to test reliability.
To me it is not clear what you mean by qualitative data and 'normal values'. Generally speaking, it is possible to quantify qualitative data. We, for instance, applied the content analysis method to newspaper articles.
Omar, Likert data is still not quantitative. It is only an ordered set of categories.
The major problem remains: how to assign sensible numeric values to these categories. There is no rule for how this should or could be done. Unfortunately, there exists a paper
S. S. Stevens (1946) On the Theory of Scales of Measurement. Science 103:677-680
that has been used by authors of influential textbooks in psychology. This established a completely unnecessary detour of defining "scales of measurement" for numerical data, which makes sense neither mathematically nor statistically. Stevens intended to facilitate applying statistical tools that are tailored for random variables (which assign probabilities to numerical values) to data that is actually not numerical: one simply assigns some arbitrary numeric values to the categories so that any given restrictions (like an ordinal structure) are preserved, and then states a "measurement scale" of this new numerical variable that defines the "allowed statistical operations".
Nowhere is the statement made that the numbers assigned to the categories have any (quantitative) meaning. For example, Likert data can be coded numerically by the values 1, 2, 3, 4, 5, or by any other combination of 5 values a1, a2, ..., a5 such that a1 < a2 < ... < a5. Their values do not matter, and 1...5 is often used only for convenience. Obviously, statistics like mean values depend on the chosen set of ai values, which seems disturbing. But Stevens said that such a variable is "measured on an ordinal scale", so the values don't matter, and all calculations that use more than the mere order are NOT allowed anyway. Hence, calculating a mean is a non-allowed procedure for this data (as are all other procedures involving the calculation of sums or means, like ANOVAs, t-tests, regressions etc.). And the key point is: although the values are represented in numerical form, these values are arbitrary and do NOT have a quantitative meaning!
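To see concretely why statistics computed from such arbitrary codings are themselves arbitrary, here is a small sketch (all responses and codings are invented): three order-preserving codings of the same Likert responses can make two group means tie, or reverse their comparison.

```python
# Two groups of Likert responses (hypothetical data)
group_x = ["very low", "very low", "very high", "very high"]
group_y = ["medium"] * 4

# Three codings, all preserving the order a1 < a2 < ... < a5
coding_a = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
coding_b = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 6}
coding_c = {"very low": 0, "low": 2, "medium": 3, "high": 4, "very high": 5}

def mean(coding, data):
    """Mean of the numeric codes assigned to the responses."""
    return sum(coding[d] for d in data) / len(data)

# coding_a: the group means tie (3.0 vs 3.0)
# coding_b: group X has the larger mean (3.5 vs 3.0)
# coding_c: group X has the smaller mean (2.5 vs 3.0)
```

The conclusion of a mean comparison thus depends entirely on the arbitrary choice of codes, which is exactly why such calculations are not meaningful for ordinal data.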
I think your phrase "it is possible to quantify qualitative data" is misleading, especially in the context of this thread. In my opinion, it should rather read "it is possible to quantify properties/attributes of objects". Don't confuse objects and data.
In your example, an object is a newspaper article. You can quantify the length in letters, the length in words, the area of printed space, the density of ink, the frequency of letters or words, and, if you wish, even quite abstract measures to value "correctness", "completeness", "politicalness", "information", "education", "entertainment" etc. All these things can be seen as attributes of an object of type "newspaper article". Quantification is done based on a particular operationalization (a set of rules that defines how the state of the attribute is turned into a numeric value). Such numeric values can be dealt with statistically once a probability model is identified. The model over the possible values is then called a random variable. Both operationalization and probability model are actually arbitrary. However, the whole analysis is meaningful only when the choice of both makes sense in the scientific context.
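As a toy illustration of an operationalization (the article text and the counting rules below are invented for this example), the "length" attribute of a newspaper article can be turned into numbers by explicit rules:

```python
# Hypothetical object of type "newspaper article"
article = "GNP growth slowed last quarter, statisticians reported."

# Operationalization 1: length in words = whitespace-separated tokens
length_in_words = len(article.split())

# Operationalization 2: length in letters = count of alphabetic characters
length_in_letters = sum(ch.isalpha() for ch in article)
```

Different rules (counting punctuation or not, counting hyphenated words as one or two) yield different numbers for the same object, which is exactly why the operationalization must be stated explicitly.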
Ok! Let me shed some light on the question. The intention of the study is to replace the Gross National Product (GNP) with a more reliable indicator. For that I have to analyze some indicators that are only expressed qualitatively and quantify them in economic terms. The example I'm taking is Mark Anielski's Genuine Progress Indicator (from the Pembina Institute), made for the Alberta region:
Anielski, Mark, Mary Griffiths, David Pollock, Amy Taylor, Jeff Wilson, and Sara Wilson. Alberta Sustainability Trends 2000 - The Genuine Progress Indicators Report 1961 to 1999. Drayton Valley, Alberta, Canada, 2001.
I must thank you all for your answers; they were quite challenging.
This is a famous problem faced in management fields, such as Risk Management (RM) and Customer Loyalty Management (CLM).
However, it tends to be subjective, as you need to transform the non-scalar variable into an ordinal form.
For an objective measure, someone might consider a Key Performance Indicator (KPI) called Net Promoter Score (NPS) that is often used in CLM.
Let's put down some examples for easier elaboration.
Consider data collected from a survey, denoting the user experience after trying a certain event.
The feedback could be classified as:
Qualitative Data:
* Very unsatisfied :((
* Unsatisfied :(
* Neither unsatisfied nor satisfied :|
* Satisfied :)
* Very Satisfied :))
A mapping of these inputs into Quantitative Data might look like the following:
* Very unsatisfied :(( ==> -2
* Unsatisfied :( ==> -1
* Neither unsatisfied nor satisfied :| ==> 0
* Satisfied :) ==> 1
* Very Satisfied :)) ==> 2
Alternatively, another mapping into Quantitative Data might look like:
* Very unsatisfied :(( ==> 0
* Unsatisfied :( ==> 1
* Neither unsatisfied nor satisfied :| ==> 2
* Satisfied :) ==> 3
* Very Satisfied :)) ==> 4
Both mappings are useful, and both preserve the order of the categories.
Let's take a more complicated case:
Suppose that you have two-dimensional variables providing the qualitative assessment.
It comes down to mapping your qualitative data into a quadrant.
In Marketing Management, we call it the "Magic Quadrant".
This concept is also used in Time Management (TM) for Tasks prioritization according to 2 dimensions that are "urgency" and "importance".
Again, the same concept is also used in RM, e.g. Probability (P) versus Impact (I).
Let's suppose we have:
Risk 1: Low Probability, Low Impact.
Risk 2: Low Probability, High Impact.
...
Risk n: High Probability, High Impact.
We could use a KPI called Expected Monetary Value (EMV) = P x I to quantify our data. See attached figure.
We might also think of another KPI, such as the Euclidean distance from the center of the quadrant, the familiar SQRT(x² + y²), to transform our readings into an ordinal form.
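Both KPIs can be sketched as follows; the probability values, monetary impacts, and 0..1 level codes below are all invented for illustration.

```python
import math

# Hypothetical mapping of the qualitative levels to numbers
prob = {"Low": 0.1, "High": 0.7}        # probability of occurrence
impact = {"Low": 10_000, "High": 100_000}  # impact in monetary units

# Risks described qualitatively as (probability level, impact level)
risks = {
    "Risk 1": ("Low", "Low"),
    "Risk 2": ("Low", "High"),
    "Risk n": ("High", "High"),
}

# EMV = P x I, expressed in the same currency as the impact
emv = {name: prob[p] * impact[i] for name, (p, i) in risks.items()}

# Euclidean distance from the quadrant origin, after coding each
# level onto a 0..1 axis (another invented choice)
level = {"Low": 0.25, "High": 0.75}
dist = {name: math.hypot(level[p], level[i]) for name, (p, i) in risks.items()}
```

Both KPIs rank "Risk n" highest, but the EMV has a monetary interpretation while the distance is only an ordinal score.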
For higher dimensions, when working in ℝⁿ, Principal Component Analysis (PCA) might reduce the multiple variables (n) down to 2 principal components.
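A minimal PCA via the singular value decomposition might look like this; the data matrix here is random and stands in for already-coded ordinal scores.

```python
import numpy as np

# Invented data: 20 respondents, 5 ordinal scores already coded numerically
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

# PCA: center each variable, then take the top singular directions
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]          # first two principal directions (rows)
scores = Xc @ components.T   # 2-D representation of each respondent
```

Note that PCA presupposes the numeric coding is meaningful; applied to arbitrary ordinal codes, its output inherits that arbitrariness.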
Otherwise, provided that you have accumulated a lot of mapped data, well-known Machine Learning (ML) techniques might help: you can train an ML model and use it to predict/quantify subsequent combinations.
If your qualitative variable contains many categories, for instance 15, such as white, white gray, deep gray, marine blue, sky blue, ... magenta, you may assign a score to each category, e.g., 1, 2, 3, 4, 5 ... 15. This solution is somewhat similar to the one pointed out by Omar a few answers above. Even so, to establish the scale it is necessary to first order the attributes according to a certain logic. But it is important to consider that this transformation will represent the variation in steps and not on a continuous scale.
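This scoring can be sketched as follows (the ordering below is a truncated, hypothetical example; in practice the full list of 15 categories must be ordered by an explicit logic first):

```python
# Categories listed in their assumed order (truncated illustration)
ordered_colors = ["white", "white gray", "deep gray",
                  "marine blue", "sky blue", "magenta"]

# Assign consecutive integer scores 1, 2, 3, ...
score = {color: i + 1 for i, color in enumerate(ordered_colors)}
```

The resulting scale moves in discrete unit steps; nothing in it says that the "distance" from white to white gray equals the distance from sky blue to magenta.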