There are really two basic origins of outliers: either they are errors, or they are genuine but extreme values. Errors can occur in measurement, in data entry, or in sampling.
I assume the first two errors are pretty self-explanatory, so let’s talk a little about sampling errors.
The basic idea here is that the outlier is not from the same population as the rest of the sample.
For example, as a consultant I once worked with a data set of English reading scores for bilingual first graders. One student had a very low score, but it turned out that the child wasn't actually bilingual: he spoke another language at home and had not yet learned English. In other words, this student was not from the population of bilingual first graders, even though he was somehow included in the study.
Here’s another example of a sampling error that’s a little less obvious. Cognitive psychology and linguistics often use reaction times as dependent variables.
Reaction times are typically skewed to the right, but even so, there are often high or low outliers beyond the range of times that is reasonable for someone who is actually trying to perform the task at hand.
For example, participants may be told to indicate whether a presented string of letters is an actual word or not, and their time to answer is recorded under different cognitive loads. Reaction times so fast that the person couldn't possibly have read the string indicate that the response is not part of the population of reaction times to the task. Rather, it's part of the population of times that result from holding down the space bar on every trial to get the experiment over with.
A reaction time that is very slow may indicate a score from the population of reaction times that occur when participants are not paying attention to the task at hand.
Dropping errors like these is entirely reasonable because they’re not from the population you’re trying to measure.
In contrast, it’s not reasonable to assume all long reaction times are errors and it’s not reasonable to “fix” genuine data points.
My advice? Take the time to investigate each one rather than using a simple rule like “delete or winsorize all outliers over 3 standard deviations from the mean.”
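To make "investigate, don't auto-delete" concrete, here is a minimal Python sketch. The column name `rt_ms`, the 3-standard-deviation rule, and the plausibility bounds of 200 ms and 5,000 ms are illustrative assumptions on my part, not values from the discussion above; the point is that suspicious rows get flagged and inspected, never silently dropped.

```python
import pandas as pd

# Hypothetical data: reaction times in milliseconds for a lexical-decision task.
df = pd.DataFrame({"rt_ms": [350, 410, 95, 520, 480, 12000, 390, 440]})

# Rule-of-thumb flag: more than 3 standard deviations from the mean.
z = (df["rt_ms"] - df["rt_ms"].mean()) / df["rt_ms"].std()
df["flag_3sd"] = z.abs() > 3

# Domain flag: outside a plausible range for someone actually doing the task
# (assumed bounds; set them from knowledge of your own task).
df["flag_implausible"] = (df["rt_ms"] < 200) | (df["rt_ms"] > 5000)

# Inspect the flagged rows before deciding anything; do not delete automatically.
print(df[df["flag_3sd"] | df["flag_implausible"]])
```

Note that the 95 ms response would pass a pure 3-SD rule here, while the domain check catches it; that is exactly why a single mechanical cutoff is not enough.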
Outliers can be caused by measurement error or by fat-tailed distributions. If you have a dataset with a large number of outliers that are not the result of measurement error, you may have to account for departures from normality in hypothesis testing.
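One common way to account for heavy tails (my choice of technique here, not something prescribed above) is a rank-based test such as the Mann-Whitney U, which is far less sensitive to extreme values than a t-test. A quick sketch with simulated fat-tailed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated fat-tailed samples (Student's t with 2 df) for two groups,
# with a small true shift between them.
a = rng.standard_t(df=2, size=200)
b = rng.standard_t(df=2, size=200) + 0.3

# Welch's t-test assumes approximate normality; heavy tails hurt its power.
print(stats.ttest_ind(a, b, equal_var=False))

# A rank-based alternative is robust to the extreme values in the tails.
print(stats.mannwhitneyu(a, b, alternative="two-sided"))
```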
Another way outliers can arise is by not properly framing the hypothesis. Suppose you want to test the effect of medication adherence on diabetes patients. You see that several patients have hemophilia and have costs that are orders of magnitude higher than the other patients'. Now you need to decide whether the question you want to answer is "What is the effect of medication adherence on typical diabetes patients?" or "What is the effect of medication adherence on all diabetes patients?" In the latter case, the hemophilia patients should be included; in the former, they should not.
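Here is a small sketch of how that framing decision plays out in code. The column names (`adherent`, `has_hemophilia`, `cost`) and the numbers are hypothetical; the point is that the exclusion follows from the hypothesis, not from the size of the values.

```python
import pandas as pd

# Hypothetical patient-level data; column names and values are illustrative.
patients = pd.DataFrame({
    "adherent": [True, False, True, True, False],
    "has_hemophilia": [False, False, True, False, True],
    "cost": [1200.0, 1500.0, 250000.0, 1100.0, 310000.0],
})

# "All diabetes patients": keep everyone, hemophilia patients included.
all_patients = patients

# "Typical diabetes patients": exclude the subgroup because the hypothesis
# says so, not because their costs happen to be extreme.
typical = patients[~patients["has_hemophilia"]]

print(all_patients.groupby("adherent")["cost"].mean())
print(typical.groupby("adherent")["cost"].mean())
```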
If you frame the hypothesis properly, exclude all measurement errors, and account for departures from normality, you should be in good shape when dealing with outliers.
Go through the following papers to get an idea, and then ask questions.
1. Dhritikesh Chakrabarty (2015): "Theoretical Model Modified for Observed Data: Error Estimation Associated to Parameter", International Journal of Electronics and Applied Research (ISSN: 2395-0064), 2(2), 29-45.
2. Dhritikesh Chakrabarty (2016): "Theoretical Model and Model Satisfied by Observed Data: One Pair of Related Variables", International Journal of Advanced Research in Science, Engineering and Technology (ISSN: 2350-0328), 3(2), 1527-1534. Also available at www.ijarset.com.
Thank you for sharing your interesting papers. I will try to read them. Though I don't know much statistics, I will nevertheless have a lot of questions.
There are a number of reasons for outliers:
1. Some individuals in the sample are extreme;
2. The data are inappropriately scaled;
3. Errors were made in data entry;
4. Unanticipated complexities exist in the relationships among variables.
These are just a few examples, but they illustrate how important decisions about the handling of outliers are to the research project. Discovering the presence of outliers and studying them must precede continuing with the data analysis. Their presence and their nature will influence the analysis and possibly the understanding of the empirical findings.
I would like to add one summary: outliers occur due to assignable cause(s)/effect(s)/influence(s)/source(s), which can be identified, measured, and controlled. Error, on the other hand, occurs due to cause(s)/effect(s)/influence(s)/source(s) which can neither be identified, nor measured, nor controlled.