I have 20 samples and I analyzed them using 4 different techniques. I obtained the mean with its SD for each sample and technique. I can easily analyze the results in SPSS with an appropriate test. However, I want to take the SDs into account in the analysis. Is that possible, and how?
I think it is not meaningful to analyze (compare) the data without their SDs in my case, because some of them have small SDs and some have large ones.
The question is not clear.
ANOVA requires homogeneity of variances. If the variances are very different and the cell sizes are unbalanced, you can try Welch's ANOVA.
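For illustration, here is a minimal R sketch of Welch's ANOVA (the data frame, column names and values are hypothetical; it assumes the raw measurements, not only the means, are available):

# Hypothetical long-format data: one row per measurement
set.seed(1)
d <- data.frame(method = factor(rep(paste0("M", 1:4), each = 20)),
                value  = rnorm(80, mean = 20, sd = rep(c(5, 10, 3, 7), each = 20)))
# Welch's ANOVA: does not assume equal variances across methods
oneway.test(value ~ method, data = d, var.equal = FALSE)
# var.equal = TRUE would give the classical one-way ANOVA instead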
Firstly, thank you for the response.
I have data with SDs and I want to compare the data, including their SDs, using a test in SPSS. Here is a sample below:
Sample no. | Method 1 (Mean ± SD) | Method 2 (Mean ± SD)
1          | 20 ± 5               | 20 ± 10
2          | 40 ± 10              | 40 ± 20
3          | 10 ± 3               | 10 ± 10
As you can see, the means are exactly the same for each method, but their SDs are different. I just wonder whether or not it is possible to compare the data together with their SDs.
I hope it is clear now. Thank you in advance.
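As an aside, if only means and SDs are available, two methods can still be compared sample by sample with Welch's t-statistic computed from the summary statistics, provided the number of replicate measurements behind each mean is known. The R sketch below is illustrative only; the replicate count n is an assumption, since it is not given in the question:

welch_from_summary <- function(m1, s1, n1, m2, s2, n2) {
  se2 <- s1^2 / n1 + s2^2 / n2                     # squared SE of the difference
  t   <- (m1 - m2) / sqrt(se2)                      # Welch t-statistic
  df  <- se2^2 / ((s1^2 / n1)^2 / (n1 - 1) +        # Welch-Satterthwaite degrees of freedom
                  (s2^2 / n2)^2 / (n2 - 1))
  c(t = t, df = df, p.value = 2 * pt(-abs(t), df))
}
# First row of the table above, assuming (hypothetically) n = 10 replicates per method:
welch_from_summary(20, 5, 10, 20, 10, 10)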
I am providing you with some YouTube video links on SPSS; they might help you:
https://www.youtube.com/watch?v=e-CehMFn_lY
https://www.youtube.com/watch?v=GkxKLe1FnDI
https://www.youtube.com/watch?v=T2gCe1O7xi8
Regards
In ANOVA, the SD is automatically taken into consideration. Please focus on what exactly you want to find out from the data, and the appropriate analysis will follow from that.
Are you really trying to show that at least one of the methods gives different results from the others? If so, ANOVA on your *individual* data can be used, but beware that if the *same* sample is analyzed using four methods, you must use a random-effects ANOVA, or at least a two-way ANOVA (method & sample). You cannot do anything using the means and SDs only [well, except using formulas to compute the ANOVA by hand, but it would be easier to use the raw data directly in a statistical software].
However, if you want to show that the four methods give the *same* results, ANOVA [in its usual meaning and implementation] is completely useless; forget it and read about agreement/concordance studies (or possibly equivalence tests). Here again, in any case, you will have to use the raw data, not the means and SDs.
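A minimal R sketch of the two-way / random-effects ANOVA suggested above, assuming the raw measurements are available in long format (the data, column names and the lme4 call are illustrative assumptions):

set.seed(2)
d <- expand.grid(sample = factor(1:20), method = factor(paste0("M", 1:4)))
d$value <- rnorm(nrow(d), mean = 20, sd = 5)              # placeholder raw values
# Two-way ANOVA with sample as a blocking factor:
summary(aov(value ~ method + sample, data = d))
# Or, with sample as a random effect (requires the lme4 package):
# library(lme4); summary(lmer(value ~ method + (1 | sample), data = d))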
First of all, you should concentrate on what you want to analyze. As far as I understand your problem, if you want to compare samples from their mean and standard deviation (apart from using ANOVA), then you can obtain the coefficient of variation (CV), simply as CV = (SD / mean) * 100. The series with the smaller CV is said to be more consistent (or homogeneous) than the other.
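For example, using the illustrative numbers from the table above (a minimal sketch; these are the poster's example values, not real data):

cv <- function(m, s) 100 * s / m                 # coefficient of variation, in percent
data.frame(sample     = 1:3,
           CV_method1 = cv(c(20, 40, 10), c(5, 10, 3)),
           CV_method2 = cv(c(20, 40, 10), c(10, 20, 10)))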
Thank you for the further suggestions. You are right that I could also use Origin for the statistical analysis. I just want to make sure I apply the correct method(s). I am grateful that you reminded me of the CV; I had not thought about it.
Beware that the CV is only meaningful if all values are strictly > 0 and if you assume that the variance (standard deviation) depends on the mean. Otherwise, small means will give high CVs just because the mean is in the denominator.
Basically, and oversimplifying, the CV is useful mainly if you assume a log-normal distribution, just as the SD/variance is useful mainly if you assume a Gaussian distribution...
And if you assume a log-normal distribution, don't forget to log-transform your data before any t-test/equivalence test/ANOVA and so on...
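A minimal sketch of that advice, with hypothetical strictly positive (log-normal) data:

set.seed(3)
x <- rlnorm(30, meanlog = 3,   sdlog = 0.5)      # skewed, strictly positive data
y <- rlnorm(30, meanlog = 3.2, sdlog = 0.5)
t.test(log(x), log(y))                           # compare on the log scale, where data are ~Gaussian
# rather than t.test(x, y) on the raw, skewed scale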
Gizem, I still don't get the question and your explanations, e.g. "data are totally same for each method but their SDs are different".
Do you want to compare two methods? What is your study design? What is the aim of your study?
What is the objective (goal) of your analysis: comparing the MEANS of the 20 populations, or comparing their VARIANCES? You say "comparing data when the SDs are different". What are the UNITS of your data? Do all 20 samples have different units? If the units are different, the sample SDs have different units and comparing them would be meaningless; therefore, standardize the data to make them unit-less.
OK, I have different materials that I prepared in order to evaluate their basic parameters by obtaining signal values with MRI. I used the same MRI scanner while changing some scan parameters (thus creating the 4 methods). Consequently, the values obtained of course have the same units. What I want to check is whether or not I can find correlations or differences between the methods. I plan to describe each method's advantages/disadvantages statistically, if possible, and in the end to conclude which method is best. I can't use the raw data because they are meaningless in my case; I need to consider the mean for each material (each material is homogeneous) under each technique. I know that there are useful tests (which I have used before) to compare means across more than two groups. However, I do not know how to include the SD of each mean.
As said in previous comments by several authors, the usual tests use the raw data, not the means, and hence they also take the SDs into account.
Unless you mean something different, in which case your best choice would be to consult a local statistician... and to describe clearly the data-generation process, which is not completely clear. Perhaps we do not have the same meaning of « raw data » in mind.
I agree with you on the definition of "raw data"; I guess I cannot express the situation correctly. Thank you anyway.
Hi Ziyafer
1. Your question is not clear.
2. What is the relation between the random variables/samples, i.e. are they related or not?
3. If the sample size of your random variables is > 30, you can use most statistical tests (ANOVA, ...).
4. We can analyse data using a repeated-measures ANOVA for two types of study design: studies that investigate either (1) changes in mean scores over three or more time points, or (2) differences in mean scores under three or more different conditions (see the sketch after this list).
5. One of the major assumptions of this type of repeated-measures analysis is sphericity. If the sphericity assumption is violated, the F statistic will give severely biased results; in other words, the researcher might end up committing a Type I error.
6. If the sample size of your random variables
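A minimal R sketch of the repeated-measures ANOVA mentioned in point 4 above (the long-format layout, column names and data are assumptions):

set.seed(4)
d <- expand.grid(sample = factor(1:20), method = factor(paste0("M", 1:4)))
d$value <- rnorm(nrow(d), 20, 5)
# 'method' as the within-subject factor, samples as subjects:
summary(aov(value ~ method + Error(sample/method), data = d))
# Sphericity-corrected versions are available, e.g. in the afex or ez packages.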
@ Zuhair : the n > 30 or n < 30 rule of thumb is seriously dated and based on last-century paper tables when computations were done by hand...
Even with « very large » samples, highly skewed distributions prevent the Gaussian approximation (especially when considering questions other than those about means). And for « small » samples, Gaussian-based tests work well if there are reasons to believe in a Gaussian distribution.
Of course, it is a matter of how precise you want the approximation to be... But this clear-cut rule is much too simple.
Be careful to establish what answer you actually want: differences are not the same as correlation. But when you are interested in evaluating methods, the best analysis is concordance. If your data meet the normality and homoscedasticity requirements, you can calculate Lin's concordance correlation coefficient or the intraclass correlation coefficient; if they don't, you can calculate Kendall's W index of concordance (a small sketch is given below).
You can read about comparative methods of analysis in the papers attached.
Best regards.
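A minimal sketch of Lin's concordance correlation coefficient, computed directly from its definition for two methods (x and y are hypothetical paired measurements of the same samples):

set.seed(5)
x <- rnorm(20, 20, 5)
y <- x + rnorm(20, 0, 2)
lin_ccc <- function(x, y) {
  n   <- length(x)
  sx2 <- var(x) * (n - 1) / n                    # "population" variances and covariance
  sy2 <- var(y) * (n - 1) / n
  sxy <- cov(x, y) * (n - 1) / n
  2 * sxy / (sx2 + sy2 + (mean(x) - mean(y))^2)
}
lin_ccc(x, y)
# Kendall's W across all four methods can be obtained, for instance, with
# irr::kendall() on a samples-by-methods matrix (package availability assumed).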
@Emmanuel Curis
Thank you for answering the question 17 times! I am sorry, but you are not talking statistically. I would like to mention that the literature on this topic is huge, and below are enough references that support my point.
Briefly and statistically:
1. With large enough sample sizes (> 30), the violation of the normality assumption should not cause major problems; this implies that we can use parametric procedures even when the data are not normally distributed.
2. According to the central limit theorem:
i. “If the sample data are approximately normal then the sampling distribution too will be normal”;
ii. “In large samples (> 30), the sampling distribution tends to be normal, regardless of the shape of the data”; and
iii. “Means of random samples from any distribution will themselves have normal distribution”.
· Carlson, K. A., & Winquist, J. R. (2018). An Introduction to Statistics: An Active Learning Approach. SAGE Publications.
· Stephanie (2018). Assumption of Normality / Normality Test (last modified June 5, 2018).
http://www.statisticshowto.com/assumption-of-normality-test/
· Conditions to Check Before One Can Use the t-Interval (2018). Department of Statistics, The Pennsylvania State University.
https://onlinecourses.science.psu.edu/stat500/node/36/
· LaMorte, W. W. (2016). Boston University School of Public Health.
http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Probability/BS704_Probability12.html
· Ghasemi, A., & Zahediasl, S. (2012). Normality Tests for Statistical Analysis: A Guide for Non-Statisticians. Int J Endocrinol Metab, 10(2), 486–489.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/
· Statistical Inference and t-Tests (2010). TRMEM160.SQBS.
http://www.minitab.com/uploadedFiles/Documents/sample-materials/TrainingTTest16EN.pdf
· Field, A. (2009). Discovering Statistics Using SPSS, 3rd ed. London: SAGE Publications Ltd, p. 822.
· Pallant, J. (2007). SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows, 3rd ed. Sydney: McGraw-Hill, pp. 179–200.
· Elliott, A. C., & Woodward, W. A. (2007). Statistical Analysis Quick Reference Guidebook with SPSS Examples, 1st ed. London: Sage Publications.
· Altman, D. G., & Bland, J. M. (1995). Statistics notes: the normal distribution. BMJ, 310(6975), 298.
Regards,
Zuhair
@ Zuhair: sorry for the several answers, this was not intentional: ResearchGate told me that an error had occurred, so I tried several times and finally gave up, thinking I could not answer. I have only just discovered that it simply accumulated the answers.
I apologize to all readers for that unintentional nuisance.
I also see several answers from you; I don't know, however, whether this is also a ResearchGate bug or not.
The fact that this oversimplification is still widespread, with a lot of citations, is not proof that it is correct. I do not understand what you mean when you say that I'm not speaking « statistically »; could you be more precise?
I do not say that the CLT does not work, nor that it is wrong that the larger the sample, the better the normal approximation for most usual tests. I just say that using n > 30 instead of a careful inspection of the data and a little bit of thinking is not wise. It is very easy to show by simulation that, for instance, the use of an estimated variance instead of the true one [as in the central limit theorem] slows down the convergence even for quite symmetric distributions, and that at n = 30 the approximation is quite bad. For asymmetric distributions, it works even less well.
And, as a last point, using this rule of thumb as an unquestioned rule, as is too often done, may lead to very false results in situations where it does not apply at all, such as building prediction or tolerance intervals: however large the sample is, even with millions of individuals, the distribution of the mean is of little interest there; what matters is the data distribution, for which no convergence at all towards the normal distribution is expected...
Looks like it is a bug that ResearchGate has to fix. Both Emmanuel's and Zuhair's answers were replicated multiple times.
Dear all ResearchGate members,
I am sorry and I apologize for any inconvenience.
Dear @Emmanuel Curis
Thank you for your reply.
Indeed, I faced the same problem (the “error”) that you mentioned when I was submitting my answer.
Exchanging and communicating answers and ideas through ResearchGate is very good; such exchange and communication are its two sole purposes. However, one should not force others to follow one's answer or to reject other answers. In addition, answers should be based on either documented personal experience or theoretical principles.
Moreover, the principles and theory of Statistics, as in other fields, have been and are strengthened and built up through theorems and papers written in high-quality journals and books published through recognized publishing companies. One of these theorems is the Central Limit Theorem. In addition, statistical technology/software has always confirmed the validity of the theory, and if there is a contradiction between them, it is the fault of the technology/software and not of the theory.
The use of Statistics has permeated almost every facet of our lives. Statistical concepts and methods, and the use of statistical software in statistical analysis, have affected virtually all disciplines: physics, biology, chemistry, agriculture, engineering, economics, business, sociology, psychology and others.
This importance has created one of the most common problems in the field of Statistics: any researcher who can calculate a mean and a standard deviation, and who may be able to apply a simple t-test, may be misled into thinking that such procedures alone constitute Statistics, and may thus mistakenly imagine himself to be a qualified statistician.
Regarding the questions and answers: we are all supporting science and exchanging our ideas and experience. If an argument arises over any problem, the evidence (papers, books, resources, ...) for the personal experience or theoretical principles should be given in such a manner that it either supports your idea or refutes the others' (please refer to my reply below). Otherwise, our answers are not accurate.
Dear Emmanuel Curis
In your first answer you concluded that my idea was based on the output of the last century! Your answer is supposed to give some statistical evidence, and your having failed to do so clearly indicates that you are unaware of the recent (2018, 2017, 2016, ...) and extensively published work in this regard. Thus, the answer is not statistically based and cannot be accepted. Your conclusions lack credibility because they neglect the solid theoretical side of Statistics.
Thank you
Zuhair
------------------------------------------------------------------------------------------------------------------------
Briefly and statistically:
1. “With large enough sample sizes (> 30), the violation of the normality assumption should not cause major problems; this implies that we can use parametric procedures even when the data are not normally distributed”.
2. According to the central limit theorem:
i. “If the sample data are approximately normal then the sampling distribution too will be normal”;
ii. “In large samples (> 30), the sampling distribution tends to be normal, regardless of the shape of the data”; and
iii. “Means of random samples from any distribution will themselves have normal distribution”.
@Zuhair: the central limit theorem [CLT] assumes (1) that the distribution has a mean and a variance, (2) that we divide by the true variance of the mean estimator, hence that the true variance of the distribution is known (and (3) that the random variables are independent and identically distributed, but that is the basic assumption of most statistical samples, so let's not discuss it). Besides, it says nothing about the speed of convergence.
In practice: (1) the n > 30 rule assumes that convergence is quite quick, but this depends on the real distribution. It quite often works well when the variance is indeed known; it works much less well when the variance is estimated. Just compare a Student's distribution with 30 degrees of freedom with a Gaussian if you're not convinced, especially for low p-values (which can become an issue with large multiple testing).
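A small illustration of that comparison (my own sketch, not the poster's code):

p <- c(0.05, 0.01, 0.001, 1e-4, 1e-5)
data.frame(p      = p,
           t_30df = qt(p / 2, df = 30, lower.tail = FALSE),    # two-sided critical values
           normal = qnorm(p / 2, lower.tail = FALSE))
# The two sets of critical values drift apart as p gets smaller, i.e. in the tails.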
(2) If assumption 1 does not hold, then the CLT does not hold at all. The empirical mean may still be of interest, but its law can be anything. Just try with a Cauchy distribution to convince yourself: whatever n is, the mean will never be Gaussian. And that is mathematics, easy to demonstrate; no need for a citation.
(3) Since the variance is estimated and not the true one, the CLT alone does not hold; you must use Slutsky's theorem to prove the convergence towards a Gaussian. The price to pay is a slower convergence towards the N(0,1). Note that this theorem does not say anything about the convergence speed either...
That does not mean that the distribution of the mean does not converge, at its own speed, to a normal distribution; of course it does, since the CLT applies to the sample mean. But what we are interested in, in reality, is not the distribution of the sample mean, but the Student's test statistic, either for tests or for building confidence intervals.
And, as I said, just do your own simulations to be convinced that what I say is not without underlying arguments. I can send R code if you need; it will show you, for instance, that when n = 30 and the true distribution of the data is exponential (with parameter 1), the coverage of the 95% confidence interval obtained using the CLT is in reality around 92%, and that you need n ≈ 500 to reach 95% (94.7% exactly). Even for Gaussian data with n = 30, using the CLT instead of the Student's distribution gives a coverage of 94% instead of 95%. Whether or not that is an important difference is another debate.
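A minimal sketch of the kind of simulation described above (my own code, not the script offered in the post): empirical coverage of the normal-approximation 95% confidence interval for the mean of exponential(1) data with n = 30.

set.seed(6)
n <- 30; n_sim <- 100000; true_mean <- 1
covered <- replicate(n_sim, {
  x  <- rexp(n, rate = 1)
  ci <- mean(x) + c(-1, 1) * qnorm(0.975) * sd(x) / sqrt(n)   # CLT-based interval
  ci[1] <= true_mean && true_mean <= ci[2]
})
mean(covered)          # noticeably below the nominal 0.95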
After that, the question is, as always, how precise we want the approximation to be. You may say that when n > 30, whatever the situation, it is OK. I say that a more careful inspection of the data and of the context of the question is wiser than applying this rule as a golden rule. Especially towards the tails of the distribution, an approximation that may be acceptable at n = 30 for the 5% level may become "false" for small risks.
Dear @Emmanuel Curis
I don't like to repeat my answers.
As I said, answers should be based on either documented and published personal experience or theoretical principles (books or papers).
Regards,
Zuhair
Dear Zuhair,
I don't like to repeat myself either, but note that you do not answer any of the specific points in the issues I raise, which are theoretical principles that you can check yourself.
Note that none of your citations is appropriate, since all of them state « n > 30 » without any justification (no demonstration, no simulation...). In detail, for the ones I can access easily:
1) http://www.minitab.com/uploadedFiles/Documents/sample-materials/TrainingTTest16EN.pdf states, for the one-sample t-test, « it is fairly robust to violations of this assumption for sample sizes equal to or greater than 30, provided the observations are collected randomly and the data are continuous, unimodal, and reasonably symmetric » (emphasis is mine), which in fact goes my way: n > 30 is not a magic rule but should be used with care.
2) http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Probability/BS704_Probability12.html, about the CLT, does not explain or justify why 30 and not another value. Besides, as answered previously, the CLT alone is not enough, because in practice the variance is estimated, not known.
3) https://onlinecourses.science.psu.edu/stat500/node/36/ does not explain or justify (by demonstration, simulation or even a citation) why 30, and besides it adds « if there are no extreme outliers that cannot be removed », which also goes my way: not a magic rule, use with caution...
4) http://www.statisticshowto.com/assumption-of-normality-test/ does not even mention sample size (except for the application of the normality tests it presents, which is another subject).
5) The article "Normality Tests for Statistical Analysis: A Guide for Non-Statisticians" is less strict than you, since it says « n > 30 or 40 » (emphasis is mine again) and even speaks of « hundreds of observations » before ignoring the distribution of the data [which remains false for tolerance/prediction interval problems, but that's another debate].
All of this, using just your own citations, confirms that « n > 30 » is not a magic rule and that it does not replace a careful examination of the data, the context and the question before interpreting any statistical results. Which was the only purpose of my post...
I didn't understand what you mean; please try to rewrite your question.
Best
Zuhair
I can't see any supporting evidence for your claims in your references. Your references are mainly web pages and general statistics books (actually not even theoretical ones, but about the practical use of SPSS, etc.).
There is a fundamental statistical principle: if you want to perform inferential statistics, your sample size must be calculated for that purpose. It is a reality that statistical methods are constantly under attack. Why? Simply because of the incorrect use of statistics: applying statistics by "book formulas", using parametric methods indiscriminately without checking the normality and homoscedasticity assumptions, spreading statistical myths (like the "n > 30" myth), and so on. I wrote, in answer to another question, that if you did not plan your research with an adequate number of samples, considering the variables, the characteristics of the population, the variability, the significance level, the power and the effect size, there is little or nothing you can do with the results if you intend to apply inferential statistics; you should even decide which analysis you will apply before starting your research.
Best regards.
Dear Emmanuel
Hello
Sorry to say it, but it seems that you like to continue the controversy about some very clear principles.
I) Let me repeat my answer, I wrote:
1. “With large enough sample sizes (> 30), the violation of the normality assumption should not cause major problems; this implies that we can use parametric procedures even when the data are not normally distributed”.
2. According to the central limit theorem:
i. “If the sample data are approximately normal then the sampling distribution too will be normal”;
ii. “In large samples (> 30), the sampling distribution tends to be normal, regardless of the shape of the data”; and
iii. “Means of random samples from any distribution will themselves have normal distribution”.
3. Ten references were provided and selected carefully, some of them regarding the theory (one book from SAGE, five published papers) and some technical links from the Pennsylvania State University, Boston University, etc.
II) I didn’t say n=30 is a magic rule.
III) Also, I asked whether you have strong evidence (published papers, books) for your personal experience or theoretical principles; such evidence should be provided.
IV) In your last letter, you provided five references, four of them technical links and one a paper: “Ghasemi A, Zahediasl S. (2012). Normality Tests for Statistical Analysis: A Guide for Non-Statisticians. Int J Endocrinol Metab, 10(2), 486–489”.
V) In general, the fundamental nature of Statistics, as in other fields, has been and is strengthened and built up by pioneers and geniuses, through their writing of theorems and papers in high-quality journals and the publishing of books through recognized publishing companies.
VI) If anyone has doubts about the theory of Statistics, or wants to improve the old methods or propose new methods, as all previous researchers in the field did, this requires the accepted means of publishing the modifications or new techniques in a specialized, indexed journal, so that other specialists in the field may be convinced of their validity.
VII) Regarding your references: I will not discuss the technical links. If you read them carefully, you will find that all of them support my answer.
VIII) Regarding the paper: I don't know the quality of the journal, but the authors did good work. It is very clear that the paper applies several normality tests to two types of serum measurements with different sample sizes: the first is serum magnesium levels in 12–16-year-old girls (normally distributed, n = 30) and the second is serum thyroid-stimulating hormone (TSH) levels in adult control subjects (not normally distributed, n = 24).
The paper then concluded that, with regard to “serum magnesium concentrations, both tests have a p-value greater than 0.05, which indicates normal distribution of data, while for serum TSH concentrations, data are not normally distributed as both p values are less than 0.05”.
IX) Do you think that this experiment with these samples is enough? Definitely not. The reasons are given below:
The tests should be developed using the same kind of serums and not different ones.
The tests should be developed using the same kind of serums and not different ones, and with the same characteristics of the samples.
The tests should be repeated using several samples from different hospitals/populations using the same kind of serums and not different ones.
How do you know that the samples are randomly selected and representative of their populations?
…
…
The paper compared the results of a normal sample with those of a non-normal sample! It should be investigated whether or not the normality holds for several non-normal samples with different sample sizes: less than 30, equal to 30, and greater than 30.
Finally, the experiment in this paper is not related to our argument, because the paper should consider all the points above, and should consider the second serum, thyroid-stimulating hormone (TSH) levels in adult control subjects with a non-normal distribution, with n = 30 and not n = 24.
X) Again, if you have any modifications or a new method, the idea in point V above needs to be followed. Otherwise, I have no time to waste.
Regards,
Zuhair
Dear Zuhair,
Here comes a tentative point-by-point answer. Please note, however, that most arguments come from basic probability theory, which you can find in any introductory text on probability, and some of them in equivalent texts on statistics. I really hope you will not ask for citations for them, since that would amount to asking for citations for things as basic as the axioms of probability... I quote parts of your answer for greater readability and put them in italics.
I. 1) “With large enough sample sizes (> 30), the violation of the normality assumption should not cause major problems; this implies that we can use parametric procedures even when the data are not normally distributed”
That's the point we're discussing. Everything lies in the « should » that becomes « we can use », and in the 30 taken as the definition of a "large enough sample size". I think this sentence is far too general and tends to replace careful thinking about the data, the context and so on with an overly simple rule; as it stands, it can do more harm than good.
2. According to the central limit theorem:
The central limit theorem says: if X1, X2, ..., Xn are n independent real-valued random variables with the same distribution function (same law), with expectation µ and variance sigma², and if M = (X1 + X2 + ... + Xn)/n is their arithmetic mean, then (M − µ)/(sigma/sqrt(n)) converges in law to a standard normal distribution [N(0,1)] as n goes to infinity. And all the hypotheses are needed for this result (or at least, they cannot be completely removed).
So, based on that:
i. “If the sample data are approximately normal then the sampling distribution too will be normal”;
Technically, the central limit theorem does not say anything about this first point. Intuitively it is appealing, but it remains to be defined what "approximately normal" means, because symmetry, for instance, is not enough: the result is wrong for a Cauchy distribution, as already said, which looks like a bell curve and is symmetric; see below also.
ii. “In large samples (> 30), the sampling distribution tends to be normal, regardless of the shape of the data”; and
I'm not quite sure what you are calling the sampling distribution, compared with the sample data above and the distribution of the means of random samples in the point below. There are only two distributions at play in the central limit theorem: the distribution of the individual values Xi (i = 1..n) and the distribution of the mean M. There is nothing in the central limit theorem about any relationship between the sample size n and the data distribution, and nothing either that gives 30 a special value.
One last thing: note that in the central limit theorem the denominator is a constant. In all practical applications it is not, because the variance is estimated from the sample. So, as already said, theorems other than the central limit theorem alone must be invoked to justify the « large n » result (I already gave a hint about which theorems are needed).
iii. “Means of random samples from any distribution will themselves have normal distribution”.
Not of any distribution: it does not work for the Cauchy distribution (or the Student distribution with 1 degree of freedom, if you prefer), for instance. It is « easy » to prove, since the sum of two variables having a Cauchy distribution itself has a Cauchy distribution, hence, by induction, so does their arithmetic mean. And this is because a Cauchy distribution has no expectation, so the central limit theorem does not apply to it.
Cauchy distributions appear when forming ratios of normal variables, and ratios are not so uncommon in real life...
Read for instance https://en.wikipedia.org/wiki/Stable_distribution and https://en.wikipedia.org/wiki/Cauchy_distribution for more details.
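A small illustration of that counter-example (my own sketch): the mean of Cauchy variables does not settle down as n grows, unlike, say, the mean of uniform variables.

set.seed(7)
sapply(c(10, 1e3, 1e5), function(n) mean(rcauchy(n)))   # still erratic, whatever n is
sapply(c(10, 1e3, 1e5), function(n) mean(runif(n)))     # converges to 0.5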
3. Ten references were provided and selected carefully, some of them regarding the theory (one book from SAGE, five published papers) and some technical links from the Pennsylvania State University, Boston University, etc.
These references affirm, like you, that n > 30 is OK (with more or fewer restrictions, as exemplified in my previous answer), but *none* of them justifies this value of 30. So these references do not support your argument; they just repeat it.
II) I didn’t say n=30 is a magic rule.
I quite agree. However, it is often presented as such, even if the word "magic" is not used.
III) Also, I asked whether you have strong evidence (published papers, books) for your personal experience or theoretical principles; such evidence should be provided.
I personally consider that facts and demonstrations are better arguments than citations, especially when those citations do not themselves justify the statements they make. What stronger evidence than a reproducible simulation or a demonstration can you imagine?
I gave you hints on mathematical demonstrations and arguments based on basic probability courses. You did nothing to counter them, except refusing to consider them just because I did not give references to basic probability books to support them...
I offered you R scripts to do your own simulations and check what I say. Same thing.
IV) In your last letter, you provided five references, four of them technical links and one a paper: “Ghasemi A, Zahediasl S. (2012). Normality Tests for Statistical Analysis: A Guide for Non-Statisticians. Int J Endocrinol Metab, 10(2), 486–489”.
These references are *yours*; I merely re-cited them to show that they are irrelevant, or that they suggest n > 30 should be used only in certain circumstances...
V) In general, the fundamental nature of Statistics, as in other fields, has been and is strengthened and built up by pioneers and geniuses, through their writing of theorems and papers in high-quality journals and the publishing of books through recognized publishing companies.
I do not deny the genius of the masters of statistics. As in other sciences, however, things evolve, especially regarding computational capabilities. An approximation that could be seen as usable in 1930, simply because of tedious hand computations, may become irrelevant now that computers can do the exact computation in a fraction of a second!
VI) If anyone has doubts about the theory of Statistics, or wants to improve the old methods or propose new methods, as all previous researchers in the field did, this requires the accepted means of publishing the modifications or new techniques in a specialized, indexed journal, so that other specialists in the field may be convinced of their validity.
I think specialists are more readily convinced by well-argued demonstrations than by citations...
VII) Regarding your references: I will not discuss the technical links. If you read them carefully, you will find that all of them support my answer.
Knowing that all these references were given by you in the first place, whether or not to discuss them is entirely your choice... But if you read carefully what I wrote about them, *none* of them *supports* what you say. They all just *say the same thing* (which is a different matter), and often with more warnings.
VIII) Regarding the paper: I don't know the quality of the journal, but the authors did good work. It is very clear that the paper applies several normality tests to two types of serum measurements with different sample sizes: the first is serum magnesium levels in 12–16-year-old girls (normally distributed, n = 30) and the second is serum thyroid-stimulating hormone (TSH) levels in adult control subjects (not normally distributed, n = 24).
The paper then concluded that, with regard to “serum magnesium concentrations, both tests have a p-value greater than 0.05, which indicates normal distribution of data, while for serum TSH concentrations, data are not normally distributed as both p values are less than 0.05”.
The paper was cited by you first. Note that the first conclusion is incorrect: p > 0.05 does *not* demonstrate that the data are normal, because failing to reject the null hypothesis is not the same thing as accepting it. It just means that the experiment failed to reject the normality hypothesis, which is obviously false for concentrations anyway, since concentrations cannot be negative whereas normal variables always can be, with a small (negligible) probability. But that opens other debates...
IX) [...] The paper compared the results of a normal sample with those of a non-normal sample! It should be investigated whether or not the normality holds for several non-normal samples with different sample sizes: less than 30, equal to 30, and greater than 30.
Finally, the experiment in this paper is not related to our argument, because the paper should consider all the points above, and should consider the second serum, thyroid-stimulating hormone (TSH) levels in adult control subjects with a non-normal distribution, with n = 30 and not n = 24.
This is a discussion of the very paper you yourself cited... in which there seems to be a major confusion between the distribution of the data and the distribution of the mean. The Shapiro test of normality, like any other normality test, tests the distribution of the data, not of the mean. The « n > 30 => normal distribution » argument, whether we accept it or not, applies to the distribution of the mean, not of the data. So the fact that n = 24 instead of 30 is totally irrelevant to the paper's acceptability for the n > 30 issue, since the paper does not explore the distribution of the mean at all, only the data distribution, *which does not change with the sample size*.
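A small illustration of that distinction (my own sketch): a normality test looks at the distribution of the data, which does not become normal as n grows; the n > 30 argument, if used at all, concerns the distribution of the mean.

set.seed(8)
x <- rexp(5000)                           # skewed, non-normal data
shapiro.test(x)$p.value                   # tiny: the data stay non-normal however many there are
m <- replicate(2000, mean(rexp(30)))      # the distribution of the *mean* of n = 30 values
hist(m, main = "Means of 30 exponential(1) values")   # roughly bell-shaped, still slightly skewed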
Dear Emmanuel
Hello
Please refer to the last line of my previous answer:
I have no time to waste.
Thank you
Zuhair
If you consider answering questions directed at you a waste of time, then I can only recommend that you not bother to answer anything on RG at all.
This might be a better way to communicate and to save time for all.
....
....
Again, I have no time to waste on trivial questions/recommendations...
I have enjoyed these arguments and I have learnt a lot as well. I would ask my fellow researchers and statisticians to avoid personal attacks. Remember, learning is not conclusive; it is a process. Just my thoughts. Elmer from Kenya.
Hey,
If your purpose is to compare the data sets, then you can use the coefficient of variation (CV), if it is appropriate for your data. It uses the information about both the mean and the standard deviation (SD) of each data set.
All the best