When you have two or more groups, each containing several observations, and you want to compare the group averages, a test of statistical significance is required. In other words, in statistics a result is considered significant not because it is important or meaningful, but because it is judged unlikely to have occurred by chance alone. The p-value and the alpha level are used to express statistical significance.
It is easy to calculate by hand, but statistical software such as SPSS or MATLAB makes these problems even easier to deal with.
As Benyamin said, in all fields of statistics, when the probability of the observed phenomenon arising by chance alone is less than the alpha level (the type I error rate, conventionally 0.05 or 5%), we say there is a statistically significant difference, relationship, etc. How you test for significance depends strongly on your hypothesis and your questions. For instance, in regression the question is whether the slope of the regression line is zero (the null hypothesis) or not (the alternative hypothesis). In ANOVA, it is whether the means of two or more groups are all equal (null hypothesis) or at least one difference exists between them (alternative hypothesis). In repeated measures, it is whether the means of all paired differences are zero (null hypothesis) or at least one of them is non-zero (alternative hypothesis).
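As a rough sketch, the three null hypotheses above can be tested with scipy.stats; the data here are invented illustrative values, not from any real study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Regression: H0 is that the slope of the fitted line is zero.
x = np.arange(20.0)
y = 2.0 * x + rng.normal(0, 1, 20)   # strong linear trend plus noise
reg = stats.linregress(x, y)
print(f"regression slope p-value: {reg.pvalue:.3g}")

# One-way ANOVA: H0 is that all group means are equal.
g1 = rng.normal(10, 2, 15)
g2 = rng.normal(10, 2, 15)
g3 = rng.normal(14, 2, 15)           # deliberately shifted group
f_stat, p_anova = stats.f_oneway(g1, g2, g3)
print(f"ANOVA p-value: {p_anova:.3g}")

# Paired (repeated-measures) t-test: H0 is that the mean paired difference is zero.
before = rng.normal(100, 5, 12)
after = before + rng.normal(3, 1, 12)  # consistent improvement
t_stat, p_paired = stats.ttest_rel(before, after)
print(f"paired t-test p-value: {p_paired:.3g}")
```

Each test returns a p-value that is then compared against the chosen alpha level.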
As Benyamin mentioned, you can use software applications to do this. All of these applications test hypotheses about population parameters using statistics computed from the samples.
Dear Benyamin Khoshnevisan, I have two queries. I am performing an analysis of variance, and in some cases the p-value is greater than alpha even though the average is better. Does this mean my proposed solution is bad? Second, have you ever used the multi-compare function in MATLAB?
Sunita, such cases occur when the variance within your groups is high. Check the Coefficient of Variation (CV) and the kurtosis of your data; your CV might be high and your kurtosis negative. To overcome situations where you believe there should be a significant difference but you cannot detect it statistically, you should raise the number of replications.
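A minimal sketch of that diagnostic check, with made-up sample values (the thresholds in the comments are rules of thumb, not fixed standards):

```python
import numpy as np
from scipy import stats

data = np.array([4.1, 9.8, 2.5, 12.3, 6.7, 1.9, 11.0, 5.4])

cv = np.std(data, ddof=1) / np.mean(data)  # relative spread of the sample
kurt = stats.kurtosis(data)                # Fisher definition: normal => 0

print(f"CV = {cv:.2f}")                # a high CV signals noisy groups
print(f"excess kurtosis = {kurt:.2f}") # negative => flatter than normal
```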
The p-value should be interpreted with respect to your null hypothesis, not with respect to the averages of your data.
The p-value is a number between 0 and 1 and is interpreted in the following way:
A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-values very close to the cutoff (0.05) are considered to be marginal (could go either way). Always report the p-value so your readers can draw their own conclusions.
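The decision rule above can be sketched with an independent two-sample t-test; the group values here are invented for illustration:

```python
import numpy as np
from scipy import stats

alpha = 0.05
group_a = np.array([5.1, 4.9, 5.6, 5.0, 5.2, 4.8])
group_b = np.array([6.3, 6.1, 5.9, 6.5, 6.0, 6.2])

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}")

if p_value <= alpha:
    print("reject H0: the group means differ significantly")
else:
    print("fail to reject H0")
```

Reporting the exact p-value, not just "significant" or "not significant", lets readers judge marginal cases for themselves.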
The average alone cannot be relied on when you are comparing two or more groups; the deviation around the average also matters. Also, reconsider your calculation. Are you sure about its accuracy?
About your second question: no, I have not used the multi-compare function in MATLAB.
I hope others can provide you with more information.
Dear Sunita, it is impossible to reach a p-value of exactly zero, because of the following limit (I assume you want the p-value for the difference between two groups):

as the difference → infinity, the p-value → 0.

This means that as the difference between two groups grows toward infinity, the p-value goes toward zero, but it never reaches exactly zero.
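That limiting behaviour can be seen numerically; in this hedged sketch the second group is the first one shifted by ever larger amounts, and the two-sample t-test p-value shrinks toward (but never reaches) zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 30)

pvals = []
for shift in [0.5, 1.0, 2.0, 4.0, 8.0]:
    # same spread in both groups, only the mean difference grows
    _, p = stats.ttest_ind(base, base + shift)
    pvals.append(p)
    print(f"shift={shift:>4}: p={p:.3e}")
```

Because the spread is identical at every step, the p-value decreases monotonically as the shift grows, yet it always remains strictly positive.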
Statistical significance is a mathematical concept for measuring the probability that two or more groups of experimental results could be equal. If the probability (p) that the two experimental groups are equal is less than 0.05, it is widely accepted that these groups are significantly different. But you must take into account that there remains a probability p that your conclusion is incorrect. The test is based on the distance between the group averages measured in terms of the group variances. That is, if the distance between the group averages is greater than a certain multiple of the group variances (enough that the area under the probability density function of the difference is less than 0.05), the distance is statistically significant. In other words, when a test shows a significant difference, you have found that the distance between the groups is greater than what chance alone would produce.
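That idea of "distance between averages in terms of the variances" is exactly what the t statistic computes; this sketch, with invented values, builds it by hand and checks it against scipy's equal-variance two-sample t-test:

```python
import numpy as np
from scipy import stats

a = np.array([12.0, 11.5, 13.2, 12.8, 11.9])
b = np.array([15.1, 14.6, 15.9, 14.8, 15.3])

# distance between averages, scaled by the pooled standard error
n_a, n_b = len(a), len(b)
sp2 = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
t_manual = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / n_a + 1 / n_b))

# should match scipy's equal-variance two-sample t-test
t_scipy, p = stats.ttest_ind(a, b)
print(f"t (manual) = {t_manual:.4f}, t (scipy) = {t_scipy:.4f}, p = {p:.4g}")
```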