I rarely ask questions, so it's my turn. This is more of a survey than a question, and your input would be much appreciated. Remember: no more than two sentences, and do not refer to textbooks — I want YOUR definition. Thanks
Because the threshold for what is or is not a significant effect is arbitrary, the p-value is an overused and mostly misunderstood way of determining the significance of an observed result.
A p-value should be used as one indicator of what is observed, not "the" indicator of what is observed.
Hi. Here's my brief definition: it is the error probability of a statistical test, or the probability of accepting the null hypothesis as true while it is false.
Not sure whether this thread is also supposed to be about discussing the different views (if not, please say so), but as far as Cristian's statement is concerned, I'd advise carefully distinguishing between the p-value, which is an empirical result, and the significance level, which is the a priori probability of rejecting the null hypothesis even though it really is true.
The probability of getting a result equal to or greater than the sample effect size under the assumption that the null hypothesis is true. Problems with this are that the null hypothesis is (almost) never true, that the p-value is not evidence against the null or any other hypothesis, and that the threshold is arbitrary.
A p-value is the (conditional) probability of getting a test statistic at least as extreme as the observed value of the test statistic if the null hypothesis is true.
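This definition can be made concrete with a simulation. Below is a minimal Python sketch of it, using a coin-flip scenario that is my own illustration (not from the thread): we observe 60 heads in 100 flips, take "number of heads" as the test statistic, and estimate the probability of a value at least as extreme under the null hypothesis that the coin is fair.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

observed_heads = 60   # hypothetical observed test statistic
n_flips = 100
n_sims = 100_000

# Simulate the test statistic under the null hypothesis (fair coin, p = 0.5)
# and count how often it is at least as extreme as the observed value.
count_as_extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        count_as_extreme += 1

p_value = count_as_extreme / n_sims  # one-sided simulated p-value, roughly 0.03
print(p_value)
```

The exact one-sided p-value here is the binomial tail probability P(X >= 60) with X ~ Binomial(100, 0.5), about 0.028; the simulation merely approximates it, which keeps the conditional nature of the definition visible: everything is computed under the assumption that the null hypothesis is true.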
I also sometimes have problems when laymen ask me about this. I usually say that "it tells how likely it is that the effect is RANDOM only" (while we are looking for something SYSTEMATIC).