At a recent conference I saw a poster that expressed some variables only with p values, e.g. satisfaction score: p = 0.000, quality of improvement: p = 0.843. Is that OK?
Two years ago, there was a major debate between scientists regarding the use of p values. The ASA (American Statistical Association) ultimately recommended that researchers not use p values alone to express significance vs. non-significance, and that they supplement them with other parameters such as 95% confidence intervals. Most high-ranking journals state clearly that expressing results with p values only is not acceptable, and a common reason for abstract rejection at conferences is relying too heavily on p values, or not reporting them at all.
A p-value in isolation is not informative. If the authors explain in what statistical context (model and restriction) and under what assumptions the p-values were calculated, and if those assumptions make sense, then the p-value at least tells us whether a sufficiently confident interpretation of the direction (sign) of an effect (as defined within the model) is possible, given the data. Another aspect, completely untouched by the p-value, is whether the data provide any evidence that the effect may be of any relevance. This can only be judged from the actual estimated size of the effect. A confidence interval gives a good impression of this (ideally one would prefer a highest posterior density [HPD] interval, though). Then, if we know the model and the confidence (or HPD) interval, and if we have some expert knowledge, we may be able to judge whether the data should make us believe in a relevant effect. This would be, in my opinion, the minimum information required to make the report useful.
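To make this concrete, here is a minimal sketch with made-up summary statistics (not taken from any real study): two hypothetical studies yield roughly the same p-value but carry very different messages once the estimated effect and its 95% CI are reported. The `summarize` helper and all numbers are purely illustrative.

```python
# Illustrative only: two hypothetical studies with similar p-values but very
# different effect sizes and CI widths (one-sample t-test of H0: effect = 0).
import numpy as np
from scipy import stats

def summarize(label, mean, sd, n):
    """Report the estimated effect, its 95% CI, and the two-sided p-value."""
    se = sd / np.sqrt(n)                              # standard error of the mean
    t = mean / se                                     # t statistic for H0: effect = 0
    p = 2 * stats.t.sf(abs(t), df=n - 1)              # two-sided p-value
    half_width = stats.t.ppf(0.975, df=n - 1) * se    # 95% CI half-width
    print(f"{label}: effect = {mean:.2f}, "
          f"95% CI = [{mean - half_width:.2f}, {mean + half_width:.2f}], p = {p:.3f}")

# Study A: tiny effect, huge sample -> "significant" but precisely estimated as small.
summarize("Study A", mean=0.10, sd=1.0, n=500)
# Study B: possibly large effect, small sample -> similar p-value, but the CI
# shows the effect could be practically important (or barely above zero).
summarize("Study B", mean=1.10, sd=1.0, n=7)
```

With only the p-values, the two studies would look interchangeable; the intervals show that Study A's effect is precisely estimated but tiny, while Study B's could be substantial but is very uncertain.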
It is not recommended to report results with p values only. The p-value only quantifies how incompatible the observed data are with the null hypothesis (the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true), which lets you decide whether or not to reject that hypothesis. It does not, however, give any information about the true value of the estimate.
A confidence interval, on the other hand, locates the estimate within a range, tells you whether the hypothesis is rejected (by checking whether the null value lies inside the interval), and its width shows how precise the results are.
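As a rough illustration (a sketch using a simulated sample, not a prescribed workflow), the snippet below computes both the p-value and the 95% confidence interval for the same data: the interval answers the same reject/do-not-reject question (is 0 inside it?) while also showing the plausible range and precision of the estimate.

```python
# Illustrative only: the same simulated data summarised by a p-value and by a 95% CI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.8, scale=2.0, size=30)   # hypothetical measurements

# One-sample t-test of H0: population mean = 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# 95% confidence interval for the mean, from the t distribution.
mean = sample.mean()
se = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=se)

print(f"p = {p_value:.3f}")                         # only says reject / do not reject
print(f"mean = {mean:.2f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")  # estimate + precision
```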