Recently, I've been reading a collection of papers, and I noticed that almost none of them include any error bars (no standard deviations, confidence intervals, or other indication of uncertainty). This really surprised me: I've always been taught that error bars are essential to the correct interpretation of results.

Is it bad practice to leave them out? If so, why are there so many conference and journal publications that contain graphs without them? Or should I not even bother including them in my own publications? How does this affect the trust we can place in the research results?

For context:

The immediate context is a number of IEEE conference publications I was reading, which included simulation results as part of the evaluation. The work was on VANETs, but I've noticed this practice across computer science, and across publishers of all quality levels, both open- and closed-access.

I'm not sure whether this is also common in other fields (comments to that end are also welcome!).
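To make concrete the kind of plot I have in mind, here is a minimal sketch (Python with matplotlib, entirely made-up numbers, not from any real simulation): the mean over repeated simulation runs with a 95% confidence interval at each point.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: packet delivery ratio from repeated simulation runs
# at several vehicle densities (made-up numbers, purely illustrative).
rng = np.random.default_rng(42)
densities = np.array([10, 20, 40, 80])   # vehicles per km (x-axis)
runs_per_density = 30                    # independent simulation runs per point

# Stand-in for simulator output: 30 runs per density point.
results = np.array([rng.normal(loc=pdr, scale=0.05, size=runs_per_density)
                    for pdr in [0.95, 0.90, 0.80, 0.65]])

means = results.mean(axis=1)
# 95% confidence interval of the mean, assuming approximate normality:
# 1.96 times the standard error of the mean.
ci95 = 1.96 * results.std(axis=1, ddof=1) / np.sqrt(runs_per_density)

plt.errorbar(densities, means, yerr=ci95, fmt='o-', capsize=4)
plt.xlabel('Vehicle density (vehicles/km)')
plt.ylabel('Packet delivery ratio')
plt.title('Mean over 30 runs with 95% confidence intervals')
plt.show()
```

That is all I mean by "error bars" here; the papers I'm reading typically show only the single line of means.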
