From what I know, you can report error values either from replicates (e.g., the standard deviation) or from the tolerances of the measuring equipment. To be more thorough, you can combine these sources and report the compounded error (I think we had to do that in physical chemistry). I therefore suggest that you take a look at analytical chemistry (lecture/lab) or physical chemistry (lab) books.
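As a toy illustration of compounding equipment tolerances (a minimal sketch; the mass/volume numbers below are invented): for a quantity computed as a ratio, such as a concentration c = m/V, Gaussian error propagation adds the relative uncertainties in quadrature.

```python
import math

def propagate_ratio(m, dm, V, dV):
    """Gaussian error propagation for c = m / V:
    relative uncertainties add in quadrature."""
    c = m / V
    dc = c * math.sqrt((dm / m) ** 2 + (dV / V) ** 2)
    return c, dc

# Example: 0.5012 g weighed on a balance with +/-0.0002 g tolerance,
# dissolved in a 100 mL flask with +/-0.10 mL tolerance.
c, dc = propagate_ratio(0.5012, 0.0002, 100.0, 0.10)
print(f"c = {c:.5f} +/- {dc:.5f} g/mL")
```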
First of all, error bars are meant to represent statistical errors (i.e., the precision), not systematic errors (i.e., the accuracy). If there is nothing statistical about your analysis, an error bar doesn't make sense, and a thorough discussion of the potential systematic issues of your method in the text of your paper/thesis/... is the more valid approach.
So, ask yourself what you did during the application and evaluation of any method:
- Did you acquire data from multiple data points and calculate an average of anything? In that case you also have a standard deviation. If any fitting was involved in creating these data points, the uncertainty of those fits can also be propagated into the standard deviation (see the first sketch after this list).
- If you have only a single point per sample, think about whether the things affecting the accuracy have a statistical effect. If you perform XPS and calculate a stoichiometry, you will use some set of cross-section and penetration-depth parameters. The cross sections are theoretical values from, honestly, rather old methods, so there will be an error from that, but you can't represent it with an error bar (see the second sketch after this list).
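For the first bullet, a minimal sketch of what that could look like, assuming the quantity of interest is the slope of a replicate linear fit (the function and numbers are invented for illustration). Whether the fit uncertainty and the replicate scatter may simply be combined in quadrature like this depends on the two sources being independent:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example: each replicate is a linear calibration whose
# fitted slope is the quantity of interest.
def linear(x, a, b):
    return a * x + b

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)

slopes, slope_errs = [], []
for _ in range(5):                            # five replicate measurements
    y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
    popt, pcov = curve_fit(linear, x, y)
    slopes.append(popt[0])
    slope_errs.append(np.sqrt(pcov[0, 0]))    # 1-sigma fit uncertainty of the slope

slopes = np.array(slopes)
mean = slopes.mean()
std = slopes.std(ddof=1)                      # scatter between replicates
fit_part = np.sqrt(np.mean(np.square(slope_errs)))  # average fit uncertainty
total = np.sqrt(std**2 + fit_part**2)         # combined in quadrature
print(f"slope = {mean:.3f} +/- {total:.3f}")
```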
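For the second bullet, here is roughly where those parameters enter an XPS quantification; the peak areas and sensitivity factors below are made up for illustration. Because the sensitivity factors multiply every measurement the same way, an error in them shifts all your stoichiometries systematically, which is exactly why no replicate-based error bar can capture it.

```python
# Hypothetical XPS quantification: peak areas divided by relative
# sensitivity factors (which bundle the cross section, inelastic mean
# free path, and analyser transmission), then normalised.
peak_areas = {"Ti 2p": 15400.0, "O 1s": 48200.0}   # invented numbers
rsf = {"Ti 2p": 2.0, "O 1s": 0.7}                  # illustrative, not from a real table

corrected = {k: peak_areas[k] / rsf[k] for k in peak_areas}  # I_i / S_i
total = sum(corrected.values())
for k, v in corrected.items():
    print(f"{k}: atomic fraction = {v / total:.3f}")
```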
There may be reviewers who will always demand error bars on your data, but an error bar that feigns knowledge you don't have is actually worse than no error bar.
Unfortunately, sometimes you are forced to put bogus error bars on things just to satisfy people. If you are faced with that, choose something that doesn't cause you too much stomach ache when you imagine it connected to your name.