Electron beam diameter, spot size, and pixel size of the image are typical parameters used to describe the scanning behaviour of an SEM. When investigating small-grained microstructures such as martensite by EBSD at low magnification, e.g. on the order of x2,000, the hit rate is often quite low, which is commonly explained by the high fraction of grain boundaries. Going to higher magnifications such as x10,000 and above, the hit rate fortunately increases (e.g. from 50% to 95%) without changing any focus conditions, but at the same time the statistical significance decreases because fewer grains are covered. Since the beam diameter of a FEG-SEM is assumed to be on the scale of a few tens of nanometres or smaller, the distance between adjacent measurement points is usually considerably larger than the assumed beam diameter. Evidently the spot size on the sample surface is bigger than expected. This can be tested by applying the same step width for an EBSD scan at different magnifications; a rough calculation along these lines is sketched below.

What are possible reasons? Is any SEM - and I am not talking about the different filament types - affected in a similar way, or are there differences between manufacturers?
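To put rough numbers on the observation, here is a minimal sketch (not part of the original measurements): if the beam is not held stationary during pattern acquisition but effectively samples an area on the order of one image pixel, the effective spot on the sample scales with 1/magnification and can easily exceed the nominal probe diameter at low magnification. The reference scan width of 10 cm, 1024 pixels per line, 20 nm probe diameter, and 100 nm EBSD step width used below are assumed values for illustration only and will differ between instruments and manufacturers.

```python
# Illustration only: compares the image pixel size on the sample with an
# assumed nominal probe diameter at different magnifications.
REFERENCE_WIDTH_M = 0.10   # assumed display width the magnification refers to (manufacturer-dependent)
PIXELS_PER_LINE = 1024     # assumed number of pixels per scan line
BEAM_DIAMETER_NM = 20      # assumed nominal FEG probe diameter
STEP_SIZE_NM = 100         # assumed EBSD step width, kept fixed for all magnifications

for magnification in (2_000, 10_000, 50_000):
    field_width_nm = REFERENCE_WIDTH_M / magnification * 1e9
    pixel_size_nm = field_width_nm / PIXELS_PER_LINE
    # If the beam rasters over one pixel per point, the larger of pixel size
    # and probe diameter sets the effective sampled area.
    effective_spot_nm = max(pixel_size_nm, BEAM_DIAMETER_NM)
    print(f"x{magnification:>6,}: field width {field_width_nm / 1000:5.1f} um, "
          f"pixel size {pixel_size_nm:5.1f} nm, "
          f"effective spot {effective_spot_nm:5.1f} nm "
          f"(step width {STEP_SIZE_NM} nm)")
```

With these assumed numbers the pixel size drops from roughly 49 nm at x2,000 to about 10 nm at x10,000, i.e. only at the higher magnification does it fall clearly below the assumed probe diameter, which would be consistent with a hit rate that improves with magnification at a fixed step width.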
