There is a big difference between particle size and particle size distribution. For 1% standard error on the mean you'll need to examine 10000 particles. If you wish to have the x90 to 1 % SE then you'll need 10000 particles in the x90+ part of the distribution. You'll have to decide what is a primary particle, what is an aggregate, and what is an agglomerate...
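For context, the 10000 figure follows from simple counting statistics: the relative standard error of the mean scales roughly as CV/√n. A minimal sketch (assuming a broad distribution with CV ≈ 1; the CV is an assumption, not a measurement):

```python
import math

def particles_needed(cv, target_rel_se):
    """Particles to count so the relative standard error of the mean,
    roughly CV / sqrt(n), falls to target_rel_se.
    cv = sigma / mean of the size distribution (assumed, not measured here)."""
    return math.ceil((cv / target_rel_se) ** 2)

# A broad distribution with CV ~ 1 needs ~10000 particles for 1% SE on the mean
print(particles_needed(cv=1.0, target_rel_se=0.01))   # -> 10000
```

A narrower distribution (smaller CV) needs correspondingly fewer particles for the same precision on the mean; the tails (x90 and so on) still need their own counts, as noted above.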
BTW, everything under TEM is an artifact of sample preparation and particle selection.
Image J is freeware written by NIH in the US. Google is your friend: http://bfy.tw/Mgm1
JayaKumar G, I suggest starting with this nice overview, published about a decade ago in Langmuir: 10.1021/la801367j
For a more practical approach, several threads already opened on RG deal with the same topic and contain some valuable comments, e.g.: https://www.researchgate.net/post/how_to_calculate_particle_size_from_TEM_images
You can see these previous RG discussions to calculate the particle size: https://www.google.com/amp/s/www.researchgate.net/post/How_to_calculate_particles_size_of_material_from_its_SEM_images/amp
The following software packages can be used to calculate particle size from SEM and TEM images:
01. ImageJ software:
You can use the ImageJ software package. It is free and contains many powerful tools for analyzing the size and shape of particles. Alternatively, you could measure by hand, using a ruler on a printed image.
02. ZEN software:
ZEN software can be used to estimate the average size of nanoparticles.
03. OLYMPUS software:
The OLYMPUS Inspector Series provides accurate and reproducible particle size and distribution data in accordance with international standards for quantifying residue on filters, advanced particle analysis, and non-metallic inclusion rating, and it can be customized to internal company standards.
04. Zetasizer Nano ZS90 and S90:
The Zetasizer range provides both exceptionally high-performance and entry-level systems that combine a particle size analyzer, zeta potential analyzer, molecular weight analyzer, and protein mobility and microrheology measurements.
You can calculate it using software such as ImageJ, or just take a printout of the figure and measure the length of the scale bar provided at the bottom. It will be some x cm, so work out that x cm equals y nm. Then measure the size of all the nanoparticles, convert them into nm, and take the average.
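A minimal sketch of that arithmetic, with purely hypothetical scale-bar and ruler readings:

```python
# Hypothetical numbers: the printed scale bar reads 50 nm and measures 2.5 cm
# with a ruler, so 1 cm on the print corresponds to 20 nm.
scale_bar_nm = 50.0
scale_bar_cm = 2.5
nm_per_cm = scale_bar_nm / scale_bar_cm

# Ruler measurements of individual particles on the same print (cm)
measured_cm = [0.40, 0.55, 0.35, 0.60, 0.45]
sizes_nm = [m * nm_per_cm for m in measured_cm]

mean_nm = sum(sizes_nm) / len(sizes_nm)
print(f"mean size ~ {mean_nm:.1f} nm")
```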
Sir, I tried the free ImageJ software but it would not install on my system. If possible, please suggest another way to get it; I also need to plot a histogram of the particle size.
I ask anyone above who has posted 'Use Image J' to calculate the particle size distribution in any one of the 4 displayed images which I attach to this response. It appears that no one yet has even bothered to attempt this task. Further, let us know the total number of particles in any of the images and the assumptions in measuring the 'size' of a 3D particle with a 2D image to generate a 1D answer. Of what equivalent (circular, volume, etc) is this answer?
Alan F Rawle Although ImageJ is definitely not my cup of tea, for the sake of the discussion I gave it a try on Figure a). Some remarks from the process:
- the thresholding does not work as expected; even the manual mode is challenging. Maybe working on the original image from the microscope would yield better results.
- the selected magnification is only sufficient for the larger particles, while the smaller, nanosized stuff is mostly not recognised (see the mask overlay; the same is also reflected in the distribution histogram, with a cut-off below 5 nm)
- for the automatic calculation I chose only area and Feret diameter, although other options are available. The particles seem spherical though, so I guess this would be a reasonable approximation?
- oh yes, the important stuff: the process only recognised 234 particles. I guess this could be addressed by analysing multiple micrographs, and having them at different magnifications would probably also improve the "missing gap" below 5 nm.
- PS: I also had to clean up some things manually; the scale marker was recognised as a particle, as were the brackets, the letter 'a', and so on...
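For readers who prefer a scripted route, a rough Python/scikit-image equivalent of the workflow described above might look like the sketch below. The filename, threshold choice, and minimum object size are assumptions, and the pixel size is the 0.17 nm/px value quoted later in the thread; Otsu thresholding will run into exactly the problems noted above.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, filters, measure, morphology

PIXEL_NM = 0.17  # nm per pixel (value quoted later in this thread; adjust to your calibration)

img = io.imread("figure_a.tif", as_gray=True)   # hypothetical filename

# Particles assumed darker than the support film; Otsu is only a starting
# point and manual tuning of the threshold is usually unavoidable.
thresh = filters.threshold_otsu(img)
mask = img < thresh
mask = morphology.remove_small_objects(mask, min_size=30)  # drops letters, marker debris

labels = measure.label(mask)
props = measure.regionprops_table(labels, properties=("area", "feret_diameter_max"))

area_nm2 = np.asarray(props["area"]) * PIXEL_NM ** 2
feret_nm = np.asarray(props["feret_diameter_max"]) * PIXEL_NM
print(f"{feret_nm.size} particles detected")

plt.hist(feret_nm, bins=30)
plt.xlabel("max Feret diameter (nm)")
plt.ylabel("count")
plt.show()
```

Whether the resulting count and histogram mean anything is, of course, the subject of the rest of this discussion.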
Well done for trying here. You’ve identified many of the difficulties in analyzing TEM and SEM micrographs. Perhaps we need to take ‘seeing is believing’ with a pinch of NaCl - there’s a bunch of subjectivity involved with real images of this type. There are many other issues as well - overlapping and aggregated ‘particles’, etc.
Hello. You can do it by hand in Digital Micrograph: you mask the particle and the program calculates the area, d, or l. Then you put the numbers into Excel or Origin and use the equation for your type of particle. If you have spherical particles, or close to it, you use the equation for a circle; for hexagonal particles you can also use a circle, since the error is below the resolution of the TEM. For a good statistical result you need to circle 300-400 particles per sample, and do not circle them all in one image. Once you have calculated the diameters, you run the distribution analysis (in Origin) and make a graph. This was also demonstrated in some of our articles (look at mine, Darko Makovec or Darja Lisjak).
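A sketch of that circle-equivalent step, with Origin's distribution analysis replaced here by a scipy log-normal fit; the traced areas below are synthetic, stand-in values, not real data:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Synthetic stand-in for 300-400 hand-traced particle areas (nm^2)
rng = np.random.default_rng(0)
d_true = rng.lognormal(mean=np.log(12.0), sigma=0.35, size=350)
areas_nm2 = np.pi * (d_true / 2.0) ** 2

# Circle-equivalent diameter from the traced area: d = 2 * sqrt(A / pi)
d_eq = 2.0 * np.sqrt(areas_nm2 / np.pi)

# Log-normal fit, a common choice for TEM number distributions
shape, loc, scale = stats.lognorm.fit(d_eq, floc=0)
x = np.linspace(d_eq.min(), d_eq.max(), 200)

plt.hist(d_eq, bins=25, density=True, alpha=0.6)
plt.plot(x, stats.lognorm.pdf(x, shape, loc, scale))
plt.xlabel("equivalent circular diameter (nm)")
plt.ylabel("probability density")
plt.show()

print(f"median ~ {scale:.1f} nm, geometric sigma ~ {np.exp(shape):.2f}")
```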
A colleague of mine once said: if you do not know what to do, then try deep-learning approaches. The point was that deep learning always brings A solution, but he wanted to emphasize that thinking might bring an even better one. As someone now active in particle characterisation based on image analysis, I have seen multiple challenges with generic as well as with deep-learning algorithms. We have not found THE solution yet. However, I have to admit that I am amazed by the progress deep learning has made, and it is not only the lazy colleagues who are working with it:
Characterization of fast-growing foams in bottling processes by endoscopic imaging and convolutional neural networks - https://www.sciencedirect.com/science/article/abs/pii/S026087742030248X
I wish to add to your excellent contribution and give you full marks for effort here. But you are from Slovenija (and Ljubljana - a wonderful and beautiful city) plus from a first-class institute - Jožef Stefan... So your skill and determination are understandable.
The initial image - clearly you are right; neither operator nor software can improve on the initial image, so it's essential that this is of the highest quality (a hardware and sample preparation/presentation matter)
Thresholding - absolutely key. Even though the excellent Image J has the ability to deal with a varying background there is still a great deal of subjectivity and manual/visual interpretation
Elimination of groups of particles. Some of this may be due to depth of field but we need to be aware of how this happens. This is why complementary techniques are of so much benefit. Here I'd look to the essential (IMHO) chemisorption SSA which should be able to 'highlight' the smaller fraction
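As a rough illustration of how an SSA value can cross-check image-derived sizes, the usual dense-sphere relation D[3,2] = 6/(ρ·SSA) can be sketched as below; the numbers are hypothetical and the relation assumes smooth, dense spheres:

```python
def sauter_diameter_nm(ssa_m2_per_g, density_g_cm3):
    """Surface-weighted (Sauter) mean diameter of dense, smooth spheres,
    D[3,2] = 6 / (rho * SSA), returned in nm. Porosity, roughness or
    non-spherical shape breaks the assumption."""
    ssa_m2_per_kg = ssa_m2_per_g * 1000.0
    density_kg_m3 = density_g_cm3 * 1000.0
    return 6.0 / (density_kg_m3 * ssa_m2_per_kg) * 1e9

# Hypothetical example: 50 m^2/g on a 5 g/cm^3 oxide -> D[3,2] of about 24 nm
print(f"{sauter_diameter_nm(50.0, 5.0):.0f} nm")
```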
Calculation. Here we may diverge just a little. The image shows only a 2-D plane. Here ratios of 1-D parameters (such as Feret or aspect ratio) can be determined as well as appropriate equivalents. So, we need to be aware that shape is a 3-D issue and that equivalents (circular/spherical etc) are used in imaging just as in any other size determination of irregular particles. We used to get nonsense such as in light scattering 'Mie theory only applies to spheres and your particles are not spherical'. We can just as easily state the nonsensical 'Imaging only applies to discs' when we know that visualization is essential/mandatory. Yes, your comment 'The particles seem spherical though, so I guess this would be a reasonable approximation?' is worthy of much further dialog - what I query is the word 'spherical' - the particles do not seem spherical at all as we have no indication of the z-axis. Spherical (3-D) is an assumption - disc (2-D) is not...
Number of decimal places. Like most software, ImageJ generates far too many decimal places (yes, generated; they're not real...). Three decimals in nano (i.e. pico) is ludicrous (and software developers persist in this nonsense) when we realize that the approximate diameter of a hydrogen atom is 0.074 nm or 74 pm
Number of particles. Yes, what is the 'truth' here? The harder we look then the more we see... So the bottom end of any technique determines this (hence the flawed EU number-based definition of nanomaterial)
What are overlapping (one particle behind another), touching, aggregated, and agglomerated particles? Should we deconvolute/disperse them in software?
Conversion to other forms (length, surface, volume, and intensity distributions). This is vital in understanding what other techniques may give on the same system. In a number-based evaluation a 1 nm particle has the same weighting (statistical validity) as a 10 nm particle even though it contains just 1/1000 of the mass or volume or number of atoms/molecules
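A minimal sketch of that number-to-volume weighting for spheres, with hypothetical counts:

```python
import numpy as np

d = np.array([1.0, 5.0, 10.0])    # diameters, nm (hypothetical classes)
n = np.array([100, 100, 100])     # number counted in each class

number_frac = n / n.sum()
volume_frac = n * d ** 3 / (n * d ** 3).sum()   # spheres: volume scales as d**3

for di, nf, vf in zip(d, number_frac, volume_frac):
    print(f"{di:4.0f} nm   number {nf:6.1%}   volume {vf:6.1%}")
# By number the 1 nm class counts the same as the 10 nm class,
# but each 1 nm particle carries only 1/1000 of the volume of a 10 nm one.
```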
Removal of scale bars etc. Yep, a normal thing we have to do manually
Last, but not least. The image is a tiny representation of the whole. With any assumption, calculate the total mass of the particle(s) shown. You'll then see that this is a specimen and not a sample, and simply cannot be representative of the whole...
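A back-of-envelope version of that exercise, with hypothetical count, size, and density:

```python
import math

# Hypothetical inputs: 234 spheres of ~10 nm mean diameter, density 5 g/cm^3
n_particles = 234
d_m = 10e-9
density_kg_m3 = 5000.0

volume_one_m3 = math.pi / 6.0 * d_m ** 3            # sphere volume
mass_total_kg = n_particles * volume_one_m3 * density_kg_m3

print(f"total mass in the field of view ~ {mass_total_kg * 1e18:.2f} fg")
# ~0.6 femtograms -- many orders of magnitude short of a gram-scale sample,
# hence a specimen, not a sample.
```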
Again, this stresses the importance of knowing the basics of everything we do scientifically. This is why I despair when I see throwaway lines such as 'Use Image J' and sometimes linked to an url. Once again, I commend you for tackling the 'problem'. It's very illustrative of the phrase 'The longest journey is started by a single step'. You have taken that single step when others have remained rooted to the spot...
Alan F Rawle thanks, but I hope the micro-location has only a minor influence on the laws of physics :-)
But going back to the original discussion... Most TEMs are now equipped with a digital camera, and let's assume it is calibrated from time to time, so the measurements obtained are reliable: working with the raw data eliminates the incidental errors introduced when trying to calibrate the image afterwards, e.g. in ImageJ (in the attached image the pixel size is 0.17 × 0.17 nm), and the markers are no longer embedded, etc. Still, the magnification and the contrast in the image will be controlled by the experience and engagement of the TEM operator (e.g. using apertures, or specific operating modes like dark-field TEM or STEM for identification of the particles). In general, any study can quite easily be subject to the GIGO effect (garbage in - garbage out).
Thresholding in conventional TEM will always be problematic, as the contrast in the image is a combination of mass-thickness and diffraction contrast, the latter being the dominant one. This is also a major obstacle for reliable particle identification in multi-phase systems, as this example is. STEM in combination with an ADF or HAADF detector (so-called Z-contrast imaging) can solve this issue to some extent.
Agglomeration can be reduced by further diluting the powders, sonicating the suspension, etc., but there is a big chance we'll taint the results: larger particles tend to sediment, while the nano-stuff (subject to Brownian motion) stays in suspension... From this point of view, even the time between sample preparation and transfer to the TEM grids matters. With a "dry" transfer we can avoid this to some extent, but then again we are dealing with aggregates. Kind of a trade-off :-)
I have to admit I am very weak on the calculation part. TEM tomography is a good solution for (almost) full 3D reconstruction of the nanoparticles, but it is extremely time-consuming, and I cannot imagine doing this on 100+ particles in one sample. Hence I cheat with "approximations"...
Regarding the volume of the sample, it is worth mentioning that the total volume of matter investigated by TEM since the technique started in the fifties is less than 1 cm^3 [ref.: https://www.tf.uni-kiel.de/matwis/amat/mw1_ge/kap_2/exercise/s2_1_1.html]. So much for the TEM specimen being representative of the whole sample...
You make a lot of important points in your answer above. For the sake of brevity, I will highlight only one.
You state in your comments 'in the attached image the pixel size is 0.17 × 0.17 nm'. My question is: so why are the 'answers' quoted to 3 decimal places? (The first area mean is 13.375...)
BTW, I do feel your micro-location is important...
One point on sample preparation is the obvious one - it alters the original sample.
2 quotes:
“It cannot be over-emphasised (Br.) that great care must be taken in preparing any sample for size analysis. The objective of the investigation is to determine the size distribution of the sample. What is often determined, however, is the effectiveness or destructiveness of the disaggregating or dispersing technique”
J S Galehouse Sedimentation Analysis in R E Carver (Ed.) Procedures in sedimentary petrology Wiley-Interscience Chapter 4 (1971) ISBN 471 13855X pp 69 – 92
“Very little appears to be known about how the differences of behaviour (Br.) arising from the differences between the conditions of test and use, can be predicted, but there is no doubt that such information is necessary before the control of such materials can be adequately carried out”
Dr. Rose in Discussion (page 143) following paper “Particle Shape, Size and Surface Area” in Powders In Industry SCI Monograph No 14 Society of Chemical Industry, London (1961)
Alan F Rawle I really appreciate your references and citations from the "classic" texts, which are sometimes (especially for new students) rather hard to find! Unfortunately, if I can use the analogy from the discussion, these texts will apparently have the same impact as a short reply on RG...
Yes, the decimal places generated by software... as I said, ImageJ is not my primary tool, hence the test on the provided image is also a test for future use of the software. Looking closely at the thresholded image, the boundary set by ImageJ is (determined by eye) ± 3 px (= 0.5 nm), and would probably be close to that if done by hand. This would be - what, a 1 nm error bar when measuring a diameter? This is harsh...
Hmmm... 'This is harsh'... Maybe something we can debate. Many scientists do not look at a variable or error budget on their measurements. Here the instrument can make accurate measurements on the provided specimen (it is not a sample). The amount examined is in ng or pg. You get out what you put in and electron microscopy can never be representative of the whole material - vital, essential, mandatory information, but not statistically representative. This is the largest variable.
We then have 'sample preparation' as we've touched on above - the second largest variable. Taking sufficient images (10000 minimum according to NBS/NIST) takes us into the acquisition and interpretation of data. We can't do better than the initial acquired image - no amount of software can overcome this fundamental consideration. We can't interpolate within a pixel in imaging (it's either 1 or 0) so that would be the best achievable. So, +/- 3 pixel elements seems reasonable for a boundary in a variable background. As you say, there are implications especially for the smallest 'particles' examined. So, 5 +/- 0.5 nm sounds harsh for physical scientists but it's probably the best achievable under the circumstances.
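A rough sketch of that error budget, using the ±3 px boundary estimate and the 0.17 nm pixel size quoted above (both taken from this thread, not measured here):

```python
PIXEL_NM = 0.17        # calibrated pixel size quoted above
boundary_px = 3        # +/- 3 px boundary placement, as estimated by eye above

delta_d_nm = 2 * boundary_px * PIXEL_NM   # both edges of the particle contribute

for d_nm in (5, 10, 20, 50):
    print(f"d = {d_nm:3d} nm  ->  +/- {delta_d_nm:.1f} nm  ({delta_d_nm / d_nm:.0%} relative)")
# ~1 nm on the diameter: about 20% relative on a 5 nm particle, ~2% at 50 nm,
# which is why quoting sizes to three decimal places of a nanometre is not defensible.
```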
Now, we can probably estimate a meter to 1 part in 10^9. To what degree can we specify a nanometer? Certainly not to 1 part in 10^9... So what would be reasonable? I await your thoughts...