We synthesized Mn-doped ZnO with a hexagonal wurtzite structure. In the SEM images the hexagonal particles are about 1000-3000 nm, with nanoparticles under 100 nm also present, but in DLS the particle sizes fall in the range of 300-700 nm. Why?
Your particles are not spherical. They have different dimensions and are flake-like (i.e. they have a different aspect ratio compared to a sphere). DLS assumes that your particles are spherical, so the final measurement may be shifted towards the smaller side. I suggest you read more about the shape factor and the principles of DLS to resolve this.
Another reason might be the segregation of the larger particles; did you observe it?
How many particles were analyzed in the SEM? Since the selection of an image section is always biased, it is conceivable that areas of the sample that did not show the desired structures were ignored or overlooked. It looks as though there are also significantly smaller particles that may not have been given sufficient attention in the image evaluation. Just a hint: I know it's common practice, but if you cut the background in EDX mapping, you can't tell what the signal-to-noise ratio was. You can easily end up drawing structures in the EDX map from background noise alone, without any real signal.
DLS simply follows the laser light scattered by your particles in water. In addition to what others have mentioned (the shape of the particles and the possibility of agglomeration), there is a possibility that the size reported by DLS is the size of the particle plus the water molecules stuck to its surface (the hydration layer).
I think differential sedimentation occurred during the DLS measurement. This factor plays an essential role in producing the results you obtained, as shown in the attached files.
I also think that DLS derives its results from a theory based on the Brownian motion of the particles, and your particles are not spherical in shape. Your particles may also be somewhat large for DLS; check your instrument's limits. Your SEM images may not represent the whole sample the way DLS does. Try to collect more than 50 sizes from the SEM and calculate the average; there is plenty of software for this kind of analysis, such as ImageJ.
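For what it's worth, here is a minimal sketch of that averaging step, assuming the diameters measured in ImageJ (or by hand) are exported to a plain text file; the file name and one-value-per-line format are hypothetical. It also shows why a number-weighted and a volume-weighted mean can differ strongly for a sample like this:

```python
# Minimal sketch (not the original poster's actual workflow): given diameters
# measured manually or exported from ImageJ, compute the number-weighted and
# the volume-weighted mean. The file "sem_diameters.csv" (one diameter per
# line, in nm) is a hypothetical example.
import numpy as np

d = np.loadtxt("sem_diameters.csv")          # diameters in nm, one per line

d_number = d.mean()                          # number-weighted mean, D[1,0]
d_volume = (d**4).sum() / (d**3).sum()       # volume/mass-weighted mean, D[4,3]

print(f"number-weighted mean: {d_number:.0f} nm")
print(f"volume-weighted mean: {d_volume:.0f} nm")
```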
I agree with all the previous comments. In addition, I would recommend that you check different presentations of the DLS particle size graph. You can choose to view the graph as number vs. size, for example. By doing that, you would see quite a different particle size distribution. Best wishes
Please transform the intensity distribution to a volume or number distribution when displaying the dynamic light scattering (DLS) data; the result will probably show only a single, narrower peak. However, because the particles of your material are not spherical, this difference is natural given the different theories underlying the two techniques.
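Commercial DLS software performs this conversion using Mie theory. Purely as an illustration, in the Rayleigh regime (particles much smaller than the laser wavelength) the scattered intensity scales roughly as d^6, so a crude re-weighting looks like the sketch below; the size bins and intensity fractions are invented numbers, not your data:

```python
# Illustrative re-weighting of a DLS intensity distribution to volume and
# number distributions, assuming Rayleigh-like scattering (I ~ d^6, so
# volume ~ I/d^3 and number ~ I/d^6). Real instruments use Mie theory;
# the bins and intensity fractions below are made up for the example.
import numpy as np

d = np.array([100.0, 300.0, 500.0, 700.0])       # bin centres, nm
intensity = np.array([0.05, 0.30, 0.45, 0.20])   # intensity-weighted fractions

volume = intensity / d**3
number = intensity / d**6

volume /= volume.sum()                            # renormalize to fractions
number /= number.sum()

for di, vi, ni in zip(d, volume, number):
    print(f"{di:5.0f} nm  volume: {vi:.3f}  number: {ni:.3f}")
```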
This question has been asked and answered on RG many times. A search would help. It is basic stuff that needs to be understood from the get-go. It's not a question of right or wrong...
Visualization is essential but slow. Ensemble methods are quick and typically low resolution, but can be applied on-line, for example. Both single-particle and ensemble methods are needed.
SEM is a 2D representation of a 3D particle. SEM would still assume discs in order for a circular equivalent to be plotted. What do you plot on the x- (size) axis? How do you deal with agglomerates and aggregates? SEM looks at the electron-dense part of a particle and looks at each particle individually. You'd need 10000 particles to get the standard error on the mean to 1%.
No such thing as a 'hydraulic diameter'. I assume you mean hydrodynamic diameter. DLS obtains a diffusion coefficient first. Any conversions to a diameter involve an algorithm (Stokes-Einstein equation) with an assumption of shape. Same thing applies in imaging if the number of pixels is converted to circular equivalent - here there's an assumption of discs and an assumption of what happens in the z-axis (if a 3D assessment is required). You still need to assume shape for electron microscopy for plots. Other diameters (e.g. Feret) and shape properties (very often the ratio of 2 linear dimensions - aspect ratio is one example) can be extracted with microscopy
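As a concrete illustration of that conversion, the sketch below applies the Stokes-Einstein relation d_H = k_B·T / (3π·η·D) to an invented diffusion coefficient; the numerical values are illustrative only, not measured data:

```python
# Stokes-Einstein conversion from a measured diffusion coefficient to a
# hydrodynamic diameter, d_H = k_B * T / (3 * pi * eta * D), assuming a
# spherical particle. The diffusion coefficient is an invented example value.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 298.15                  # temperature, K
eta = 0.00089               # viscosity of water at 25 C, Pa*s
D = 1.0e-12                 # diffusion coefficient, m^2/s (illustrative)

d_H = k_B * T / (3 * math.pi * eta * D)
print(f"hydrodynamic diameter: {d_H * 1e9:.0f} nm")   # about 490 nm here
```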
DLS will 'look at' the entire particle (not just the electron-rich core) including protective surfactants and stabilizers. It examines millions or billions of particles simultaneously
DLS is an ensemble method looking at the entire (intensity) distribution and does not count particles. It is rapid, looks at the system 'as is', agglomerates and all, but is low resolution and provides a distribution based on the scattering intensity of particles. Thus, the larger particles are more important in such a distribution
A mass/volume/intensity distribution will always be larger for a polydisperse sample. Polydispersity is the bane of all particle size distribution methods. The killer word in 'particle size distribution' is the last one...
On what basis would you buy breakfast cereal? On the number of corn flakes, or on the mass or volume of the packet? If I offered you a 1 millimetre particle of gold or a 1 cm particle of gold, which would you take? The 1 cm particle of gold contains 1000/1001 parts of the mass or value of the system. However, to a number counter these two particles are equally important. Not so for a mass/volume/intensity distribution. Also see the link mentioned above:
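To make the arithmetic explicit (mass scales with the cube of the diameter, so the 1 cm particle is 10³ = 1000 times heavier than the 1 mm one), a tiny check:

```python
# Number vs. mass weighting for the gold example: one 1 mm particle and one
# 1 cm particle of the same density. Mass scales with the cube of the diameter.
d_small, d_large = 1.0, 10.0                   # diameters, mm
m_small, m_large = d_small**3, d_large**3      # relative masses

print(f"number fraction of the 1 cm particle: {1 / 2:.2f}")
print(f"mass fraction of the 1 cm particle:   {m_large / (m_small + m_large):.4f}")
# -> 0.50 by number, but 1000/1001 = 0.999 by mass
```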
It looks like your powder consists of two fractions: hexagonal particles and "nanosized dust". If you are interested only in the bigger hexagonal particles, then you are out of luck with DLS. These particles are 1) of a very special shape, like tiny rotating mirrors (which can make the DLS software go crazy); 2) too big, near the upper limit of DLS (which means fast sedimentation); and 3) accompanied by the "dust".
So, for the hexagonal particles SEM is better, even if it will not give high-precision results. Beware of fully automatic image analysis; it may give you a skewed result.
Dear Alan F Rawle, let me ask you: who needs to measure "10000 particles to get the standard error on the mean to 1%"? I have never needed anything even close to this precision. Disclaimer: I am not a manufacturer or a seller of nanoparticles. Please stop scaring young researchers with these meaningless numbers.
I’m not sure what aspect of basic math and statistics scares you but I’m happy to get an opportunity to set the record straight. The math/stats are not mine and I can’t see how a few simple Greek letters in equations relates to an instrument manufacturer or supplier of nanoparticles especially as our company doesn’t manufacture electron microscopes or nanoparticle imaging hardware and software. Many people can understand the math but do not like or accept the conclusions… The numbers certainly are not 'meaningless' as you state...
I’ll not overwhelm you with references, but we’ll use NBS/NIST (the paper is from the year they changed their name – 1987), ISO 14488:2007 (Particulate materials — Sampling and sample splitting for the determination of particulate properties), Robert Richards (Professor, MIT), the works of Pierre Gy, and 3 simple basic undergraduate statistics texts. I’ll ignore the web and Wikipedia although the same information can be found there.
• NBS/NIST: A L Dragoo, C R Robbins, and S M Hsu, 'A critical assessment of requirements for ceramic powder characterization', Adv. in Ceramics Vol. 21: Ceramic Powder Science, The American Ceramic Society (1987) 711-720. Quotation from the top of page 718: 'As Kottler notes, S is proportional to N^(-1/2), where N is the total number of particles measured… This consideration implies that image analysis methods may require the analysis of on the order of 10000 images to obtain a satisfactory limit of uncertainty'. Scan of quote attached
• ISO 14488:2007 (now ISO 14488:2007 AMD 1:2019) ‘Particulate materials — Sampling and sample splitting for the determination of particulate properties’ Annex B Clause B.7 ‘Simple approach to the calculation of the fundamental sampling error (FSE) and minimum mass required for a specified standard error’. [Standard error of the mean (SEM)]. ‘SEM ∝ 1/√n, or n ∝ 1/σ². For 1 % SEM it can be shown that: n = 1/(0,01)² = 10 000. Thus, 10 000 particles in total will be needed to specify the mean to 1 % SE. See also Reference [18 - K Sommer, Sampling of Powders and Bulk Materials, Springer-Verlag New York, Incorporated, New York, NY, U.S.A (1986), Table 9.2.2, page 191]. The implication is that to specify any other point of the distribution to 1 % SE, at least 10 000 particles will be needed in the portion of the distribution above that point. The worst-case situation is considered first: specifying the x99 point in the distribution to a standard error of 1 %. This requires 10 000 particles in the x99+ part of the distribution’.
• The (minimum of) 10000 particles has implications for representative sampling and for the minimum mass of sample required for a specified precision. An early ‘academic’ reference to when this started to be raised, in the early 1900s, is the table in Robert Richards’ Ore Dressing Volume 2, McGraw Hill, page 850 (1909). I attach a scan of this table. It can be shown that these minimum masses are (statistically) based on a 1% standard error (10000 particles) on the x99+ point in the particle size distribution. Obviously, these sample masses scale as the cube of the particle size. The minimum masses themselves will be small in the nano (< 100 nm) region, but the consideration of the number of particles still applies. Thus, electron microscopy tends to be selective and qualitative in nature.
• I trust you are familiar with the works of Pierre Gy relating to sampling and providing a sound theoretical basis for representative sampling. His calculations deal with (among many other things) the Fundamental Sampling Error (FSE) where the FSE is proportional to 1/variance of the number of particles (as variances are additive but standard deviations are not). Again, the calculations are based on the number of particles and provide identical information to that quoted above. For example P M Gy ‘Sampling of Particulate Materials: Theory and Practice’ Elsevier (1979)
• Standard university statistics texts. For example: C J Brookes, I G Betteley, A M Loxston “Mathematics and Statistics for Chemists” John Wiley & Sons page 266 (1966); H Lucas “Statistical Methods” Butterworths page 77 (1970); A E Annels Mineral Deposit Evaluation: a Practical Approach Chapman and Hall, London (1991) referenced in C J Moon et al Introduction to Mineral Exploration 2nd Edition Blackwell page 206 (2006)
Hopefully this is enough information to demonstrate that this minimum number of particles is not a ‘scare tactic’ but fundamental to particle size distribution (PSD – the last word is the killer one) analysis and based on a sound statistical core
Alan F Rawle, from your table we can see that 400 particles are enough in many (most?) cases. In some (many?) cases even 400 particles are too many, for example if the particles in set A have diameters of about 10 microns and those in set B of about 100 nm. So, please, do not scare young researchers.
Vladimir Dusevich '400 particles are enough' for what? The objective needs to be defined. It's not 'my table'. For the last table, consult Professor Sommer. And do any of the images in the header of the question contain 400 particles?
The SE on the mean for 400 particles is 1/20 or 5% which may be adequate for many purposes, I agree. However, for a distribution to be specified then this may be a different case. This is not about 'scaring' as many automated image analysis platforms are capable of acquiring and analyzing far more images. I caution against things I've seen in the literature such as drawing conclusions (to 2 decimal places) on 20 particles from electron microscopy.
In terms of mass (and thus value if Au, for example), then it takes one million 100 nm particles to make the mass or volume of a single 10 micron one, so I can't see the relevance of 400 being 'too many'...
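For anyone who wants to check the arithmetic behind the 400 vs. 10 000 figures and the "one million particles" comparison, a quick sketch assuming the simple counting-statistics relation SE = 1/√N used above:

```python
# Two quick checks of the numbers quoted in this thread, assuming the
# counting-statistics relation SE = 1/sqrt(N) and mass scaling with the
# cube of the particle diameter.
import math

for n in (400, 10_000):
    print(f"N = {n:6d}  ->  standard error on the mean ~ {100 / math.sqrt(n):.1f} %")

ratio = (10_000 / 100) ** 3            # (10 um / 100 nm)^3
print(f"100 nm particles needed to match one 10 um particle by mass: {ratio:.0e}")
```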
Over many years, I've been a keen follower of the answers, discussions, explanations, recommended references, tutorials, webinars, and presentations of Dr. Rawle, Dr. Miller, and others.
I've tremendously benefited and continue to benefit from their immense expertise and encyclopedic knowledge in colloidal science and physical chemistry, although I come from a completely different background.
I am very grateful for the precious time and effort they continue to lend to the hundreds (if not thousands) of young researchers on RG such as myself.
They have helped me with countless technical issues and questions over the years although I have never had any interactions with their companies. This is my testimony on the matter, for anyone reading this.
You demonstrated the worst case of a "bad statistician": a statistician who does not understand the researcher at all. Let's consider two sets of particles with practically non-overlapping size distributions. The humble Student's t-test is good enough to check whether there is a difference. Ten particle measurements per set will be more than enough for that task. 10, not (as you said) 10,000! So you were wrong by three (!) orders of magnitude. So, please, do not scare young researchers.
I worked for most of my career in the pharmaceutical industry, where dose uniformity is paramount. This usually means that importance is placed on mass/volume-weighted distributions. If I need my particles to be, say, 250 nm for optimum dissolution/uptake/availability etc., but my process occasionally yields a 50 micron particle, how many of those are needed to reduce the available dose by, say, 20% and thereby lead to an out-of-specification result (and, worse, batch rejection)? How many particles would you have to count using a number-based technique to ensure that you would detect such a large particle 95% of the time? I'm pretty sure it's more than 400. Just because the 400 small particles yield a standard deviation of less than 0.1%, it doesn't mean your sampling scheme is fit for purpose.
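To put a rough number on that, here is a back-of-the-envelope sketch, not a validated pharmaceutical calculation: it assumes 250 nm fine particles, 50 µm coarse particles carrying 20% of the total mass, and simple random counting by number:

```python
# Rough, illustrative estimate: how many particles must be counted to have a
# 95 % chance of seeing at least one 50 um particle, if those particles carry
# 20 % of the total mass in a population of 250 nm particles.
import math

d_fine, d_coarse = 0.25, 50.0                  # diameters, um
mass_ratio = (d_coarse / d_fine) ** 3          # mass of one coarse particle,
                                               # in units of one fine particle
mass_fraction_coarse = 0.20

# number fraction f of coarse particles giving that mass fraction
f = mass_fraction_coarse / (mass_ratio * (1 - mass_fraction_coarse)
                            + mass_fraction_coarse)

# P(see >= 1 coarse particle in N counts) = 1 - (1 - f)**N >= 0.95
N = math.log(0.05) / math.log(1 - f)
print(f"number fraction of 50 um particles:     {f:.2e}")
print(f"particles to count for 95 % detection:  {N:.2e}")
# -> on the order of 1e8 particles, far more than 400
```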
Young researchers need to know these simple truths. You need to know why you are measuring something and whether the way you are doing so is appropriate.
Young researchers need to know that the real world can be a lot less forgiving than the idealism of academia.
Vladimir Dusevich Well, English is not my native language, but I don't think the interpretation is on your side. Being able to tell two distributions apart is not the same as trusting each of them as reliable.
I've never seen anyone working with microscopy report a particle size distribution with only 10 particles measured. That can give you an idea of the size range you are working in (whether nm, µm, or cm, for example), but it won't allow you to determine the true particle size or properties you're looking at.
John Francis Miller You are just repeating what I said a few posts above: high precision may be needed by a manufacturer (as you are), but for a researcher it is mostly overkill. So, please, do not scare young researchers.
Luiz Fernando de Sousa Lima English is not my first language either... But "the interpretation is on my side". You need to know about Student's t-test to continue this discussion (an easy part of statistics). Wikipedia has a table on minimum sample sizes: https://en.wikipedia.org/wiki/Sample_size_determination#Required_sample_sizes_for_hypothesis_tests
You can see that for a power of 0.95 the sample size could be 651, 105, or just 42, depending on the effect size. Of course, my example with 10 particles was an extreme-case scenario. In addition, some statisticians insist on a minimum sample size of 30, and some insist on 100. Statisticians...
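For anyone who wants to reproduce those figures, a quick sketch; the interpretation of the three numbers as per-group sizes for a two-sample t-test at α = 0.05 (two-sided), power 0.95, with Cohen's d of 0.2, 0.5 and 0.8 is my assumption based on the Wikipedia table:

```python
# Sanity check of the sample sizes quoted from the Wikipedia table, assuming a
# two-sample t-test, alpha = 0.05 (two-sided), power = 0.95, and effect sizes
# (Cohen's d) of 0.2, 0.5 and 0.8. Requires the statsmodels package.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.95)
    print(f"effect size {d}: n per group ~ {n:.0f}")
# prints roughly 651, 105 and 42 particles per group
```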
All researchers have given probable reasons for the larger particle size from DLS analysis compared with the SEM images. I just want to add one more probable reason: DLS gives the hydrodynamic diameter of the particle. If the nanoparticle has an affinity for the solvent used, the particle may swell, and this results in a larger particle size in the DLS analysis. You can try another solvent to obtain a result with minimal swelling.