For polymeric nanoparticles sized by DLS (dynamic light scattering), which result is more accurate? Unfortunately, almost no articles mention whether the reported size is intensity-, volume- or number-based. Is intensity the most accurate? What are the differences?
I would argue that number is most accurate, especially if your nanoparticle can self-assemble or aggregate. In an ideal situation, with single-particle suspensions, number, intensity and volume would all be the same. Since this is not usually the case, the number PSD gives you the size of the most frequent particle in the solution. Intensity-based measurements will give you very intense, large peaks if you have aggregates. Volume measurements take the intensity data and use the refractive index (provided by the user) to try to account for the aggregates. Unless you have an empirically acquired R.I., I would suggest not paying too much attention to volume. All in all, I would once again say that the number PSD is most accurate.
In the DLS technique, the intensity-averaged particle size is very often reported. The contribution of each particle in the distribution relates to the intensity of light scattered by that particle. The number-weighted distribution is useful for determining the absolute number of particles.
All three are important. If your PDI is less than 0.2, the three distributions and the z-average give essentially the same size.
For a monomodal distribution there are only small differences, but for a multimodal one it should be number.
http://golik.co.il/Data/DLS-AnIntroductionin30Minutes_1901356996.pdf
Figure 8 in the tutorial gives a good illustration of the actual differences between the representations.
You should use a combination of intensity and volume and avoid number. The direct measurement is the intensity, but it overestimates large particles. For bimodal distributions, volume is more precise. The number distribution involves many approximations that are usually not fulfilled. I personally do not trust results shown only as number; they give you the smallest value, but the error can be huge. That said, if you have a homogeneous, monodisperse distribution you usually get approximately the same value all three ways.
I agree with Zi Teng: intensity-averaged particle sizes are the ones most often reported in the DLS technique.
I think a case can be made for each number, but it's important that you specify which one you're using. Can you correlate your DLS measurements with some kind of microscopy (TEM, etc) ?
Here follows an excerpt from a paper of mine that can be useful.
Analysis of DLS data provides different types of average value and
probability distribution of the hydrodynamic radius. The
Z-average intensity-weighted radius Rh obtained by cumulant
analysis of the autocorrelation function of the scattered
intensity is the most direct, reproducible, and numerically
stable measure of the NP size since it does not rely on the
details of scattering and is directly obtained from the initial
part of the autocorrelation function where the signal-to-noise
ratio is largest. Rh is an average value over the whole sample
and, in some cases, it may not be a significant representation
of the NP size, e.g., when the size distribution has more
than one maximum. Besides, Rh is an intensity-weighted
average and, with the scattered intensity being proportional
to the sixth power of the particle size, is biased toward
larger size. For instance, consider a dispersion of NPs with
radius R1 and suppose that half of these NPs form aggregates
with radius R2 ~= 10 R1. In this case, Rh ~= R2, a value not
well representing the actual size distribution. By the same
argument, Rh is a very sensitive probe for NP aggregation and
thus suitable for the present study.
DLS data can also be used to reconstruct the size
distribution P of the NPs. When more than one peak
is present in P, an average hydrodynamic radius Ri can
be attributed to each peak, considered as representing a
particle ‘family’. Such distributions and average radii are
indirectly obtained from the autocorrelation function by
exploiting optical scattering theory, some knowledge of the
NP, and a data inversion technique such as inverse Laplace
transformation or least-squares modeling. One must consider
that many factors can affect the resulting P and Ri and that
numerical instability can be a major problem. Thus, one has
to be careful not to over-interpret these derived quantities.
The number-weighted distribution Pn and mean radius Rn are
probably the most familiar to the NP community since they
are customarily used to describe the size of nanocrystals as
determined by TEM (see figure 1). Pn has no bias toward a
particular size and this makes Pn insensitive to aggregation.
For the above bimodal distribution example, Pn has two peaks
with area A1 and A2. Assuming that each aggregate comprises
about (R2/R1)^3 ≈ 1000 NPs, we have that the number of
aggregates is 1/1000 of the number of particles. Then, A2 is
1000 times smaller than A1, despite half of the NPs having
aggregated. The volume-weighted distribution Pv and mean
radius Rv are biased toward larger size and are thus more
suitable for aggregation studies. For the hypothetical case
above, the two peaks in Pv have equal area, better representing
the sample. This occurs since the number of pristine NPs
forming an aggregate is roughly proportional to the aggregate
volume. Then, the area of the peaks in Pv approximately
represents the number of pristine NPs irrespective of whether
they are isolated or aggregated.
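The bimodal example in the excerpt can be made concrete with a short numerical sketch. This is illustrative only: it assumes the Rayleigh regime (scattered intensity proportional to R^6, volume to R^3) and uses hypothetical values for R1 and R2 ≈ 10 R1 and for the particle counts.

```python
import numpy as np

# Hypothetical bimodal sample from the excerpt: NPs of radius R1,
# half of which have formed aggregates of radius R2 ≈ 10 * R1.
R1, R2 = 10.0, 100.0                  # nm (illustrative values)
n_per_agg = int((R2 / R1) ** 3)       # ~1000 primary NPs per aggregate

N_total = 1_000_000                   # primary NPs in the sample
N_free = N_total // 2                 # half remain free
N_agg = (N_total // 2) // n_per_agg   # half are locked up in aggregates

radii = np.array([R1, R2])
counts = np.array([N_free, N_agg], dtype=float)

# Rayleigh regime: scattered intensity per particle ~ R^6; volume ~ R^3.
number_w = counts / counts.sum()
volume_w = counts * radii**3
volume_w /= volume_w.sum()
intensity_w = counts * radii**6
intensity_w /= intensity_w.sum()

for name, w in [("number", number_w), ("volume", volume_w),
                ("intensity", intensity_w)]:
    print(f"{name:9s}  R1 peak: {w[0]:.4f}   R2 peak: {w[1]:.4f}")
```

Running this reproduces the excerpt's point: the number distribution barely registers the aggregates, the volume distribution splits the weight about equally, and the intensity distribution is dominated by the aggregate peak.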
Number is usually correct if you have a counting method, but only if you count a large enough number of particles.
As said, one should use the value derived directly from the measurement; this is especially useful for comparison between different batches.
DLS is not 'correct' for multimodal distributions and is less accurate for broadly distributed samples; for those you need a separating or counting method.
The other aspect is what you need for your application: knowledge of the number of particles, or of mass/volume fractions. Unfortunately, intensity weighting is always biased by the limitations of the measuring principle, though it is mostly fine for comparison of related samples.
Many of the responses presented so far have included useful information. It is certainly true that the z-average, which is primarily based on intensity, tends to be the most robust and reliable DLS result. However, if there are a few aggregates in your sample, you may get a somewhat distorted picture, so I also like to include a volume measurement; this is useful because volume obviously correlates with mass of material. The use of filters can also be very helpful: remeasure your sample after removing material toward the high end of your distribution. In that way, your new z-average will be closer to the previous volume peak.
I would report the true particle size. You find the true particle size by accurate measurements in a transmission electron microscope. You can plot it as frequency versus size, or recalculate it to volume fractions by size. What you choose depends on what you wish to highlight. DLS will not tell you the true particle size unless you can prove that the light scattering is not in any way influenced by agglomerates, sedimentation rates, etc.
As previously stated, the only truly measured quantity in DLS is the scattering intensity. The other quantities are derived using different equations, in which invariably several assumptions are made, which may or may not introduce a large bias depending on the PDI of the sample. On the other hand, scattering intensity depends on the sixth power of the size; therefore, with a bimodal distribution, even if you have the same number of small and large particles, the scattering intensity of the larger particles is always much, much greater. You therefore have to be very cautious when dealing with number.
In addition to size, report polydispersity and aspect ratio. What does it cost to give both volume and number? Give the statistics and the graph so the statistics are clear. Give pictures...
In addition, it would be great if you could have some characterization of the surface chemistry. In many cases, it has a paramount importance.
If you are dealing with liquid dispersions separation behaviour is strongly related to surface chemistry / particle interactions.
European Commission
Joint Research Centre
Institute for Health and Consumer Protection
Institute for Reference Materials and Measurements
Institute for Environment and Sustainability:
"The definition of the term nanoscale as recently
given by ISO encompasses the size range from approximately
1 nm to 100 nm, a range which has been
adopted in a NUMBER of other definitions as well."
The size is reported as a diameter based on the BJH and DFT methods.
DLS gives only one direct, model-independent (to some extent) quantity: the intensity-weighted diffusion-coefficient distribution and the corresponding z-averaged diffusion coefficient. All other things are model-dependent and should be used with caution. For example, if you deal with small, independent, homogeneous spheres, one can recalculate the intensity-diffusion distribution into an intensity-size distribution. In this case, the intensity-averaged (z-averaged) size is an (almost) direct quantity. A less reliable quantity is the volume-size distribution, which is derived from the intensity-size distribution. Finally, the most inaccurate may be the number-size distribution, as it is strongly affected by unwanted contributions from dust, aggregates, etc. Of course, the above estimates apply to the averaged quantities as well.
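To make the "direct quantity" point concrete, here is a minimal first-cumulant sketch that recovers the z-averaged diffusion coefficient from a synthetic, noise-free correlogram. The instrument parameters (633 nm laser, water, 173° backscatter) and the diffusion coefficient are illustrative assumptions; a real correlogram would need noise handling and higher-order cumulants.

```python
import numpy as np

# Assumed instrument/medium parameters (illustrative)
lam = 633e-9               # laser wavelength, m
nmed = 1.33                # refractive index of water
theta = np.deg2rad(173)    # backscatter detection angle

q = 4 * np.pi * nmed / lam * np.sin(theta / 2)   # scattering vector, 1/m

# Synthetic, noise-free correlogram for a monodisperse sample
D_true = 4.0e-12                     # m^2/s, diffusion coefficient
tau = np.logspace(-6, -2, 200)       # lag times, s
g2_minus_1 = np.exp(-2 * D_true * q**2 * tau)    # intercept beta = 1

# First-cumulant analysis: the slope of ln(g2 - 1) vs tau is -2*Gamma,
# with Gamma = D * q^2.
mask = g2_minus_1 > 1e-3             # use the well-resolved initial decay
slope, intercept = np.polyfit(tau[mask], np.log(g2_minus_1[mask]), 1)
D_fit = -slope / (2 * q**2)

print(f"recovered D = {D_fit:.2e} m^2/s")
```

Because the fit uses only the initial decay of the autocorrelation function, it corresponds to the numerically stable, model-light part of the analysis described above; everything beyond this (size distributions, volume/number weighting) adds model assumptions.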
In particle-size measurement, microscopy is the only method in which the individual particles are directly observed and measured. In TEM, the particle size is expressed as the diameter of a sphere that has the same projected area as the projected image of the particle.
This is a problem for all nanoparticles including those of metals and other inorganic solids. The size normally obtained by DLS is by mass or volume, i.e. if your measured size is, say 50 nm, it means that most of the material is present in the form of 50 nm particles. This may be the case even if, by number, the majority of particles have a size of, say, 2 nm. If you then look at it by TEM and only see rafts of 2 nm particles, there seems to be a contradiction between the methods but in reality both are correct. There is no right or wrong size, there are just different methods, which measure different things. In any case, it is important to quote exactly how the size has been determined, so that other researchers can reproduce the measurement. If you finally want to quote a size by volume or by number depends on your application. For noble metal recovery, for example, the size by volume is of interest, while for experiments aimed at single particle studies, the size number will be more relevant as it tells you how many of your particles actually are of a given size.
Dear Parastoo
If you want or have to report one number as your size, I recommend using intensity-based data, since it correlates better with what you need from a size in most applications. However, sometimes you need a special analysis of a population of particles. In these cases, you would do better to report all three types and discuss what the whole data set says about the real situation of the population sizes. I hope it helps. If there is anything I can help with further, please feel free to contact me.
Best wishes
I suppose I was not clear enough in my previous comment. Depending on the end user, some characteristics could be more or less useful. Thus, in my opinion, the more information you share the better, even if you (personally, or the reviewers) may not see it as useful for the clarity of your paper.
DLS is not the only averaging technique.
Other techniques exist that do not have the same pitfalls as DLS, such as:
- electroacoustics
- titration methods
- sedimentation methods
- Chromatographic Hydrodynamic Fractionation
- imaging
These are just a few examples; this is not an exhaustive list.
Some instruments combine techniques, giving estimates of agglomerates and/or aspect ratio. DLS alone cannot describe multimodal dispersions, agglomerates, or aspect ratio.
Dear colleague,
Please, look at the related discussion in our paper:
B. N. Khlebtsov and N. G. Khlebtsov,
On the Measurement of Gold Nanoparticle Sizes by the Dynamic Light Scattering Method. Colloid Journal, 2011, Vol. 73, No. 1, pp. 118–127.
This paper can be downloaded from the following link:
https://www.researchgate.net/publication/225359988_On_the_measurement_of_gold_nanoparticle_sizes_by_the_dynamic_light_scattering_method?ev=prf_pub
Nanomaterials, or nanostructured materials, are materials with structural features in between those of atoms/molecules and bulk materials, with at least one dimension in the range of 1 to 100 nm (1 nm = 10^-9 m). In this size range, the particles have a high proportion of atoms located at their surface compared to bulk materials, giving rise to unique physical and chemical properties that are totally different from those of their bulk counterparts.
DLS gives you results by intensity, volume and number. It depends on your sample and its size dispersion. I used DLS once on my powder sample, and I think you get the best result by number, not by intensity or volume: with intensity or volume, the contribution of dimensions above 100 nm is inflated, so you cannot get a good result to report or make a good judgement.
Well, to refer to metal nanoparticle size it is enough to mention the diameter, nothing more.
In my observation with DLS, the size, i.e. the z-average of the sample, is read from the intensity-versus-size plot. So I think the sizes reported for nanoparticles in different papers come from the intensity plot.
There is a relation between the intensity and the size. From this relation you can find the average particle size. Good luck.
First, I would like to comment that it is most important that you know what you measure and how. That includes what kind of fit/analysis you use.
I am posting a method that may help you and others by the Nanotechnology Characterization Laboratory of the National Cancer Institute of the US. I believe it is a very nice and rather easy to understand guide for users that are non-experts in light scattering.
The general result of a DLS measurement is the z-average, or a size distribution according to the intensity of scattered light. If you want to calculate the volume or number distribution from a size distribution, this can be done by applying the so-called Mie transformation. But keep in mind that, among other requirements, it is absolutely necessary to know the correct refractive indices, and as far as I know it only works for spherical particles.
I am working with liposomes and lipoplexes and all my DLS data are just used as intensity data since I can't be sure to fulfill all requirements...
I fully agree with Zi Teng and Volker Fehring: DLS gives the size distribution according to the intensity of light scattered by the particles.
It depends on which technique you use. For example, for TEM the number distribution should be used, so if you want to compare the size distribution of nanoparticles obtained by dynamic light scattering with that obtained by TEM, you should use the number distribution.
Of course the most accurate one is intensity.
But there are differences between these three distributions. If you are investigating the morphology distribution of the particles, it is better to report the volume data: in the volume data you can distinguish the different morphologies in a multi-phase mixture of powder, and each kind of particle morphology has its own cumulative peak.
Number-based distributions are not usually reported by professional researchers, unless they are sure that there is a narrow range of particle sizes. If you think there is a wide distribution among your particles' sizes, you cannot rely on the number-based data.
In those cases with a wide size distribution, you should use the intensity data (which usually lies between the volume and number distributions).
Also, I disagree completely with Mr. Seyed Hosseini: you cannot use XRD to estimate particle sizes. I think reading this article can be helpful too.
http://www.iust.ac.ir/ijmse/browse.php?a_code=A-10-3-149&slc_lang=en&sid=1&sw=PARTICLE+SIZE+CHARACTERIZATION+O
As has been mentioned by several persons earlier, for DLS based study, especially for polymeric samples, the intensity is the main parameter, and other parameters are derived from it. So, for size calculation, the intensity of scattering should be considered.
Transmission electron microscopy (TEM) or scanning electron microscopy (SEM) is the correct tool for determining nanoscale size. DLS and the Scherrer equation provide supporting data for TEM/SEM.
Mr. El-Shazly Duraia is correct: we can find the average particle size from the intensity.
Sometimes erroneous particle sizes are observed when the sample concentration is a little too high.
Laser granulometry and DLS are two methods based on the light-scattering properties of the analyzed particles. In the first, the light deviation is measured directly on a series of detectors, and the correlation function permits calculation of the equivalent diameter of the (assumed spherical) particle and the related distribution. In DLS, the measurement is based on the fluctuations of the light received by a fixed detector. The autocorrelation function permits calculation of the diffusion of the particles in the liquid medium due to Brownian motion, which leads to the hydrodynamic diameter (expressed as the intensity curve). In addition, the use of the correlation function (the same as used in laser granulometry) permits correction of the expressed (hydrodynamic) diameters and gives the curves in volume. But this calculation requires precise knowledge of the optical properties of the particles. Most of the time, uncertainty about these properties leads to use of the simpler-to-obtain intensity results.
DLS gives an intensity weighted correlation function which can be converted to a Z-average (intensity weighted) diffusion coefficient. If the particles are spherical, the Stokes-Einstein relation can be applied to derive the hydrodynamic diameter of the particles. The main limitation of DLS relies on the strong particle size dependence of the scattering intensity so usually the obtained Z-average or intensity values tend to overweight larger particles/aggregates.
Mathematical conversions to volume or number distributions are often used, as they can correct this overestimation to a certain extent.
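As a minimal sketch of the Stokes-Einstein step mentioned above (it assumes spherical particles; the temperature, viscosity and example diffusion coefficient are illustrative values for water at 25 °C):

```python
import math

def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
    """Stokes-Einstein relation: d_h = k_B * T / (3 * pi * eta * D).

    D   : translational diffusion coefficient, m^2/s
    T   : absolute temperature, K
    eta : dynamic viscosity of the dispersant, Pa*s (water at 25 C)
    """
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (3 * math.pi * eta * D)

# Example: a z-average diffusion coefficient of 4.9e-12 m^2/s in water
d_h = hydrodynamic_diameter(4.9e-12)
print(f"hydrodynamic diameter = {d_h * 1e9:.1f} nm")
```

For these example numbers the result is about 100 nm. Note that this is the diameter of the particle plus its solvation shell, which is why DLS sizes usually exceed TEM core sizes.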
Many of the previous responses are fundamentally correct. The best mean size to report is the one most directly provided by the physics. In the case of DLS, this is the intensity mean. Using the mean which is most closely tied to the physics is critical because the initial analysis of the raw data depends on fairly severe assumptions in almost all cases. These assumptions may include spherical shape, monodispersity, homogeneous composition, etc. Converting to other moments generally requires that you superimpose one or more tenuous assumptions on an analysis that is already rife with assumptions. This assumption-on-top-of-assumption problem quickly degrades the quality of your analysis for all but the most well behaved samples. (i.e. monodisperse spheres of a pure single component.)
XRD is a method for measuring the size and arrangement of crystalline domains in a material. XRD gives no information about particle size unless the particle size equals the size of the crystallites of which the particle is composed.
I think intensity is very important to consider, but the most precise and accurate measurements are obtained from TEM and confirmed by XRD and other means. In any case it also depends on the journal; sometimes they report number-based particle size distributions.
I see there has already been a lot of discussion, which I hope has shown how important it is to define the basis of the distribution (number, volume, intensity) and how delicate it is to choose that basis. If you give an idea of the breadth of the distribution as well as the basis, people can better interpret the results and their reliability. Personally, I like to quote all the diameters. People have pointed out the limitations of the intensity-weighted distribution: it biases the distribution toward the larger particles, number biases the result toward the fine particles, and volume gives something in between. Not disregarding the limitations of the transformations from intensity to volume to number (e.g., if your particles are spherical the transformations are more reliable, and a narrow size distribution gives low error while a broad one gives higher errors), all the results are correct.
What you need to do is ask yourself why you are measuring the size distribution. Are the fine particles or the agglomerates of importance? Is your process sensitive to the number of particles or the volume of particles? Then choose the basis most useful for your analysis, bearing in mind the limitations of the transformations. Here are a couple of reviews which help show the complexity of PSD measurement: different methods can agree very closely (e.g., TEM, DLS or centrifugation when the particles are spherical with narrow size distributions) and, when not, can differ by a factor of 5, depending on which basis you use.
1. Bowen, P. “Particle Size Distribution Measurement From Millimeters to Nanometers and From Rods to Platelets”, J. Dispersion Science and Technology, 23(5) 631-662 (2002).
2. Aimable, P. Bowen, “Ceramic Nanopowder Metrology and Nanoparticle size measurement - towards the development and testing of protocols” J. Processing and Application of Ceramics, 4[3] 147-156 (2010)
Comments made that DLS is only useful as supporting information for TEM and SEM are fundamentally flawed. Microscopy is particularly subject to artifacts and unable to differentiate between "real" agglomerates and those formed upon deposition. TEM and SEM also yield a core size that may not represent the effective size in solution of a particle with a soft corona, which is not imaged by electrons. That statement is therefore too strong and unduly exclusive.
There are benefits and limitations to all sizing techniques, and the benefits must be aligned with the need or intended use. If you want rapid analysis of the changing state of dispersion in a liquid, DLS is certainly a better choice than TEM/SEM. If you want to know the morphology of an analyte, then microscopy is the most direct method, assuming your particles are rigid and scatter or absorb electrons sufficiently. If you have a polydisperse or agglomerated sample, microscopy is unlikely to provide statistically relevant data for all components of the distribution, while DLS is likely to provide only a mean that is robust. In this case, other methods may be more appropriate (such as laser diffraction).
As for reporting DLS data, I believe the current consensus among experts and instrument manufacturers is that intensity-based sizes or distributions are the most appropriate and least prone to error. That said, converting to a volume basis is a useful tool, available in all commercial instruments, for understanding how prevalent two species really are. I would not necessarily report this volume distribution, but it is useful (qualitatively) for looking at the relative abundance of population components. A simple but useful protocol for applying DLS to characterize nanoscale particles is available on the National Cancer Institute's Assay Cascade site (http://ncl.cancer.gov/working_assay-cascade.asp). See PCC-1 in hard copy or video format.
There is also a somewhat spurious belief that DLS is always overly sensitive to the presence of larger particles. Users should be warned that this is not always true. Most modern instruments have dust-rejection algorithms or data-processing schemes that have the effect of ignoring small populations of large particles in a sea of smaller particles. Additionally, the increasingly common use of backscatter angles and the truncation of correlation data at longer times both reduce sensitivity toward larger particles present in small quantities. In many cases one needs to manually adjust the measurement duration, dust reduction and data-analysis parameters on these instruments in order to "see" the larger particles. So the assumption that large particles will always swamp out the smaller ones is no longer a hard and fast rule with DLS. We have performed many studies showing this effect, but have not yet published the work.
In the end, it is always advisable to use more than one technique to characterize size, especially for new or unknown samples. The combination of data from microscopy, DLS and other methods can actually provide a richer data set.
I agree that no single method gives you an absolute result, and comparison between several methods gives you a better picture of your particle size and distribution. However, TEM gives primary particle sizes, and you normally sample a small number of particles, so you have to be careful to get a representative sample of the distribution. This depends on the breadth of the distribution and the accuracy you want. If you have a log-normal distribution and a relative standard deviation (standard deviation divided by the mean diameter) of 0.25, you need to count 100 particles for 5% accuracy; if the relative standard deviation is 0.5, you need to count 400 particles for 5% accuracy, or 100 particles for 10% accuracy. Also, if there are agglomerated particles, it is not always easy to discern them using TEM, but they will be "highlighted" in DLS. So I repeat: it is not a question of which is the best method; it depends on what you want from the measurement. If your particles do not have a narrow size distribution and are not spherical, then care must be taken when interpreting all methods.
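The counting numbers quoted above follow from the standard error of the mean: to pin the mean diameter down to a relative accuracy ε at roughly 95% confidence (coverage factor z ≈ 2), you need on the order of N = (z · RSD / ε)² particles. A small sketch, with the coverage factor as a stated assumption:

```python
import math

def particles_needed(rsd, rel_accuracy, z=2.0):
    """Particles to count so the mean diameter is within rel_accuracy
    of the true mean at ~95% confidence (coverage factor z ~ 2):
    N = (z * rsd / rel_accuracy)**2."""
    return math.ceil((z * rsd / rel_accuracy) ** 2)

print(particles_needed(0.25, 0.05))  # RSD 0.25, 5% accuracy  -> 100
print(particles_needed(0.50, 0.05))  # RSD 0.5,  5% accuracy  -> 400
print(particles_needed(0.50, 0.10))  # RSD 0.5, 10% accuracy  -> 100
```

These reproduce the 100/400/100 figures in the post, and make clear why broad distributions quickly push TEM counting beyond what is practical by hand.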
I suggest using TEM and SEM to determine the size of nanoparticles and their distribution. Hope this helps.
Intensity can be wrong in many cases. For example, whenever the size distribution of the nanoparticles is broad (PDI > 0.5), intensity mostly shows the biggest particles, because they scatter the laser light most strongly, while number gives a better interpretation. In the number diagram you can see peaks for small and big nanoparticles according to their quantity, whereas in the intensity diagram the peak of the bigger nanoparticles is visible and the peak of the smaller nanoparticles is suppressed.
I would suspect that what you see as individual peaks in a number distribution derived from laser light scattering (if you see them at all) is data scatter rather than reality.
IMHO, the best approach is always to use multiple techniques if they are available. One can then better decipher the results and better identify possible artifacts. If you are only interested in the hard-core size of primary particles, and the particles have sufficient electron-density contrast (e.g., metals, oxides), then EM is probably the best bet. Polymers can shrink or deform when deposited and dried for EM. If you want to know the state of dispersion, it is better to use in situ methods in combination with EM.
FYI, although theoretically DLS should over-emphasize a few very large particles in a sea of many small ones, modern DLS instruments often use settings that mitigate this effect (e.g., dust-rejection algorithms, selective fitting of the correlation data to emphasize the fastest decays, short measurement durations, high scattering angles, etc.). It is best to keep this in mind.
If I suspect there are aggregates present and they are not reflected in the results, I start by reducing the dust "filtering" (i.e., reduce rejection of high average intensity runs; in the Malvern instrument this is a % value of runs rejected), by increasing the duration of individual measurement runs, and, if one has access to more angles, by measuring at lower angles.
I agree that you should always use more than one technique to measure the diameter of nanoparticles. As stated above, TEM and SEM can give only core measurements, because the coating normally isn't imaged by the electrons. Also, when aggregates are present, it can be difficult to identify single particles. Moreover, metallization of samples for SEM can produce particles of the metal used, introducing errors into the measurements. Microscopy measurements can also involve more human error, as they are "manual" measurements.
Regarding DLS, modern instruments can identify and reduce readings from dust and large aggregates. We know that the direct result is the intensity of scattered light, but based on equations one can calculate approximate diameters and volumes.
So, answering your question about DLS alone: yes, intensity is the more accurate because it is the raw result; however, it normally isn't the one we want.
One needs to choose based on the desired characteristic of the nanoparticles. Normally, number is better, even with the approximations from the equations, because it represents the quantity of particles at a specific diameter present in the sample, which may be closest to the TEM measurement, depending on the type of coating. If big particles are present in the sample, the intensity and volume distributions will tend toward a bigger diameter, because those bigger nanoparticles scatter more light and occupy more space, even if they are a minority in the sample. In a number distribution, however, they carry less weight because they are a minority.
'Normally, number is better, even with the approximations from the equations, because it represents the quantity of particles at a specific diameter present in the sample, which may be closest to the TEM measurement, depending on the type of coating. If big particles are present in the sample, the intensity and volume distributions will tend toward a bigger diameter, because those bigger nanoparticles scatter more light and occupy more space, even if they are a minority in the sample. In a number distribution, however, they carry less weight because they are a minority.'
It is not so easy, and in part not fully correct.
It is always the practical purpose that defines whether a number-based or a volume-based result is of interest (except in a purely academic approach that ignores everyday problems). Number is important, e.g., where larger particles will ruin polishing, or in risk evaluation for nanoparticles. However, it is of no use to have 90% of particles in the nano range on a number basis if this equals only 1% of the particles on a volume basis. As said, there is little difference between the two for a very narrow distribution, but that is mostly far from reality.
Do not forget that the volume-based distribution should not depend on scattering effects, because it is derived by correcting for them.
Overall, particle size analysis is not necessarily routine; on many occasions you need to know your material in depth: whether it forms aggregates, its point of zero charge or isoelectric point, and which solvent keeps or breaks its aggregates. There are therefore several techniques for estimating particle size with some degree of confidence. Start with:
XRD is a powerful analysis because a large amount of material is analyzed (bulk analysis), and it therefore gives crystallite size information (using the Scherrer formula, D = Kλ/(β cos θ)).
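As a sketch of the Scherrer crystallite-size estimate (the peak position, FWHM and Cu K-alpha wavelength below are illustrative assumptions; β must be in radians and corrected for instrumental broadening first):

```python
import math

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer crystallite size D = K * lambda / (beta * cos(theta)).

    beta_deg      : FWHM of the peak (after instrumental correction), degrees
    two_theta_deg : peak position 2-theta, degrees
    wavelength_nm : X-ray wavelength (Cu K-alpha by default)
    K             : shape factor, typically ~0.9
    """
    beta = math.radians(beta_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative example: a peak at 2-theta = 25.3 deg with FWHM = 0.8 deg
print(f"{scherrer_size(0.8, 25.3):.1f} nm")
```

Keep in mind that this yields the coherent-domain (crystallite) size, not the particle size, as other posts in this thread have pointed out.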
The DLS technique is very useful but also have the added problem of your aggregates, because if not you make a suitable pH these aggregates may be present in the analysis and the results are incorrect.
AFM, Atomic Force Microscopy are also a powerfull equipment and tecniques to analized the particles size, but you need that you material present a high bonding with the sample holder or support because the tip of the cantilever can move your sample, and this are the principal limitation of this tecnique.
Finally and not least , scanning and transmission electronic microscopy (TEM and SEM ) where its major limitation is the amount of material to be analyzed , since only watch as a small amount of material is necessary that your sample is homogeneous.
Finally A Scanning Transmission Electron Microscopy (STEM ) with a high-angle annular dark -field ( HAADF ) detector was used to study the dispersion and chemical composition of nanoparticles . The signal used in the HAADF image can be Obtained from the Rutherford scattered electrons Which are Strongly dependent on the square of the atomic number ( Z to1.5 -1.8 ) . thus , Z -contrast information can be Obtained Directly from STEM- HAADF images . In other words , Differences in atomic elements can be distinguished by contrast on the images , and are possible and find the size of nanoparticles or cluster over the support materials (on catalysts nanoparticles).
Please review a next reference:
N. Ramos-Delgado, J. Mar, L. Hinojosa, M.A. Gracia-Pinilla and A. Hernandez-Ramirez, "Solar photocatalytic activity of TiO2 modified with WO3 on the degradation of an organophosphorous pesticide", Journal of Hazardous Materials, 2013, Vol. 263, 36-44. http://dx.doi.org/10.1016/j.jhazmat.2013.07.058
M. Gracia-Pinilla, E. Martinez, G. Silva Vidaurri, E. Perez-Tijerina, Nanoscale Res. Lett., 5 (2010), p. 180. doi:10.1007/s11671-009-9462-z
N. Ramos-Delgado, J. Mar, L. Hinojosa, M.A. Gracia-Pinilla and A. Hernandez-Ramirez, "Synthesis by sol-gel of WO3/TiO2 for Solar Photocatalytic Degradation of Malathion Pesticide", Catalysis Today, 2013, Vol. 209, 35-40. http://dx.doi.org/10.1016/j.cattod.2012.11.011
P. Sathishkumar, R. V. Mangalaraja, O. Rozas, H. D. Mansilla, M.A. Gracia-Pinilla, S. Anandan, "Visible light induced sonophotocatalytic degradation of Acid Blue 113 in the presence of RE3+ loaded TiO2 nanophotocatalysts". http://dx.doi.org/10.1016/j.ultsonch.2014.03.004
Robert D. Boyd, Siva K. Pichaimuthu, Alexandre Cuenat, New approach to inter-technique comparisons for nanoparticle size measurements; using atomic force microscopy, nanoparticle tracking analysis and dynamic light scattering, Colloids and Surfaces A: Physicochemical and Engineering Aspects, Volume 387, Issues 1–3, 20 August 2011, Pages 35-42, ISSN 0927-7757, http://dx.doi.org/10.1016/j.colsurfa.2011.07.020.
Parastoo, read the following couple of articles that we have recently published on this issue. They will help you understand the differences between the sizes measured by different analytical techniques, and how results from DLS compare to AFM and FFF:
https://www.researchgate.net/publication/224976574_Rationalizing_nanomaterial_sizes_measured_by_atomic_force_microscopy_flow_field-flow_fractionation_and_dynamic_light_scattering_sample_preparation_polydispersity_and_particle_structure?ev=prf_pub
https://www.researchgate.net/publication/261103864_Quantitative_measurement_of_the_nanoparticle_size_and_number_concentration_from_liquid_suspensions_by_atomic_force_microscopy
I would do TEM/HRTEM and/or SEM for particle size. If the particles are well crystallized, I would also go for XRD analysis (Scherrer equation) for particle size and compare it with the TEM/SEM data. Hope it helps.
It is not possible to compare a TEM/SEM size distribution with that obtained from the Scherrer equation. The better way would be to use the Warren–Averbach method on the XRD data, calculate the numerical size distribution, and compare that with the TEM analysis.
The discrepancy between XRD (using the Scherrer equation) and TEM data may sometimes occur due to the presence of more than one crystallite in a single grain: the size determined by diffraction methods corresponds to the extent of the coherent crystal regions, that is, to regions where the periodic arrangement of the atoms is perfect and continuous. Furthermore, there may be some degree of agglomeration among the particles during the preparation of the samples for the XRD measurement. Apart from these, there is no reason you cannot compare sizes from TEM with those obtained using the Scherrer equation.
If I am not wrong, the Scherrer equation gives the mean size of a volume-weighted distribution, while the mean size obtained by TEM comes from a number-weighted distribution. If the distribution is narrow the values are similar, but if the size distribution is broad the number and volume means differ.
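To make the Scherrer calculation concrete, here is a minimal sketch (the function name and peak values are illustrative, not from any real measurement); note it returns a single mean crystallite size, not a distribution:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, shape_factor=0.9):
    """Mean crystallite size in nm from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), where beta is the peak FWHM
    in radians (after instrument-broadening correction) and theta is
    half of the diffraction angle 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

# Illustrative example: Cu K-alpha radiation (0.15406 nm), a reflection
# at 2-theta = 38.2 degrees with a corrected FWHM of 0.5 degrees.
print(round(scherrer_size(0.15406, 0.5, 38.2), 1))  # roughly 17 nm
```

Remember that the FWHM must be corrected for instrumental broadening before use, and that the shape factor K is itself an approximation (commonly taken as about 0.9).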
Agreed with Dr. Mathias Brust.
When reporting nanoparticle size, the number-weighted size gives a more exact idea of how many of the particles actually have a given size.
The XRD method with the Scherrer equation is used to calculate the crystallite size and the distance between crystal planes. It is not a suitable method for measuring particle size unless each particle consists of a single crystallite.
I should also mention that light scattering methods are based on the hydrodynamic size of the particle and measure it in the colloidal state.
You should report it in intensity, but you should also consider doing TEM as well. One other thing to bear in mind is that with light scattering methods like DLS you only obtain a hydrodynamic average diameter of the particles.
As Luca described, the numerical size distribution can be obtained by TEM.
The most important thing is to be clear about what you are reporting when you write your paper. Reporting the z-average diameter and polydispersity is fine if you explicitly state it. If you prefer to report the lognormal distribution for the number density of particles that is the "best" interpretation of the DLS data, that is fine also. Just state it explicitly.
What I hate seeing are statements like "the particle diameter is xx nm". Those are ambiguous.
It is necessary to explain to the reader what you mean when you report a result -- how it is directly related to the data. Not just for particle size, but for everything. When you write a paper, someone should be able to reproduce your results. If you don't state clearly what your result is, then your work will not be as valuable.
Maybe the file in the attachment helps. But remember that the model used for adjustment also affects the results.
Size varies from 5 to 1000 nanometers depending upon the application:
For drug delivery it should be 10-300 nm.
For tissue engineering it should be 10-600 nm.
For filtration purposes it should be 1-50 nm.
So it varies depending upon the application.
Thanks
With Best Regards
If you have a monodisperse distribution, all values (number, surface, intensity) will be almost the same (which is a good proof of monodispersity), with the mean value for number smaller than for surface, and in turn smaller than for intensity.
If you have a polydisperse one, none of them is fully meaningful per se: the intensity-based result is the direct one, but it highlights the largest sizes so much that a significant population of smaller sizes could disappear from your results.
Therefore, if you apply the basic concept of "never trust a computer", you'll take the values obtained by applying the different models (number, intensity) and suggest a size distribution that should be more reliable and closer to the actual state. Don't forget that just above the eyes you use to read the results displayed by the computer, there's a brain that you are supposed to use to digest and analyze those results.
Size is the diameter (radius, length, or other geometric parameter) of nanoparticles, which can be measured by TEM/SEM or calculated from some physical properties.
If the nanoparticles are monodisperse, each of the distributions should be almost identical to one another, as was mentioned previously. I prefer to use the number-average distribution, since its particle size represents the size of the majority of nanoparticles, which is very relevant for drug delivery applications. I'm sure many people can relate to the fact that the number distribution is usually the cleanest of all the distributions. Intensity-weighted distributions are heavily skewed towards larger particle sizes in solution, since large particles scatter light much more than small ones, and I believe this size does not accurately represent the total population. The volume-weighted distribution describes the size distribution in terms of the total solution composition. That being said, it is much closer to the accurate average size of the particles in your sample, but it will still be weighted more towards larger particles. The number-weighted size also tends to match the results from TEM experiments. I hope this helps.
Regards,
Ryan
Interesting discussion. How can we recognize whether the sample is monodisperse or polydisperse? Is it concluded from the PDI results?
It will be difficult to find a small fraction of larger particles by TEM, as this would require a very large number of particles to be counted. It is even more difficult to do this job based on light scattering. Separation-based methods like field-flow fractionation and analytical centrifugation are much more sensitive to polydispersity. Of course, the sensitivity required has a major impact.
Rozita,
Both DLS and TEM can give you some ideas about particle polydispersity. The PDI in case of DLS and the relative standard deviation in case of TEM are indicators of polydispersity. You can find the criteria for distinguishing polydisperse from monodisperse particles in the publication in the link below
http://www.nature.com/nnano/journal/v8/n5/full/nnano.2013.78.html
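As a rough sketch of the two polydispersity indicators just mentioned (hypothetical diameter lists; note the true DLS PDI comes from the cumulant fit of the correlation function, not from a size list, so this is only an analogy):

```python
import statistics

def relative_std(sizes):
    """Relative standard deviation (sigma / mean), e.g. of TEM-measured diameters."""
    return statistics.pstdev(sizes) / statistics.mean(sizes)

def dls_like_pdi(sizes):
    """(sigma / mean)^2 -- analogous to the DLS polydispersity index
    for a roughly Gaussian size distribution."""
    return relative_std(sizes) ** 2

# Hypothetical TEM diameters in nm (made-up values).
narrow = [98, 100, 101, 99, 102, 100]
broad = [40, 150, 90, 300, 60, 110]

for name, sizes in [("narrow", narrow), ("broad", broad)]:
    print(name, round(relative_std(sizes), 3), round(dls_like_pdi(sizes), 3))
```

The narrow list gives a PDI-like value far below the ~0.1-0.2 rule-of-thumb thresholds discussed in this thread, while the broad one lands well above them.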
How can I get the exact diameter of colloids with DLS systems that give the results in ASCII format?
Interesting discussion. I guess number is not useful when you are dealing with a polydisperse preparation. I normally use intensity and volume, but volume always estimates a bigger size. Do you know why?
David - Are you saying that the intensity weighted mean diameter is smaller than the volume weighted mean diameter? This would be an unusual result, as the intensity weighted diameter is the higher moment of the distribution.
Normally I obtain a peak at 750 nm for my preparation in terms of intensity; the volume peak tends to be around 900 nm. Number is especially low: approximately 150 nm.
Here you go: http://www.materials-talks.com/blog/2014/01/23/intensity-volume-number-which-size-is-correct/
I would think one has to report the value that is closest to the type of distribution needed by the 'customer' of the measurement results. In other words: what is the intended use of the data?
Sometimes one needs a number-based distribution, sometimes a volume-based distribution, sometimes an 'intensity'-based distribution is sufficient. (Please note: 'intensity' is not the same for different particle size analysis techniques.)
Indeed, if the raw data have to be transformed from one type of distribution to another, then it has to be carefully considered whether this is possible at all, and, if yes, which assumptions are made in the transformation.
If there is no unique intended use that prefers the one type of distribution over the other, than it is best to report the original data, which in the case of DLS are scattering intensity-based. Beware, the z-average value is not a distribution; it is a specific average value based on a simplified analysis of the scattering intensity information.
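For a discrete intensity-weighted distribution, the z-average corresponds to the intensity-weighted harmonic mean diameter (the true z-average comes from the cumulant fit of the correlation function; this discrete form is an approximation). A small sketch with made-up bin values:

```python
# Hypothetical intensity-weighted size bins (nm) and relative intensities.
diameters = [80.0, 120.0, 400.0]
intensities = [20.0, 50.0, 30.0]

# Intensity-weighted harmonic mean diameter (discrete analogue of the
# cumulant z-average).
dz = sum(intensities) / sum(i / d for i, d in zip(intensities, diameters))

# Intensity-weighted arithmetic mean, for comparison: the arithmetic mean
# is always >= the harmonic mean of the same weighted data.
da = sum(i * d for i, d in zip(intensities, diameters)) / sum(intensities)

print(round(dz, 1), round(da, 1))
```

This also illustrates why the z-average alone cannot describe a multimodal sample: it collapses the whole intensity distribution into a single number.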
As I understood, the size that should be reported is the value shown in the "intensity" column. However, what should I do if I perform the experiment in triplicate and in each run I get 2 intensity values, produced by 2 particle size classes? For example:
Measure 1: 39, 154; Measure 1': 45, 165; Measure 1'': 50, 170 (all from the same sample).
Among those 6 values, which should be reported? Is this a case for the median, as in statistical analysis? Thanks for your comments. Regards!
Claudia, for most of our low-aspect-ratio particles here at NIOSH, we measure the 'intensity' of the sample in at least 3 separate sample preparations and take the median of the two or three peaks. But we also report the median for each peak and the PDI value, since many particles typically show different size classes per sample in different preparations. Each size class will have a different agglomerate surface area, which can be a critical factor.
Gert has a good point. Since we are interested in dosimetry calculations of particles suspended in cell culture solution, we will use intensity, number, and volume as parameters in our dosimetry calculations for our in vitro toxicity exposures.
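Following the median-of-peaks suggestion, Claudia's triplicate example can be summarized in a couple of lines (values copied from her post, grouped by size class):

```python
import statistics

# Triplicate intensity peaks (nm) from the same sample, one list per size class.
small_peak = [39, 45, 50]
large_peak = [154, 165, 170]

print("small class median:", statistics.median(small_peak))  # 45
print("large class median:", statistics.median(large_peak))  # 165
```

So one would report the two medians (here 45 nm and 165 nm) together with the PDI, rather than collapsing all six values into a single number.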