As we all know, when the particle size decreases, the peak intensity also decreases while the broadening increases, and vice versa. What is the actual role of particle size in this effect?
Generally we run XRD as powder diffraction so as to remove errors associated with sampling and preferred orientation. Large crystals can be run just as well, and you'll get sharper, more intense peaks, but not all of your peaks may be there. When the sample is powdered, you work on the *assumption* that you have effectively infinite crystallites in infinitely many orientations, so all of your peaks should show up. This factor is not as important as the Debye equation, but it is observed as well.
... quite a strange question, I'd say, and the complete answer takes a few pages of a good X-ray diffraction textbook.
To be a bit concise about it: the (integrated) intensity of a diffraction profile in a powder pattern is related to the volume (i.e. the number of atoms) of the coherently scattering domains, so size enters it indirectly. An analytical formula was provided by Warren (1978) for spherical domains (it can easily be generalised to any shape). If we don't change the volume, the intensity is preserved; if we do change the volume, then of course the intensity changes. The relative intensities are related to the structure. Now, in a real specimen we have a distribution of sizes, and this has to be taken into account when talking about "size". And this is why I find the question quite odd: you are never able to produce a monodisperse distribution of identical objects.
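For the simplest concrete case (a one-dimensional crystal of $N$ identical cells; this is the textbook interference function, not Warren's sphere formula):

$$ I(h) \propto \frac{\sin^2(\pi N h)}{\sin^2(\pi h)} $$

whose maximum scales as $N^2$ and whose integral breadth scales as $1/N$, so the integrated intensity (the area) scales as $N$: it is proportional to the number of scattering cells, i.e. to the volume of the domain.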
The broadening part is the trickiest, as the shape of the profile (in reciprocal space, not in 2theta) is related to the Fourier transform of the autocorrelation of the shape function. The net effect is that peaks get broader as the size gets smaller. If the quantity of matter is preserved, then you just see a broadening and a decrease of the maximum, but the area of the peak remains constant.
Again, an analytical function for the very simple case of spherical domains was obtained by Wilson (1962), and from it we can easily see that (a) the peaks are neither Gaussian nor Lorentzian, and (b) the size enters the peak profile shape in a complex way. The shape also enters the formulae, and for any shape other than a sphere, different peaks can show different broadening!
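As a rough numerical sketch of these two points (Wilson's common-volume function for a sphere is standard; the grid sizes and the 10 nm diameter are arbitrary choices): the FWHM-to-integral-breadth ratio printed below falls between the Gaussian and Lorentzian values, illustrating (a), and rerunning with a different D shows the breadth scaling as 1/D at constant area.

```python
import numpy as np

def sphere_profile(s, D):
    """Size-broadened line profile for spherical domains of diameter D:
    cosine Fourier transform of Wilson's common-volume function
    A(L) = 1 - 3L/(2D) + L^3/(2D^3), for 0 <= L <= D."""
    L = np.linspace(0.0, D, 2000)
    A = 1.0 - 1.5 * (L / D) + 0.5 * (L / D) ** 3
    return 2.0 * np.trapz(A * np.cos(2.0 * np.pi * np.outer(s, L)), L, axis=1)

D = 10.0                                # domain diameter, nm
s = np.linspace(-1.0, 1.0, 4001)        # reciprocal-space variable, 1/nm
I = sphere_profile(s, D)

fwhm = np.ptp(s[I >= I.max() / 2])      # full width at half maximum
beta = np.trapz(I, s) / I.max()         # integral breadth = area / maximum
print(f"FWHM/beta = {fwhm / beta:.3f}  (Gaussian: 0.939, Lorentzian: 0.637)")
```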
If you measure the broadening given by your instrument (just take the pattern of a suitable line profile standard, e.g. NIST SRM 660b), calculate the peak profile for a sphere and convolve the two (this is what happens in a diffraction pattern), you will see that above a size of ca. 150 nm the effect of size cannot be appreciated. If your instrument is bad, the limit lowers. With synchrotron data you can get up to 200-250 nm, but that's a risky limit.
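A toy version of that exercise, reusing sphere_profile from the sketch above (the Gaussian instrumental profile and its width are assumed round numbers, not the actual SRM 660b characterisation):

```python
import numpy as np

ds = 0.0002                                   # grid step in s, 1/nm
s = np.arange(-0.3, 0.3, ds)
sigma = 0.004                                 # assumed instrumental width, 1/nm
instr = np.exp(-0.5 * (s / sigma) ** 2)       # Gaussian instrumental profile
w_i = np.ptp(s[instr >= instr.max() / 2])     # instrument-only FWHM

for D in (10.0, 50.0, 150.0, 300.0):          # domain diameters, nm
    conv = np.convolve(instr, sphere_profile(s, D), mode="same")
    w_c = np.ptp(s[conv >= conv.max() / 2])   # FWHM of the convolved profile
    print(f"D = {D:5.0f} nm: FWHM grows by {100.0 * (w_c / w_i - 1.0):6.1f} %")
```

The extra width collapses rapidly with growing D; where exactly it drops below the reproducibility of a real measurement depends on the instrument, which is the origin of the ~150 nm figure.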
I do not agree with Daniel on his last part (and Warren in the 1978 paper is quite strict there as well). It is true if you use the wrong tools (e.g. the Rietveld method), as they do not handle the microstructure (and diffraction at the nanoscale) correctly. The Debye equation is a nice tool, but it is often misused. Provided you do things correctly, you can use Bragg's law down to very small sizes: you just have to use the correct peak shape (calculated from the autocorrelation of the shape), do the calculation in the correct space (reciprocal space, not 2theta), remove the tangent plane approximation (the main limitation there!) and use the correct physics (it is e.g. unfair to compare the result of a Debye calculation on a relaxed particle with that of a Bragg calculation on a geometric cluster).
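One way to read the "correct space" point, for anyone following along (these are standard definitions, nothing specific to this thread):

$$ s = \frac{2\sin\theta}{\lambda}, \qquad \mathrm{d}s = \frac{\cos\theta}{\lambda}\,\mathrm{d}(2\theta) $$

The size-broadened profile lives naturally in $s$; the linearised mapping $\Delta s \approx (\cos\theta/\lambda)\,\Delta(2\theta)$ is harmless for sharp peaks, but it distorts the very broad peaks of nanosized domains, so the full non-linear relation has to be kept there.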
The advantages of the Bragg approach are that (a) you don't have to deal with the small-angle part introduced by the Debye equation (the autocorrelation of the shape), which differs from the actual small-angle scattering in most practical cases; (b) you average over all regular atomic configurations filling the given volume, rather than over one given shape which, in a real case, is of little physical significance (Ino and Minami, 1979, 1984); (c) you can easily do a refinement and consider extra broadening components (including a size distribution, see the sketch below) using the WPPM approach (Scardi & Leoni, 2002), without dealing with an enormous number of parameters; and (d) you are much faster in calculating the pattern!
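To illustrate point (c) only, here is a minimal sketch of distribution-averaged size broadening in the spirit of WPPM, not the actual implementation (the real method refines the distribution parameters together with other broadening sources; the lognormal parameters below are invented for illustration). The Fourier coefficients, being linear in each domain's contribution, are averaged over the size distribution with volume weights, and a single transform then gives the profile:

```python
import numpy as np

# Lognormal distribution of sphere diameters (parameters assumed for illustration).
mu, sig = np.log(8.0), 0.4
D = np.linspace(0.5, 50.0, 400)                        # diameters, nm
pdf = np.exp(-0.5 * ((np.log(D) - mu) / sig) ** 2) / (D * sig * np.sqrt(2.0 * np.pi))

# Each size class contributes in proportion to its volume (~ D^3).
w = pdf * D ** 3
w /= np.trapz(w, D)

# Average Wilson's Fourier coefficients over the distribution, then transform once.
L = np.linspace(0.0, 50.0, 1000)                       # correlation length, nm
x = L[None, :] / D[:, None]
A = np.where(x <= 1.0, 1.0 - 1.5 * x + 0.5 * x ** 3, 0.0)
A_avg = np.trapz(w[:, None] * A, D, axis=0)

s = np.linspace(-0.5, 0.5, 2001)                       # reciprocal variable, 1/nm
I = 2.0 * np.trapz(A_avg * np.cos(2.0 * np.pi * np.outer(s, L)), L, axis=1)
```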
Well, you can't get any more spot-on than quoting the text. Broadening is a HUGE issue when powdering.
As you decrease the particle size you create more and more particles with orientations close to that of your expected peak, which produces more of a bell-curve type effect. If you go small enough you start to see the amorphous-like shift observed in nanoparticles and amorphous materials. The bell curve analogy is a good one: if one person takes a test, you get a sharp peak at their score; if 1000 people take it, you get a distribution. Same with crystals.
Antonio, the problem is usually more severe when you work top-down, as the risk of mixing coarse and fine fractions is higher. It is more limited when working bottom-up... but I have still to find someone able to produce a reasonable quantity of any nanosized monodisperse powder (monodisperse in size and in shape). So if anyone is up for the challenge, I'm available to do the analysis. Diffraction is an extremely sensitive probe there!
I've learned a lot about peak broadening here, but I have the same problem with some differences. I anchored nanoparticles on reduced graphene oxide (RGO). The XRD peaks of the nanoparticle-RGO nanocomposite are broader than those of the pure nanoparticles, and their intensity is lower, even though the nanoparticles come from exactly the same sample, which was split in two parts; one part was added to RGO and then dried. Is it possible that the peak broadening depends on the aggregation of the nanoparticles?
With the reduction of size, the number of planes is smaller than in a large structure. Fewer planes lead to a reduction in peak intensity as well as broadening of the peak.
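To first order, this is what the familiar Scherrer equation expresses (with all the caveats on shape, size distribution and instrument raised earlier in the thread; $\beta$ is the FWHM in radians and $K \approx 0.9$ a shape factor):

$$ \beta(2\theta) = \frac{K\lambda}{D\cos\theta} $$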