Signal intensity increases with concentration: larger signals and improved signal-to-noise (S/N) ratios can be achieved by increasing the sample concentration or the injection volume. I experienced this while doing the calibration with the polystyrene standards: increasing the concentration of the solution improved the signal intensity while the injection volume was kept constant. Since the injection volume was small, the higher concentration gave better intensity.
However, a large signal does not always guarantee accurate molar mass determination. The optimum concentration and injection volume depend on the sample itself. Because high molar masses lead to high solution viscosities, a high concentration produces a very viscous injection band, so the diffusion process on the column can be hindered. When this happens, higher elution volumes are measured, yielding lower molar masses when a conventional calibration curve is used. This problem is less pronounced for low molecular weights.
For large molecules it is better to use a smaller sample concentration and increase the injection volume. For small molecules it is best to use high concentrations and small injection volumes.
I am interested in data or results for various concentrations of mixtures of standards, to quantify the error that can be made due to a lack of signal intensity and/or a lack of separation of the molecular weights.
Have you done any design of experiments to assess this for the polystyrene standards?
One of my objectives is to figure out how much one can actually trust the shape of the molecular weight distribution of "monodisperse" standards.
Attached you will find the spectrum of a mixture of PS standards that I found in an old informal paper that did not describe the operating conditions or equipment used. How much time, sample volume, and concentration, and what column (type and length), would it take to go from the unresolved spectrum to the resolved one? The plain curve is a theoretical result for the mixture.
In addition, when deconvolution is applied to an unresolved sample, how much of the error will be due to the measurement (overlapping Mw) and how much to the deconvolution itself, which presupposes that one knows the shape of the distribution?
You use the polystyrene standards to generate a log(MWt) vs. retention time equation. The equation depends on the solvent and the particular columns used. This assumes that the peak position corresponds to the weight average molecular weight of the standard. You can then use the GPC software to determine the PDI of the standards if you wish. When this equation is applied to a broad distribution, the supplied software gives you the number and weight average molecular weights easily. If the broad distribution has two or more peaks, there may be deconvolution programs in the software that can be applied. Otherwise, just deconvolute using any program and apply the equation. The baseline should be what the solvent absorbs for UV, and 0 for the RI analysis. If you have very broad distributions, they may have to be approximated by several curves in the deconvolution. There will be errors, but they will be specific to the samples, the type of deconvolution used, and the apparatus, and cannot be discussed a priori.
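For illustration, here is a minimal Python sketch of that workflow, with invented standard data (none of the numbers are real measurements): fit a linear log10(M) vs. retention time calibration from the PS standard peaks, then compute Mn, Mw, and the PDI of a broad distribution from a baseline-corrected concentration-detector trace.

```python
import numpy as np

t_std = np.array([12.0, 14.5, 17.0, 19.5, 22.0])        # peak retention times, min (invented)
M_std = np.array([1.0e6, 1.0e5, 1.0e4, 1.0e3, 1.0e2])   # peak molar masses, g/mol (invented)

# Linear calibration: log10(M) = a*t + b
a, b = np.polyfit(t_std, np.log10(M_std), 1)

def mass_averages(t, h):
    """Mn and Mw from a baseline-corrected concentration-detector trace h(t)."""
    M = 10.0 ** (a * t + b)      # molar mass of the slice eluting at time t
    w = h / h.sum()              # weight fraction per slice (RI/UV signal ~ weight)
    Mn = 1.0 / np.sum(w / M)     # number-average molar mass
    Mw = np.sum(w * M)           # weight-average molar mass
    return Mn, Mw

# Synthetic broad chromatogram, just to exercise the function.
t = np.linspace(10, 24, 500)
h = np.exp(-0.5 * ((t - 17.0) / 1.5) ** 2)
Mn, Mw = mass_averages(t, h)
print(f"Mn = {Mn:.3g}, Mw = {Mw:.3g}, PDI = {Mw / Mn:.3f}")
```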
Thank you again, Professor Litt, for sharing your experience.
In the above chromatogram, what could be the respective contributions of actual instrumental resolution and of computational resolution? Likewise, what could be the respective shares of instrumental error and computational error?
Let me clarify my initial question. I am looking for representative uncorrected chromatograms (if possible, raw data) of PS standards and PS standard mixtures measured with an RI and/or UV detector, and/or any other detector, even if not as good as in this example.
I would also be interested to know the experimental conditions in which the chromatogram in the attachments could have been obtained and/or the experimental conditions one could set to obtain such a chromatogram without computational artefacts.
Alternatively, I am looking for references dealing with the issue of the spreading of the distribution of the PS standards and the resolution of standard mixtures and a comparison of the detectors on this issue.
If you develop a calibration curve using the peak PS retention times, you can measure the spreading of a single molecular component. For an injection volume of 50 microliters, the absorption curve should take 3 seconds at 1 ml per minute if there is no spreading. The sample volume should expand as retention time to the 3/2 power, so later peaks should spread more. For most systems this amounts to a change in PDI of ~0.04. (I have a paper on making a living polymer with a true Poisson distribution: Synthesis of Bifunctional Monodisperse Poly(N-Isovaleryl Ethyleneimine); A. X. Swamikannu, G.-H. Hsuie, M. Litt and M. Balasubramanian, J. Polym. Sci., Part A: Polym. Chem., 24, 1986, p. 1455.) The spreading is too small to be seen except for the narrowest distributions. Calibration peaks will change position if the solvent or temperature is changed, since this changes the particle swelling and the polymer chain volumes.
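As a quick arithmetic check of those numbers, here is a small sketch (illustrative values only) of the unspread plug duration and the assumed t^(3/2) scaling of band volume:

```python
V_inj = 0.050   # injection volume, mL (50 microliters)
flow = 1.0      # flow rate, mL/min

plug_s = V_inj / flow * 60.0            # unspread plug duration in seconds
print(f"plug width with no spreading: {plug_s:.1f} s")   # -> 3.0 s

# If band volume grows as retention time to the 3/2 power, a peak at
# 20 min is broader than one at 12 min by:
print(f"relative spreading: {(20.0 / 12.0) ** 1.5:.2f}x")
```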
Literature spectra usually have too little information to make any corrections, and the one you sent has no information attached. The basic spectra are plots of absorption or RI differences versus retention time. If the calibration curve is linear, log(MWt) can be substituted for retention time; the Y axis is the amount of material detected at a particular time, which is equivalent to the derivative of the cumulative weight curve. You need to calibrate your own instrument and run spectra under tightly controlled conditions to understand what is going on.
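As a sketch of that substitution, assuming an invented linear calibration and a synthetic trace, the detector signal h(t) can be re-plotted as a differential weight distribution over log10(M), and its running integral gives the cumulative weight curve:

```python
import numpy as np

a, b = -0.28, 9.5                           # assumed linear calibration: log10(M) = a*t + b
t = np.linspace(10, 24, 500)                # retention time, min
h = np.exp(-0.5 * ((t - 17.0) / 1.5) ** 2)  # synthetic baseline-corrected trace

logM = a * t + b                            # a < 0: high M elutes first
order = np.argsort(logM)                    # re-order so log10(M) increases
logM, h = logM[order], h[order]

w = h / np.trapz(h, logM)                   # differential weight distribution dW/dlog(M)
# Cumulative weight curve: running trapezoidal integral of w over log10(M).
W = np.concatenate(([0.0], np.cumsum(np.diff(logM) * 0.5 * (w[1:] + w[:-1]))))
print(f"cumulative weight runs from {W[0]:.2f} to {W[-1]:.2f}")   # 0 -> 1
```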
I understand that one can synthesize a polymer with a true Poisson distribution using living polymerization under specific conditions, and that UV detectors are very sensitive and cannot be the cause of peak broadening. What happens when you have to separate your own standards from a different type of reaction? I have no access to a library facility and therefore was not able to read your reference. What were the ranges of Mw and Mw/Mn?
I concur with you that it is regrettable that in some papers we cannot find the proper information to assess the ins and outs of the technique used. However, I am still interested in the practicality of the technique in the present case.
For the given example, with your experience, would you be able to tell what could have been the main cause of the low molecular weight discrepancy: experimental conditions, calibration, or computational (deconvolution) errors?
Additionally, what would it take to make it better? Could you give an estimate of the GPC run time required for such a spectrum to be more accurate (if possible)?
I have no idea about what you are trying to do. If you can send me information, I might be able to help. Was the curve you sent from the literature, or was it yours?
Was there a procedural difference between the two calibration runs? The difference at the long times could be due to a leak developing somewhere in the system, with more solvent pumped for the second run. The low molecular weight peaks would then show up later. Our GPC tended to develop leaks that showed as a drop in pressure.
I thought I saw that you are located in Akron. If so, you could come to Case and we could discuss it properly.
Thank you again for your contribution, Professor Litt.
I would like to stay technical, since this is a technical question open to anyone willing to share his/her experience with GPC.
Do you mean that it could be due to some broadening effect that does not show in the corrected GPC data, because it was not taken into account properly by the calibration and/or the deconvolution, and that this leads to an overestimation of the low molecular weight?
I understand that it is quite unusual to check the sensitivity of detectors. Let me then rephrase the original question once more.
Could anyone tell me what experimental conditions are required to achieve the precision and accuracy of the enclosed PS spectrum?
Has anyone ever checked the precision and accuracy of their calibration by running a mixture of standards and comparing it with a computed result, as in the attached example?
If so, can anyone share their experience and the results they obtained? How do they compare with the attached example?
You have to integrate the area beneath each curve to get the total amount of each component. If you know the extinction coefficient or dn/dc for the materials and the amount injected, then you can calibrate the instrument for that system.
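A minimal sketch of that calculation, with invented numbers: integrate each resolved peak and convert area to mass through a response factor determined from a known injection (the factor is proportional to dn/dc for RI, or to the extinction coefficient for UV).

```python
import numpy as np

t = np.linspace(10, 24, 1400)                       # retention time, min
trace = (np.exp(-0.5 * ((t - 14) / 0.3) ** 2)       # two synthetic, resolved peaks
         + 0.6 * np.exp(-0.5 * ((t - 18) / 0.3) ** 2))

# Response factor from a known injection: m_known grams gave area A_known.
m_known, A_known = 1.0e-4, 0.75                     # assumed calibration values
k = m_known / A_known                               # grams per unit area

def peak_mass(t, y, t_lo, t_hi):
    """Mass of the component eluting between t_lo and t_hi."""
    sel = (t >= t_lo) & (t <= t_hi)
    return k * np.trapz(y[sel], t[sel])

print(f"peak 1: {peak_mass(t, trace, 13, 15):.2e} g")
print(f"peak 2: {peak_mass(t, trace, 17, 19):.2e} g")
```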
If you want to find optimized conditions, you have to use design of experiments, considering several control factors that affect your results (for example, sample concentration, injection volume, etc.) with several levels for each control factor. By monitoring the response factor, you will be able to find the best experimental conditions you are looking for.
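As a sketch under assumed factor levels, a simple full-factorial design over concentration and injection volume could be enumerated like this; the response (e.g. S/N, or the error in measured Mw) is then recorded per run and compared per level:

```python
from itertools import product

conc_mg_ml = [0.5, 1.0, 2.0]   # concentration levels (assumed)
inj_ul = [20, 50, 100]         # injection-volume levels (assumed)

# Two factors x three levels -> 9 runs.
for run, (c, v) in enumerate(product(conc_mg_ml, inj_ul), 1):
    print(f"run {run}: {c} mg/mL, {v} uL")
# Record the response for each run; comparing mean responses per level
# (a main-effects analysis) shows which factor dominates.
```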
I have looked again at your last answer. It implies that the figure you attached was the result of your own work. It seems that the experimental results and your calculations of what to expect for the narrow dispersion peaks did not agree. If this is what is happening, then the obvious answer is that the calibration equation is incorrect. If the dotted line shows the calculated results, then the equation's slope is slightly higher than it should be. Such equations may not be linear over the whole range; people usually use a polynomial fit to get the most accuracy. The linearity depends on the analytical ranges and the number of columns used. You show a very broad range of molecular weights, which implies many columns or a special mix of gel particles.
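To illustrate the polynomial-fit point, with invented calibration data: comparing the residuals of a linear and a cubic fit of log10(M) versus retention time shows whether the straight line leaves systematic curvature behind.

```python
import numpy as np

t = np.array([11.0, 13.0, 15.0, 17.0, 19.0, 21.0, 23.0])   # peak times, min (invented)
logM = np.array([6.3, 5.6, 4.8, 4.0, 3.3, 2.7, 2.3])       # log10(M) (invented, slightly curved)

for deg in (1, 3):
    coef = np.polyfit(t, logM, deg)
    resid = logM - np.polyval(coef, t)
    print(f"degree {deg}: max |residual| = {np.abs(resid).max():.3f} in log10(M)")
```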
Changes in peak half-widths with retention time can be calculated and may be important if the cited PDI is less than 1.2. This can be checked by running at two different pump speeds - e.g. 1 cc and 0.5 cc/minute. If there is no difference in half width (or PDI), you can ignore diffusion spreading. If there are differences, run at several different pump speeds. Plot the half width versus retention time to the 3/2 power and extrapolate the plot to t=0 to get the half width due to the PDI.
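A small sketch of that extrapolation, with invented half-widths: fit the half-width against t^(3/2) and read off the intercept at t = 0, which is the width contributed by the sample's own distribution.

```python
import numpy as np

t_peak = np.array([12.0, 15.0, 18.0, 21.0])   # peak retention times, min (invented)
half_w = np.array([0.42, 0.50, 0.58, 0.67])   # measured half-widths, min (invented)

# Linear fit of half-width vs. t^(3/2); the intercept is the
# diffusion-free width, i.e. the contribution of the PDI itself.
slope, intercept = np.polyfit(t_peak ** 1.5, half_w, 1)
print(f"half-width extrapolated to t = 0: {intercept:.3f} min")
```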
If the calculated results are from deconvolution of the broad curve, I do not understand what you are trying to do.
It seems to me that there is some misunderstanding between the question and the answers. Let us speak about the role of detectors in GPC/SEC. Recently, we compared results obtained with two different detectors: RID and ELSD. The same polymer, column, and eluent! The same calibration dependence and the same software. However, the RID needed about 4x higher injected concentration than the ELSD. M values calculated from the RID traces were systematically lower than those calculated from the ELSD traces. The reason is the "concentration effect" in GPC. The data will be published soon. If you are interested, I can send you some papers on the concentration effect in GPC.
Further to this topic, GPC (now called SEC as per IUPAC) is still an LC technique, and the Van Deemter equation still applies. One should always calibrate the flow rate of a GPC column set to gain the most theoretical plates.
As per the chromatographic theory of band broadening, the longer the elution time (not necessarily the volume), the greater the effect of diffusion on the band, and the shorter the elution time, the greater the resistance to mass transfer.
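For illustration, a Van Deemter sketch with assumed coefficients: the plate height H(u) = A + B/u + C*u is minimized, and the plate count maximized, at u_opt = sqrt(B/C).

```python
import numpy as np

A, B, C = 0.01, 0.30, 0.02          # assumed Van Deemter coefficients
u = np.linspace(0.5, 10.0, 200)     # linear velocity (arbitrary units)
H = A + B / u + C * u               # plate height at each velocity

u_opt = np.sqrt(B / C)              # analytic optimum of H(u)
H_min = A + 2.0 * np.sqrt(B * C)    # minimum plate height
print(f"u_opt = {u_opt:.2f}, H_min = {H_min:.3f} (grid min: {H.min():.3f})")
```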
One should also note that, due to the way flow cells are constructed, vortices always occur, which skew all peaks towards higher elution times, and this is totally independent of concentration. A uniform polymer ("monodisperse" is a term IUPAC has deprecated) with a perfect underlying Poisson distribution will always form a skewed Gaussian distribution in SEC at a slightly higher elution volume than it theoretically should, and the level of broadening depends upon the system.