Hello Everyone,

The scaling factors used for Collection 2 Level-2 Landsat data include a multiplicative term (0.0000275) and an additive term (-0.2). When these factors are applied to convert the quantized digital numbers to unscaled surface reflectance for each band, and those data are then used in various processing procedures (such as the ratio used to compute the NDVI), two problems appear: noise in pixels containing water or clouds, and a smoothing of the data that makes the spatial resolution appear lower.
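For readers who want the conversion in code, here is a minimal sketch of applying the documented factors; the file name and the use of rasterio are my own assumptions, not part of any official workflow:

```python
import numpy as np
import rasterio  # assumed raster reader; any GeoTIFF library works

# USGS Collection 2 Level-2 surface reflectance rescaling factors
SCALE = 0.0000275   # multiplicative term
OFFSET = -0.2       # additive term

# Hypothetical Level-2 red-band file; substitute your own scene
with rasterio.open("LC08_example_SR_B4.TIF") as src:
    dn = src.read(1).astype(np.float64)

# Convert quantized digital numbers to unscaled surface reflectance
sr = dn * SCALE + OFFSET

# 0 is the fill value; without masking it would map to -0.2
sr = np.where(dn == 0, np.nan, sr)
```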

I don't understand why this problem has been around so long without being fixed. I am not sure what the benefit is of spreading the range of digital numbers far beyond the original range to create an artificially enhanced dynamic range. The 'empty spaces' between one valid number and the next are then filled in by a smoothing-type procedure (perhaps the cubic-convolution or bilinear resampling), so that when zoomed in the image looks smooth and appears to have lost resolution compared with the original (download Level-1 and Level-2 data of the same scene and compare them; one way to do so is sketched below). Combined with the 'noise' introduced by the scaling-factor corrections in pixels with water or clouds, this is one reason many users are downloading Level-1 data and doing their own corrections instead.
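One way to check the dynamic-range claim yourself is to compare the value range and the number of distinct values in a Level-1 band against the matching Level-2 band. A rough sketch, again assuming rasterio and using placeholder file names:

```python
import numpy as np
import rasterio  # assumed raster reader

def describe(path):
    """Print min, max, and distinct-value count for band 1 of a file."""
    with rasterio.open(path) as src:
        data = src.read(1)
    valid = data[data != 0]  # 0 is the usual fill value
    print(path, "min:", valid.min(), "max:", valid.max(),
          "distinct values:", np.unique(valid).size)

describe("level1_B4.TIF")     # placeholder Level-1 file name
describe("level2_SR_B4.TIF")  # placeholder Level-2 file name
```

If the scaling is indeed stretching the data, the Level-2 band should show a much wider integer range for the same scene.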

Generating the NDVI from the scaled data as downloaded (i.e., without first applying the scale factors) DOES NOT give valid NDVI values. The results still fall in the range -1 to +1, but they are not correct; they will be quite a bit smaller than they should be. If only a multiplicative scaling were used (e.g., 0.0000275) and not an additive term as well (e.g., -0.2), the scaling would not matter because it would cancel in the ratio; that cancellation does not happen when an additive term is included.
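To make the cancellation argument concrete, here is a small numeric check; the digital numbers are invented purely for illustration:

```python
SCALE, OFFSET = 0.0000275, -0.2

dn_red, dn_nir = 9000, 25000  # illustrative DNs, not real pixels

def ndvi(red, nir):
    return (nir - red) / (nir + red)

# NDVI straight from the quantized DNs
print(ndvi(dn_red, dn_nir))                  # ~0.4706

# Multiplicative scale alone: the factor cancels, same result
print(ndvi(dn_red * SCALE, dn_nir * SCALE))  # ~0.4706

# Scale plus offset: the correct, noticeably larger value
print(ndvi(dn_red * SCALE + OFFSET,
           dn_nir * SCALE + OFFSET))         # ~0.8224
```

The additive term shrinks the denominator (it subtracts 0.4 in total from the band sum), so the properly converted NDVI is larger, which is exactly why the DN-based values come out too small.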

I think it is time to re-think the scaling of the data and use a method that does not create problems that were not there to begin with!

Pat Chavez

Northern Arizona University (retired USGS)
