Is it necessary to normalize the reflectance spectra with respect to one another when spectra of the same object under the same lighting conditions are obtained using two different sensors?
Thanks a lot for your advice. Please find attached the normalized spectra prepared according to your suggestion. Are they now comparable? A plot of the normalized grass spectra together with the actual grass spectra, and a plot of the normalized spectra of all the objects, are attached for your reference.
Let us focus on the normalized grass spectra: they look quite similar in their general behaviour, but they really do not match in absolute reflectance (with respect to their white references), and the dashed line still exhibits a fine structure. I had hoped that this fine structure would be cancelled by the normalization. It did not work.
As a consequence, we see that the fine structure is not part of any multiplicative term contributing to the sensor signal (otherwise it would be cancelled by the normalization). I therefore think additive contributions such as stray light might be the reason in the case of your drone experiment (dashed lines).
Atmospheric modelling of air attenuation will not help, because attenuation contributes multiplicatively to the detector signal.
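The multiplicative-versus-additive argument can be checked with a small numeric sketch (the signal model and all numbers here are assumptions for illustration): any factor common to target and white reference cancels in the ratio, while an additive stray-light term survives it and distorts the ratio differently at each wavelength.

```python
import numpy as np

# Assumed signal model: detector = source * reflectance * gain + stray
source = np.array([1.0, 0.8, 0.6])    # hypothetical source spectrum (3 bands)
gain = 0.9                            # multiplicative sensor factor
target_r = np.array([0.5, 0.5, 0.5])  # flat target reflectance
white_r = np.array([0.99, 0.99, 0.99])

def detector(refl, stray=0.0):
    return source * refl * gain + stray

# Without stray light, normalization recovers target_r / white_r exactly:
clean = detector(target_r) / detector(white_r)
# With an additive stray-light term, the ratio is distorted band by band:
strayed = detector(target_r, stray=0.05) / detector(white_r, stray=0.05)

print(np.round(clean, 3))    # [0.505 0.505 0.505]  multiplicative terms cancelled
print(np.round(strayed, 3))  # [0.531 0.537 0.547]  additive term survives
```

A flat target comes out flat only in the clean case; the stray-light case acquires a wavelength-dependent structure even though the true reflectance is constant, which is exactly the signature described above.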
You should look for sources of light scattering, e.g. panels on the drone that scatter light onto your detector, or a non-perfect imaging system.
With respect to the insufficient agreement in absolute reflectance, you should test the linearity of your sensors.
There may also be a bias in the detector signal.
What about the pixel size of your image detector with respect to the image of your object and white reference (does the pixel fall entirely within the object/white reference)?
Is the white reference taken at the same pixel or from a different one in the same scan?
What about cross-talk between the pixels of your image detector?
What about acoustic/vibration/EMI influence of your drone on the detector signal? You should test your remote imaging system without running the drone (e.g. by 'looking' from a roof, a stage or a tree) and see how the reflectance spectra match. For test purposes, please use homogeneously coloured, sufficiently large area samples in order to avoid any pixel-size matching issues.
Thank you for such a detailed explanation. For info, the pixel pitch of the Micro-Hyperspec is 7.4 microns, and I am sure the pixel was chosen from the centre of a largely homogeneous surface. For example, a large black sheet of 1.5 m x 3 m was used, and I collected the spectra almost at the centre of the sheet, so there was little chance of mixing with nearby pixels, since other objects such as grass and rocks were 25 to 30 pixels away on all sides. This made me assume it is a pure pixel for the black sheet. With respect to grass, however, the pixel might be mixed with sand, as the playground grass was sparse and thin. The white reference was 1 ft x 1 ft, placed over a blue sheet of 3 m x 10 m. For all objects, the pixel was chosen from the centre of a relatively large, pure object area. The sensor is a Micro-Hyperspec VNIR A-Series.
How do we find the bias and gain of a sensor from these spectra? One thing to note is that the ground spectra were collected with the ASD spectroradiometer on 16th May 2018, in the evening around 3:30 pm. The image was taken by the drone-mounted Headwall Micro-Hyperspec on 18th May 2018, at almost the same time of day, around 3:30 pm. Also, the image spectra were not collected from a georectified image; rather, they were taken from the ungeorectified reflectance image given by the sensor directly, i.e. from an image in which roll, pitch and yaw were not taken care of.
I think bias and gain cannot be checked from your reflectance spectra. This has to be done in a special optical set-up.
But according to the data sheet (provided above), your drone camera seems to be a high-end product, which should be fine with respect to bias (being zero), gain and linearity.
Robustness to harsh environments is also mentioned there.
Nevertheless, you should check whether there are differences in your detector signal between the drone camera being stationary (drone off) and the drone running (vibrating).
I fear that the above-mentioned fine structure, which could not be cancelled via normalization to the white reference, is due to the 'running' drone (vibrations and air turbulence).
Different spectral distributions of the light source (sun) on different measurement days will not be the reason for these fine structures: the spectral distribution of the primary light source is cancelled by the normalization process.
You may fix the drone appropriately with its optical axis horizontal.
Then you can perform your tests with the drone 'off' and 'running', respectively.
Thank you very much for the info. Maybe I should have extracted these sample spectra from the georectified image (taking roll, pitch and yaw into account) instead of the raw reflectance image. I will try and let you know by Saturday.
You have an interesting topic and issue at hand. In addition to what Gerhard has already said, I would like to add my thoughts on it.
- How were the reflectance values (in first plot) obtained?
The plot shows values between 0 and 1 on the y-axis. Thus, we are talking about values that have already been normalized (in one way or another).
The ASD spectroradiometer gives you a relative reflectance: the ratio between the light reflected from the measured target and the light reflected from a reference target (the white reference, which is assumed to have a reflectance of 1). I guess you used a Spectralon panel as white reference, then measured the white reference again (which gives you a nice line with reflectance = 1) and finally the other targets. But how did you do it with the hyperspectral sensor? Were the DN (digital number) values from the image divided by the maximal value allowed by the encoding bits (12 bits => 4096), or by the mean value of pixels extracted from the white reference? Was there a DLS (downwelling light sensor) aboard the drone? Were the values output directly by the manufacturer's software?
- The wavy pattern (that Gerhard referred to as fine structure) at high wavelengths, more precisely in the NIR region, is quite intriguing. If it is inherent to the detectors' spectral response, it should have been corrected when "normalizing" the values (again, we need to know how the normalization was done) or not (just before 1000 nm, the effect is quite sharp: maybe the signal-to-noise ratio is too low for the detectors to read the measurements correctly). It could be linked to different lighting conditions between taking the white reference and measuring the other targets (this happens especially indoors, with artificial flickering lights, or outdoors if there is a long time interval between the white and target measurements). It might also be an object close to the sensor or to the targets that pollutes the signal (adjacency effect).
- The Micro-Hyperspec looks like a push-broom sensor: the orthomosaicking process is quite delicate, especially at low flight altitudes and high spatial resolutions, and might distort pixel values. If you have access to the raw data (line by line), it could be a good idea to work with them.
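For concreteness, the two normalization options asked about in the first point could be sketched like this (all array shapes and DN values are hypothetical; adapt to your actual cube):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12-bit DN data: a small image cube and a white-reference patch
dn_target = rng.integers(500, 3000, size=(10, 10, 5)).astype(float)  # rows, cols, bands
dn_white = rng.integers(3200, 3900, size=(20, 5)).astype(float)      # pixels, bands

# Option A: divide by the maximum DN allowed by the encoding (2**12 - 1;
# check the data sheet for the exact full-scale value)
refl_a = dn_target / 4095.0

# Option B: divide by the mean white-reference spectrum, band by band
white_mean = dn_white.mean(axis=0)   # one value per band
refl_b = dn_target / white_mean      # broadcasts over all pixels
```

The two options answer different questions: option A only rescales DN, while option B ties the result to the scene illumination via the panel, which is what makes the spectra comparable to the ASD's relative reflectance.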
When working with reflectance, some aspects should also be considered (they apply to both sensors, ASD and Hyperspec):
- Field of view: make sure that the ASD "sees" only the central part of the target; with the hyperspectral camera, the target should cover a large portion of the image; then average values from the central pixels, not just one pixel, to remove random noise inherent to the sensor (just as the ASD integrates several times over the whole spectrum); if you work with one pixel, take several images of the same target and average the values.
- Take images of the white reference and targets under the same conditions: close in time (no changing lighting conditions) and in position (nadir, or you'll have to correct for different viewing angles).
- Black reference: sensors don't give DN = 0 when measuring a perfectly black target; there is always a background noise that should be subtracted before normalizing with the white reference. Notice how the ASD takes a measurement of the dark current before the white reference.
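The dark-current correction in the last point amounts to subtracting the black reference from both target and white-reference DN before taking the ratio; a minimal sketch with made-up per-band DN spectra:

```python
import numpy as np

# Hypothetical per-band DN spectra (3 bands)
dn_target = np.array([1800.0, 2100.0, 1500.0])
dn_white = np.array([3600.0, 3700.0, 3500.0])
dn_dark = np.array([120.0, 118.0, 125.0])   # background signal, not zero

# Dark-corrected relative reflectance
reflectance = (dn_target - dn_dark) / (dn_white - dn_dark)
print(np.round(reflectance, 3))  # [0.483 0.553 0.407]
```

Skipping the subtraction biases the ratio most strongly where the signal is weak, which is one more candidate explanation for structure that refuses to normalize away.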
To me the most stringent condition in the series of measurements is your statement:
'white reference has to be taken in the same conditions';
i.e. in my opinion, the white reference data have to be taken from the same image. Otherwise one cannot be sure that the primary spectrum is exactly the same for the reference and for the samples.
The white reference is the link to the primary spectrum. But it is not necessary for its reflectivity to be very homogeneous in wavelength and very close to 1. A flat wavelength dependence without any sharp peaks is enough, together with sufficiently high reflectivity (a noise issue).
After normalization to the white reference, the normalized reflection spectra have to be multiplied by the wavelength-dependent absolute reflectivity of the white reference, as measured in an optical lab (calibration curve). In this way one obtains the absolute reflectivity of the sample(s).
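That last step is just a per-wavelength multiplication by the panel's lab calibration curve (all values below are invented for illustration):

```python
import numpy as np

# Normalized spectrum (target DN / white-reference DN) at a few bands
normalized = np.array([0.52, 0.48, 0.45])

# Hypothetical lab-measured absolute reflectivity of the panel
# (Spectralon-like: nearly flat, slightly below 1)
panel_calibration = np.array([0.985, 0.982, 0.979])

# Absolute reflectivity of the sample
absolute_reflectivity = normalized * panel_calibration
```

Because the calibration curve is flat and close to 1, this correction is small, but it is what turns a relative spectrum into an absolute one.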
It looks to me like some of your spectra show atmospheric water vapour absorption (and "apparent" emission) near 940 nm. Even in clear-sky conditions, water vapour density can be quite patchy, so any time (and/or spatial) difference between your target and reference can produce these effects.
The data have been further refined in the way we discussed. Using the Headwall software, I chose the white reference from the raw image itself and generated a reflectance image. Then I noticed, from my previous query, that I had applied a cubic spline to smooth the drone spectra, as they originally contained high-frequency spikes. However, I do not want to rely on such an arbitrary smoothing method, as I am afraid of losing important absorption features. So a new question pops up: how do I denoise this noisy drone signal with reference to the ASD spectra taken on the ground? This time the dashed lines are the ASD spectra and the continuous lines the drone spectra. How should I approach this problem further?
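For reference, the cubic-spline smoothing described above might look like the sketch below (the spectrum is synthetic); the smoothing factor `s` is exactly the arbitrary knob being worried about: the larger it is, the more real absorption features are flattened along with the spikes.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic noisy drone spectrum: a smooth curve plus high-frequency spikes
wavelengths = np.linspace(400, 1000, 301)
true_curve = 0.3 + 0.2 * np.sin(wavelengths / 120.0)
rng = np.random.default_rng(0)
noisy = true_curve + rng.normal(0.0, 0.02, wavelengths.size)

# Cubic smoothing spline; s ~ n * sigma**2 is a common starting point,
# but the choice of s directly trades noise removal against feature loss
spline = UnivariateSpline(wavelengths, noisy, k=3, s=len(noisy) * 0.02**2)
smoothed = spline(wavelengths)
```

Image-domain methods such as MNF (discussed below in the thread) avoid this per-spectrum knob by exploiting spatial statistics of the whole cube instead.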
Follow a strict protocol when collecting data with the ASD spectroradiometer (see the manual). Here are some details that come to my mind:
- warm up the instrument long enough prior to use
- spectrum averaging for each scan: at least 50 averages for the spectrum, white reference and dark current when outdoors
- weather conditions: fine and stable weather, not wet; sun high in the sky
- update dark-current frequently to avoid "dark-current-drift"
For each target, I would repeat the following steps: optimisation, dark current, white reference calibration, then one measurement of the white reference followed by three measurements of the same target (and log everything on a sheet of paper: when and what).
After fieldwork, check that each white reference measurement gives a nice horizontal line close to 1, with no significant noise. Then check the three measurements of each target and choose the one with minimal noise.
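Those post-fieldwork checks are easy to automate along the following lines (arrays, tolerance and the noise metric are all illustrative choices, not part of any ASD software):

```python
import numpy as np

# Hypothetical stacks: 3 white-reference scans and 3 scans of one target
white_scans = np.array([
    [0.99, 1.00, 1.00, 0.99],
    [1.00, 0.99, 1.00, 1.00],
    [0.98, 1.00, 0.99, 1.00],
])
target_scans = np.array([
    [0.50, 0.52, 0.51, 0.50],
    [0.50, 0.58, 0.44, 0.53],   # a visibly noisier scan
    [0.51, 0.51, 0.50, 0.51],
])

# White reference should sit close to 1 with little scatter
white_ok = np.all(np.abs(white_scans - 1.0) < 0.05)

# Pick the target scan with the least high-frequency (band-to-band) noise
noise = np.std(np.diff(target_scans, axis=1), axis=1)
best = target_scans[np.argmin(noise)]
```

Here the third scan wins because its band-to-band differences are smallest; the same metric flags the second scan as the one to discard.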
Thanks Kosal. I do not have any problem with the ASD here (dashed lines in the plot), as those spectra are smooth and I assume they are noise-free. As far as the protocols are concerned, we followed exactly such a protocol, and we have shown the best ASD spectra here. The issue is in the drone data: I need to remove noise from it before picking the spectra. MNF comes to my mind as an option. Please suggest whether MNF should be done on the reflectance image or on the radiance image? As per my understanding, it works better on the radiance image. Any other suggestions for removing noise from the drone data?
It is just strange that the white reference spectrum is not that smooth. What is the radiometric calibration process with the Headwall software?
When you talk about the raw image, is it the orthorectified image with DN values (no reflectance), or is it one scanned line (with roll, pitch and yaw)?
The raw image is the line-scanned radiance values with roll, pitch and yaw. The reflectance image still contains the geometric distortions. However, since I am working on a spectral upscaling model, I am not looking to do orthorectification as of now. Is it mandatory? Meanwhile, the drone spectra contain noise because I did not apply any noise correction to the data. How should I go about it?
As you suggested, I would be inclined to apply the inverse MNF on the radiance data prior to georectification and evaluate the results. If that is successful, you won't over-smooth the spectra.
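For reference, a bare-bones MNF denoising loop (noise-whitened PCA, with the noise covariance estimated by shift differencing of adjacent pixels) might look like the sketch below. This is the general technique only, not a reproduction of any particular software's implementation; the shift-difference noise estimate and the choice of `k` retained components are the usual knobs.

```python
import numpy as np

def mnf_denoise(cube, k=4):
    """Keep the first k MNF components and transform back to the data space.

    cube: (rows, cols, bands) radiance array; k: components to retain.
    Noise is estimated from differences between horizontally adjacent
    pixels (a common shift-difference estimate for push-broom data).
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mean = X.mean(axis=0)
    Xc = X - mean

    # Shift-difference noise estimate and its covariance
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2)
    Cn = np.cov(noise, rowvar=False)

    # Noise-whitening transform from the eigendecomposition of Cn
    evals, evecs = np.linalg.eigh(Cn)
    W = evecs / np.sqrt(np.maximum(evals, 1e-12))  # scale each eigenvector
    Xw = Xc @ W

    # PCA in the noise-whitened space is the MNF transform
    Cw = np.cov(Xw, rowvar=False)
    _, V = np.linalg.eigh(Cw)
    V = V[:, ::-1]                 # eigh is ascending: flip to decreasing SNR
    scores = Xw @ V[:, :k]

    # Inverse MNF: back through the component basis and the whitening
    Xw_hat = scores @ V[:, :k].T
    X_hat = Xw_hat @ np.linalg.inv(W) + mean
    return X_hat.reshape(rows, cols, bands)
```

Usage would be something like `denoised = mnf_denoise(radiance_cube, k=4)`; retaining all bands (`k = bands`) reproduces the input, which is a useful sanity check before tuning `k` down.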
Here are the noise-removed spectral data: comparisons between pairs of spectra corresponding to the ASD and the drone. File names with 'cr' in them refer to continuum-removed spectra. I carried out MNF and found that inverting the first four MNF bands removed the most noise from the image. Still, there is noise in the image. Can anyone suggest how to proceed further? The smooth lines correspond to the ASD spectra and the spiky lines to the drone spectra.