12 December 2018

Is it possible to predict pixel-wise values for a hyperspectral image?

My problem statement is to reconstruct the original hyperspectral image from its ground-truth values.

I have several 24x24x91 hyperspectral images; each is a mixture of at most 4 out of a total of 11 spices.

So basically my ground-truth labels form a 24x24x11 image, where the 11-element vector follows the fixed order of the 11 spices and each value is the weight of that spice, for example [0, 0.25, 0, 0.25, 0, 0, 0, 0, 0.5, 0, 0].

So for each 24x24x91 image I have a 24x24x11 ground-truth cube, and the label vector has the same value at every position in the 24x24 spatial grid.
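To make the label layout concrete, here is a minimal sketch (my own illustration, not code from the question) of building one such ground-truth cube by repeating a single abundance vector over the spatial grid:

```python
import numpy as np

# One 11-element abundance vector (weights of the 11 spices, summing to 1),
# taken from the example in the question.
weights = np.array([0, 0.25, 0, 0.25, 0, 0, 0, 0, 0.5, 0, 0])

# The same vector at every pixel of the 24x24 grid -> a 24x24x11 label cube.
label = np.broadcast_to(weights, (24, 24, 11)).copy()

print(label.shape)   # (24, 24, 11)
print(label[0, 0])   # the abundance vector at one pixel
```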

My question is: with the given dataset and labels, can I build a model that takes the labels as input and produces the corresponding 24x24x91 image as output?

Something like a transpose-convolution network, maybe?
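As a rough sketch of one possible architecture (hypothetical, assuming PyTorch and channels-first tensors): since the label vector here is constant across the 24x24 grid and the input already has the target spatial size, a stack of 1x1 convolutions, i.e. a per-pixel linear map from 11 abundances to 91 bands, may be enough; transpose convolutions are mainly needed when the output has to be spatially upsampled.

```python
import torch
import torch.nn as nn

# Hypothetical minimal model: maps an 11-channel abundance map to a
# 91-band hyperspectral cube. 1x1 convolutions act as a per-pixel MLP,
# so no spatial upsampling (and hence no transpose convolution) is needed.
model = nn.Sequential(
    nn.Conv2d(11, 64, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(64, 91, kernel_size=1),  # linear output layer, as in the question
)

x = torch.rand(8, 11, 24, 24)  # a batch of 8 abundance maps
y = model(x)                   # predicted 91-band spectra per pixel
print(y.shape)                 # torch.Size([8, 91, 24, 24])
```

This is only a sketch under the stated assumptions; a transpose-convolution decoder would be the analogous choice if the labels were ever given at a coarser spatial resolution than the image.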

I have tried this but failed: the model seems to learn some constant values that minimize the loss without really learning anything useful. Even for noise inputs it predicts similarly low values.

I have used MSE and MAE as loss functions and a linear activation for the output layer.

Any suggestions would be appreciated.

Best regards
