I'm attempting to design a neural network to approximate spectral energy distribution (SED) fitting. I have a set of 20,000 runs through the SED-fitting program MAGPHYS.
Each run contains a set of input values and the corresponding 32 output parameters fitted by MAGPHYS.
I've built a neural network in Keras to try to learn this mapping. Currently I'm using 4 hidden layers with 40 nodes each, in addition to one input and one output layer. The input and hidden layers all use tanh activation functions, and the output layer uses a linear activation function.
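Roughly, the model looks like this (a minimal sketch of the setup described above; `n_inputs`, the Adam optimizer, and the MSE loss are stand-ins for what my actual script uses):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

n_inputs = 20   # placeholder: the actual number of input features per run
n_outputs = 32  # one per MAGPHYS output parameter

model = Sequential([
    Dense(40, activation='tanh', input_shape=(n_inputs,)),  # "input" layer (tanh)
    Dense(40, activation='tanh'),                           # hidden layer 1
    Dense(40, activation='tanh'),                           # hidden layer 2
    Dense(40, activation='tanh'),                           # hidden layer 3
    Dense(40, activation='tanh'),                           # hidden layer 4
    Dense(n_outputs, activation='linear'),                  # linear output layer
])
model.compile(optimizer='adam', loss='mse')  # assumption: standard MSE regression loss
model.summary()
```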
I'm normalising both my input and output data to the range [0, 1] using min-max normalisation.
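Concretely, the scaling is equivalent to something like this (a sketch using placeholder random data and scikit-learn's `MinMaxScaler`; my real code applies the same per-column min-max rescaling):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# placeholder data with the same shapes as my real set:
# 20,000 runs, n_inputs features in, 32 MAGPHYS parameters out
rng = np.random.default_rng(0)
X = rng.uniform(size=(20000, 20))
y = rng.uniform(size=(20000, 32))

x_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaler = MinMaxScaler(feature_range=(0, 1))
X_scaled = x_scaler.fit_transform(X)  # each input column rescaled to [0, 1]
y_scaled = y_scaler.fit_transform(y)  # each output column rescaled to [0, 1]

# predictions in [0, 1] are mapped back to physical values with
# y_scaler.inverse_transform(predictions)
```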
I've tried a lot of different combinations of network hyperparameters (layer counts, layer sizes, activations, learning settings, and so on).
Regardless of these settings, the network always outputs values very close to the mean of each of the 32 outputs. Sometimes it outputs exactly the same number for every test sample; other times the values vary slightly, but they still stay very close to the mean.
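As a sanity check, this is roughly how I compare the spread of the predictions to the spread of the targets (continuing from the sketches above; the `fit` settings are placeholders, not my exact training configuration):

```python
# continuing from the sketches above
model.fit(X_scaled, y_scaled, epochs=100, batch_size=32,
          validation_split=0.2, verbose=0)

pred = model.predict(X_scaled)

# per-output spread of true values vs. predictions
print("target std per output:    ", y_scaled.std(axis=0))
print("prediction std per output:", pred.std(axis=0))

# per-output mean of true values vs. predictions
print("target mean per output:    ", y_scaled.mean(axis=0))
print("prediction mean per output:", pred.mean(axis=0))
```

On my real data the prediction standard deviation comes out near zero for every output, while the prediction means sit almost exactly on the target means.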
What would cause my neural network to always output values like this?
Am I doing something wrong with my network design, or is there something else that I'm missing?
Is there anything I can try to get my network to actually learn this function?
For reference, the MAGPHYS readme: http://www.iap.fr/magphys/magphys/MAGPHYS_files/readme.pdf