We tried to implement an Extreme Learning Machine (artificial neural network) algorithm in C# within the .NET Framework. To avoid any confusion: I am referring to a particular neural network architecture, similar to a multilayer perceptron, in which the connection weights from the input to the hidden layer are randomly assigned at initialisation and only the weights to the output layer are trained.
We ran into a problem: when processing a large dataset with many input vectors (i.e. training examples — in our case, a very long electricity load time series), the implementation throws an exception stating that one of the arrays has exceeded the maximum number of elements a .NET array can hold. This happens during the Singular Value Decomposition used to compute the Moore–Penrose pseudoinverse.
Is there another computationally feasible way to calculate the pseudoinverse without using SVD (and without producing matrices whose size grows with the square of the input sample count)? Or is there some way to split the H-matrix of the ELM algorithm, then calculate the pseudoinverse for these smaller parts and reassemble the results before computing the output weights? How do you tackle this for long time series?
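To illustrate the kind of splitting I have in mind (sketched here in Python/NumPy rather than C# for brevity): instead of forming the full H-matrix and its SVD, one could accumulate the L×L Gram matrix HᵀH and the L×m product HᵀT over chunks of training rows, then solve the regularised normal equations β = (HᵀH + λI)⁻¹HᵀT. All names below (`n_hidden`, `chunk_rows`, `ridge`) and the tanh activation are illustrative assumptions, not taken from our actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_inputs, n_hidden = 10_000, 3, 50  # illustrative sizes

# Synthetic stand-in for the training data (inputs X, targets T)
X = rng.standard_normal((n_samples, n_inputs))
T = (X @ rng.standard_normal((n_inputs, 1))).reshape(-1, 1)

# Random input-to-hidden weights and biases, fixed after initialisation (ELM)
W = rng.standard_normal((n_inputs, n_hidden))
b = rng.standard_normal(n_hidden)

# Accumulate H^T H (L x L) and H^T T (L x 1) chunk by chunk, so that no
# array with n_samples x n_samples (or even n_samples x L) has to be
# held for the solve itself.
HtH = np.zeros((n_hidden, n_hidden))
HtT = np.zeros((n_hidden, 1))
chunk_rows = 1_000
for start in range(0, n_samples, chunk_rows):
    Xc = X[start:start + chunk_rows]
    Hc = np.tanh(Xc @ W + b)              # hidden-layer activations, this chunk only
    HtH += Hc.T @ Hc                      # accumulate L x L Gram matrix
    HtT += Hc.T @ T[start:start + chunk_rows]

# Small ridge term keeps HtH invertible (and regularises the solution)
ridge = 1e-8
beta = np.linalg.solve(HtH + ridge * np.eye(n_hidden), HtT)
print(beta.shape)
```

The memory footprint of the solve is then determined by the number of hidden neurons L, not the number of training samples, since each chunk's activation matrix can be discarded after it is folded into the accumulators. Whether this is numerically acceptable compared to the SVD route for our data is exactly what I am unsure about.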
Your help would be much appreciated!