Dear all,
I have written a paper about predicting the separation efficiency of hydrocyclone separators by means of machine learning algorithms. To this end, I have collected 4000 data points, each comprising 14 inputs (features of the hydrocyclone separator) and one target (separation efficiency).
The journal has asked me to include an analysis of the computational complexity of the applied algorithms (ANFIS, MLP, LSSVM, RBF), in terms of either run-time or big-O notation. As I understand it, the run-time should increase with the size of the data (or the number of inputs). Strangely, however, I have found that as the data size increases, the run-time of the respective models decreases dramatically. I am therefore puzzled about how to report this, because as far as I know a big-O plot cannot have a negative slope (i.e., run-time decreasing as the data size increases).
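For reference, this is a minimal sketch of how I measure training run-time as a function of training-set size. It uses scikit-learn's MLPRegressor as a stand-in for the MLP (the ANFIS, LSSVM, and RBF models have their own implementations, not shown here), and random placeholder data with the same shape as my real set; the subset sizes and network size are illustrative assumptions, not my actual settings:

```python
# Minimal sketch: timing model training as a function of training-set size.
# MLPRegressor here is a stand-in for the MLP model; the placeholder data
# only mimics the shape of the real set (4000 samples, 14 features).
import time

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 14))   # 14 hydrocyclone features (placeholder)
y = rng.normal(size=4000)         # separation efficiency (placeholder)

for n in (500, 1000, 2000, 4000):
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
    # Repeat each fit a few times and keep the fastest run to reduce timing noise.
    times = []
    for _ in range(3):
        start = time.perf_counter()
        model.fit(X[:n], y[:n])
        times.append(time.perf_counter() - start)
    print(f"n = {n:5d}: training time = {min(times):.3f} s")
```

One design choice in such a comparison is whether to time training to convergence or for a fixed number of iterations; fixing the iteration budget (or reporting time per iteration) makes the run-time versus data-size curve easier to compare across models.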
From a process-engineering point of view, this could be justified by the fact that the algorithms recognize the target more easily when more hydrocyclone characteristics (inputs) are fed to them. From a machine-learning point of view, however, it remains a paradox to me.
I would appreciate any help with this matter.
Thanks