###################################
Imperatively Hidden Object (IHO) Learning
###################################
Authors: Thomas Hahn, Dr. Daniel Wuttke, Dr. Richard Segall, Dr. Fusheng Tang
The key to many machine learning challenges lies in the correct answer to the main question:
How can we, without prior knowledge, discover new essential but still hidden objects (HO), which are defined by their features and are required for properly training novel adaptive supervised machine learning algorithms, in order to predict the outcome of complex phenomena, such as aging, medical recovery, tornadoes, hurricanes, floods, droughts, population growth, infections, diseases, wars, profits and stock values, all of which depend in part on still hidden factors/objects?
===============================================
Introduction to feature discovery for training supervised machine learning algorithms in artificial intelligence (AI) applications.
===============================================
==============================================
Feature discovery and selection for training supervised machine learning algorithms: an analogy to building a two-story house
===============================================
Imagine a building named “Aging”. It consists of two stories: the ground floor, which is called “feature selection”, and the second floor, which is called “developing, optimizing and training the machine learning algorithm”.
Before any machine learning algorithm can be trained properly, feature selection must be completed. Otherwise, the machine learning algorithm may learn irrelevant patterns because of the ambiguity that missing features introduce. Little time and effort should be invested in optimizing, training and improving the algorithm until all relevant features have been selected. As long as feature selection is incomplete, one must focus on finding the missing features instead of tuning the algorithm.
In other words, using our building analogy, here is the most important advice: do not try to complete and perfect the second floor, "training, tuning and optimizing the machine learning algorithm", before you are certain that the ground floor, "feature selection", has been fully and properly completed. If it has not, one must first focus on discovering the missing features of the training samples.
Much research has been dedicated to perfecting algorithms before completing feature selection. As a result, our algorithms have gradually improved, whereas our feature selection has not.
It is like the waterfall model: the previous step (feature selection) must be fully completed before the subsequent step (developing and training the supervised machine learning algorithm to make correct predictions) can begin.
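The ambiguity argument can be made concrete with a small sketch. This is our own illustrative toy example, not part of the authors' method: suppose the label is the XOR of two binary features, and one of the two features is still hidden. Then no model, however carefully tuned, can do better than chance; once the hidden feature is discovered, even a trivial rule becomes exact.

```python
import itertools

# Toy dataset (hypothetical illustration): the label is the XOR of two
# binary features. Every combination of the two features appears once.
data = [((x1, x2), x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]

def best_accuracy_with_features(dataset, visible):
    """Upper bound on accuracy for ANY model that sees only the feature
    indices in `visible`: predict the majority label within each group of
    samples that look identical on the visible features."""
    groups = {}
    for features, label in dataset:
        key = tuple(features[i] for i in visible)
        groups.setdefault(key, []).append(label)
    correct = sum(max(labels.count(0), labels.count(1))
                  for labels in groups.values())
    return correct / len(dataset)

acc_hidden = best_accuracy_with_features(data, visible=[0])   # x2 still hidden
acc_full = best_accuracy_with_features(data, visible=[0, 1])  # x2 discovered
print(acc_hidden, acc_full)  # 0.5 1.0
```

With only the first feature visible, the best achievable accuracy is 0.5, the same as guessing, and no amount of algorithm tuning can change that. With both features visible it rises to 1.0, which is why discovering the missing feature must come before optimizing the model.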