02 February 2018

I guess we may never stop this cycle of feature discovery from starting over and over again, because as soon as we think we have got it to work, we hope to succeed in creating an exception that causes our newly trained AI to fail, since this allows us to discover yet another relevant feature.

We started out with the feature "lifespan intervention" (e.g., CR vs. YEPD) and discovered the no-longer-hidden features "genotype" and "food media type". The next Kaeberlein yeast lifespan dataset had features like temperature, salinity, mating type, and yeast strain, which also affect lifespan. Now, for a single loss-of-function mutant, we could have more than 10 different reported lifespans. As I understand it, this would make the concept of a purely aging-suppressing gene, or geronto-gene, obsolete.

This, in turn, would raise the number of components that must be considered together as an indivisible atomic unit, no part of which may be considered in isolation, to seven components that must accompany every supervised training sample for our AI. If this trend continues, the number of components forming a single data-point-like entry grows by one for every new feature discovered or added. But would this not make our data points too clumsy? Even if it does, for every new feature we decide to consider, our indivisible data unit must grow by one component. This means that 10 essential features would create 10-dimensional data points. Driven to the extreme, considering 100 new features would give us 100-dimensional data points. But this would connect almost everything we can measure into a single point. It would do away with independent features, because all their dimensions would get linked together. Is there something wrong with my thinking process here? I have never heard anybody complain about this kind of problem.
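The growth described above can be sketched in a few lines of Python. This is only an illustration, not an actual pipeline: the feature names are taken from the passage, and the example values (strain names, temperatures) are hypothetical placeholders. Each supervised training sample is treated as one atomic bundle of context features plus the measured lifespan, so every newly discovered feature adds one dimension to every sample.

```python
# Illustrative sketch: a yeast-lifespan data point as an indivisible unit.
# Feature names come from the text; all concrete values are made up.

FEATURES = ["intervention", "genotype", "media", "temperature",
            "salinity", "mating_type", "strain"]  # seven components so far

def make_sample(values, lifespan):
    """Bundle all required feature values and the target lifespan together.

    Refuses to build a sample unless every currently known feature is
    supplied, mirroring the idea that no component may be considered
    in isolation.
    """
    if len(values) != len(FEATURES):
        raise ValueError("every feature must be supplied for each sample")
    return dict(zip(FEATURES, values)), lifespan

# One hypothetical observation: a calorie-restricted sir2 deletion mutant.
x, y = make_sample(["CR", "sir2_del", "YEPD", 30, 0.0, "MATa", "BY4742"], 28.5)
print(len(x))  # dimensionality equals the number of discovered features
```

Discovering an eighth relevant feature would mean appending it to `FEATURES`, and every existing sample would then be incomplete until that value is measured, which is exactly the clumsiness worried about above.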

From this chapter we can conclude that the best AIs are those that fail in a way that allows us to discover a new feature.

