
This is a vague question; I will do my best to pin it down, and I think it has definite answers. I am hoping for answers of the form "read book X", "learn this specific topic", or "read these papers".

What is bothering me is that every AI system I have read about seems to have an internal model that is set up to operate in a specific way. The task of the learning algorithm is then to optimise the parameters of that model to solve the problem.
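To make that concrete, here is a minimal sketch of the pattern I mean: the model's structure (a fixed linear form y = w*x + b, chosen by hand) is decided up front, and the "learning" only ever adjusts its parameters, never the structure itself. The data and model here are purely illustrative.

```python
# Minimal sketch: hand-chosen model structure, learning = parameter tuning only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # synthetic data

w, b = 0.0, 0.0        # parameters of the hand-chosen linear model
lr = 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Gradient descent searches parameter space only, never model space.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(w, b)            # converges towards 3.0 and 0.5
```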

I am thinking that this is akin to setting up a custom description language which preserves or translates some structure of the thing being modelled, in such a way that it becomes amenable to solution by some well-understood algorithmic technique: iterative gradient descent, an analytic solution, whatever.

The Question: Is there any framework in which the optimal model/description language can be discovered? For instance, in a GA the mapping from the genome to the phenotype is critical to the effective operation of the algorithm. This mapping is manually defined and is, I would posit, the most critical component of the GA. Specifically, how would one search through the space of mappings? Or, going meta, how would one search through the space of algorithms for finding that best mapping? It feels like you could disappear up that ladder forever: in order to solve the problem you must first solve the problem.
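As a toy illustration of the GA point, the sketch below passes the genotype-to-phenotype mapping in as an ordinary function that I chose by hand. Searching over such mappings (or over algorithms that find them) is exactly the meta-level question above. Everything here (the 16-bit encoding, the fitness function, the operators) is an illustrative assumption, not a reference implementation.

```python
# Toy GA: the genotype->phenotype mapping is a hand-chosen, swappable function.
import random

def decode(bits):
    """Hand-chosen mapping: 16-bit genome -> real number in [-5, 5]."""
    return int("".join(map(str, bits)), 2) / (2**16 - 1) * 10 - 5

def fitness(x):
    return -(x - 1.234) ** 2          # maximise: peak at x = 1.234

def ga(mapping, pop_size=50, generations=100, length=16):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness of the *phenotype*, i.e. fitness(mapping(genome)).
        scored = sorted(pop, key=lambda g: fitness(mapping(g)), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.05:         # point mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = children
    best = max(pop, key=lambda g: fitness(mapping(g)))
    return mapping(best)

print(ga(decode))   # should end up near 1.234
```

Swapping decode for a different mapping changes the search landscape completely, which is why it feels like the real design work happens there rather than in the GA machinery itself.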

Neural networks seem to be an effective model for solving real-world problems: why? How would one generate another model for a different set of problems? Cellular automata seem good for modelling traffic flow or disease transmission: why? How would I automatically generate alternative systems that also model that set of problems effectively? And so on.
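For the cellular-automaton case, a sketch of the classic Rule 184 traffic model (a car advances into the cell ahead if that cell is empty) shows how much of the model is hand-designed structure: the lattice, the neighbourhood, and the update rule are all chosen by a person, with no search involved. Purely illustrative.

```python
# Rule 184: one-dimensional cellular-automaton traffic model on a ring road.
import random

def step(road):
    n = len(road)
    new = road[:]
    for i in range(n):
        left, centre, right = road[i - 1], road[i], road[(i + 1) % n]
        # Rule 184 spelled out: a cell is occupied next step if a car arrives
        # from the left (left occupied, centre empty) or the car already here
        # is blocked (centre occupied, right occupied).
        new[i] = int((left == 1 and centre == 0) or (centre == 1 and right == 1))
    return new

road = [1 if random.random() < 0.4 else 0 for _ in range(60)]
for _ in range(20):
    print("".join("#" if c else "." for c in road))   # cars as '#', empty as '.'
    road = step(road)
```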

Is this something that is thought about and discussed? Where would I find it?
