The model-centric approach to AI led to the Transformer architecture revolution. The data-centric approach is beginning to show gains in SLMs (Small Language Models), e.g. the phi-x models.

There could also be gains from a compute-centric approach, bringing training/inference time and cost down significantly. Could LLMs someday be trained in minutes rather than months or days?
