Regularization techniques such as L1 and L2 control model complexity by adding a penalty term to the loss function. This discourages overfitting, where the model memorizes noise in the training data rather than capturing general patterns.
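As a minimal sketch of how such a penalty term attaches to a loss function (the function name, the mean-squared-error base loss, and the strength parameter `lam` here are illustrative assumptions, not anything specified above):

```python
import numpy as np

def regularized_loss(w, X, y, lam=0.1, penalty="l2"):
    """Mean-squared-error loss plus an L1 or L2 penalty on the weights.

    Illustrative sketch: MSE is just one possible data-fit term, and
    `lam` controls how strongly the penalty competes with it.
    """
    residuals = X @ w - y
    mse = np.mean(residuals ** 2)          # data-fit term
    if penalty == "l1":
        reg = lam * np.sum(np.abs(w))      # L1: sum of absolute weights
    else:
        reg = lam * np.sum(w ** 2)         # L2: sum of squared weights
    return mse + reg
```

The L1 penalty tends to drive many weights exactly to zero (a sparse solution), while the L2 penalty shrinks all weights smoothly toward zero without eliminating them.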
Qualitatively speaking, regularization is a way to inject prior knowledge into an underdetermined optimization problem so that a unique solution emerges with the required properties: generally a fit that captures the underlying trends in the data without concentrating too much on the background noise (a form of smoothing). The choice of regularization method therefore depends heavily on the properties of the pattern we want to extract; for example, an L1 penalty favors sparse solutions in which only a few coefficients are nonzero, while an L2 penalty favors smooth solutions with many small coefficients.