Cross validation is a well-proven technique in machine learning. It is not compulsory, but it can outperform both the hold-out and leave-one-out techniques.
Like cross validation, hold-out and leave-one-out are also widely used in machine learning. However, unlike cross validation, which divides all the data into 5 or 10 folds and trains and tests on them alternately and iteratively, hold-out simply divides all the data into two parts, training and testing, while leave-one-out employs only one pattern for testing and uses all the others as the training dataset. The number of iterations in leave-one-out equals the number of patterns, and, as in cross validation, the average over the iterations describes the classification performance.
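To make the difference concrete, here is a minimal sketch using scikit-learn; the iris dataset and the k-nearest-neighbours classifier are only placeholders, and any estimator and dataset would do.

```python
# Placeholder dataset and classifier; only the three splitting schemes matter here.
from sklearn.datasets import load_iris
from sklearn.model_selection import (train_test_split, cross_val_score,
                                     KFold, LeaveOneOut)
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier()

# Hold-out: one random split into a training part and a testing part.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold cross validation: each fold is used once for testing.
cv_scores = cross_val_score(clf, X, y,
                            cv=KFold(n_splits=10, shuffle=True, random_state=0))

# Leave-one-out: as many iterations as there are patterns.
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

print(holdout_acc, cv_scores.mean(), loo_scores.mean())
```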
Leave-one-out is certainly the most accurate approach, but it is also the most time-consuming. Hold-out, by contrast, depends on the random selection of patterns for training and testing, so it is unstable; researchers can repeat hold-out many times, but it still carries bias. Cross validation therefore seems to be the most effective way: it is not only efficient, but also accurate.
Therefore, cross validation is not compulsory, but it is the best choice.
Besides the previous answer, I will address the "why and when should we perform such a task" part of your question. Machine learning techniques are data-driven approaches, so they are prone to overfitting, i.e., memorizing the training patterns rather than discovering the underlying relationship between input and output. By splitting the training set into two chunks, a training set and a validation set, and using early stopping, i.e., spotting the minimum of the model's error on the validation set (on the MSE vs. epoch curve), we can overcome the overfitting problem.
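A rough sketch of that idea, assuming a simple SGD-trained regressor, a synthetic dataset, and a patience of 10 epochs (all illustrative choices, not part of the original answer):

```python
# Illustrative early stopping on a held-out validation chunk.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
# Split the available training data into a training chunk and a validation chunk.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = SGDRegressor(learning_rate="constant", eta0=1e-3, random_state=0)
best_mse, best_epoch, patience = np.inf, 0, 10

for epoch in range(200):
    model.partial_fit(X_tr, y_tr)            # one pass over the training chunk
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    if val_mse < best_mse:                   # track the minimum of the MSE-vs-epoch curve
        best_mse, best_epoch = val_mse, epoch
    elif epoch - best_epoch >= patience:     # stop once validation error stops improving
        break

print(f"stopped at epoch {epoch}; best validation MSE {best_mse:.3f} at epoch {best_epoch}")
```

Libraries often expose this directly as well, e.g. scikit-learn's MLPRegressor(early_stopping=True, validation_fraction=0.1).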
As Mahmoud mentioned, ML techniques are data driven and therefore experimental in nature. For example, in a single run of 10-fold cross validation you may get poor performance purely due to random effects. Should that one run be the baseline of your report? That is why you perform the test many times and report an average: it is fair towards bad results (the technique may actually be good), and it also avoids misleading the readers of your work into believing that your proposed ML technique outperforms every other one merely due to random effects.
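For illustration, a minimal sketch of repeating 10-fold cross validation and reporting the average with scikit-learn; the breast-cancer dataset and logistic regression classifier are placeholders, the point is only the averaging over many repeats.

```python
# Placeholder dataset and classifier; the averaging over repeats is the point.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000)

# 10 folds, repeated 10 times with different random shuffles.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)

# Report the average and spread instead of a single, possibly lucky or unlucky, run.
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```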