It very much depends on the problem you are trying to solve and the type of data you have in hand for training. There is no absolute answer to which one performs better than the other until you specify what you are trying to achieve.
Unfortunately, there is no theory that tells us which model, if any, is the best for solving a given problem.
Usually, it is necessary to run a whole battery of ML, AI, and other algorithms to decide which method is the best for a given problem.
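As a minimal sketch of such a "battery" with scikit-learn (the dataset and the particular candidate models here are assumptions for illustration, not recommendations):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # example dataset

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

# Compare all candidates on the same data with 5-fold cross-validation
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In practice you would swap in your own data, models, and scoring metric; the point is simply that the comparison is done empirically, not decided in advance.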
To make things even more complicated, a very important part of applying ML, AI, and similar algorithms is feature extraction.
When something important is missed in the feature selection, it can easily happen that none of the methods succeeds in finding a solution. For this part there is no single general algorithm for how to proceed.
This is more of an art than mathematics. A strong intuition for solving such problems can be very helpful.
The performance of the selected algorithm depends on the nature of your data and of the problem itself, and also on the several pre-steps of your method: feature extraction and selection, dimensionality reduction if necessary, etc. For example, if a bad feature selection step was made, the algorithm won't perform as well as expected, so in order to choose the best algorithm you need to take all these conditions into consideration.
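One way to keep those pre-steps consistent is to chain them in front of the classifier; a minimal sketch with scikit-learn, assuming a generic tabular dataset and illustrative choices of selector and classifier:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

pipe = Pipeline([
    ("scale", StandardScaler()),               # normalise features
    ("select", SelectKBest(f_classif, k=15)),  # keep the 15 most informative features
    ("reduce", PCA(n_components=10)),          # reduce dimension if necessary
    ("clf", SVC(kernel="rbf")),                # the actual learning algorithm
])

print(cross_val_score(pipe, X, y, cv=5).mean())
```

Because the pre-steps are fitted inside the cross-validation, a bad feature selection choice shows up directly in the score instead of silently leaking into the evaluation.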
It totally depends on your DATA. Working on a breast cancer histopathology image dataset, I found deep learning models to be the best solution among all machine-learning algorithms.
As a supplementary note to the answers above saying that the best algorithm depends on the problem and the available data: there is even a proven result that no generally best algorithm exists across all problems, the "no free lunch theorem(s)" for machine learning/statistical inference.
In practice, DNNs work quite well for many problems (with convolutional layers at least if you are working on an image recognition problem). AFAIK random forests are also quite popular in industry.
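For what a "DNN with convolutional layers" looks like in code, here is a minimal sketch in Keras, assuming 28x28 grayscale images and 10 classes (the architecture and sizes are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # first convolutional block
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # second convolutional block
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),    # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # with your own image data
```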
One can't develop a deep learning model if the data is small (although we can apply transfer learning to small data). Classical ML also has the potential to learn from small datasets, since we train the ML algorithms on extracted features.
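A minimal transfer-learning sketch for the small-data case, assuming a binary image classification problem; the pretrained backbone (MobileNetV2) and input size are assumptions for the example:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # small trainable head for 2 classes
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(small_train_ds, validation_data=small_val_ds, epochs=10)
```

Only the small head on top is trained, which is why this works even when the dataset itself is too small to train a deep network from scratch.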
Seied Mahdi Hashemi, different AI algorithms are developed for specific tasks/purposes. For example, a Convolutional Neural Network (CNN) works better with images/videos, so it is well suited to computer vision tasks, but it will fail for time series tasks. The same is true of a Recurrent Neural Network (RNN), which works better with time series data but may not work well with images, and likewise for reinforcement learning algorithms. In short, every algorithm has its own worth. However, when many algorithms are developed for the same task, you can compare them via different performance measures, i.e. accuracy, confusion matrix, precision, recall (sensitivity), F1 score, etc.
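As a minimal sketch of such a comparison with scikit-learn, assuming the true labels and the predictions of two algorithms are already available (the label vectors here are made-up toy data):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

y_true   = [0, 1, 1, 0, 1, 1, 0, 0]
y_pred_a = [0, 1, 0, 0, 1, 1, 0, 1]   # predictions of algorithm A
y_pred_b = [0, 1, 1, 0, 0, 1, 0, 0]   # predictions of algorithm B

for name, y_pred in [("A", y_pred_a), ("B", y_pred_b)]:
    print(f"Algorithm {name}")
    print("  accuracy :", accuracy_score(y_true, y_pred))
    print("  precision:", precision_score(y_true, y_pred))
    print("  recall   :", recall_score(y_true, y_pred))
    print("  F1 score :", f1_score(y_true, y_pred))
    print("  confusion matrix:\n", confusion_matrix(y_true, y_pred))
```

Which metric matters most depends on the task, e.g. recall is usually weighted more heavily when missing a positive case is costly.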