First, you have to describe WHAT problem you intend to solve in your project. Once you have that answer, THEN you can start looking for the best algorithm or "algorithm family" to apply to it.
My guess is that without describing your problem, you will get very few answers...
We would appreciate it if you could provide more information about your problem: at least whether it is a classification, clustering, or regression problem, the type of data involved (cross-section, time series, panel data, etc.), and ideally the final purpose or application.
The truth is that models based on classical statistics provide extra information: interpretable coefficients, their significance levels, and diagnostics on whether the statistical assumptions are met.
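To make this concrete, here is a minimal, stdlib-only sketch of simple linear regression on a made-up toy dataset (the data and the chosen example are assumptions for illustration). Unlike a black-box predictor, it exposes an interpretable slope coefficient and a t-statistic for testing whether that slope is significantly different from zero.

```python
import math

# Toy data with a roughly linear trend, y ≈ 2x + 1 plus small noise (assumed).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [3.1, 4.9, 7.2, 9.0, 10.8, 13.1, 15.0, 17.1]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx              # interpretable effect size: dy per unit of x
intercept = my - slope * mx

# Residual sum of squares, standard error of the slope, and its t-statistic:
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
sse = sum(r ** 2 for r in residuals)
se_slope = math.sqrt(sse / (n - 2) / sxx)
t_stat = slope / se_slope      # a large |t| means the slope is significant

print(f"slope={slope:.3f}  intercept={intercept:.3f}  t={t_stat:.1f}")
```

This kind of output (coefficient, standard error, t-statistic) is exactly the "extra information" the answer above refers to, and it is what full statistical packages report in their regression summaries.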
If you are coming from the application side, the scikit-learn (Python package for ML) cheat sheet might be a good start for understanding which algorithms are out there and which factors are relevant when choosing among them:
Use the simplest method with a small number of parameters to adjust. Train the model and test it on unseen data. If it meets your expectations, you are done; otherwise, try a more complicated method.
If you don't have enough data, try classical classification or regression methods instead of deep learning.
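The "simplest method first, evaluated on unseen data" workflow above can be sketched as follows. This is a toy example with assumed synthetic data; the "simplest method" here is a nearest-centroid classifier, chosen because it has no parameters to tune at all.

```python
import random

random.seed(0)

# Synthetic 2-D data: two well-separated classes (an assumption for this sketch).
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(40)] + \
       [((random.gauss(4, 1), random.gauss(4, 1)), 1) for _ in range(40)]
random.shuffle(data)

# Hold out part of the data as "unseen" test data.
train, test = data[:60], data[60:]

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Simplest method: assign each point to the class with the nearest centroid.
c0 = centroid([p for p, label in train if label == 0])
c1 = centroid([p for p, label in train if label == 1])

def predict(p):
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 < d1 else 1

accuracy = sum(predict(p) == label for p, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

If the held-out accuracy already meets your expectations, you stop here; only if it does not do you move on to a more complicated method.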
Start with XGBoost; you can hardly go wrong there. You will then find it hard to improve on XGBoost's performance in most common "prediction" tasks (regression and classification).
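To show what gradient boosting actually does, here is a miniature boosting regressor built from decision stumps, in plain Python. This is an illustrative sketch of the core idea only, not XGBoost itself (which adds regularization, second-order gradients, and heavy engineering on top); the data and hyperparameters are assumptions.

```python
def fit_stump(xs, residuals):
    """Find the single-feature split that best reduces squared error."""
    best = None
    for threshold in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, threshold, lmean, rmean)
    return best[1:]

def boost(xs, ys, n_rounds=50, lr=0.3):
    """Fit stumps sequentially, each one to the residuals of the ensemble so far."""
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lval, rval = fit_stump(xs, residuals)
        stumps.append((t, lval, rval))
        preds = [p + lr * (lval if x <= t else rval)
                 for x, p in zip(xs, preds)]
    return stumps

def predict(stumps, x, lr=0.3):
    return sum(lr * (lval if x <= t else rval) for t, lval, rval in stumps)

# Fit a noisy-free step function as a toy target.
xs = [i / 10 for i in range(20)]
ys = [1.0 if x < 1.0 else 3.0 for x in xs]
model = boost(xs, ys)
print(predict(model, 0.5), predict(model, 1.5))
```

Each round fits a weak learner to what the current ensemble still gets wrong; XGBoost applies the same principle with far better trees and optimization.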
First define your project problem: which issue will be targeted, what the project dataset is, and what the input and expected output of your project are. For example, if you are looking for similarity or similar groups, go for a clustering algorithm; if you want to classify things, go for a classification algorithm such as SVM.
It is better to start with simple methods like kNN and SVM for classification problems with small datasets; based on the results, you can explore more complex methods or enhance the dataset.
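As a sense of how little code a simple starting point requires, here is a self-contained k-nearest-neighbours classifier in plain Python (the labeled points and queries are made-up toy data):

```python
import math

# Toy labeled 2-D points: two small clusters (assumed data for illustration).
train = [((1.0, 1.0), "a"), ((1.5, 2.0), "a"), ((2.0, 1.5), "a"),
         ((6.0, 6.0), "b"), ((6.5, 5.5), "b"), ((7.0, 6.5), "b")]

def knn_predict(query, train, k=3):
    """Classify by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(query, item[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict((2.0, 2.0), train))  # query near the "a" cluster
print(knn_predict((6.0, 6.5), train))  # query near the "b" cluster
```

With a method this simple as a baseline, you can tell whether a more complex model (or more data) is actually buying you anything.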