It depends on the problem. For some problems you can compromise model accuracy for speed; for others, especially in medical applications, you can't compromise accuracy.
I would say: start with the simplest approach and add complexity until the gain in model performance no longer justifies the increase in execution time.
Here are some crucial factors to consider while selecting an algorithm.
The size of the training data. To achieve solid predictions, it is normally recommended to collect a large amount of data; however, data availability is frequently a limitation. If the training data is small, or the dataset has fewer observations and a larger number of features (as in genomic or textual data), choose high-bias/low-variance methods such as linear regression, Naive Bayes, or a linear SVM. If the training data is sufficiently large and the number of observations exceeds the number of features, low-bias/high-variance techniques such as KNN, decision trees, or a kernel SVM can be used.
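As a small illustration of the bias/variance point above, here is a scikit-learn sketch (the synthetic dataset and the two model choices are just examples, not a prescription) that cross-validates a high-bias linear model against a low-bias decision tree on data with few observations and many features:

```python
# Sketch: compare a high-bias model (logistic regression) with a
# low-bias/high-variance model (decision tree) via cross-validation
# on a small, high-dimensional synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Few observations (80), many features (200) -- the regime where
# high-bias/low-variance methods tend to hold up better.
X, y = make_classification(n_samples=80, n_features=200,
                           n_informative=10, random_state=0)

results = {}
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("decision tree", DecisionTreeClassifier(random_state=0))]:
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {results[name]:.2f}")
```

On data like this the linear model typically generalizes better than the tree, but the only reliable way to know for your dataset is to run the comparison.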
It depends on the nature of the datasets: the type and size of the training and testing data, and the number of input and output variables.
For example, an SVM can work well for small datasets with many features, whereas for large datasets recurrent networks (RNNs, LSTMs, GRUs) are often recommended, particularly for sequential data. For classification, classifiers such as KNN are a common choice.
According to the current state of research, there is no algorithm that is always superior to the others. I would recommend evaluating as many algorithms as possible on your given dataset.
In doing so, you can use the following guiding questions as a starting point:
1. Definition of the prediction problem
Supervised learning (if target values are available)
Unsupervised learning (if no target values are available)
2. Preselection of suitable algorithms
Based on your prediction problem, you can choose algorithms that performed well in previous studies.
For supervised prediction problems, e.g., random forest, decision tree, neural networks, support vector machine, XGBoost, LightGBM
For unsupervised prediction problems, e.g., k-means clustering, principal component analysis.
3. Hyperparameter tuning
Use random search, grid search, or Bayesian optimization to tune all important hyperparameters of each prediction model.
If you have enough time, try to evaluate as many configurations as possible to achieve the best results for each prediction model.
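The preselection and tuning steps above can be sketched with scikit-learn as follows (the iris dataset and the three candidate models are stand-ins for your own data and shortlist):

```python
# Step 2: preselect candidate algorithms and compare them by cross-validation.
# Step 3: tune the hyperparameters of one candidate with grid search.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "random forest": RandomForestClassifier(random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in candidates.items():
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))

# Grid search over two SVM hyperparameters; a random or Bayesian
# search follows the same fit/best_params_ pattern.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)
search.fit(X, y)
print("best params:", search.best_params_,
      "best CV score:", round(search.best_score_, 3))
```

With more time, widen the grid (or switch to `RandomizedSearchCV`) so each candidate is compared at its best configuration rather than at its defaults.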
For an application-based example, please check out my recent article on revenue forecasting using machine learning:
Article Revenue forecasting for European capital market-oriented fir...
Hope I could help you.
If you have any further questions, feel free to contact me.
It depends on the application domain. Different machine learning algorithms behave differently on the dataset you have selected.
e.g., for regression, linear regression can be used;
for classification, there are many methods available (logistic regression, SVM, decision trees, etc.).
Select the ML algorithm also based on the dataset, its application, and the size of the dataset.
No single generic algorithm is available for all kinds of problem statements.
Once an ML algorithm is selected, then for supervised ML tasks such as classification, evaluation metrics (recall, precision, F-score, accuracy) and their derivatives can help you analyze which one works best; for regression, MAE and MSE are commonly used. For unsupervised ML tasks, the Fowlkes–Mallows score and silhouette score can help you select the best technique.
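As a brief sketch of those metrics in scikit-learn (the label vectors and the clustering data below are toy examples): note that the silhouette score needs only the features and cluster assignments, while the Fowlkes–Mallows score additionally requires reference labels.

```python
# Supervised classification metrics: compare predicted vs. true labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (f1_score, fowlkes_mallows_score,
                             precision_score, recall_score, silhouette_score)

y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
p = precision_score(y_true, y_pred)   # TP / (TP + FP)
r = recall_score(y_true, y_pred)      # TP / (TP + FN)
f = f1_score(y_true, y_pred)          # harmonic mean of p and r
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")

# Unsupervised (clustering) metrics on synthetic blobs.
X, y_blob = make_blobs(n_samples=100, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
sil = silhouette_score(X, labels)            # internal: no ground truth needed
fm = fowlkes_mallows_score(y_blob, labels)   # external: needs reference labels
print(f"silhouette={sil:.2f} fowlkes-mallows={fm:.2f}")
```

Higher is better for all four scores; the silhouette score ranges over [-1, 1] and the other three over [0, 1].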