What are the strengths of classifiers such as SVM and random forest when applied to regression?
https://elitedatascience.com/machine-learning-algorithms
https://towardsdatascience.com/comparative-study-on-classic-machine-learning-algorithms-24f9ff6ab222
Article Advantages and Disadvantages of Using Artificial Neural Netw...
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.8153&rep=rep1&type=pdf
https://ieeexplore.ieee.org/abstract/document/5997308/
https://www.pnas.org/content/88/24/11426.short
Article Comparing regression, naive Bayes, and random forest methods...
You can use SVR:
Support Vector Regression (SVR) using linear and non-linear kernels.
Toy example of 1D regression using linear, polynomial and RBF kernels:
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt

# Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()

# Add noise to every fifth target
y[::5] += 3 * (0.5 - np.random.rand(8))

# Fit regression models with three different kernels
svr_rbf = SVR(kernel='rbf', C=100, gamma=0.1, epsilon=.1)
svr_lin = SVR(kernel='linear', C=100, gamma='auto')
svr_poly = SVR(kernel='poly', C=100, gamma='auto', degree=3, epsilon=.1,
               coef0=1)

# Look at the results
lw = 2
svrs = [svr_rbf, svr_lin, svr_poly]
kernel_label = ['RBF', 'Linear', 'Polynomial']
model_color = ['m', 'c', 'g']
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 10), sharey=True)
for ix, svr in enumerate(svrs):
    axes[ix].plot(X, svr.fit(X, y).predict(X), color=model_color[ix], lw=lw,
                  label='{} model'.format(kernel_label[ix]))
    axes[ix].scatter(X[svr.support_], y[svr.support_], facecolor="none",
                     edgecolor=model_color[ix], s=50,
                     label='{} support vectors'.format(kernel_label[ix]))
    axes[ix].scatter(X[np.setdiff1d(np.arange(len(X)), svr.support_)],
                     y[np.setdiff1d(np.arange(len(X)), svr.support_)],
                     facecolor="none", edgecolor="k", s=50,
                     label='other training data')
    axes[ix].legend(loc='upper center', bbox_to_anchor=(0.5, 1.1),
                    ncol=1, fancybox=True, shadow=True)
fig.text(0.5, 0.04, 'data', ha='center', va='center')
fig.text(0.06, 0.5, 'target', ha='center', va='center', rotation='vertical')
fig.suptitle("Support Vector Regression", fontsize=14)
plt.show()
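Random forests also handle regression directly, by averaging the predictions of the individual trees. A minimal sketch on the same kind of noisy sine toy data, using scikit-learn's RandomForestRegressor (the hyperparameters here are illustrative defaults, not tuned values):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Same style of toy data: noisy samples of sin(x) on [0, 5]
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(40, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(8))

# An ensemble of 100 trees; the regression output is the mean tree prediction
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, y)
pred = rf.predict(X)
print(pred.shape)  # one prediction per training point: (40,)
```

Unlike SVR, the forest needs no kernel choice or feature scaling, which is one of its practical strengths for regression; the trade-off is a piecewise-constant fit rather than a smooth one.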
How can one justify partitioning the samples into 70% (calibration) and 30% (validation) in the context of modeling in general, and artificial intelligence in particular?
11 December 2018 314 0 View
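There is no theoretical proof behind 70/30; it is a conventional compromise between keeping enough data to fit the model and enough held-out data to estimate its error reliably. A minimal sketch of such a split using scikit-learn's train_test_split (the 0.3 test fraction is the convention under discussion, not a derived optimum; the toy arrays are only placeholders):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder dataset: 100 samples, one feature
X = np.arange(100).reshape(-1, 1)
y = np.arange(100, dtype=float)

# Hold out 30% of the samples for validation; shuffle with a fixed seed
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

print(len(X_train), len(X_test))  # 70 30
```

For small datasets, k-fold cross-validation is usually a better-justified alternative, since every sample is used for both fitting and validation.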