The idea of maximum-likelihood ("MaxLik") regression is to determine the coefficients of the regression model so that the observed data have the maximum probability under the resulting model (i.e., the maximum "conditional probability"). This is very general. In standard regression, the conditional probability is evaluated for the response (the vertical distances); in orthogonal distance regression, the conditional probability of the orthogonal distances is evaluated. See "Deming regression":
Hi Jochen, thank you so much for your reply and for the link.
OK, I think I see the terminology error that I made. If the error in the two variables is the same, so that the ratio is one, then it is actually Orthogonal regression. If the errors are different, then it is a Deming regression. Thus, Orthogonal regression is a special case of Deming regression, where the ratio of the uncertainties is one.
According to the link you posted, second paragraph, it says that Deming and Maximum Likelihood are "equivalent". Are they identical?
Do you know of an example of a Deming regression for a nonlinear model? Everything I am finding on the internet is for y = ax + b.
The term "maximum likelihood" (ML) denotes a benchmark statistic of a model: it is simply the highest probability of the observed data that can be achieved by varying the coefficients. The coefficients for which the likelihood function takes its maximum are called "maximum likelihood estimates" (MLEs). All regression analyses I am aware of (there may be more!) are based on finding the maximum likelihood estimates. This includes the standard regression models (ANOVA models, t-tests, linear regressions), generalized regression models (with non-Gaussian conditional probability models), non-linear regression models (where the coefficients do not enter the functional model linearly), and more. Notably, for a Gaussian probability model, the ML function is an expression of the "sum of squares" (SS; the sum of squared residuals), and finding the MLEs is mathematically identical to finding the coefficient values for which the SS is minimal (-> "least squares estimates").
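To illustrate that last point, here is a minimal sketch (with made-up data and my own variable names) showing that, for a Gaussian error model, directly maximizing the likelihood of a straight-line fit yields the same coefficients as the least-squares solution:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data, for demonstration only
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Least-squares estimates of (intercept, slope)
A = np.column_stack([np.ones_like(x), x])
beta_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Maximum-likelihood estimates: minimize the negative Gaussian
# log-likelihood (constants dropped) over intercept, slope, and sigma
def neg_log_lik(params):
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)           # keeps sigma positive
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid**2) / sigma**2 + x.size * log_sigma

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0])
beta_ml = res.x[:2]

# The two estimates agree up to the optimizer's tolerance
print(beta_ls, beta_ml)
```

The sigma terms in the log-likelihood do not change where the minimum over (a, b) lies, which is exactly why minimizing the SS gives the MLEs in the Gaussian case.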
Deming regression is nothing else; the only difference is that the conditional probability of the orthogonal distances, instead of the vertical distances, is used to calculate the likelihood. The question "are they [ML and Deming regression] identical?" is therefore ill-posed (I hope you understand why). It's like asking whether driving an automobile and burning fuel are identical.
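For the straight-line case, the Deming MLE has a closed form once the error-variance ratio delta = var(error in y) / var(error in x) is known; delta = 1 gives orthogonal regression. A minimal sketch (function and variable names are my own):

```python
import numpy as np

def deming_line(x, y, delta=1.0):
    """Deming regression for y = a + b*x.

    delta is the ratio var(error in y) / var(error in x);
    delta = 1.0 is the orthogonal-regression special case.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)            # sample variance of x
    syy = np.var(y, ddof=1)            # sample variance of y
    sxy = np.cov(x, y)[0, 1]           # sample covariance
    b = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
         ) / (2 * sxy)
    a = y.mean() - b * x.mean()
    return a, b

# Exact line y = 1 + 2x: the fit recovers the true coefficients
x_demo = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_demo = 1.0 + 2.0 * x_demo
a, b = deming_line(x_demo, y_demo, delta=1.0)
print(a, b)  # → 1.0 2.0
```

As delta grows large the slope tends to the ordinary least-squares slope sxy/sxx, which is one way to see that OLS is the limiting case where all the error is attributed to y.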
Regarding orthogonal regression for non-linear models:
To see the difference, take a simple case, such as ordinary least squares (which is a maximum likelihood estimator in such cases), and compare it with orthogonal least squares. This is done in Figure 1 (cases (a) and (c)) of the enclosed paper AJD 150F, available on ResearchGate, which also refers to Leng et al. (2007), which I also enclose for a better treatment.
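As a practical answer to the question about nonlinear models: SciPy's `scipy.odr` module implements orthogonal distance regression for arbitrary functional models, and supplying different error standard deviations `sx` and `sy` plays the role of the Deming variance ratio, generalized beyond the straight line. A sketch with a hypothetical exponential model (the data and model here are mine, purely for illustration):

```python
import numpy as np
from scipy import odr

# Hypothetical nonlinear model: y = b0 * exp(b1 * x)
def expo(beta, x):
    return beta[0] * np.exp(beta[1] * x)

# Simulated data with errors in BOTH variables
rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 2.0, 40)
y_true = 1.5 * np.exp(0.8 * x_true)
x_obs = x_true + rng.normal(scale=0.02, size=x_true.size)
y_obs = y_true + rng.normal(scale=0.05, size=y_true.size)

# sx, sy are the error standard deviations of x and y; their ratio
# is the nonlinear analogue of the Deming variance ratio
data = odr.RealData(x_obs, y_obs, sx=0.02, sy=0.05)
model = odr.Model(expo)
fit = odr.ODR(data, model, beta0=[1.0, 1.0]).run()
print(fit.beta)  # estimated [b0, b1], close to [1.5, 0.8]
```

Internally this minimizes the weighted sum of squared orthogonal-type distances, which, under the Gaussian model discussed above, is again a maximum-likelihood fit.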