I am not entirely clear about your question, but it is an important one, assuming that I interpret it correctly.
So, if genesis (in the plural, geneses) means origin or basis, my comments may be of some help. The topic is complicated even in the simplest of cases -- estimation, statistical or otherwise, rests on assumptions that are critical to your results and that should be verified after an initial estimation of the coefficients of your model.
In general, the coefficients of linear and nonlinear models are developed from experiments, so that the curves (e.g., the solutions) are parameterized using statistical estimation methods such as maximum likelihood, ordinary least squares (OLS), and other methods that involve training the model on a subset of the data and then using the remaining data to test the model obtained from the training set. In other words, you may use supervised or unsupervised methods (please check YouTube for clarification).

Basically, statistical analysis and sampling theory can provide parameter values (e.g., estimates of the coefficients of the model -- slopes, intercepts, and so on -- conditioned on the data). The data may come from a randomized study. Again, a computer can do this for you, but you should check whether the statistical results violate the assumptions inherent to the method. For example, OLS, used as a statistical method of estimation, rests on five critical assumptions; if they are violated, its results may be invalid or incorrect. Here, the web can be of much help.
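To make the train/test and assumption-checking steps concrete, here is a minimal sketch in Python using numpy and statsmodels. The simulated data, variable names, and checks are my own illustration of the general workflow, not part of the original question:

```python
# Minimal sketch: fit a linear model by OLS, hold out part of the data for
# testing, and inspect residuals as a first check on the OLS assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: y = 2 + 3*x + noise (purely illustrative)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, size=200)

# Split into a training set and a held-out test set
idx = rng.permutation(len(x))
train, test = idx[:150], idx[150:]

X_train = sm.add_constant(x[train])   # adds the intercept column
model = sm.OLS(y[train], X_train).fit()
print(model.params)                   # estimated intercept and slope
print(model.summary())                # standard errors, R^2, diagnostics

# Evaluate the fitted model on the held-out data
X_test = sm.add_constant(x[test])
pred = model.predict(X_test)
rmse = np.sqrt(np.mean((y[test] - pred) ** 2))
print("test RMSE:", rmse)

# Quick assumption check: residuals should be centered on zero and show
# no obvious pattern or changing spread when plotted against x
resid = y[train] - model.predict(X_train)
print("mean residual (should be near 0):", resid.mean())
```

The summary output and the residuals are where you would start checking whether the method's assumptions (linearity, homoscedasticity, and so on) appear to hold for your data.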
Alternatively, you may assume parameter values from prior knowledge of the model or models you are dealing with; the literature can guide your choice of actual parameter values. I would not use this approach unless I were very familiar with the literature.
You can also use information-entropy methods. Here, you may reflect on whether prior knowledge about the reality you are dealing with has a mechanistic basis, or whether you want to be informed by the maximum-entropy criterion. For this approach you may refer to the work of E. T. Jaynes (1982) and more recent work on information entropy. The essence of the method is easy to understand, although the mathematics requires calculus. But the derivations are less important when you can apply the formulae directly and thus avoid having to deal with the optimizations yourself.
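As a concrete illustration of the maximum-entropy idea, here is a short Python sketch of Jaynes' classic die example (my own illustration, not from the question): given only that the mean outcome of a die is 4.5, the maximum-entropy distribution over the faces has an exponential form, and the single Lagrange multiplier can be found numerically rather than by carrying out the full derivation.

```python
# Maximum-entropy sketch (Jaynes' "Brandeis die"): find the distribution over
# faces 1..6 with the largest entropy whose mean is 4.5. The maxent solution
# has the form p_i proportional to exp(-lam * x_i); solve for lam numerically.
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)          # die faces
target_mean = 4.5            # the only constraint besides normalization

def mean_given_lam(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

# Solve for the multiplier so the constrained mean matches the target
lam = brentq(lambda l: mean_given_lam(l) - target_mean, -5.0, 5.0)

w = np.exp(-lam * x)
p = w / w.sum()
entropy = -(p * np.log(p)).sum()
print("probabilities:", np.round(p, 4))
print("mean:", p @ x, "entropy:", entropy)
```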
I think I said more than enough and I thank you for your very important question. I hope others join me in helping you.