Mathematical modeling, along with simulation, is becoming a prominent factor in almost every field of engineering. The research becomes more viable, but also more complex and harder to understand. Please state your comments and observations.
Mathematical modeling and engineering have gone hand in hand for a long time. The progressive increase in computational capacity allows the creation of more sophisticated, and consequently more complex, mathematical models with greater accuracy. As a consequence, engineers need to improve their skills in numerical methods and advanced programming. However, the development of prototypes in the laboratory will always be essential to validate mathematical models. I think both approaches are very important.
Mathematical modelling is no doubt a viable means of simplifying complex real-life problems for a mathematically inclined researcher. Any complexity in the interpretation is an indication that some or all of the parameters in the developed model should be reviewed and the model itself re-validated.
Today's engineers and scientists solve problems with a “Find X” mind-set. With some operational-research training they could expand their thinking to a “Find X to Optimize Y” mind-set. Then they would be ready for optimization, calculus-level programming, and the associated software. (This would cut today's design times, which can run to months or even man-years, to one or two days! Manufacturing processes could be optimized to the day's demand, maximizing profits.)
“Find X to Optimize Y” thinking among professors would cause most engineering and science textbooks to be rewritten with optimization examples and discussions. This would be great material for industry and government: applied engineering and science, not just theory.
A little history: in 1974, calculus-level programming (for more, visit https://www.researchgate.net/project/FC-Compiler-the-FortranCalculus-Compiler-Alpha-version-2) was introduced at a Society for Industrial and Applied Mathematics (SIAM) conference. The two professors heading the conference would not allow the paper to be printed, since they realized what it would do to their field of work (numerical methods). (Few, if any, professors are aware of what can be achieved through calculus-level programming.) We need engineering and science professors to teach the “Find X to Optimize Y” mind-set.
An objective is required for optimization:
• Engineers and scientists need to move from a “Find X” mind-set to a “Find X to Optimize Y” mind-set, taught in schools.
• Industry and company leaders need to state their company objectives so that ALL employees know them: leadership by objectives.
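As a minimal sketch of the “Find X to Optimize Y” idea, the following Python snippet finds the X that minimizes a cost Y using a golden-section search. The cost function and search bracket are hypothetical illustrations of my own, not from any calculus-level programming system mentioned above:

```python
# "Find X to Optimize Y" sketch: golden-section search minimizing
# an illustrative cost y(x) = (x - 3)^2 + 1 over a bracket [0, 10].
# Both the cost and the bracket are made-up examples.
import math

def golden_section_min(y, lo, hi, tol=1e-8):
    """Return the x in [lo, hi] minimizing the unimodal function y."""
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618, the golden ratio conjugate
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if y(c) < y(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def cost(x):                     # the "Y" to optimize
    return (x - 3.0) ** 2 + 1.0

best_x = golden_section_min(cost, 0.0, 10.0)  # the "X" we find
print(best_x)                                 # close to 3.0
```

The point is the shape of the problem statement: instead of solving for a single X that satisfies an equation, we search for the X that makes an objective Y as good as possible.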
Luciano Oliveira Daniel: I agree that engineers need to improve their skills in numerical methods and advanced programming, but the approach is leading toward more difficulty. As you know, many journals and reviewers will not even consider an article if there is no mathematical model, and with the expansion of software packages it is really hard to implement and analyze both at the same time. But you are right that both approaches are important for validating studies.

David A. Okunowo: It is very hard to express a model for every scenario; in such cases we consider only the simulation analysis. As you know, combining both approaches to validate a study is not always possible; it is possible, but still needs new methods of combination. Many journal reviewers ask why you combined this model with that particular approach, and give you a list of other approaches that always resemble your academic studies.

Phil B Brubaker: Optimization is a key element in every scientific study, but as research is ongoing, many fields are still exploring even how to approach and solve their problems. Your argument is right; we are all still learning, and combining both models for an optimum result is possible, but it brings much complexity.
Human intuition and experience suggest a truth that seems simple at first glance: with a small number of variables, the researcher gets only a rough picture of the process under study.
In turn, recording a huge number of variables can allow a deep and complete understanding of the structure of the phenomenon. However, each additional variable, for all its apparent attractiveness, contributes its own uncertainty to the integrated (theoretical or experimental) uncertainty of the model or experiment. Moreover, the complexity and cost of computer modeling and field tests increase tremendously.
Thus, an optimal number of quantities, specific to each studied process, needs to be considered in order to evaluate the physical-mathematical model.
Here, information theory came to the aid of scientists, because modeling is an information process: the developed model receives information about the state and behavior of the observed object. This information is the main subject of interest in the theory of modeling.
The idea is to quantify the uncertainty of a conceptual model based on the amount of information embedded in the model, caused only by the selection of the limited number of quantities that must be taken into account. It is based on thermodynamic theory and on concepts from Mark Burgin's general theory of information [Theory of Information: Fundamentality, Diversity and Unification]. It includes two guidelines:
Observation is framed by a System of Base Quantities (SBQ)
The harmonious construction of modern science rests on a simple consensus: the physical laws of micro- and macro-physics are described by definite dimensional quantities, base and derived. Taking a quantity as fundamental generally means that it can be assigned a standard of measurement that is independent of the standards chosen for the other fundamental quantities. The base units are selected arbitrarily, while the derived quantities are chosen to satisfy discovered physical laws or relevant definitions.
The quantities are selected within a pre-agreed system of base quantities (SBQ) such as SI (the International System of Units) or CGS (the centimetre–gram–second system). The SBQ is a set of base dimensional quantities that can generate derived quantities; together they are necessary and sufficient to describe the known laws of nature in their quantitative physical content. The SBQ is a product of the collective imagination of scientists: it exists precisely because of their general agreement to behave as if it really existed.
The number of quantities taken into account in the physical-mathematical model is limited
The SBQ includes the base and derived quantities used to describe different classes of phenomena (CoP). In other words, additional limits on the description of the studied material object arise from the choice of CoP and from the number of derived quantities taken into account in the mathematical model. For example, mechanics in SI uses the basis {L – length, M – mass, T – time}, i.e. CoPSI ≡ LMT. Electromagnetism adds the magnitude of the electric current, I. Thermodynamics requires the inclusion of the thermodynamic temperature Θ. Photometry adds J, the luminous intensity. The final base quantity of SI is the amount of substance, F. The total number of base quantities in SI is ξ = 7.

If the SBQ and CoP are not given, the notion of "information about the researched object" loses its force; without an SBQ, modeling a phenomenon is impossible. Finally, it is possible to calculate the absolute measurement uncertainty of the developed model before starting any experiment or computer simulation [Menin B., Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena, American Journal of Computational and Applied Mathematics, vol. 7, no. 1, 2017, pp. 11-24. Available: https://goo.gl/m3ukQi]. The overall model uncertainty, which includes additional uncertainties associated with inaccurate input data, physical assumptions, the approximate solution of the integro-differential equations, etc., will be larger than the absolute uncertainty caused by choosing a limited number of quantities.
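To make the SBQ/CoP idea above concrete, here is a small sketch that encodes the seven SI base quantities as exponent vectors and builds a derived quantity (force) as a product of powers of them. The dictionary encoding is my own illustrative convention, not taken from the cited paper:

```python
# Sketch: SI dimensions as exponent vectors over the 7 base quantities.
# The dict encoding {base_symbol: exponent} is an illustrative
# convention of mine, not notation from Menin's paper.

SI_BASE = ("L", "M", "T", "I", "Theta", "N", "J")   # xi = 7 base quantities

def dim(**exponents):
    """Build a dimension vector; unspecified base quantities get exponent 0."""
    assert set(exponents) <= set(SI_BASE)
    return {q: exponents.get(q, 0) for q in SI_BASE}

def dim_mul(a, b):
    """Dimension of a product of two quantities: exponents add."""
    return {q: a[q] + b[q] for q in SI_BASE}

# Mechanics uses the CoP basis {L, M, T}:
mass         = dim(M=1)
acceleration = dim(L=1, T=-2)

# Derived quantity: force = mass * acceleration -> dimension L M T^-2
force = dim_mul(mass, acceleration)
print(force)         # exponents: L=1, M=1, T=-2, all others 0
print(len(SI_BASE))  # 7
```

Choosing a CoP then amounts to restricting which entries of these vectors may be nonzero, which is exactly the kind of limitation on the model's quantity set discussed above.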
Numerical solutions can sometimes be very valuable, but each numerical example is a special case. Numerical methods are valuable for providing the special-case solution that some user needs. However, a numerical result is a prediction, not an explanation. This is a major limitation.
For example, physics predicts a variety of conservation laws. These predictions did not originate from citing a collection of numerical results, which are all special cases; they came from old-school math applied to the physical postulates. If you want to derive a conservation law, you do not use a set of numerical results. You use old-school math.
Numerical methods are valuable when used as supplements to, rather than replacements for, old-school math. The big problem occurs when computer simulations are used to replace old-school math. I have seen two categories of this problem: one is misinterpretation of the numerical results, and the other is laziness.
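The special-case point can be shown with a toy contrast of my own (not an example from the post above): a numerical integrator answers one instance approximately, while the "old-school" antiderivative answers every instance exactly.

```python
# Toy contrast (my own illustration): a numerical result is one
# approximate special case; old-school math yields a formula
# covering all cases exactly.

def trapezoid(f, a, b, n=1000):
    """Numerical integral of f over [a, b]: one approximate special case."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

def integral_x_squared(a, b):
    """Old-school math: the antiderivative x**3/3 answers every (a, b)."""
    return (b**3 - a**3) / 3.0

numeric  = trapezoid(lambda x: x * x, 0.0, 1.0)   # ~0.3333, this case only
analytic = integral_x_squared(0.0, 1.0)           # exactly 1/3, any case
print(abs(numeric - analytic) < 1e-6)             # True: they agree here
```

The numerical value predicts nothing about other intervals, whereas the formula explains the whole family; that is the supplement-versus-replacement distinction in miniature.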
An example of misinterpretation is in the category of charge collection by a semiconductor device struck by a heavy ion that liberates electron-hole pairs. One misinterpretation of computer simulation results is the existence of a “funnel,” reported to be a strong-field (electric-field) drift region that promptly collects all charge liberated within it. However, had the investigators paid more attention to the spacing between the equipotential surfaces plotted by the simulations, they would have recognized that the region they call a funnel is actually where the electric field is weakest. The funnel myth persisted for several decades before the community finally got past it.
Laziness is the idea that you do not have to understand the math if the computer can do it for you. The example that comes to mind is an electronic device (a MEMS) affected by electrostatic charging. Conventional physics together with old-school math was able to predict the charging effect. After I presented that, a spectator asked, “Why use that math when the computer code I have been using can predict the same numbers?” My answer was another question: “Why use a simulation for one specific example when old-school math can treat a whole variety of examples?”
I agree with L.D. Edmonds and Boris Menin, but I think mathematical modeling has made research more viable and better validated, yet more complicated. A beginner researcher will face a lot of problems: you need to be good either at mathematical modeling or to have an extreme grip on simulations.