Almost all engineering scenarios can be converted into mathematical problems. Advances in engineering research and applications are often hindered by mathematical problems that remain unsolved or only partially solved; one example is the control of CNC machines with multiple degrees of freedom. Let's begin a discussion to summarize and group such problems, so that researchers can tackle them more easily.
In principle we could also include, for example, at least some of the Millennium Problems (see http://www.claymath.org/millennium-problems), because solving them might entail a great leap in our understanding of many things. I am thinking of:
1) P vs NP, which somehow implies that if you find an algorithm to solve non-polynomial problems in polynomial time, then you have an algorithm for all non-polynomial problems, which means that we could have tremendously faster ways of solving many problems.
2) The Riemann hypothesis, which, if solved in a way that tells us how to find prime numbers, may change the way all of today's cryptography is used.
3) Global solutions to the Navier-Stokes equations: do they exist, and are they unique? We solve the equations numerically all right, but there is *no proof* of the existence and uniqueness of their solutions.
4) Finally, Yang-Mills theory and the mass gap is another mathematical problem whose solution may yield a far better understanding of quantum mechanics, and applications could then follow suit.
Not all unsolved problems in mathematics have such clear consequences for science and engineering, but arguably every solved conjecture shifts our global understanding, and eventually technology, or some part of science, catches up and takes over the concept. Even for popular concepts like fractals, it is not always clear how we should use them to model phenomena in science or implement them in engineering. Take fluid flow in porous media, for example: what is fractal? The matrix and fractures of the porous medium? The permeability? The anomalous diffusion, which may be modeled with fractional derivatives?
In statistics:
Can we really assume a normal distribution when experimenting? After all, the normal distribution is (among other things) the limiting result of infinitely many instances of the same process. No engineering process reaches infinity, and assuming that each repetition is an instance of the same process may be questionable in its own right. These and other assumptions behind the use of statistics are discussed, for example, in the book "Statistics as Principled Argument" by Robert P. Abelson.
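As a small illustration of how this assumption can be probed in practice, here is a minimal Python sketch (assuming scipy is available); the lognormal sample and the sample size of 40 are hypothetical stand-ins for a finite, skewed experimental data set:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A finite sample from a skewed process (lognormal), roughly the size a
# real experiment might actually produce
sample = rng.lognormal(mean=0.0, sigma=0.5, size=40)

# Shapiro-Wilk test: a small p-value is evidence against the normality assumption
stat, p_value = stats.shapiro(sample)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p_value:.4f}")
```

With samples this small the test has limited power, which is precisely the point: the normality assumption often cannot be verified from the data actually available.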
In Biology:
1) Protein folding and the 3D structure of biomolecules.
2) How do organelles (or organs in macroscopic organisms) know where to position themselves in the 3D structure of the cell (or body)? (See, e.g., Benjamin Lewin's book Genes.)
3) In general, the understanding of morphogenetic fields and autopoiesis (see the works of Rupert Sheldrake or Humberto Maturana).
Energy engineering:
1) A better understanding of self-organizing processes driven by the expulsion of entropy.
2) Nuclear fusion with net energy gain (more energy out than energy in), achieved through magnetic or inertial confinement. There is a lot of physics that we have not yet observed at the level of burning plasmas, simply because we have not produced any yet.
3) How to create a practical Dyson sphere to capture the energy of the Sun.
Physics:
1) Quantum field theory and its macroscopic manifestations. Can we actually implement quantum teleportation? Quantum tunneling? See the book: Blasone, Massimo, Petr Jizba, and Giuseppe Vitiello. Quantum Field Theory and Its Macroscopic Manifestations: Boson Condensation, Ordered Patterns, and Topological Defects. World Scientific, 2011.
2) Gravity (for obvious reasons). At the quantum level, as far as I understand, there is no final model of it, nor physical evidence.
In general, each scientific endeavor has its own frontier problems (or, more often than not, its own list of misconceptions stemming from wrong assumptions). Listing those problems indeed tells us a lot about where we are and perhaps what to do next to reach a better level of science and engineering.
1. Analytical solutions to the full Hodgkin-Huxley equations
2. Analytical solutions to nonlinear reaction-diffusion systems
If you mean the mathematical concepts still in use in engineering, there are many: for example the Fourier transform, the Laplace transform, the frequency domain, singularities, ...
@Arturo: You stated P vs. NP incorrectly. It asks whether the class of problems that can be solved in non-deterministic polynomial time (NP) is the same (or not) as the class of problems that can be solved in deterministic polynomial time (P). Keep in mind that this is usually framed for decision problems, though optimization problems can be reduced to decision problems by asking yes/no questions.
There are several reasons why P vs. NP is important. The common one is that if P = NP, then there exist polynomial-time algorithms for NP-complete problems, many of which have practical applications in almost any area you can imagine. Most researchers believe that NP-hard problems (or their NP-complete decision versions, where applicable) are intractable, and much research is carried out under this assumption; the study of approximation algorithms, for example, has followed this approach for a long time. If I had to name one very important problem around which researchers have dedicated a lot of time to developing theorems, it would be P vs. NP.
Are phase-change problems clearly solved?
Melting, solidification, boiling, condensation, sublimation...
In general, analytical solutions to second-order nonlinear differential equations.
In algebraic cryptography: cryptanalysis of public-key cryptosystems based on algebraic problems.
Agreeing with Prof. A. O. Tapia, may I add some unsolved areas:
1. Problems of differential geometry: geodesics, torsion, curvature, space-time relations.
2. Problems of forecasting natural hazards: tsunamis, tornadoes, earthquakes, etc.
3. Volcanic waves.
4. Electromagnetic waves in the Earth's core.
5. Human psychology.
6. Complex fuzzy differential equations.
1) Fast factorisation of integers (note that the AKS algorithm only tests primality; no comparably efficient factorisation algorithm is known).
2) Improving the LLL and PSLQ algorithms.
These are examples of unsolved or only partially solved problems, and both lie on the border between mathematics and engineering.
1) Accurate and exact mathematical models of complex systems such as the weather.
2) A general solution to the Hamilton-Jacobi-Bellman equation, used in optimization (its standard form is written out below).
3) Solution of two-point boundary value problems.
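For readers less familiar with item 2, one common statement of the Hamilton-Jacobi-Bellman equation, for a finite-horizon problem with dynamics dx/dt = f(x,u), running cost l(x,u) and terminal cost phi(x), is:

```latex
\frac{\partial V}{\partial t}(x,t)
  + \min_{u}\Bigl\{ \nabla_x V(x,t)\cdot f(x,u) + \ell(x,u) \Bigr\} = 0,
\qquad V(x,T) = \phi(x),
```

where V is the optimal cost-to-go. Outside of special cases (e.g. linear-quadratic problems), no general closed-form solution is known, which is exactly the point of item 2.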
Accurate and exact mathematical models of complex systems such as earthquakes.
There are many unsolved problems in mathematics. Some prominent outstanding unsolved problems (as well as some which are not necessarily so well known) include
1. The Goldbach conjecture.
2. The Riemann hypothesis.
3. The conjecture that there exists a Hadamard matrix for every positive multiple of 4.
4. The twin prime conjecture (i.e., the conjecture that there are an infinite number of twin primes).
5. Determination of whether NP-problems are actually P-problems.
6. The Collatz problem.
7. Proof that the 196-algorithm does not terminate when applied to the number 196.
8. Proof that 10 is a solitary number.
9. Finding a formula for the probability that two elements chosen at random generate the symmetric group S_n (a Monte Carlo sketch for estimating this probability is given below).
10. Solving the happy end problem for arbitrary n.
11. Finding an Euler brick whose space diagonal is also an integer.
12. Proving which numbers can be represented as a sum of three or four (positive or negative) cubic numbers.
13. Lehmer's Mahler measure problem and Lehmer's totient problem on the existence of composite numbers n such that phi(n)|(n-1), where phi(n) is the totient function.
14. Determining if the Euler-Mascheroni constant is irrational.
15. Deriving an analytic form for the square site percolation threshold.
16. Determining if any odd perfect numbers exist.
For more information:
http://mathworld.wolfram.com/UnsolvedProblems.html
&
http://www.claymath.org/millennium-problems
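Regarding item 9, here is a rough Monte Carlo sketch for estimating that probability for small n; it is only an illustration, and it assumes sympy's combinatorics module (Permutation.random, PermutationGroup.order) is available and behaves as described:

```python
import math
from sympy.combinatorics import Permutation, PermutationGroup

def estimate_generation_probability(n, trials=200):
    """Estimate P(two uniformly random permutations generate S_n)."""
    hits = 0
    for _ in range(trials):
        g = PermutationGroup([Permutation.random(n), Permutation.random(n)])
        if g.order() == math.factorial(n):   # the pair generates all of S_n
            hits += 1
    return hits / trials

for n in (4, 6, 8):
    print(n, estimate_generation_probability(n))
```

Such experiments suggest the probability settles near 3/4 as n grows; the open question is an exact formula for finite n.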
The existence and smoothness of solutions to the 3-dimensional Navier-Stokes equations is a problem of high relevance for understanding hydro- and aerodynamical problems.
1. As far as I know, there still is no universally applicable model for predicting multiaxial fatigue failures. Good models I am aware of predict failure EITHER due to tension failure or shear failure, so to pick the correct model you need to test the particular material. This is particularly important for materials that have one failure mode in the "low" cycle range and the other failure mode in the "high" cycle range. I am not aware of a model that can be applied to every material before doing multiaxial fatigue tests on that material.
2. I am definitely not a physicist but I just finished "The Elegant Universe" which makes clear that string theorists still don't even know the exact equations that need solving, let alone solving those equations.
A mathematical formulation describing the behavior of radiofrequency electromagnetic fields inside human tissues.
One can add: the Syracuse (Collatz) conjecture, the cardinality of the set of primes of the form p = n² + 1, and the Kurepa conjecture!
I think that many models in physics come from experimental results, but deeper knowledge is needed. Take, for instance, the concept of a field: a particle is placed in a region, and if a force then acts on the particle, we say the field exists. The gravitational field is now seen from another perspective because of the Higgs particle. Likewise, Coulomb's law, the law of universal gravitation, Faraday's law, Newton's laws, Kepler's laws, etc. were originally formulated experimentally.
Many engineering and physical problems involve the cubic and quintic Duffing equations (which have not been solved exactly).
Many engineering problems can be posed as optimization problems.
Classical approaches are not useful in many cases because problems can be multimodal. In these cases, evolutionary algorithms can be used.
There are many unsolved problems about these algorithms, like convergence, speed of convergence, meta-evolution.
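As a minimal sketch of the kind of evolutionary approach meant here (not a definitive implementation), the following uses scipy's differential evolution on the Rastrigin function, a standard multimodal test case; the dimension and parameter choices are arbitrary:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin function: highly multimodal, a classic stress test for optimizers
def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 10            # 10 decision variables
result = differential_evolution(rastrigin, bounds, seed=1)
print(result.x, result.fun)              # global minimum is f(0, ..., 0) = 0
```

Convergence to the global optimum is not guaranteed (that is one of the open questions mentioned above), but in practice runs like this typically land at or very near it.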
In my opinion this question is not very well posed. Indeed, if one thinks of a list of concrete hindering problems, then one must be aware that such a list would be practically infinite. A more reasonable question is whether existing mathematics is sufficient for adequately modelling a wide range of phenomena in quantum physics, continuum mechanics, etc. For these and many other areas the answer is negative. For instance, the long crisis in quantum field theory is essentially due to the fact that the foundations of the general theory of nonlinear PDEs are not yet sufficiently developed. The problem of turbulence is of the same nature. The reasons for all this are rather nontrivial and cannot be explained in a few words, so interested readers are invited to visit the site www.levi-civita.org for further details or to attend the next Diffiety School.
Please look at this paper: "Applied Mathematics for Real Time Applications" by Sukumar Senthilkumar ( http://www.omicsgroup.org/journals/applied-mathematics-for-real-time-applications-2168-9679.1000e136.php?aid=24095 ) or ( http://www.omicsgroup.org/journals/applied-mathematics-for-real-time-applications-2168-9679.1000e136.pdf ). Another example is the Riemann hypothesis, linked to number theory and hence to cryptography and its applications (e-security, ...).
Indeed, as said above, such a list would surely be infinitely long. Therefore I will restrict myself to my own field of research, which I hope to be able to survey to some degree, namely optimal control.
For problems with ODEs, theory and numerics are well developed - which does not mean complete - but problems with PDEs, in particular multiphysics problems described by systems of differential equations of different types, are far from well understood either theoretically or numerically. While simulation can often be performed satisfactorily, optimization is hindered by the size of the problems in real-life applications.
In optimal control of ODEs, direct methods based on transcription to nonlinear programming problems have outperformed the indirect approach based on first-order necessary conditions, particularly in terms of available codes. (By the way, sufficient conditions are mostly not checkable for real-life problems.)
For PDE-constrained optimization one would need nonlinear programming methods able to handle at least billions of variables and constraints, and these do not exist, at least not for general nonlinear problems without an exploitable structure. This approach is known as first-discretize-then-optimize.
In addition, the first-optimize-then-discretize approach is hindered by the difficulty of establishing first-order necessary conditions for complex systems and of devising appropriate numerical methods, including adaptivity, domain decomposition, etc. Practically every PDE has its own research field.
Finally, in contrast to engineering problems, the mathematical modelling of applications in the life sciences is far less developed due to their complexity, but it is nevertheless a most interesting research field of extreme value.
These are only a very few open fields, practically "a set of measure zero" within the desired answer to Hakeem Niyas' question, which indeed should be narrowed in order to restrict the field of research.
I think of the unexplored structure of the real line; it exhibits something like the wave-particle duality in physics. Also the continuum hypothesis and nonlinear partial differential equations.
Stochastic functional differential equations
The sign problem. It is known to be NP-hard, but perhaps the more important special cases can be identified and solved in reasonable time.
From the viewpoint of gas turbine engine design, one major shortcoming is the lack of a robust technique for optimization involving many (up to 10) variables. This is a serious impediment to the speed of design.
By optimization I mean multi-dimensional versions of the simple hill-climbing technique. Techniques which occasionally produce answers do exist, but in my experience their rate of success (in identifying the optimum combination of existing variables) is considerably less than 50%, possibly due to the existence of several "hills" of varying "height" in the field of all possible solutions.
The problem is not trivial: one specific design optimization (layout of the gas path of a free power- or fan turbine) which currently takes the better part of a month to complete could be done in less than two days when automated, running on today's desktop computers. We are not talking about savings of man-hours only – this phase of design is on the Critical Path of the design of any new gas turbine engine, and is thereby contributing to the time to completion of the entire project.
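To make the 'several hills of varying height' issue concrete, here is a toy Python sketch (not related to any actual turbine code) of naive hill climbing with random restarts on a two-hill landscape; the landscape and parameters are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-D landscape with two "hills" of different heights
def height(x):
    return (np.exp(-np.sum((x - 2.0)**2) / 4) +          # lower hill
            2.0 * np.exp(-np.sum((x + 2.0)**2) / 4))     # taller (global) hill

def hill_climb(x, step=0.2, iters=2000):
    """Naive stochastic hill climbing: accept a random move only if it goes up."""
    for _ in range(iters):
        trial = x + rng.normal(scale=step, size=x.shape)
        if height(trial) > height(x):
            x = trial
    return x

# A single climb often ends on the lower hill; restarts raise the success rate
climbs = [hill_climb(rng.uniform(-4, 4, size=2)) for _ in range(20)]
print("best height found:", max(height(x) for x in climbs))   # close to 2.0
```

A single climb from one start point succeeds well under 100% of the time, which matches the experience described above; restarts (or the metaheuristics suggested in later replies) are the usual workaround.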
The inverse problem of wave propagation is an example of an ill-posed problem; the ill-posedness results both from the non-uniqueness and from the instability of the time-reversed solution. The latter is explained by the disappearance of information about the initial state of the system after wave diffusion. Quasi-reversibility methods (Lattes et al., 1967) help to overcome the problem by finding a sufficiently well-approximated solution that exists for some reduced space of initial conditions. However, it is clear that there are infinitely many quasi-reversible solutions.
@ Ulo Okapuu: you may want to check the research of Cambridge professor David J. Wales, about energy landscapes. If I remember correctly, one of his students managed to develop a program which finds a global minimum, "jumping" over local minima. This is done for molecular simulations, but it just might be applicable elsewhere. The name of the student was (I think) a certain John, who was there in 1994-1996. See http://www-wales.ch.cam.ac.uk/
I don't know whether it has been explored in all branches of dynamical systems, but the theory of Lie groups and the corresponding Lie algebras seems to me essential for a modern understanding of almost all physical and engineering problems.
@Ulo Okapuu: did you try simulated annealing, genetic algorithms, ant colony optimization, swarm optimization, or other metaheuristics inspired by nature? I have had good results using them in combinatorial optimization.
Ulo Okapuu:
I've used evolutionary algorithms to solve optimization problems with many more than 10 variables, with good success - for instance, in problems like the Job Shop Scheduling Problem.
In a highly multimodal problem, I've gotten a suboptimum that varies from run to run in less than 5%, using ~200 generations per run.
Arturo Ortiz Tapia:
About "in statistics".
The point is not to determine whether we can assume the normal distribution in real conditions, but to know what to do when we can't.
E.g., there is a lot of data indicating that noise is not Gaussian in real environments (water or air).
‘To begin, distributions are never normal’ (R. Wilcox., Introduction to Robust Estimation and Hypothesis Testing).
‘Everyone believes in the normal law, the experimenters because they imagine that it is a mathematical theorem, and the mathematicians because they think it is an experimental fact’ (G. Lippmann. Conversation with Henri Poincaré.)
Many powerful techniques have been developed for non-Gaussian processing; search for "non-parametric".
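As a minimal sketch of what such a non-parametric technique looks like in practice (assuming scipy is available; the heavy-tailed t-distributed "noise" and the 0.5 shift are made-up test data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Heavy-tailed "noise" (Student t with 2 degrees of freedom) instead of Gaussian
baseline = rng.standard_t(df=2, size=200)
treated = rng.standard_t(df=2, size=200) + 0.5       # shifted by 0.5

# Rank-based Mann-Whitney U test: makes no normality assumption
u_stat, p_rank = stats.mannwhitneyu(baseline, treated)
# Classical two-sample t-test, for comparison, assumes roughly normal populations
t_stat, p_t = stats.ttest_ind(baseline, treated)

print(f"Mann-Whitney p = {p_rank:.4f},  t-test p = {p_t:.4f}")
```

On heavy-tailed data the rank-based test usually retains power, while the t-test can be misled by outliers - the practical reason for the "non-parametric" recommendation.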
In reality, generally speaking, all stochastic processes in nature divide into two groups - Markovian and non-Markovian, with delay and without delay. Given that any stochastic process can be regarded as a Markovian process under specially prepared constraints, which represent boundary conditions and their peculiarities, these constraints give rise to a great variety of stochastic processes that model experimental data well and can be widely used for robust forecasting of diverse phenomena.
In any case, the tools developed for studying Markovian processes serve well as the background for treating the other ones.
Perhaps a satisfying answer doesn't exist. The main problem in applied mathematics is to formulate the problem; whoever has done that has done half the work.
In civil engineering, no formal procedure has ever been developed for analyzing a three-dimensional membrane (a space structure subjected primarily to tensile stresses). To begin with, the shape the membrane assumes is unknown ab initio, and the solution proceeds via a blind trial-and-error process of assuming plausible membrane shapes. The solution is not easy, though computer algorithms have alleviated the tedium in recent times.
I think that finding solutions to nonlinear systems is a challenging issue, and solving it would lead to the solution of many problems in the scientific world.
In the area of thermofluids: turbulent flow, and turbulence in general, is still an unresolved problem in fluid dynamics.
We need to change our window of thinking. I believe that nature is ruled by simple laws; however, because we look through a narrow window, we are not able to get the full picture of nature. For instance, why do we restrict ourselves to modeling problems with the old calculus (differential calculus developed long ago)? We need new mathematics in a different space....
About ten years ago I started working on the Lattice Boltzmann Method, whose governing equation in momentum space is very simple compared with the Navier-Stokes equations. Solving that single kinetic equation yields all the detail of solving the four conservation equations (mass and x-, y-, z-momentum).
The bottom line: we need to think outside the box.
Since Arturo Ortiz Tapia already covered the Millennium problems, let me now propose some problems that are less theoretical in nature and more practical. To solve more complex problems, many scientists (engineers and applied mathematicians alike) have turned to computational mathematics. The problem we all face in this discipline is that computer architectures change and we (the scientists) usually have no say in how this industry changes. We saw this paradigm shift more than 15 years ago from vector to distributed computing, and now from distributed computing to GPUs (due to the gaming industry). We all made the necessary adjustments to use distributed computing, and now we are faced with heterogeneous computing (using accelerators such as GPUs, Intel MICs, etc.).
This is a major issue because our computers are getting much bigger (measured in terms of threads or compute-cores) but clock speeds have remained stagnant. Therefore, we need ever more clever algorithms to solve our complex problems (such as Navier-Stokes) on large computers. To do so efficiently for time-dependent multi-scale nonlinear problems requires the use of implicit methods in time, iterative solvers (to solve Ax=b, as Moulay Hicham Tber suggested), and preconditioners. However, as we use more and more compute-cores we need to redesign our iterative solvers so that they communicate less - this will force a radical change in how we solve many linear algebra problems as well as numerical ODEs. This is especially true because algorithms that do not scale (i.e., take advantage of the large number of compute-cores) will be "dead in the water". IBM has already built Blue Gene computers with over a million compute-cores (Sequoia at LLNL). This will continue, and we need to be prepared to handle different kinds of computer chips (this is where the name heterogeneous or many-core computing comes in) to ensure that we successfully design methods to run on such architectures.
I forget who put it thus: "those that cannot compute cannot compete". This is true in all fields of engineering, computational chemistry, computational biology (pharmaceuticals, etc.). Bottom line: scientific computing has come of age, and we need to be quite mindful of how best to design algorithms that are accurate and efficient on current AND future computing architectures. In the recent DOE Applied Mathematics report on exascale computing, preconditioners were singled out as one of the main challenges for achieving scalability at the exascale range. We are currently at the petascale range in terms of computing, but probably only at the terascale range in terms of our software. There is a lot of catching up to do!
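For readers who have not met the iterative-solver-plus-preconditioner combination mentioned above, here is a small, generic Python/scipy sketch (a 2D Poisson model problem with an incomplete-LU preconditioner; the grid size and drop tolerance are arbitrary choices, not a recommendation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix (5-point stencil) as a stand-in for a large implicit PDE system
n = 100                                            # grid points per direction
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()        # n*n = 10,000 unknowns
b = np.ones(n * n)

# Incomplete LU factorization used as a preconditioner M ~ A^{-1}
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.cg(A, b, M=M)                       # preconditioned conjugate gradients
print("info:", info, " residual:", np.linalg.norm(b - A @ x))
```

The catch, as noted above, is that factorization-based preconditioners like ILU are inherently sequential, which is exactly why preconditioning is singled out as a bottleneck for extreme-scale parallel machines.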
I think that engineers, computer scientists and perhaps mathematicians should develop smart algorithms that take advantage of the computer architecture, rather than expecting the user (or Navier-Stokes solver developer) to learn a new method or technique every time in order to adapt his or her code to a newly developed architecture. Every field is now expanding exponentially, and it is becoming very difficult to keep up with all those lines of development...
Abdulmajeed Mohamad writes that computer scientists and mathematicians should develop smart algorithms that can take advantage of the computer architectures and that is certainly a worthwhile pursuit. In fact, this has already started to happen (see recent works by Andreas Kloeckner at UIUC, called LOO.PY and that of Tim Warburton at Rice University called OCCA, as well as that by Mark Govett at ESLR called F2C-ACC). All of these tools will help engineers develop algorithms to translate Fortran or C code into CUDA or OpenCL. However, my comment above was more concerned with the actual development of time-integrators, iterative solvers, and preconditioners that will take advantage of new hardware. One cannot develop a method for a computer that does not yet exist and this is the dilemma we are currently facing in Scientific Computing.
So, if our "existing mathematics is [not] sufficient for the adequate modelling of wide range of phenomena in quantum physics, mechanics of continua, etc" (A. Vinogradov). and we should 'think out of the box' - what might we need to address to advance mathematics in a direction not included in the above lists?
I will suggest there is an area on the boundary between Theoretic and Applied Mathematics, where mathematics involves invention rather than discovery, which might be the next area to advance mathematics (there will be more areas in the future).
This area is the representation of numeric values and the ability to represent additional systems of numbers. Early humans had marks on a stick or on a papyrus pad. Romans invented Roman numerals, and Greeks had ratios. Between the Hindus and the Arabs, our current decimal numeric system was developed - which did not come into general use until the Renaissance over 500 years ago.
We have been using this system of numeric representation (or a similar positive based positional system) the entire life of 'modern science'. This system is only capable of representing Real numbers - as fully defined values. Roman numerals sort-of map to whole numbers (expand to Integers) and ratios map to Rationals. Decimals, and positive based positional numerics (eg. Hexadecimal) map to Reals.
We use Complex numbers many places these days - yet we have no system to represent these values, as fully defined values. We have 'complex values', however these are decimals with an undefined term and so cannot actually represent Complex numbers fully.
In so inventing such a new numeric system, both Applied and Theoretic Mathematics will be impacted (thus on the boundary between them). Many equations are made complicated by having to deal with current two-part complex values (x + iy; including an undefined term). These would be simplified by a single complex value without an undefined term, which would, in turn, simplify equations.
This is the 'thinking out of the box' area we need to concentrate on. It will require developing mathematics to handle negative-based logarithms and require a re-working of all calculation engines to 'upgrade' to this new system. It is quite possible this system will not be able to be represented simply on paper, but require computers to represent. It will allow progress on many problems identified above - and it will also introduce new problems.
Donald Knuth worked on a system of negative based numbers - so some early work has been done. If the system requires computers, it will not be developed in isolation.
In my view, the most difficult problem is to make uncertain things definitely confirmed - that is to say, to predict precisely what will happen next. Mathematical models are not precise enough, and uncertainty measures only give answers within a certain range, or a belief, or a probability of what happens next. Who will be the prophet?
Lauv Chen,
to make things definitely confirmed is not a mathematical problem.
Mathematics must analyze and process the real uncertainty, develop some suitable recommendations, and give them to decision makers. Of course the effectiveness of recommendations is to be estimated.
The problem with the original question of this thread is that it presumes the major hindrance to advances in science and engineering is the resolution of known, but unsolved, complex mathematical problems. Most posts to this thread presume the same thing.
However, the question did come up in the thread - "whether the existing mathematics is sufficient for the adequate modelling of wide range of phenomena in quantum physics, mechanics of continua, etc."
This is a crucial question, as an inadequate mathematics will trump any problems listed above in hindering the "advancements in science and engineering" of the original thread. And resolving the 'inadequacies' of our current mathematics could lead to solutions to at least some of the complex mathematical problems listed above.
I submit that our current mathematics is inadequate for "modelling of wide range of phenomena in quantum physics, mechanics of continua, etc.". Further, the specific inadequacy we need to address is our inability to represent complex values as fully formed values - without any undefined terms.
Note that our entire universe of modern science would not be possible without the decimal numeric system (and related positive-based positional systems). We could not perform our science using Roman numerals or ratios. Logarithms, scientific notation - these are not possible without this 1500+ year-old mathematical invention. The mathematics of today would not be possible without this system - including most theoretical mathematics.
This has been a crucial pre-requisite to modern science, which we have taken for granted.
Yet it has limitations. In particular, it can only (fully) represent values of Real numbers. As we have found an increasing - even pervasive - need to include Complex numbers in our models, this Real numeric system is inadequate for (fully) representing a complex value.
We found a 'work-around' for representing Complex values by writing them as something like 'x + iy' - however this is not a full representation of a Complex value, precisely because it includes an undefined term i = sqrt(-1). This problem gets worse with Hamilton's quaternions.
Our mathematics is inadequate, and the primary area needed to resolve this issue is to define an area of mathematics currently undefined (negative-based exponents and logarithms) and to invent a new numeric system capable of fully representing Complex values.
As decimals were a pre-requisite for current Modern Science (v1.0), a new Complex-valued numeric system is a pre-requisite for Modern Science and Engineering v2.0.
Donald -- I like your answers and train of argument. However, I don't know why you classify i = sqrt(-1) as undefined. Wasn't the concept of zero at one time a similarly new concept in the world of number theory? And yet, today, we all accept it and intuitively understand it.
Ivan:
The concept of 'i' = sqrt(-1) has been accepted. Representing this concept with a value that we can measure and calculate with is a different issue.
The concept of 'zero' existed long before it was represented by a symbol (even if Aristotle banned it in the western world). And this symbol also needed to be included in a numeric system that represented values and could be included in calculations. It was this numeric invention that expanded the world and has allowed modern science, not the symbol for zero or the concept of it.
We have essentially 'tacked on' the 'i' symbol to our existing numeric system and work around it in calculations. This is not an adequate representation of complex values. A Complex value needs to be defined such that (for example) (-2)^(1/2) (minus two to the one-half power) equates to a logarithm using a base of -2. The ability to represent negative-base logarithms is currently undefined in mathematics, so there is a bit of theoretical work to perform here.
However, from an applied perspective, we need to understand what a complex value would actually represent in the 'real' (or complex) world. What does it measure? Currently we have a 'real' part, which we have a value for and can measure. Then there is that 'imaginary' part, which always remains separated from the 'real' part. What about the entire complex value?
If electrical impedance is a single concept, it is our numeric system which forces us to always separate it into two components (resistance and reactance). If it is a single concept, why can we not handle it as such - as a single value?
It is the inadequacy of our 'Real Number' based numeric system which forces this division. There is no reason it should always be represented by two values when it is a single concept. We should be able to measure and manipulate this value - as a single value. We may need to break it into parts when needed, but there is no reason why we must always represent it by its components - except due to the limitations of our (mathematical) tools.
I do "believe" that the nature is controlled by simple physics and mathematics. However we are still doing math based on Leibniz–Newton's Calculus. We need a new look..
Thanks for the interesting answer, Donald. You are suggesting that we seek a system that could, in theory, report the measurement of amplitude and phase as a single number. The (alternative) representation of complex numbers as ordered pairs would reduce to a single number. It would require representation along an area-filling curve on an area that stretched to infinity in all four directions -- a tall order, but perhaps amenable to definition via a limiting process?
Ivan:
Your first statement is correct - the single number part.
The second section moves back into looking at complex numbers as ordered pairs that lay along a 2-D (Real numbered) plane.
One of the properties of a complex numeric system is that we could put these numbers on a (complex) line and define every complex number as greater than, less than, or equal to another complex number. This defines a complex object (line, curve, surface, and volume), which will wreak havoc with everyone that believes in an ultimate 'Real' number line (or plane or volume) where all points possible on any line (or plane or volume) can only be Real Numbers. This changes what we call 'geometric space' - from one comprised of Real values to one comprised of complex values (will it stop there?)
Note that Real numbers can be expressed as an ordered pair of rationals, using a non-rational symbol, such as x + qy - where x and y are rational numbers and q = sqrt(2) is a symbol for a non-rational number (undefined over the rationals, like 'i' is undefined over the Reals).
Until a numeric system that can represent Real values is introduced (invented), this would be a possible method that, say, Archimedes, could have used to represent the Real numbers with ratios. But it would not have been powerful enough to produce our modern science - which sits on the foundation of decimal numerals - of actually having a representation for every Real value - and being able to define a 'Real Number' line (or curve, or plane, or volume).
This previous stage evolved without us 'really' being aware of it. For a long time, people did not believe in Real numbers (remember Pythagoras and the story of killing anyone who divulged the secret that sqrt(2) is not a 'number' - since numbers were ratios). This Real-number concept is only a few hundred years old (essentially the same timeframe as the beginnings of modern science) and post-dates the invention of decimals. It has become ingrained in our thinking - and is the box we find ourselves in.
We are now looking at the next stage, where a complex number line (or curve or plane or volume, or hypervolume) can exist. A complex numeric system is required to handle this complex object.
So there is much upheaval involved with this - on the theoretic as well as applied mathematics sides (to say nothing of the science and engineering aspects). This is an evolutionary revolution - defined by a mathematical invention.
It is unlikely to be me who invents this system - it will likely take many people and a re-organization of representations of numbers in computers.
So, just as there are infinitely many irrationals between any two rationals, there are infinitely many imaginaries between any two reals, and it is simply a matter of developing the numerical representation (either conceptually or electronically)? And, if so, would we then expect the concept of numerical precision to be extended to allow for only a limited representation of the "imaginariness" of a number on any finite computing machine, just as we can only capture a limited representation of "irrationalness" on today's computers? (And, of course, that would compound for "quaternion-ness", et cetera...)
Hard to say what the limitations of complex numbers will be. Fractions are exact, but far from unique - there are many fraction representations of the same value (e.g. 1/2 = 2/4 = 3/6, etc.). Decimals are mostly unique (except for repeating decimals ending in ...9999), but they are not always exact (e.g. the fraction 2/3 is exact while 0.6666... is not). Irrational numbers, with non-repeating decimals, have practical precision limitations beyond those of repeating decimals (I am assuming these are the precision limitations you are referring to, Ivan?)
I will speculate that a complex numeric system will be more accurate with irrationals than decimals are (pi might be 'better' represented via a complex system than decimals), yet have additional limitations representing 'imaginary' numbers.
Any ideas on a complex numeric system?
Donald -- Yes, I was referring to precision. I am not a number theorist, but, to me, the most fundamental representation of a real number is in the form of a simple continued fraction. It is base-independent and, subject to certain constraints on the numerators, provides a unique representation. Unfortunately, generalized continued fractions that extend the concept to complex numbers appear to require complex coefficients, so they don't get us any closer to your objective of representing a complex number as a single number.
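For concreteness, here is a tiny Python sketch of the simple-continued-fraction expansion Ivan mentions; the choice of 355/113 and sqrt(2) as inputs is purely illustrative:

```python
import math
from fractions import Fraction

def continued_fraction(x, terms=10):
    """Partial quotients [a0; a1, a2, ...] of a simple continued fraction."""
    quotients = []
    for _ in range(terms):
        a = int(x)              # floor, for the positive values used here
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]: exact and base-independent
print(continued_fraction(math.sqrt(2)))         # [1, 2, 2, ...] until float error creeps in
```

The expansion is indeed base-independent, but as the second line suggests, a floating-point input eventually corrupts the quotients - another face of the representation problem under discussion.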
Another research area that has defied modern computational geomechanics is the development of simple elastoplastic constitutive models for computing the deformation of geomaterials (soils and rocks) under a wide range of conditions, viz. drained, undrained, and deviatoric monotonic and cyclic loading. Realistic models make use of numerous parameters (8 or more) whose calibration is quite tedious, rendering these models of limited practical utility. This outstanding mathematical problem is quite worrisome because it is nearly impossible to predict the settlement of a foundation on a geomaterial with a high degree of precision. In economic terms, this means that a more expensive piled-raft foundation may be wrongly prescribed in place of a raft foundation if the settlement of the latter is wrongly computed. The development of multi-surface yield and potential functions, including anisotropy and non-associated flow, has not produced the desired accuracy.
A real and potentially important problem is the management of the different components in a smart grid. The smart grid is a highly complex and challenging problem, with the integration of renewable energy sources, etc.
It seems that some complex phenomena in science, and especially in biology, would be better modeled and solved using "natural algorithms" than using equations (too many parameters, nonlinearity, ...).
https://www.google.com/#q=natural+algorithms
Mathematicians like nonlinearity. Only those problems are truly interesting. For example, I am an expert on Stefan-like problems (describing phase transitions), which are strongly nonlinear. Generally, they can be solved numerically.
Diophantine equations are also a "dark beast" for mathematicians!
This is confirmed by Yuri Matiyasevich's negative solution to Hilbert's tenth problem.
Highly ill-conditioned problems in linear and nonlinear systems. Parametrized approximation and heuristic techniques help in tackling these problems.
I agree with all the views above, and beyond that may I add one thing: in dynamical systems of motion, the use of complex fuzzy numbers. Thank you all.
Basically I am referring to the paper "Complex neuro-fuzzy system using complex fuzzy sets and updating the parameters by PSO-GA and RLSE method" by Thirunavukarasu et al. (2013), International Journal of Engineering and Innovative Technology (IJEIT), 3(1), pp. 117-122.
I would reverse the original question: what are the mathematical tools that foster 'dynamical system solutions' in the present century?
I would answer: Lie groups and Lie algebras.
Concerning the use of complex numbers: the great Russian mathematician Acad. Kolmogorov wanted to show that mathematics can operate without complex numbers.
@Natalia:
What do you mean by 'operate without complex numbers'?
By 'operate' do you mean mathematics does not include complex numbers anywhere? Or that calculations are possible without representation of complex numbers?
If mathematics does not include any reference to complex numbers, then it would seem there can be no solution to x^2 + 1 = 0. This would mean there are solutions that we can resolve today (with complex numbers) - which would be 'off limits' without complex numbers. This would limit the scope of mathematics.
If the situation is that we can perform calculations ('operate') without complex representations, then we are already there - we only have Real number representations of complex values.
There are a number of stories of 'lazy' mathematicians - and of this character leading to simpler ways of 'doing' mathematics.
Consider:
It is possible to represent Real numbers as pairs of rationals - with an undefined irrational value (similar to 'i') - such that [a + br] = x (a Real number), where a and b are Rational and 'r' is an undefined irrational (e.g. sqrt(2)). From this perspective, we could operate only with representations of Rationals (a small sketch of this kind of representation is given at the end of this post).
This would make the representation of a complex value [(a + br) + (c + dr)i] - now with two undefined values.
The work required to manage calculations and proofs quickly becomes very large - but it is theoretically possible. We could operate without the decimal system - theoretically.
Those lazy mathematicians would likely not listen, however. Making calculations easier has a large technological impact and is where the most use of complex numbers exists. This is why devising a single representation of complex values without the undefined 'i' (=sqrt(-1)), could be a very powerful step.
So, by 'operate', do you mean removing any reference to complex numbers (and hence limiting the scope of mathematics) - or performing calculations without them, which might be interesting, but increase the difficultly of performing calculations?
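The sketch promised above - a minimal Python illustration of "rationals plus an undefined irrational symbol", here Q(sqrt(2)); the class name and design are of course invented for this example:

```python
from fractions import Fraction

class QSqrt2:
    """Numbers a + b*sqrt(2) with rational a, b: sqrt(2) is kept as a symbol,
    much as 'i' is kept as a symbol in the usual x + i*y notation."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r,  since r^2 = 2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

x = QSqrt2(1, 1)        # 1 + sqrt(2)
print(x * x)            # 3 + 2*sqrt(2): exact arithmetic, no decimals needed
```

Arithmetic stays exact, but every result carries the undefined symbol along, which is exactly the inconvenience being debated for 'x + iy'.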
@Bernd
A question to consider - Can we do better than using Real values for complex values?
Can we represent complex values as single representations of numbers, as with decimals and Reals?
Can complex numbers become central tomorrow, like Real numbers are today?
What a strange discussion. All digital signal processing is based on complex numbers. Do you want to live without mobile phones and GPS? Then, and only then, should you abolish complex numbers.
About "what is complex field?" The basic concepts are defined by axioms. Thus the answer is "It is the object that satisfies such-and-such axioms"
Very often, complex numbers are used just like real numbers; e.g. the classical analytic proof of the Prime Number Theorem uses functions of a complex variable much as we use real-variable functions.
Bernd,
And?
I've looked at his site. The first question I saw was "Is 0.999... = 1?"
Is that really interesting? To my mind the answer is "as agreed", and that's all.
IMHO, studying such questions seriously is, as we say in Russia, "looking for fleas on a clean dog".
Real life uses complex numbers, for example the Fresnel (phasor) representation in alternating-current RLC circuits.
For a pure mathematician, 0.999... = 1 is very interesting and important, because it shows that the decimal representation of real numbers is not unique; it is true that this fascinating fact comes from the secret behind the trailing dots (the ellipsis).
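For completeness, the standard geometric-series argument behind that equality:

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1 .
```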
Valeriy Vengrinovich Institute of Applied Physics. Minsk. Belarus
One of the challenges for mathematicians is the analysis of large, time-updated matrix data, aimed at recognizing peculiarities of a phenomenon or of an object under investigation.
@Zekiri
Pure mathematics should be able to step away from the representation of numbers. So that there are the Integers, the Rationals, the Reals, Complex, (etc.)
We need methods of representing these numbers in order to work with them and calculate with them (Applied Mathematics).
It is the representation of numbers that causes limitations, inaccuracies, and non-alignment with the numbers they represent to occur.
The example you provide ( of 0.999... = 1.000...) is such a non-alignment. So too are the many (infinite) representations of the same Rational value by fractions - eg. 1/2, 2/4, 3/6, 4/8, ... These are non-alignments of the representation with the 'pure' or 'theoretic' number being represented. The number we represent with the symbols "1/2" or ".5" or "2^-1" is the same (pure) number.
Representations of numbers is one area that could be agreed is invention in mathematics - since the symbols we use (eg. 1, 2, 3, 4, ... or I, II, III, IV, V, ...) are our own invention and could be wildly different.
There is an intriguing interplay of numbers and their representations. It was a long time after the decimal system was invented that irrationals were accepted as actual ('real') numbers. There is the story of the Pythagoreans and the death of the person who proved there were numbers incommensurable with ratios.
What numbers we are able to represent might also be invention in mathematics. Why can we represent Real numbers as single values but not complex values? Can we devise a numeric representation that 'gets around' the 0.999... = 1.000... issue with decimals?
Perhaps the ability to represent number in different ways is an advantage.
I think the basic problem is to construct or fit a complex function to the graphics/images/shadows of natural objects or bodies, isn't it? For details, please see the works on complex functions by the great German scientist Johann Carl Friedrich Gauss in the free encyclopedia.
Valeriy Vengrinovich,
This problem (like many others) has many different forms, varying considerably in the assumptions and restrictions that arise from the concrete application. Some of these forms may be very interesting and complicated, but a general formulation is hardly likely to be of any interest.
Here I am attaching, as a hint, a plot of a complex function downloaded from http://en.wikipedia.org/wiki/file:color_complex_plot.jpg. Please find it attached.
Hi all.
An interesting problem is the scalability of phenomena. For example, in geophysics: how valid is the prediction of the mechanical properties of a whole reservoir from the analysis of a grain of rock? Sometimes we have no more information than a tiny amount of data from an unknown volume. Industry has to draw conclusions all the time under this large uncertainty.
@Carlos
I agree with you - this is a large problem. How can we (directly) predict human-scale medical outcomes from molecule-scale biologic processes?
Statistical coincidence is not really a sufficient method. If we could show direct relationships across scale, then we would have a much better model.
However, we don't really have the mathematical (and modeling) tools to address this situation. Probability and statistics are about all we have. We need cross-scale mathematical tools.
In applied mathematics and numerical analysis, the absence of methods for inverting BIG (more than 10^9 equations) systems of linear and nonlinear equations hampers the possibility of handling large spatial resolutions in most implicit physical problems (mechanics, heat, etc.).
For systems of linear equations, powerful methods like Gaussian elimination exist, but nonlinear systems remain difficult; current algorithms take too long.
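For reference, the standard workhorse for nonlinear systems is Newton's method, which reduces the nonlinear problem to a sequence of linear solves; here is a minimal Python sketch on a made-up 2x2 example (a circle intersecting a parabola):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: at each step solve the linear system
    J(x) dx = -F(x), rather than inverting J explicitly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: intersection of the circle x^2 + y^2 = 4 with the parabola y = x^2
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
J = lambda x: np.array([[2*x[0], 2*x[1]], [-2*x[0], 1.0]])
print(newton_system(F, J, [1.0, 1.0]))
```

The difficulty raised above is real: each Newton step costs a full linear solve, convergence is only local, and large problems need good initial guesses or globalization strategies.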
After seeing all the answers, solving highly nonlinear systems seems to be critical and to have applications in many fields.
Just as interesting: Where can one get support or funding for such important research?
@Christopher J. Winfield: "Where can one get support or funding for such important research?" [for highly nonlinear systems] I suppose it depends on where you are and what your field of research is. Nonlinear partial differential equations must be studied almost case by case, each one deserving dedicated effort (see, for example, http://en.wikipedia.org/wiki/List_of_nonlinear_partial_differential_equations). Anyway, for funding, if you are in the USA you have the National Science Foundation (http://www.nsf.gov/funding/) or the American Mathematical Society: go to the main page www.ams.org, then on the horizontal menu bar find "Programs", where there are several funding opportunities (for example http://www.ams.org/programs/funding/funding).
The question of how to find funding for nonlinear research seems to me almost like asking how to fund cancer research. Are you an immunologist? Do you want to know the effects of combined radiation and chemotherapy? Protein binding? Is your research on cancer-inducing substances? What type of cancer are we talking about (liver, lung, ...)? Do you want to stop metastasis, or do you want to target oncogenes? The point is, in both cases (cancer and nonlinearity) we are talking about such a VAST field that I am certain funding could be found, provided the research has well-defined objectives.
In the US, funding is pretty much winner-takes-all. I wish the NSF and AMS would stop helping us. I mean that in the sense that they don't really help our "Nation" or "Americans" per se. They give funding where it is least needed in terms of supporting careers in mathematics and focus more on political bang for the $$. As for applicants beyond 5 years past the PhD, you are expected to stay in your rank, pay your debts and eventually die.
BTW: After the Cold War ended and neo-liberal (corporatist) policies became the rage in US politics, it took a lot of damaged careers before the US math community thought to lobby for a little money for postdocs - once that eventually became more politically correct than the "just say 'downsize'" policies. Now NSF and AMS funding are measures of status, and, as in such societies, status is something you don't get unless you already have it. Funny that NSF and AMS have plenty of money for well-to-do foreign nationals.
But I digress.
I would say that one of the down-to-earth problems begging for a solution is a scalable solution to the satisfiability (SAT) problem. This has a large number of applications and extensions that cover those applications. It is also the simplest hard problem - or, if you like, the hardest of the simple problems. By scalable I mean that the solver must be able to utilize parallel computation and provide uniformly good efficiency as the problem size (in number of variables) and the number of nodes (PEs) increase. Scalability has been defined in parallel-computation texts. Practical experience is that efficiency goes down with the number of PEs and also with problem size. The complexity of SAT is exponential, as it is an NP-complete problem, but as a practical scientist it is worthwhile trying to find a more scalable algorithm among the exponential ones. There are also many subproblems of SAT and problems of characterizing them. I am sure funding will be available for such research on applications of SAT if you can deliver a scalable implementation.
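For readers who have never looked inside a SAT solver, here is a minimal (sequential, exponential-time) DPLL sketch in Python - only unit propagation plus branching, nothing like the clause learning or parallel portfolios that production solvers use:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver. Clauses are lists of nonzero ints (DIMACS-style
    literals): literal i means variable i is true, -i means it is false."""
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly satisfy single-literal clauses
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue                    # clause already satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                 # empty clause: conflict
            new_clauses.append(reduced)
        clauses = new_clauses
    if not clauses:
        return assignment                   # every clause satisfied
    # Branch on the first literal of the first remaining clause
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], dict(assignment))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))    # e.g. {1: True, 3: True, 2: False}
```

The open engineering question above is precisely how to make search of this kind scale across thousands of processing elements without the efficiency collapsing.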
The U.S. Dept of Energy instigates reviews of such questions and documents answers: see for example the following
D. L. Brown, J. Bell, D. Estep, W. Gropp, B. Hendrickson, S. Keller-McNulty, D. Keyes, J. T. Oden, L. Petzold, and M. Wright. Applied mathematics at the U.S. Department of Energy: Past, present and a view to the future. Technical report, A Report by an Independent Panel from the Applied Mathematics Research Community, http://www.sc.doe.gov/ascr/ProgramDocuments/Docs/Brown_Report_May_08.pdf, May 2008.
What exactly is meant by "inversion of equations"? Inversion of the coefficient matrix, or solving the equations?
Inversion is a term correctly used for operators, but it is often said that one has the problem of inverting equations such as Ax = b by inverting the matrix A. If the rank of the system differs from the number of variables, then inversion more generally means solving the equations.
In analysis, the inverse function theorem allows local inversion of a function, which extends the idea of solving linear equations. A local inverse of a nonlinear function is another function which, when composed with the original, gives an (approximate) identity map in a neighborhood of a point. A careful reading of Lang's analysis text is worthwhile for understanding this.
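A small numerical aside on the Ax = b point, as a sketch (numpy assumed; the test matrix is arbitrary): in practice one solves the system directly rather than forming the explicit inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=(n, n)) + n * np.eye(n)    # a well-conditioned test matrix
b = rng.normal(size=n)

x_solve = np.linalg.solve(A, b)     # factorize-and-solve, no explicit inverse
x_inv = np.linalg.inv(A) @ b        # explicit inversion: more work, usually less accurate

print(np.linalg.norm(A @ x_solve - b), np.linalg.norm(A @ x_inv - b))
```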