My field of expertise is CFD, not climatology, but I would like to start a discussion about the relevance of the numerical methods adopted to solve the physical models describing climate change.
I am interested in the details of the physical and mathematical models and of their subsequent numerical solution.
Hi, it is a very interesting topic. I would like to share my ideas on climate change simulation.
I think that the Earth's climate is affected by numerous factors, both natural (the Sun, geothermal activity, weather, ocean currents, etc.) and human (industrial areas, city transportation, etc.).
I think that a CFD-only solution is impossible. I would suggest a hybrid approach in which a data-driven model provides the contribution of the above factors to the CFD model. I believe that, given enough data, such a hybrid approach could predict climate change.
Thanks for your contribution.
I would also put forward my general idea (which may not be correct) about the physical and mathematical statement of the problem.
I would consider a control volume around the Earth and formulate a general balance equation for a generic property, written as:
time variation of the averaged property = sum of all entering/leaving fluxes of the property over the surface + averaged production/destruction of the property in the volume
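In symbols, just a sketch of what I mean (with $V$ the control volume around the Earth, $S$ its surface, $\phi$ a generic averaged property, $\mathbf{F}_\phi$ its flux and $s_\phi$ its net internal production/destruction):
$$\frac{d}{dt}\int_V \phi \, dV = -\oint_S \mathbf{F}_\phi \cdot \mathbf{n}\, dS + \int_V s_\phi \, dV$$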
Is that correct? That would lead to a system of coupled ODEs (I don't think it makes sense to consider PDEs at this level of averaging). How many (statistical?) variables are considered? And how are the several effects over the surface and in the interior of the volume evaluated?
How about the numerical methods?
Dear Filippo Maria Denaro
You are essentially right, but we must also consider ocean circulation, the dynamic model of ice cover, complex chemistry, radiative transfer, surface physics, magnetic fields and biological processes (all to be described over the enormous range of time and space scales).
One can even speculate that the climate is best described by the optimum transportation theory. In a purely mathematical sense, we can consider replacing the traditional set of parabolic equations with elliptic ones (of Monge-Ampère class). Certain parameters of the elliptic system can be obtained from the traditional climate models. This approach is not standard, but I expect that it will be used in the near future.
General Circulation Models (GCMs) have already been designed to solve the physical processes in the atmosphere, ocean, cryosphere and land surface with numerical methods (e.g. the finite volume method for parabolic equations). The core of a GCM is generally based on geophysical fluid dynamics for synoptic and mesoscale motions. What increases the uncertainty of model predictions relevant to climate change is, for example:
1) lack of understanding about physical processes
2) lack of suitable parametrization
3) reliable formulation of tracer fields and their diabatic consequences
4) atmosphere - biosphere interaction, both formulation and parametrization
5) multi-phase cloud processes (micro-physics, thermal radiation, water vapor, Cumulus Precipitation Microphysics, etc)
6) the inclusion of turbulent flow solutions
7) including various feedback mechanisms
8) coupling atmosphere and ocean circulation
Dear Janusz Pudykiewicz
Thanks for the contribution. That is exactly what I would like to know, and whether such terms are considered as fluxes over the surface of the control volume or are modelled as internal production terms.
The ocean circulation is also an internal source if we use a spherical control volume with its surface at the outer atmosphere (so that we can consider the solar radiation over the surface, but a lot of phenomena become internal to the volume). That means we need to couple the solution with a subset of equations for the ocean circulation. I suppose that only statistically averaged equations are used.
Finally, changing from parabolic to elliptic (I am not sure whether you mean elliptic in space-time) means we would face a system of PDEs, doesn't it?
For example, what about the evolution equation for the fraction of CO2, which is coupled to the internal energy?
Maybe these are stupid questions, but I have very limited experience in climatology and I would like to know the basic equations and their numerical solution.
Dear all,
Thank you for your discussions, I learned much from that.
I am not 100% sure, but I believe that global warming and climate change are caused by:
1. The use of fossil fuels: this source is originally located beneath the Earth's surface. We convert it into useful energy, but that energy makes the Earth warmer. In addition, in almost all power plants the energy efficiency is less than 50%.
2. Production of greenhouse gases such as CO2: the final product of almost all industrial processes is CO2.
3. Global population growth: each person is an energy source, and the larger the population, the more energy is consumed.
4. As the globe becomes warmer, the phase change of water at the North and South poles will raise the sea level. As a result, the stable circulation state of the Earth's climate will be broken.
5. Changes in the radius of the Earth's orbit.
From the modeling point of view, if we can take the above terms into account in the global energy balance, I think that climate prediction is viable.
Any reference to scientific papers about the physical/mathematical/numerical models that are commonly used?
Dear Filippo Maria Denaro
I will compile a list of the most promising models and methods shortly.
The current Climate Models are relatively complex but there is still a significant potential for their improvement with respect to numerics and parameterization of unresolved scales. The idea of using the finite volume methods is very good considering their conservation properties.
The most uncertain element in the climate models is parameterization of clouds and aerosols.
As far as the optimum transportation theory in climate research is concerned, this is a new idea and I'm still working on the basic formulation.
Concerning the question about the transport of CO2, this is a classical problem of atmospheric chemistry and I think that any conservative and non-oscillatory advection-diffusion scheme can be used for this purpose.
Dear Janusz Pudykiewicz
Thanks again for your contribution. One of my doubts is indeed about the proper numerical method for variables such as CO2 (or methane, which is largely due to animal activity and strongly affects the total amount of greenhouse gas). Specifically, owing to local production terms, the equation for CO2 does not have the physical property of not producing new extrema. Therefore, the standard non-oscillatory methods usually adopted in FV formulations of hyperbolic equations could artificially limit the generation of new physical extrema, couldn't they?
Dear Filippo Maria Denaro
concerning the numerical methods used in climate modelling, you can find a general overview in this publication
Chapter Numerical Algorithms for ESM: Future Perspectives for Atmosp...
About your general concept of considering, if I understand correctly, a single control volume for the whole Earth: the problem I see with this '0-dimensional' approach is that sources and sinks of energy and CO2 (and other quantities) are not uniformly distributed, so some spatial resolution is necessary in order to make correct quantitative predictions, since the interaction between different regions and different components of the Earth System (oceans and ice sheets as well as the atmosphere) has to be considered. Some researchers think that this resolution does not necessarily have to be very high; an interesting example is, in my opinion, the low-resolution model SPEEDY
https://www.ictp.it/research/esp/models/speedy.aspx
These low resolution models have the advantage that large ensembles of them can be run in parallel to make a probabilistic prediction, which is the only correct one for these intrinsically chaotic systems. One interesting idea which has not been pursued as far as I know is to use modern finite volume or finite element techniques to realize a low resolution model in which the grid is not uniform, but more refined close to complex orography, ice sheets and more generally critical areas. A major obstacle to realize this goal is the fact that presently most parameterizations of subgrid phenomena are strongly resolution dependent and
have to be tuned to a specific resolution.
Concerning the doubt in your last post, the monotonization techniques used in FV and FD methods are meant to prevent the homogeneous advection equation from spuriously introducing unphysical maxima or minima. Obviously this introduces some extra numerical diffusion, but in general this is not enough to significantly affect well-resolved source terms, so I would not worry about this. One real modelling issue is instead that modelling the complete CO2 cycle would require considering a large number of supplementary equations to account for all the chemical reactions involving CO2, which is far from being just a passive tracer. Accounting for all processes is computationally unfeasible, so simplified models have to be employed, possibly reducing the effectiveness of the prediction. Deep knowledge of atmospheric chemistry is required to understand what an appropriate model is. Janusz Pudykiewicz has great experience with models of atmospheric chemistry and can comment further on this.
Many thanks Luca Bonaventura for your contribution.
Yes, I wonder why a conservative FV approach at moderate resolution is not used. It is also not very clear to me what the best control volume to consider would be. A single spherical volume with its surface around the atmosphere? That would necessarily include the source terms as pointwise equations to be considered inside the volume. And what about a control volume having two spherical surfaces, one around the atmosphere and one at the ground level? This way one could work only in terms of added/subtracted fluxes over the two surfaces, and the internal production terms would be confined to the air volume.
A great doubt I have is about the validation of the time integration. I know that researchers try to validate the models by going back in time and checking the results against some available database of past climate (see https://www.climate.gov/maps-data/primer/past-climate). Is that really correct? From a numerical point of view it is not true that a numerical scheme is always reversible in time (I remember some relevant lectures given by P. Roe). Does that have any relevance for the validation? Are there, conversely, models that are validated by starting from an initial condition in the past and running up to the present to match the present conditions?
Sorry for some stupid questions, but I wonder what the state of the art is in a field that I personally do not know...
While I am just a novice interested in this topic, this book which is available freely right here on ResearchGate gave me some initial ideas about what goes into climate models and the numerical methods used.
Book Numerical Techniques for Global Atmospheric Models
In addition I recently read this remarkable paper from 1966, which was basically a method to enhance stability by modifying the discretization to preserve structure, in this case vorticity and kinetic energy. These are topics that even today we are trying to introduce into modern numerical methods.
Computational Design for Long-Term Numerical Integration of the Equations of Fluid Motion: Two-Dimensional Incompressible Flow. Part I
Akio Arakawa
Dear Vikram Singh
Thanks for your contribution. Yes, the Arakawa scheme is a historical method in CFD, now somehow no longer used.
Dear Filippo Maria Denaro
Thank you for your comments. I fully agree with the observation expressed in your last posting: “Yes, I wonder why a conservative FV approach at moderate resolution is not used”.
Based on recent numerical experiments with the advection-reaction-diffusion equation we can identify several algorithms well suited for the problems encountered in climate modeling. One of the natural choices is the finite volume method combined with a flux correction procedure. Flux-Corrected Transport (FCT) procedures eliminate the oscillations generated by the numerical advection terms while retaining the physical extrema generated by sources (emission) and sinks (scavenging and deposition). The same algorithms are successfully applied to aerosol species with the integral terms representing coagulation.
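A minimal 1D sketch in Python of that principle (just a toy of mine, not the operational FCT implementation; the grid, velocity and emission profile are purely illustrative): the limiter acts only on the advective flux, so a local source remains free to create new extrema.

import numpy as np

def tvd_advect_source(q, u, dx, dt, source):
    # One step of 1D scalar advection with a source on a periodic grid:
    # upwind flux plus a minmod-limited (TVD) correction, forward Euler in time.
    # The limiter touches only the advective flux; the source is added freely,
    # so extrema created by emission are not clipped.  Assumes u > 0, Courant <= 1.
    c = u * dt / dx
    dq = np.roll(q, -1) - q                         # q_{i+1} - q_i
    dq_up = q - np.roll(q, 1)                       # q_i - q_{i-1}
    r = dq_up / np.where(np.abs(dq) > 1e-14, dq, 1e-14)
    phi = np.maximum(0.0, np.minimum(1.0, r))       # minmod limiter
    flux = u * q + 0.5 * u * (1.0 - c) * phi * dq   # numerical flux at i+1/2
    return q - (dt / dx) * (flux - np.roll(flux, 1)) + dt * source

# toy usage: a point-like emission creates a new maximum that the limiter keeps
nx, u = 200, 1.0
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
src = np.where(np.abs(x - 0.3) < 0.02, 5.0, 0.0)
q = np.zeros(nx)
dt = 0.4 * dx / u
for _ in range(200):
    q = tvd_advect_source(q, u, dx, dt, src)
print(q.max())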
As far as the state-of-the-art climate models are concerned, there are new contenders for the leading position. Please see for example a brief description of the recent model from the USA
https://climatemodeling.science.energy.gov/projects/ultra-high-resolution-global-climate-simulation
Many thanks, I will read that... What is your opinion of the validation procedure that goes back in time in the numerical integration?
Dear Filippo Maria Denaro
concerning the validation procedure:
1) it is unclear to me what you mean by 'going back in time'. If that means just solving the equations backward in time, that would be impossible with the full models, which contain a number of dissipative terms modelled as diffusion operators, for which integrating backward in time is intrinsically unstable, so I doubt anybody is doing that.
2) there are several stages in the procedure of model validation, and whatever has to do with the numerical discretization is usually included in the tests of the so-called 'dynamical core', i.e. the part of the model that approximates dry inviscid dynamics. These tests are not carried out against climate data, but rather in idealized settings. A number of standard benchmarks are available and new ones are often proposed. Once this stage is passed, the other physical processes are also taken into account and comparison with real data becomes feasible. For a recent example of climate model validation one can have a look at these papers
However, it has to be kept in mind that at this stage, and even if very idealized and simplified descriptions of the more complex atmospheric physics are employed, see e.g.
Ensemble Held Suarez Test with a Spectral Transform Model: Variability, Sensitivity, and Convergence DOI: 10.1175/2007MWR2044.1
Sorry, the previous post was incomplete due to a typing mistake. When model physics beyond the dry inviscid dynamics is included, it should be kept in mind that tests and validation should be of a statistical nature, due to the intrinsically chaotic nature of the modelled system. The 'data' against which one compares are averages over space and time, analogously to what is done when validating models of turbulent flow.
Yes Luca Bonaventura , I read on NOAA's climate.gov that a validation is performed by integrating the model back in time ...
from https://www.climate.gov/maps-data/primer/climate-models
" How are Climate Models Tested?
Once a climate model is set up, it can be tested via a process known as “hind-casting.” This process runs the model from the present time backwards into the past. The model results are then compared with observed climate and weather conditions to see how well they match. This testing allows scientists to check the accuracy of the models and, if needed, revise its equations. Science teams around the world test and compare their model outputs to observations and results from other models."
Dear Filippo Maria Denaro
that is a misleading definition of hindcasting. Probably whoever wrote that explanation was trying to make it accessible to a larger audience, but ended up writing something technically wrong. No complete climate model can be run backwards in time, for the numerical reasons explained in a previous post. Hindcasting means starting the model at some (far) time in the past with some 'plausible' initial conditions to see if the model can reproduce (not so far) past climate data. So, in spite of the name, the integration is performed forward in time.
A discussion of hindcasting experiments can be found for example in
https://pure.mpg.de/rest/items/item_1765216/component/file_1765214/content
(to be quoted as Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi and K.E. Taylor, 2007: Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.)
Totally agree about the technical error in proceeding back in time!
I understand the process of starting from the past and going to the present; however, that raises the great issue of how to get a set of reliable initial conditions in the past....
Dear Filippo Maria Denaro
I agree with the comments of Luca Bonaventura
The first step in the assessment process is to verify the ability of the models to reproduce current climate conditions. According to many studies, this simple test is not easy at all. For example please see the recent article:
“Can Climate Models Reproduce the Decadal Change of Dust Aerosol in East Asia?”
https://doi.org/10.1029/2018GL079376
Thanks, I read about some relevant issues in determining the prognostic variables...
Then, reading about the validation process, I see that only a decade is used; isn't that too small a period of observation to assess the validation?
It is reported:
" As the operational archive at ECMWF does not provide high-resolution analysis data for the years 1978 and 1979, 2015 and 2016 have been selected and relabeled to 1978 and 1979, respectively. Considering the decadal duration of the AMIP experiments, and the climatological evaluation, this relabeling of years is acceptable."
that seems quite a strong assumption about periodicity ...
Just to highlight this discussion
https://www.researchgate.net/post/Global_Warming_Part_1_Causes_and_consequences_of_global_warming_a_natural_phenomenon_a_political_issue_or_a_scientific_debate
In particular, perhaps one of the researchers in this discussion could recommend a flagship model from the period in which the following survey was conducted, which indicates a majority in favour of model predictions of global temperature over the 10-year period since 2008. It being 2019 now, it would be interesting to see how the simulations compare with the records.
https://ncse.com/files/pub/polls/2010--Perspectives_of_Climate_Scientists_Concerning_Climate_Science_&_Climate_Change_.pdf
Dear Filippo Maria Denaro and Luca Bonaventura
I must first tell you that I am in no way involved in weather prediction. Still, I would like to add here that marching back in time is possible. There are two technical difficulties:
1) Marching back in time, diffusion would appear as anti-diffusion. Anti-diffusion is inherently numerically unstable for conventional scientific computing. However, if you look at the physical process, then that is exactly what is actually happening! Is there a way to avoid this numerical pitfall? My answer is a guarded 'yes'. One can use a time filter and carefully calibrate the filter parameter to proceed.
2) The second issue in marching back in time relates to tracing the physical dispersion relation in the numerical sense. This is related to the space-time operator of the convection process. In fact, in acoustics, this is done to trace the source of sound. For such calculations, of course, the diffusion term is omitted (problem 1 above!) and one solves the Euler equations. However, activity in this field is at a nascent stage. People are still not very comfortable with dispersion-relation-preserving (DRP) schemes. Most of the work published so far is not worth reading!
I am still optimistic that these difficulties will be circumvented and that many weather forecasting models will benefit, improving the scope of 'forecasting' and 'hindcasting'.
Dear Tapan K. Sengupta
The first time I encountered the time-reversibility of numerical schemes was at a lecture by Roe at the VKI in 1997. That topic was also addressed here: https://epubs.siam.org/doi/abs/10.1137/S1064827594272785?journalCode=sjoce3
The key is that the scheme induces a so-called "numerical entropy" that has to fulfill some physical constraint. This appears relevant especially in solving the Euler equations. To tell the truth, I have never personally worked with a method that traces the solution back in time. Can the round-off error be reversible?
Dear Filippo Maria Denaro,
If you bring in equilibrium thermodynamics, then entropy always increases. But when you time-march backward with anti-diffusion, the diffused state becomes coherent! You can do a thought experiment for an isolated system and the results are surprising! With the Euler equations, you cannot think of viscous losses, and the shock-related drag will decrease when you study the transients of shock formation.
Why should we need anti-diffusion in the case where the forward time integration has no diffusion at all, as addressed by Roe?
I would be curious to explore more backward/forward time-marching methods to see if we get a reversible solution. I have doubts about reversibility in the case of a non-linear advection equation such as the Burgers equation with zero viscosity. Forward time integration produces a singularity, and we can go on only in the sense of a weak solution. What about reversibility (backward time integration) in this case? Can we reconstruct a regular solution from the discontinuous one? The theory of characteristic lines seems to say no... So, should we always consider regular solutions?
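To make my doubt concrete (standard textbook material, as far as I know): for the inviscid Burgers equation
$$u_t + u\,u_x = 0, \qquad u(x,0) = u_0(x),$$
the characteristics $x = x_0 + u_0(x_0)\,t$ carry a constant value of $u$ and first cross at
$$t^* = -\frac{1}{\min_{x_0} u_0'(x_0)},$$
which is finite whenever $u_0' < 0$ somewhere. Beyond $t^*$ only the weak (entropy) solution exists, and different smooth initial data can evolve into the same shocked profile, so the backward reconstruction of a regular solution is not unique.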
I do not think that you need any anti-diffusion! I am referring to the solution of viscous flows with physical diffusion. When you integrate backward in time, the numerical counterpart of diffusion must then act like anti-diffusion. If you are solving the Euler equations, then of course you will not have to worry about anti-diffusion. If some numerical diffusion is added in the forward-in-time mode, then you will also be in trouble. The anti-diffusion will pick up suitable scales from the round-off error and cause focusing (a catastrophic solution breakdown!). If you want to see reversibility, then you must have extreme accuracy, and you will probably get only limited time reversibility. This is my hunch. Let me know what you find. Also please report your dispersion relations - both physical and numerical values!
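A minimal toy illustration of this focusing effect (my own sketch in Python, not from any production code): the same explicit diffusion step, run with the sign of the diffusivity reversed, amplifies a tiny 'round-off like' perturbation instead of reconstructing the initial state.

import numpy as np

def diffuse(q, nu, dx, dt, nsteps):
    # Explicit FTCS integration of q_t = nu * q_xx on a periodic grid.
    # A negative nu mimics marching the diffusion term backward in time (anti-diffusion).
    r = nu * dt / dx**2
    for _ in range(nsteps):
        q = q + r * (np.roll(q, -1) - 2.0 * q + np.roll(q, 1))
    return q

nx = 128
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
nu = 1.0e-2
dt = 0.2 * dx**2 / nu          # stable for the forward (diffusive) march
q0 = np.sin(x)

q_fwd = diffuse(q0, nu, dx, dt, 200)                  # smooth, slightly damped
rng = np.random.default_rng(0)
noise = 1.0e-12 * rng.standard_normal(nx)             # stand-in for round-off error
q_back = diffuse(q_fwd + noise, -nu, dx, dt, 200)     # anti-diffusion: grid-scale noise explodes
print(np.abs(q_fwd).max(), np.abs(q_back).max())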
I see the case of the presence of physical diffusion, I agree.
I don't know whether backward time integration has really been used in climate models as an assessment....
When considering the backward and forward integration of a dynamic system, it is helpful to introduce the concept of an adjoint model. It is often used in data assimilation to ensure the consistency of a numerical model and measurements. For an example in the meteorological context, please see the following paper:
https://www.tandfonline.com/doi/pdf/10.3402/tellusa.v37i4.11675
The adjoint model is also useful in sensitivity studies to evaluate the role of the various parameters of the system. Given how widely disputed the interpretation of the climate models is, we should consider formal sensitivity analysis in the near future.
Janusz Pudykiewicz : Can you please expand on why and how the adjoint helps one march backward in time? And how does this relate to sensitivity analysis? Is it something like tracing a disturbance back to its origin? That would be very interesting! We do something similar in tracing coherent structures in turbulent flows backward in time to the receptivity stage. However, we do it by post-processing the DNS data.
Tapan K. Sengupta : The concept of an adjoint model is based on the duality relation introduced by Lagrange. In the most abstract sense, the solution of the adjoint problem is an “influence function”. The support of this function is usually associated with the “influence region”. I have used the adjoint methods extensively to solve the inverse problems related to atmospheric tracers.
Based on my experience with the scalar fields in chaotic flows I think that the adjoint techniques are also useful for the study of the Lagrangian coherent structures. I will try to provide a mathematical proof in a few days.
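A toy illustration of the influence-region idea (my own sketch, not an operational model): for a linear discrete model x_{n+1} = A x_n and a scalar observation J = g . x_N, the sensitivity dJ/dx_0 = (A^T)^N g is obtained by sweeping the transposed (adjoint) operator backward in time, and its support marks the initial cells that influence the observation.

import numpy as np

# Toy adjoint sketch (illustrative only): upwind advection of a tracer on a
# periodic 1D grid, x_{n+1} = A x_n, with a scalar "measurement" J = g . x_N.
nx, c, nsteps = 50, 0.5, 40                       # grid size, Courant number, steps
A = (1.0 - c) * np.eye(nx) + c * np.roll(np.eye(nx), 1, axis=0)   # one upwind step

x = np.exp(-0.5 * ((np.arange(nx) - 10.0) / 3.0) ** 2)            # initial tracer blob
for _ in range(nsteps):
    x = A @ x                                     # forward model

g = np.zeros(nx)
g[35] = 1.0                                       # J = tracer value observed at cell 35
lam = g.copy()
for _ in range(nsteps):
    lam = A.T @ lam                               # adjoint (transpose) sweep, backward in time
# lam is dJ/dx0: its peak (around cell 15 here) marks the initial cells that
# influence the observation, i.e. the influence region
print(np.argmax(lam))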
Regarding the applications related to CFD I am convinced that the following paper is interesting as a general illustration of the problem
https://pdfs.semanticscholar.org/83fb/18548ddfab82c02126d9582f048a38de2bff.pdf
You drew my attention to this question, thus two points:
CO2 is not a problem of dynamics or of scales, because it is not reactive at short time scales like other components.
Clouds: in predictive weather models the formation of clouds, especially the most relevant stratocumulus with the highest global coverage, cannot be predicted; it is often added by now-casting: we see it with satellite/lidar, so it must be there.
Harry ten Brink
Thanks for your observations. CO2 is not reactive, but it interacts with life on the ground (fauna) as well as in the ocean (phytoplankton).
But what about all the other variables in the numerical simulation of climate?
Do we lack good models, or do we see numerical issues in the time integration?
My opinion is that a climate model has / must have many simplifications and those are determining factors rather than the mathematics
-air sea exchange
-clouds
etc
For weather, the time horizon for a likely projection is nowadays a week, and then there are the meteorologists who can turn a knob when, for instance, the actual cloud situation appears to be different overnight.
Agreed that CO2 in a climate model is treated as slowly but steadily increasing over the years.
Harry ten Brink
" My opinion is that a climate model has / must have many simplifications and those are determining factors rather than the mathematics "
I totally agree that a wrong physical model has consequences that spread and affect the mathematical and then the numerical approaches.
I am not sure what exactly you mean by "mathematics" ... Every physical model is always translated into a mathematical model, that is, we assume a set of PDEs (or ODEs) that provide a solution satisfying the global physical model assumptions. And the mathematical model is translated into a discrete model to be solved numerically, provided that accuracy, consistency and stability are satisfied.
At present, are we sure that these steps (I mean the mathematical model and its discretization) are performed at the state-of-the-art level in the climatology field?
I insist on the observation that the IPCC report has shown us that all the climate models gave a wrong temperature prediction during the last ten years. A physical, mathematical or numerical failure?
Filippo Maria Denaro ,
Can you be a bit more specific about where the IPCC has shown the models are wrong? I believe you are correct because the main driver of global climate is not the direct effect of CO2 on which the models are based. The main driver of climate is albedo, specifically ice sheets and sea ice, which is controlled by CO2 levels. The current models are based on the lapse rate feedback where the greenhouse effect acts throughout the troposphere. In fact, the greenhouse effect only operates directly on the air near the surface, since it is produced by absorption of radiation from the Earth's surface.
The models are failing because they have the physics wrong. See my
Working Paper "Travels in the Alps" Volume 2, Chapter 35 by H-B de Saussur...
Alastair Bain McDonald
See fig TS.9 (a-c) here https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_TS_FINAL.pdf
These models fail to predict the temperature of the last decade.
Filippo Maria Denaro ,
Only TS.9 (a) is relevant. The other two figures, (b) & (c), only show components of the modeled global temperature. I think you will find, when the graph is brought up to date to 2018, that the discrepancy is only climate variability. Here you can see that the warmest years ever recorded all occurred since 2014: https://assets.climatecentral.org/images/made/2019GlobalTemps_Bars_F_en_title_lg_900_506_s_c1_c_c.jpg
Yes, no doubt about the increase of temperature in absolute terms. But the rate of the anomaly predicted by the models is not correct in the last decade, while the discrepancy is not so evident before.
Why?
Filippo Maria Denaro ,
"There is high confidence that the El Niño-Southern Oscillation (ENSO) will remain the dominant mode of natural climate variability in the 21st century ..." ["TS.5.8.3 El Niño-Southern Oscillation"]
The anomaly was caused by ENSO, and ended with the El Nino in 2016. It seems that although the models can include the effects of the solar cycle and volcanic eruptions, they are still unable to model ENSO.
Alastair Bain McDonald
I see, but if ENSO is not properly modelled, why does the temperature anomaly appear to be much better described in the previous decades?
Filippo Maria Denaro El Niños can't be modeled until they occur. The 2016 El Niño had not occurred in 2014 when that figure was produced.
Alastair Bain McDonald
That makes some sense, but we see that the TS.9 figure covers a much longer interval of years. So I presume that some ENSO modelling was properly added to produce agreement. That seems more a prediction issue than a real model one.
In the report I have not found a clear discussion of the role of ENSO in the prediction.
The chaotic dynamics of ENSO escape any attempt of deterministic prediction. This fact, however, does not limit our ability to simulate climate because we are interested mainly in averaging over several cycles of oscillation. We can make the same comment about other chaotic systems; it is difficult to forecast for a short time but it is relatively easy to predict the asymptotic behaviour.
That is an intriguing issue: what is the exact mathematical definition of the temperature anomaly field T(t) that is computed? The use of averaging over several cycles of oscillation closely resembles the statistical formulations used for the simulation of turbulence. But when we work in turbulence, we know that time and ensemble averaging converge towards the same result only under the ergodicity assumption. And if we have a non-equilibrium energy problem, time averaging cannot be used and one has to adopt an ensemble averaging. It is not clear to me how this latter averaging can be applied in climatology (maybe the time period is subdivided into windows playing the role of realizations?) and, what is more, each type of averaging should produce a specific modelling.
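Just to fix ideas, a toy numerical experiment (entirely my own construction, synthetic numbers, not climate data) on why time and ensemble averages differ for a non-stationary, forced signal:

import numpy as np

rng = np.random.default_rng(0)
nt, n_ens = 2000, 500
t = np.linspace(0.0, 100.0, nt)
forced = 0.01 * t                                    # slow forced "climate" signal
ens = forced + rng.normal(0.0, 0.3, (n_ens, nt))     # members = forced signal + "weather" noise

ensemble_mean_end = ens[:, -1].mean()                # ensemble average at the final time
time_mean_single = ens[0, -500:].mean()              # time average of one member over a window
# the ensemble mean recovers the forced value (about 1.0), while the time average of a
# single realization mixes the trend into the window (about 0.88): for a non-stationary
# signal the two operations are not interchangeable
print(forced[-1], ensemble_mean_end, time_mean_single)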
Filippo Maria Denaro ,
The problem with climate change is that we do not know whether the system does converge. We only know it is non-linear, i.e. anything can happen. That is why global warming is so dangerous. We could well sustain a rapid warming of 5C in as short a period as three years similar to what happened at the start of the Holocene.
Earth scientists don't have all the answers, but they do know that we are entering dangerous territory!
I agree with Alastair Bain McDonald: it is not possible to model ENSO, so one just has to accept that the model is supposed to predict the temperature average over some years.
If one considers the temperatures up to the present, there are several very hot years just after the dataset in TS.9 ends. Considering these later years, the model prediction is very accurate.
Alastair Bain McDonald
I agree. But the issue you are referring to is typical of dynamical systems; for example, in the simulation of turbulence it is one of the main open questions. For this reason, I addressed above the example and the statistical tool that one should use depending on the physics.
Again, the exact definition of the function we are discussing (the temperature anomaly) is not clear to me. Is it defined by a statistical averaging? Is it a time-filtered function? The consequences of the definition are seen in the non-linear equation that governs its evolution. But I can think of the physical conservation equations for mass, momentum and total energy, the chemical reactions, the thermodynamic law and so on.
Does talking about the temperature anomaly mean that we are describing the evolution of the residual internal energy of the system as the difference from the averaged internal energy?
Henrik Rasmus Andersen
According to your observation, the temperature anomaly has to be considered a locally time-filtered function F(t; Delta_t). This is not strictly a statistical definition but implies a somewhat deterministic evolution. That is, at each represented time, the reading of the function is a locally time-averaged temperature. And it seems that the filter width is not centered on t but is taken backward. In my view, that means a real prediction is never really provided, even for longer times from now. It seems that the function is just corrected when the exact data become known, and the range of confidence is constrained to the present time.
Again, it seems that some mathematical contradictions are present in the formulation, doesn't it?
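To make the ambiguity I am asking about explicit, a toy sketch (synthetic numbers, my own construction) contrasting an anomaly defined against a fixed reference-period mean with one defined by a backward (trailing) time filter:

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2021)
temp = 14.0 + 0.008 * (years - 1900) + rng.normal(0.0, 0.15, years.size)   # synthetic record

# definition (a): deviation from a fixed reference-period climatology
baseline = temp[(years >= 1951) & (years <= 1980)].mean()
anom_fixed = temp - baseline

# definition (b): deviation from a trailing 30-year running mean (a backward time filter)
window = 30
anom_trailing = np.full(years.size, np.nan)
for i in range(window - 1, years.size):
    anom_trailing[i] = temp[i] - temp[i - window + 1:i + 1].mean()

print(anom_fixed[-1], anom_trailing[-1])   # the two "anomalies" differ substantially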
Dear Filippo Maria Denaro
Thank you for the comments, I agree with your statements. As far as the ensemble averages are concerned, they are usually calculated from a large set of the model runs (for different models). The link between the use of the ergodic hypothesis in turbulence research and climate modeling is stronger than is usually assumed and there are many interesting new contributions in this area, see for example: Tantet (2016): Ergodic climate theory: variability, stability and response (the text is available on the Web).
https://dspace.library.uu.nl/handle/1874/329240
Dear Janusz Pudykiewicz
Many thanks, I appreciate your valuable comment. What do you think about the fact that an ensemble average over different models is not rigorously the same as an ensemble average over several samples of the same experiment?
And what about the fact that a prediction is first done up to a certain time T, and then corrected with past data (but only after they are experimentally known) to improve the quality of the time-averaged solution only up to T? This is quite different (in my opinion) from what we usually do in simulating turbulence and resembles much more (again in my opinion) a certain extrapolation procedure (Richardson?).
What about the formulation for the governing equation for the temperature anomaly in atmosphere?
Dear Filippo Maria Denaro
Thanks for the thought provoking questions, my answers are as follows:
“...what do you think about the fact that an ensemble average over different models is not rigorously the same as an ensemble average over several samples of the same experiment?”
Yes, that's right, but the models use many similar components and can be considered as a "generator" for calculating the ensemble average.
“And what about the fact that a prediction is first done up to a certain time T, and then corrected with past data (but only after they are experimentally known) to improve the quality of the time-averaged solution only up to T?”
In the case of a climate model, I would say "simulation" rather than prediction. Theoretically, after the sufficiently long simulation time, the model loses its sensitivity to the initial conditions and responds mainly to the boundary conditions and variations of the internal parameters. For this reason, climate modeling is so different from numerical weather prediction where the correct specification of initial conditions is so crucial.
“What about the formulation for the governing equation for the temperature anomaly in atmosphere?”
The answer depends on the type of model. In the near future, with migration to non-hydrostatic models, calculations will be performed for the fields defined as the deviations from the basic reference state.
Dear Janusz Pudykiewicz
These are intriguing topics and drive me to think:
- Changing the model generally implies, implicitly, that the computed variables are likely to be interpreted in a non-unique way. You know that the NSE are apparently used in the same way for modelled simulations like URANS/LES/DES and differ only in the models. So it seems to me that the ensemble averaging also acts on a non-homogeneous set of data.
- The fact that the simulation needs to run for a long time to forget the initial conditions is a common issue in the simulation of turbulence. We let the code run for a long time until a statistical equilibrium is reached and the solution has forgotten the arbitrary initial field. But then we usually collect the field in time and perform a time averaging, which is assumed to converge to the ensemble average thanks to ergodicity.
- I have seen the use of hydrostatic models, but they appear to me to be strongly affected by the assumptions needed to produce realistic results. Such a model should also assume a statistical equilibrium; and if the temperature anomaly is considered as the deviation from the basic reference state, but this reference state is still a function of time, then there is a strong interaction between the fluctuations and the basic temperature, since the average of their product is not zero.
New climate models give warmer scenarios (about 5 °C of warming) for a doubling of CO2, compared with the previous generation (2-4.5 °C).
The reason is not obvious:
https://www.sciencemag.org/news/2019/04/new-climate-models-predict-warming-surge
Clouds and aerosols are still not well taken into account.
What about bioaerosols ? They are more effective as cloud condensation nuclei and ice nuclei.
Article Bioaerosols in the Earth System: Climate, Health, and Ecosys...
The unprecedented developments in numerical methods in the field of applied meteorology, combined with the new computer systems, will lead to more realistic climate models over the next decade. One of the remarkable models of this new generation is developed by the Climate Modeling Alliance (CLiMA), which is operated by researchers from Caltech, MIT, Naval Postgraduate School and JPL / NASA. Given the expertise of these institutions, I believe that their new model will be on the level described as the state of the art in the original question.
https://phys.org/news/2018-12-climate-built-ground.html
Recently, Wozniak et al. showed that including the inner particles from pollen rupture as condensation nuclei could suppress up to 30% of the precipitation over continents.
doi:10.1029/2018GL077692
Janusz Pudykiewicz : It is nice to hear about the new initiative. Surely, it will be a great step forward. Are you also trying something along this line? Thank you for sharing the link.
It is interesting to read about such progress but, actually, from reading many papers I am not able to distinguish the lines along which the advances are planned. It seems a lot of work is being done on adding new physical models as well as on improving the old ones. But is the numerical formulation really at the state of the art? For example, what is the role of the numerical simulation of turbulence in the atmosphere and oceans coupled with the global model? Accepting the fact that we cannot resolve all the scales, are the modern formulations now closer to modern LES, or do they remain tied to the historical RANS formulation?
Filippo Maria Denaro
There are problems with turbulence in simulating the boundary layer. e.g.
"Physical laws and equations of motions, which govern the planetary boundary layer dynamics and microphysics, are strongly non-linear and considerably influenced by properties of the Earth's surface and evolution of the processes in the free atmosphere. To deal with this complicity, the whole array of turbulence modelling has been proposed. However, they are often not accurate enough to meet practical requests. Significant improvements are expected from application of a large eddy simulation technique to problems related to the PBL." https://en.wikipedia.org/wiki/Planetary_boundary_layer"
I can't tell you what these problems are but I can tell you the answer. The turbulence is the result of absorption of outgoing long wave (IR) radiation, not convection as is widely believed.
There are some papers here which discuss turbulence as it affects climate modelling: https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/stable-boundary-layer
Alastair Bain McDonald
At present I am not sure whether the state-of-the-art knowledge about the simulation of turbulence in the LES formulation has been fully transferred to solving the problems you addressed. I am not sure that the two scientific fields fully communicate with each other.
But I can give my view: in such a complex problem, where there are so many gaps in the physical models, one thing should be certain: any type of global simulation has to deal with an unresolved range of characteristic scales. In other words, it seems to me that the problem is - by definition - in the framework of a (very) large eddy simulation.
Tapan K. Sengupta
Thank you for asking this question.
Significant errors in the climate and weather prediction models are caused by the IMEX class of time integration schemes (in meteorology they are described as semi-implicit schemes), because of the distortion of the phase speed of gravity waves. I'm actively developing exponential time integration methods to eliminate these errors.
The second activity is directed towards eliminating the paradigm of splitting the dynamical core and the physical processes, by including the effects of the unresolved scales directly in the Jacobian operator.
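To illustrate the kind of error I mean, a toy scalar example (my own construction, not the actual scheme): a single oscillation du/dt = i*omega*u stands in for a linear gravity-wave mode; the exponential integrator applies exp(i*omega*dt) exactly, while a trapezoidal (semi-implicit-like) step keeps the amplitude but distorts the phase when omega*dt is not small.

import numpy as np

omega, dt = 2.0, 0.5
z = 1j * omega * dt

g_exact = np.exp(z)                        # exponential integrator: exact amplification factor
g_trap = (1 + 0.5 * z) / (1 - 0.5 * z)     # trapezoidal (A-stable) amplification factor

phase_error_per_step = np.angle(g_trap) - omega * dt
print(abs(g_trap), phase_error_per_step)   # modulus 1 (no damping), but the phase lags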
Alastair Bain McDonald
For the description of atmospheric turbulence please consult the excellent book by John Wyngaard “Turbulence in the atmosphere”:
https://www.cambridge.org/core/books/turbulence-in-the-atmosphere/60C43BE9C14A0511C0B03ACD42731068#
Filippo Maria Denaro and Janusz Pudykiewicz
Janusz,
Thank you for recommending Wyngaard (2010). It has confirmed my idea that turbulence in the boundary layer is an outstanding problem. However, I won't be pursuing it further, as the mathematics of turbulence is a bit above my pay grade.
Filippo,
I have read the introductions to the chapters on Turbulence in the Atmospheric Boundary Layer, and I am still of the opinion that the turbulence there, in both the convective and stable boundary layers, is caused by radiation, not by convection. Note:
Article A Scale-Adaptive Turbulent Kinetic Energy Closure for the Dr...
write " The turbulent gray-zone problem appears as one of the most challenging issues in atmospheric boundary layer modeling. It was elaborated in detail by Wyngaard (2004) and is associated with the transition from fully three-dimensional subgrid-scale (SGS) turbulence, characteristic of small-scale models such as large-eddy simulation (LES) models, to a one-dimensional (1D) SGS representation characteristic of coarse-resolution weather and climate models."
If this problem is of interest, or you want help with radiation schemes then get back to me.
Alastair Bain McDonald
Maybe the specific name we give to the driving force of turbulence in geophysical problems (atmosphere + ocean) is not the key issue. Clearly we have the Sun providing energy to a volume of fluids (air + water) by means of an energy flux (radiation) through the surface of the volume (the border of our atmosphere). In this volume we also have ice, ground, vegetation and human activity acting as sources. There is the further action of rotation and gravity, as well as a lot of other physical effects to take into account.
If we start from the fundamental physical law expressed by the transport theorem, the conservation equations for mass, momentum and total energy are written as a set of integral equations. Then we could define a finite volume over which such laws are still valid and pragmatically define a finite length separating the large scales from the world of the SGS scales. One of the questions I imagine is hard to answer concerns the real character of what lies below this length. Is such a world really "small"? Can we really define a clear border between resolved and modelled physics?
But I would be happy to see a very general approach for the conservation equations expressed in these terms.
As specific comments on the JAS article you posted:
- have a look at volume-averaging as a filter operation; it is nothing but the main principle to be used in the integral conservation equations (see the definition I write just after this list).
- The concept of the "eddy viscosity" is historical, but more modern SGS models could be introduced; for example, deconvolution-based SGS models coupled with the dynamic procedure seem quite feasible for very large resolved scales.
- I believe that the total energy equation should be considered, the kinetic energy equation being only a part of the global energy transfer.
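To be explicit about what I mean by volume-averaging as a filter (standard LES notation, for a generic scalar $\phi$):
$$\bar{\phi}(\mathbf{x},t) = \frac{1}{|V(\mathbf{x})|}\int_{V(\mathbf{x})} \phi(\mathbf{x}',t)\, d\mathbf{x}',$$
i.e. a top-hat filter whose width is the size of the finite volume itself; the integral conservation law written for $V(\mathbf{x})$ is then exactly an evolution equation for $\bar{\phi}$, with the unresolved (SGS) contributions appearing only through the surface fluxes.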
Dear Alastair Bain McDonald
Thank you for the message, I share your opinion on the state of our knowledge in the field of atmospheric turbulence.
Regarding your observation in the commentary to Filippo Maria Denaro
“I can't tell you what these problems are but I can tell you the answer. The turbulence is the result of absorption of outgoing long wave (IR) radiation, not convection as is widely believed”
I would like to add a small clarification:
The role of the interaction between radiative transfer and turbulence in a medium containing water vapor and CO2 is discussed in Brunt's classic and very accessible paper: “The Transfer of Heat by Radiation and Turbulence in the Lower Atmosphere” published in the Proc. Royal Soc. in 1929. Brunt presents an elegant and simple discussion which remains valid even from the point of view of recent progress.
Concerning the role of convection, it is obvious that convective instability is one of the main factors responsible for the emergence of turbulence in the Earth's atmosphere; we simply do not have sufficient resolution to simulate the effect in large scale models and the parameterization that is valid for a wide range of scales is still unknown.
Dear Janusz Pudykiewicz and Alastair Bain McDonald
It is very unlikely that a good and general parameterization for such a wide range of scales will be developed. One of the reasons is the difficulty of formulating a model for physics in which backward (backscatter) effects are present, such that they can be the source of large-scale effects.
But I suspect a further reason, owing to some experience I had with the transfer of state-of-the-art knowledge from the turbulence community to the geophysical one. Some years ago, when I submitted a paper with my colleague to a geophysical journal, among the several reviewer comments I received these statements:
" The oceanographic community needs a LES model that can do a decent job in simulating turbulence in stratified shear flows.
... Engineering CFD community is more advanced in numerical techniques and SGS designs than the oceanographic LES community. "
I suspect that in the climatology community, too, it is difficult to transfer the knowledge about the best formulations for turbulence.
What bothers me is that, reading papers about climatology simulation, I find the governing equations with their parameterizations, the numerical approach and the relevance of the grid resolution poorly described. I still wonder what the finest resolution achieved at present is.
Just today I found this article
Article Invited review Mechanisms of millennial-scale atmospheric CO...
Again, the focus of the analysis is on the relevance of the physical models describing fine details of the energy transfer. However, no information about the relevance of the mathematical and numerical formulations seems to be considered.
Maybe I am wrong, but it seems that the climatology community has no clear feeling for the relevance of a parameterization once it is translated into a discrete scheme....
Dear Filippo Maria Denaro
Thank you for your thoughtful comments.
One of the reasons for the weak interaction between the two communities mentioned in your first note is the difference between their respective mathematical formalisms. In all CFD applications we have the Navier-Stokes equations, whereas in climate modeling the primitive meteorological equations (PME) are used. This situation will probably change in the future with the arrival of a new generation of atmospheric models that do not rely on a hydrostatic approximation.
The set of primitive meteorological equations has never been studied as rigorously as the Navier-Stokes equations. The story is quite complicated, but I am convinced that the attached paper will shed some light on this issue.
Given the fundamental difference between the Primitive Meteorological Equations and the Navier-Stokes equations, it is not surprising that the interface with the parameterizations is difficult. This fact is particularly evident when considering convection, turbulence and radiative transfer.
The radiative transfer equations are relatively advanced, but their interface with the turbulent mixing is not yet completely rigorous. Gone are the times of the simple approximation proposed by Brunt. At the same time, the philosophy of ad hoc operator splitting is the basis of the construction of the model. As always, the optimal solution will be reached through an iterative process involving many disciplines. Perhaps this will be the essence of the state of the art in climate simulation (in addition to more sophisticated dynamical cores).
Dear Janusz Pudykiewicz
Many thanks for the details you provided.
Several years ago I discussed the limitations of hydrostatic models during a presentation. My feeling was that the geophysical community was aware of the steps required to go beyond that limit. I had been thinking that the NSEs were more widely used ....
Dear Filippo Maria Denaro
Progress in our ability to predict the distribution of atmospheric variables has been possible only because rigorous methods of physics have been applied to weather problems. This fact is reflected in the title of Bjerknes' seminal paper which started all activities in the field of Numerical Weather Prediction (NWP) and climate simulation: Das Problem der Wettervorhersage, betrachtet vom Standpunkte der Mechanik und der Physik (The problem of weather prediction, considered from the viewpoints of mechanics and physics).
Coincidentally, the paper was published just one year before the 1905 Einstein papers marked the beginning of a deeper understanding of Nature.
The criterion to consider the modeling to be the state of the art is to adhere to the basic principles of physics. Perhaps by doing this we can eliminate some of your doubts with respect to the clarity of the model description expressed by the statement:
“What bothers me is that, reading papers about climatology simulation, I find the governing equations with their parameterizations, the numerical approach and the relevance of the grid resolution poorly described. I still wonder what the finest resolution achieved at present is”.
We must also answer your question about the model resolution. Most current computer codes used for climate simulation are operated on grids with a horizontal resolution of tens of km. This resolution is relatively coarse and very specific methods of parameterizing convection and radiative effects are required.
The highest horizontal resolution used in the experimental simulation of the atmosphere with non-hydrostatic models is around a few hundred meters (NICAM executed on an icosahedral grid). This poses a huge computational problem, but the grid still remains very coarse compared to CFD models. Parameterizations are therefore inevitable.
I have attached a picture showing the changes of the resolution in climate modelling from
https://scied.ucar.edu/longcontent/climate-modeling
Many thanks Janusz Pudykiewicz for your contribution.
Yes, I am aware of grid resolution in the "horizontal" plane but I read
" Model grids for atmospheric (including climate) models are three dimensional, extending upward through our atmosphere. Early climate models typically had about 10 layers vertically; more recent ones often have about 30 layers. Because the atmosphere is so thin compared to the vast size of our planet, vertical layers are much closer together as compared to the horizontal dimensions of grid cells. Vertical layers might be spaced at 11 km intervals as compared to the 100 km intervals for horizontal spacing. "
This is what I really do not understand. Why don't they fully use the 3D NSE with better vertical resolution? To make a comparison with the CFD field in engineering, we know that even with a typical RANS formulation (which is the closest to the parameterization used in climatology) the vertical resolution is relevant and must be quite adequate. Just as an example, I was thinking about the relevance of describing with sufficient accuracy the energy flux as a BC at the frontier of the atmosphere volume, as well as the interactions at the air-ocean and air-ground interfaces. And, thinking more about the fundamental laws, I would always write the conservation equations for mass, momentum and total energy, plus the other required transport equations, in integral form. This is especially relevant in the case of low resolution: this way you guarantee that the fundamental physical principles for the averaged quantities are also fulfilled by the numerics. Conversely, I see that the quasi-linear form of the equations is solved, a fact that has well-known issues in the numerics (just think about spurious errors in wave propagation). Has the analysis of the coupling between the spatial and temporal discretization of such equations ever been performed in depth? Splitting formulations can introduce further issues.
At present, state-of-the-art numerical simulations of turbulence have reached the capability of computing the solution on about 10^10-10^11 spatial nodes. Is such computational power used for producing the future IPCC reports?
Maybe my doubts are stupid and have already been addressed and very well answered by the climatology community?
Janusz Pudykiewicz
I found interesting this very recent article
Article Assessing the scales in numerical weather and climate predic...
where the potential 1 km resolution is addressed. Yet finite differences are indicated as the chosen numerical method to solve the non-hydrostatic model. I think that a great improvement would be obtained by using a conservative FV method and a proper time integration with phase-error constraints.
Filippo Maria Denaro
Thank you for the interesting comments, questions and a copy of the paper.
The ratio of the horizontal to vertical scales in the atmosphere is such that the Primitive Meteorological Equations of the current climate models are the natural choice for global modelling. However, models based on such equations are executed on grids with extreme stiffness. It is not uncommon to have a mesh interval of 50000 m in the horizontal direction and 2 meters, close to the ground, in the vertical direction. The flow resolved on such a grid is not easy to interpret in CFD terms; however, it is sufficiently meaningful for the large-scale flow.
The emergence of the recent finite volume methods in meteorology will lead to elimination of the traditional problems of the meteorological models. The new models will present other challenges such as the so-called gray zone discussed on page 9 of Neumann et al. (2019).
High resolution grids will not eliminate the need for parameterizations; however, it will be easier to use existing CFD codes to facilitate the development of such parameterizations.
Janusz Pudykiewicz
From what I understand, one of the main problems of stiffness appears with the use of FD methods, which are well known to have such issues on this kind of grid.
Conversely, FV methods are much more feasible: even on very complex and deformed grids you compute the fluxes over the faces in physical space and then sum them according to a physical criterion. The telescoping property of the numerical flux function preserves conservation even for a high ratio between horizontal and vertical grid sizes. Furthermore, yes, a high-resolution grid still requires parameterization, but the associated characteristic length is strictly induced by the dimension of the finite volume. Conversely, FD methods do not identify such a scale in a clear way. I worked a lot on these ideas when I published a paper in JCP; FV and FD methods clearly have deep differences.
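A trivial numerical check of the telescoping property mentioned above (my own 1D toy with strongly stretched cells; the fluxes are arbitrary numbers, since only the bookkeeping matters):

import numpy as np

# Updating each cell by the difference of its face fluxes changes the total
# integral only through the two boundary fluxes, whatever the cell-size ratio is.
rng = np.random.default_rng(2)
n = 64
dx = rng.uniform(0.001, 1.0, n)             # cell sizes with ratio up to ~1000
q = rng.random(n)
flux = rng.random(n + 1)                    # arbitrary numerical fluxes at the n+1 faces
dt = 0.1

q_new = q - dt * (flux[1:] - flux[:-1]) / dx
total_change = np.sum(q_new * dx) - np.sum(q * dx)
print(total_change, -dt * (flux[-1] - flux[0]))   # identical up to round-off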
Energy conservation can be a fundamental issue. The surface flux of energy from the Sun should be applied on the external sphere (the frontier of the atmosphere) with the total energy considered as a constraint, and the same could be applied to the fluxes at the ground (see the balance sketched below). Only by using an integral finite-volume representation is discrete conservation ensured. Imagine what can happen using FD, with local spurious production of energy due to the local truncation error.
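Written for the total energy of the atmospheric control volume, the constraint I have in mind looks like this (the split of the boundary into the outer frontier S_top and the ground S_ground, the flux vector q and the internal source Q are only illustrative placeholders):

\[ \frac{d}{dt}\int_{V} \rho E \, dV \;=\; -\oint_{S_{top}} \mathbf{q}\cdot\mathbf{n}\, dS \;-\; \oint_{S_{ground}} \mathbf{q}\cdot\mathbf{n}\, dS \;+\; \int_{V} Q \, dV \]

A discrete FV scheme reproduces this global budget exactly as the sum of the cell budgets, whereas a pointwise FD discretization only approximates it up to the truncation error.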
In my opinion, the key point is that we do not need to search for the real pointwise resolution that the (strong) solution of the differential equations (and the implied FD method) would produce; we need to compute a set of physically averaged variables for climatological purposes. I think that the framework of weak solutions would be more suitable.
A further doubt I would like to ask everyone about, to improve my understanding.
What is the meaning of the resolved variables in the PME formulation? From the parameterization it seems to me that the whole range of scales is modelled, which is congruent with a RANS formulation. But RANS is a formulation that makes physical sense for statistically steady phenomena. So what is the physical meaning of the time derivatives in the PME, what do they represent? Should we think of something like a URANS formulation? If yes, what is the characteristic time introduced by the time-averaging?
Filippo Maria Denaro and Readers,
Since its introduction by Richardson in the 1920s, the set of primitive meteorological equations has been essential to the advancement of meteorological models. The resolved variables in these equations are filtered by removing the spectral components that represent scales below a certain resolution limit. The dissipation terms on the right are in fact derived in the same way as in RANS.
The mathematical analysis of the primitive meteorological equations is scattered among different sources. Perhaps the work of Lions et al., New formulations of the primitive equations of atmosphere and applications (Nonlinearity, 5, 1992, pp. 237-288), can clarify the most important points of the last question. This paper is strongly recommended for anyone interested in the mathematical foundations of the dynamical cores of modern climate models.
Dear Janusz Pudykiewicz
I also remember the famous paper in Tellus (1950) with von Neumann, where the vorticity equation was solved for weather forecasting under the barotropic assumption.
However I see a contradiction in the statement " The resolved variables in these equations are filtered by removing the spectral components that represent scales below a certain resolution limit. The dissipation terms on the right are in fact derived in the same way as in RANS. "
Even if one apparently resolves a finite bandwidth of components by means of spectral filtering, when a parameterization like the additive dissipation is added in a RANS manner (actually with a spurious time derivative added to the equations), the resolved characteristic scales are parameterized too. This is a concept that clearly distinguishes the approach from a real LES, wherein the SGS model is built to account only for the unresolved components.
When we solve for the time-dependent variables, it is implied that they assume the meaning of local time-averaged quantities, and the role of the spatially filtered variable is somehow lost with respect to the original LES aim. In my opinion, what is computed is by definition already a statistical variable, no longer a filtered one. Maybe I am wrong...
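Just to fix the two operators I am contrasting (the kernel G of width Delta and the averaging time T are purely illustrative):

\[ \bar{u}(\mathbf{x},t) \;=\; \int G_{\Delta}(\mathbf{x}-\mathbf{x}')\, u(\mathbf{x}',t)\, d\mathbf{x}' \qquad \text{(LES-type spatial filter)} \]
\[ \langle u \rangle(\mathbf{x},t) \;=\; \frac{1}{T}\int_{t-T}^{t} u(\mathbf{x},t')\, dt' \qquad \text{(URANS-type local time average)} \]

My doubt is precisely which of the two the parameterized PME variables actually represent and, in the second case, what fixes the value of T.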
It seems we are now touching the topics I asked about in a different post:
https://www.researchgate.net/post/URANS_what_is_the_meaning_for_statistically_steady_flows_and_what_compared_to_LES
On the other hand, the time-dependent formulation could imply the use of a statistical ensemble averaging, but then the parameterization does not appear fully consistent with it.
Dear Filippo,
I would like to add a word of caution about what we add in an eddy-viscosity-type model: we add diffusion and NOT dissipation. This is a common mistake and it is not a matter of semantics. In fact, it is well known that diffusion can cause instability, even in physical problems. The same holds for numerical diffusion. That is why classical SGS models in LES will slowly disappear; implicit spectral filtering or explicit filters are a much better way to go! We have been highlighting this for nearly the last three decades!
Dear Tapan K. Sengupta
you are definitely right in the framework of our common world of LES of incompressible flows, where an eddy-diffusivity SGS term is added to the filtered momentum equation. And we know that such diffusion only implicitly introduces a kinetic-energy dissipation, as the kinetic-energy equation is usually not explicitly solved.
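Just to make the distinction explicit: for an incompressible resolved field with an eddy-viscosity closure tau_ij = 2 nu_t S_ij (the bar denoting resolved quantities), the work of the model term in the resolved kinetic-energy budget splits identically as

\[ \bar{u}_i\,\partial_j\!\left(2\nu_t \bar{S}_{ij}\right) \;=\; \partial_j\!\left(2\nu_t \bar{S}_{ij}\,\bar{u}_i\right) \;-\; 2\nu_t\, \bar{S}_{ij}\bar{S}_{ij} , \]

where the first term only redistributes (diffuses) resolved kinetic energy while the second is (for nu_t >= 0) the sign-definite sink acting as dissipation, in line with your distinction.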
I think that Janusz Pudykiewicz used the term dissipation because, talking about climatology models, we are in a different framework of PDE modelling. The climatology community uses a set of equations where the energy transfer is explicitly taken into account in the computation. From what I understand of the PME and other mathematical models, the kinetic-energy equation is also supplied with a parameterization that, in that equation, plays the real role of an additional dissipation.
In conclusion, I see (though maybe wrongly) that there are combined effects of the parameterization: both momentum and energy are modelled, and I am not sure that the diffusion of momentum then corresponds to the dissipation contribution added to the equation for kinetic energy. For that reason I suggested above that using the total energy equation would be a better choice.
Maybe we should better clarify this issue...
Dear Filippo Maria Denaro
I surely understand the implication of what you are trying to say. But I am not sure that I have come across any definitive document showing that diffusion leads to attenuation of the total mechanical energy, not merely of the kinetic energy. In this regard, last year we proposed a new theory based on disturbance enstrophy that helps track disturbances from the evanescent stage to the coherent-structure stage of flow transition. There we have definitely shown that diffusion can actually lead to physical instability. Of course, there is also a part of diffusion that is strictly dissipative. What one finds as proof in many textbooks and monographs is for homogeneous periodic flows; our work is for inhomogeneous flows and should be viewed as generic. Hence, I would look forward to any new input from you.
Dear Filippo Maria Denaro , Tapan K. Sengupta and Readers,
The mention of a famous article in Tellus (Charney, Fjørtoft and von Neumann) brings historical reflections quite relevant to our discussion.
The results presented in this paper constitute the first numerical prediction obtained with the non-divergent barotropic vorticity equation, which represents an extreme simplification of the real atmospheric flow. However, this fundamental work paved the way for the early primitive-equation models of the 1960s. The main concern at that time was to specify properly balanced initial mass and velocity fields; the second problem was the limitation on the time step due to gravity waves.
Such nuances as the selection of the turbulence closure used in the simulations have not been formally considered with appropriate mathematical rigour.
At the same time, it is clear that the diffusion of momentum is essential to transfer kinetic energy from the resolved to the unresolved part of the spectrum. The details of the modern consistent diffusion-dissipation scheme are discussed by Burkhardt and Becker (2005): A Consistent Diffusion-Dissipation Parameterization in the ECHAM Climate Model.
Janusz Pudykiewicz
“At the same time, it is clear that the diffusion of momentum is essential to transfer kinetic energy from the resolved to the unresolved part of the spectrum.”
This statement is quite intriguing...
No, why should we introduce such an additive diffusion? In principle, the characteristic climatological length scale would produce a numerical scale separation (by means of the grid as well as of the discrete scheme) at the level of an inviscid transfer of energy. A dynamic scale-similar model should work fine. What I suspect is that the added diffusion of the model is a numerical artefact necessary to obtain stable solutions. And there I again see the key more in RANS/URANS formulations than in a truly spatially-filtered formulation.
Am I wrong?
Janusz Pudykiewicz , Filippo Maria Denaro and other interested readers,
I fully agree with the statement that diffusion is responsible for a cascade of enstrophy, which can be shown for incompressible inhomogeneous flows. Interested readers can find, in the following, a step-by-step procedure demonstrating this:
Article "Diffusion in inhomogeneous flows: Unique equilibrium state ...
Any critique is welcome.
Filippo Maria Denaro : You are right. One can solve canonical problems from the laminar to the fully developed stage without any need for added diffusion by using compact schemes. On the contrary, to ensure isotropy of the diffusion operator, one ends up filtering the physical diffusion (Laplacian) drastically at high wavenumbers. This can be compensated, while at the same time preventing high-wavenumber numerical instabilities, by adding hyperviscosity. In our case, we add a minimal amount of a fourth-order diffusion term while discretizing the convection terms (a crude sketch of the idea follows).
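Just to illustrate the spirit in a few lines (this is a crude 1D stand-in with an ordinary central difference, not the CCD scheme of the cited paper; the coefficient eps is a hypothetical tuning knob):

import numpy as np

def advect_with_hyperviscosity(u, dx, dt, a=1.0, eps=0.01):
    # One explicit step for u_t + a*u_x = 0 on a periodic grid:
    # second-order central difference for the convection term, plus a small
    # fourth-difference damping term added while the convection is discretized.
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    d4u = np.roll(u, -2) - 4*np.roll(u, -1) + 6*u - 4*np.roll(u, 1) + np.roll(u, 2)
    return u - dt * a * dudx - eps * d4u

# The fourth difference of smooth, well-resolved components is tiny, so the damping
# acts essentially only near the grid cut-off, which is the point of adding a minimal
# amount of fourth-order diffusion rather than a Laplacian acting on all scales.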
Yes, I agree, but the climatology models seem strictly founded on the concept of turbulent diffusion/dissipation distributed over all scales, in a typical RANS spirit. In practice, the theoretical scale separation appears to be cancelled by the parameterization...
Filippo Maria Denaro : Please take a look at calibration of discretization of diffusion terms and consequences in:
Article Further improvement and analysis of CCD scheme: Dissipation ...
This is not related to atmospheric dynamics, but it is relevant to the solution of the Navier-Stokes equations via a good model problem.
Thanks for pointing to the paper. Even using such sophisticated methods, I wonder what happens when the grid size is so large (as it is in climate models) that even the concept of a grid-induced scale separation becomes debatable. Are we sure that a computational grid in common use in climatology is fine enough to place the cut-off in an inviscid inertial region? Or should we consider the possibility that it lies somewhere around the peak of the energy spectrum? And what can be said, even for modern formulations, about the resulting solution in such an under-resolved condition?
In terms of numerical-analysis semantics: what if the local truncation error enters the solution through derivatives that are not O(1), so that the proper scaling of the error is lost? If that is to be formalized in a theoretical framework, we have to rethink the general construction of the local truncation error (see the toy example below).
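As a toy example of what I mean, take the standard second-order central difference of a resolved variable u on a grid of spacing h:

\[ \frac{u(x_i+h)-u(x_i-h)}{2h} \;=\; u'(x_i) \;+\; \frac{h^{2}}{6}\,u'''(x_i) \;+\; O(h^{4}) . \]

The nominal O(h^2) behaviour presumes u''' = O(1); when the grid is so coarse that the unresolved scales make the higher derivatives large, the error term is no longer subordinate and the formal order of accuracy loses its meaning.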
Great job, this is the idea I have been thinking about for months. I guess the cloud-transfer algorithm could be evaluated precisely by CFD. I am really interested in starting this research program; it would be my pleasure to cooperate with you on this idea. Filippo Maria Denaro
Amirreza Rezayan
you should definitely ask scientists working specifically in climatology for a collaboration. At present I have no practical activity planned in climatology. Thanks for your interest.
Dear Filippo Maria Denaro Tapan K. Sengupta and Readers,
Thank you for your comments and question.
The most characteristic property of the atmospheric turbulence spectrum is the presence of two different regions. The first region has the k^{-3} spectrum typical of quasi-geostrophic turbulence, while the second represents 3-D turbulence with the well-known k^{-5/3} spectrum. The transition occurs around the wavenumbers corresponding to a length scale of approximately 600 km.
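In compact form, the spectrum I am describing can be summarized as (taking k_t only as a rough wavenumber translation of the quoted ~600 km transition scale):

\[ E(k) \;\propto\; \begin{cases} k^{-3}, & k < k_t \\ k^{-5/3}, & k > k_t \end{cases} \qquad k_t \approx \frac{2\pi}{600\ \mathrm{km}} \approx 1\times 10^{-5}\ \mathrm{m^{-1}} . \]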
The dissipation in the atmospheric models with a horizontal resolution of about 50 km is due to the combined action of the circulation systems at all scales below, with the most important role being played by the convective motions. The corresponding flow structures are very complicated and it is unlikely that they will be fully resolved, even by the expected high-resolution global models using rather enormous computing resources.
Another interesting property of mixing in atmospheric models is the tendency to develop very fine structure in scalar fields, even on a coarse grid. The attached figure describes a simple hemispheric Lagrangian tracer experiment performed with a model having a resolution of about 100 km. It is obvious that the sub-grid structure of the scalar field, with entangled filaments, is quite complex and that the corresponding fluxes, even in such a simple case, cannot be calculated using Fickian diffusion.
Richardson studied the problem of anomalous atmospheric diffusion in 1926, just five years after his pioneering work on weather prediction by numerical process. Fractional-order diffusion models are very likely to be needed to describe the dissipation process in atmospheric models. This problem will be addressed in the new generation of non-hydrostatic models mentioned previously.
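For reference, one common form of such a model replaces the Laplacian by a fractional power (the exponent alpha and the coefficient kappa shown here are purely illustrative):

\[ \frac{\partial c}{\partial t} \;=\; -\,\kappa\,(-\Delta)^{\alpha/2} c , \qquad 0 < \alpha \le 2 , \]

with alpha = 2 recovering classical Fickian diffusion and alpha < 2 allowing the long, non-local jumps suggested by Richardson's observations of anomalous diffusion.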
Dear Janusz Pudykiewicz
The atmospheric pattern has very complex physics; from my CFD background I can only raise my doubts:
1) What happens when no parameterization is added and the equations are solved as they stand? Is the numerical solution still stable? Are there comparisons, like the ones we usually make for LES with an explicit SGS model and without any model?
2) In 2D/quasi-2D atmospheric flow there is not only a different scaling of the energy transfer; I think the backscatter transfer is also relevant. It is known that small vortical structures merge into larger structures. This phenomenon cannot be parameterized in any way by a diffusive/dissipative turbulence model. What is done to prevent this problem?
3) Again, I doubt that the computational grid size is the right measure for defining a scale separation when a RANS-like parameterization is supplied. Using a grid resolution of 100 km, where does the Nyquist wavenumber pi/h lie on the 2D energy spectra? (A rough estimate of what I mean is sketched below.)
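The rough estimate I have in mind: with h = 100 km the shortest representable wavelength is 2h and the grid cut-off wavenumber is

\[ k_c \;=\; \frac{\pi}{h} \;=\; \frac{\pi}{10^{5}\ \mathrm{m}} \;\approx\; 3\times 10^{-5}\ \mathrm{m^{-1}}, \qquad \lambda_{min} = 2h = 200\ \mathrm{km} , \]

nominally below the ~600 km transition scale you quote; but since the effective resolution of a discrete scheme is usually several grid intervals rather than 2h, the effective cut-off may well sit in, or even above, the transition region, and that is the origin of my doubt.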
Dear Janusz Pudykiewicz ,
There is a collection of data in a couple of papers by Nastrom and Gage in 1984 (Nature) and 1985 (JAS), which provide the energy spectra for the zonal and meridional wind, along with the potential temperature. The data clearly show the k^{-3} spectrum followed by the k^{-5/3} spectrum. These experimental data are just the opposite of what some scholars have suggested; see page 59 of the monograph by Doering and Gibbon (Applied Analysis of the Navier-Stokes Equations, Cambridge Univ. Press, 1995). While the experimental data of Nastrom and Gage make intuitive sense, I do not know why the theoretical analysis predicts the opposite. Do you have any explanation?
Incidentally, the 2D part of the spectrum accounts for more than 98% of the total kinetic energy; the corresponding 3D part is less than 2%!
Dear Filippo Maria Denaro
Thank you for your comments; I agree with all the observations. Concerning the pattern of mixing shown in the figure from my comment, I would like to add some clarifications. The filamentary structure produced by the low-resolution winds illustrates stirring by a 3D flow in the same manner as in the attached paper by Pierrehumbert and Yang, which addresses a 2D problem. The general conclusion from this experiment is that the complexity appears even before the effects of turbulence are included.
You can interpret the mixing shown in the figure as an example of chaotic advection in the sense of the definition introduced by Aref. In actual atmospheric mixing, chaotic advection interacts with turbulent mixing acting on the scale of the filaments produced by the stretching of material lines. In order to have confidence that the model correctly represents the dissipation of momentum and the chemical reactions, both stirring and turbulent mixing should be represented realistically. This fact is often overlooked in the simulation of transport processes in atmospheric models.
Dear Janusz Pudykiewicz
many thanks for the article. Unfortunately, the topic of the Lagrangian transport of a passive tracer, as described in the article, introduces further theoretical gaps into the numerical results of climate models... Starting from the assumption that we are in spectrally under-resolved conditions on the common computational grid, we do not have a real pointwise velocity field for which the tracer obeys dx/dt = v(x(t),t). You can easily see that if the RHS of the equation is the filtered velocity, then what appears on the LHS is not the position of a pointwise particle but has to do with a local averaging of the material volume around a position. What is worse in this scenario is that we cannot even speak of a filtered velocity as in a pure LES, because the resolved field is contaminated at the large scales by the action of the RANS-like parameterization. In this framework, talking of transport in terms of a pure Lagrangian equation appears critical. What do we really see from a theoretical point of view? In my opinion, if we use an averaged field and compute Lagrangian trajectories, they make more physical sense if the transported particle has some inertia, so that it "better" follows the averaged velocity and is less subject to the unresolved velocity fluctuations (see the sketch below). What are then the "filaments" and what do they really represent?
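To make the inertia argument concrete, the minimal sketch I have in mind is a Stokes-drag type model with a hypothetical relaxation time tau_p, where v_bar is the resolved (averaged) velocity:

\[ \frac{d\mathbf{x}_p}{dt} \;=\; \mathbf{v}_p , \qquad \frac{d\mathbf{v}_p}{dt} \;=\; \frac{\bar{\mathbf{v}}(\mathbf{x}_p,t) - \mathbf{v}_p}{\tau_p} . \]

In the limit tau_p -> 0 this degenerates to the passive tracer dx/dt = v_bar(x,t), where nothing accounts for the unresolved fluctuations; that is exactly the gap I am pointing at.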
I asked above whether any papers address the location of the grid-induced filter frequency on the 2D energy spectra. Where is it located?