In simulations of stirred tank reactors, the region near the impeller must be finely resolved to adequately capture turbulence quantities such as the energy dissipation rate. However, doing so can produce cells that are smaller than a given bubble diameter (as can also happen in boundary inflation layers). What error does such a scenario introduce?

How does one balance the need for a fine mesh (to capture turbulence properties) against the prevailing bubble size?
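To make the trade-off concrete, here is a minimal sketch (all numerical values are assumptions for illustration, not from the question) that compares candidate cell sizes against a representative bubble diameter and against the Kolmogorov scale estimated from the near-impeller dissipation rate. It uses the rough rule of thumb, common in Euler-Lagrange (point-bubble) simulations, that a cell should not be smaller than the bubble it contains:

```python
# Illustrative sketch with assumed values: flag cell sizes that violate
# the point-bubble assumption (cell smaller than the bubble) and show how
# far each cell size sits from the Kolmogorov scale near the impeller.

nu = 1.0e-6         # kinematic viscosity of water, m^2/s
eps_impeller = 5.0  # dissipation rate near the impeller, W/kg (assumed)
d_bubble = 2.0e-3   # representative bubble diameter, m (assumed)

# Kolmogorov length scale: eta = (nu^3 / eps)^(1/4)
eta = (nu**3 / eps_impeller) ** 0.25

for dx in (5.0e-3, 1.0e-3, 0.5e-3):  # candidate cell sizes, m
    ratio = dx / d_bubble
    point_bubble_ok = ratio >= 1.0   # rough criterion: cell >= bubble
    print(f"dx = {dx*1e3:.1f} mm | dx/d_b = {ratio:.2f} | "
          f"dx/eta = {dx/eta:.0f} | point-bubble OK: {point_bubble_ok}")
```

The sketch shows why the conflict arises: resolving even a modest fraction of the dissipation range pushes the cell size well below a millimetre-scale bubble, at which point the interphase closure (drag, turbulent dispersion) is being applied on a scale where the bubble is no longer "small" relative to the cell. Some authors quote a stricter margin than dx/d_b >= 1 (on the order of 1.5), but the exact threshold depends on the coupling model used.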
