Large eddy simulation

LES is currently applied in a wide variety of engineering applications, including combustion,[3] acoustics,[4] and simulations of the atmospheric boundary layer.

Such a resolution can be achieved with direct numerical simulation (DNS), but DNS is computationally expensive, and its cost prohibits simulation of practical engineering systems with complex geometry or flow configurations, such as turbulent jets, pumps, vehicles, and landing gear.

Such low-pass filtering, which can be viewed as a combined temporal and spatial averaging, effectively removes small-scale information from the numerical solution.
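As a rough illustration of this filtering step, the sketch below applies a top-hat (box) filter to a one-dimensional periodic velocity signal; the signal, grid size, and filter width are illustrative assumptions, not values from the article.

```python
import numpy as np

def box_filter(u, width):
    """Top-hat (box) filter of the given width (in cells) applied to a periodic 1D field."""
    kernel = np.ones(width) / width
    padded = np.concatenate([u[-width:], u, u[:width]])   # periodic padding
    filtered = np.convolve(padded, kernel, mode="same")
    return filtered[width:width + len(u)]

# Large-scale wave plus small-scale fluctuations; the filter strongly damps the latter.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(32.0 * x)
u_bar = box_filter(u, width=8)
```

The filtered field u_bar retains the large-scale wave while the high-wavenumber content is attenuated, which is the information that must instead be represented by a subgrid-scale model.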

There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation.
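One filtering operation widely used for compressible LES is the density-weighted (Favre) filter; for a generic field $\phi$ and density $\rho$ it can be written as

$$\tilde{\phi} = \frac{\overline{\rho\,\phi}}{\bar{\rho}},$$

where the overbar denotes the ordinary filter. With this definition the filtered continuity equation keeps its unfiltered form, which reduces the number of subfilter terms that must be modeled.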

Large eddy simulation involves solving the discretized filtered governing equations using computational fluid dynamics.

Ghosal[17] found that for low-order discretization schemes, such as those used in finite volume methods, the truncation error can be of the same order as the subfilter-scale contributions unless the filter width $\Delta$ is considerably larger than the grid spacing $\Delta x$.

Implicit filtering recognizes that many numerical schemes are dissipative in much the same way as a subfilter-scale model, so that the numerical discretization itself can be treated as the LES filter.

While this takes full advantage of the grid resolution and eliminates the computational cost of evaluating a subfilter-scale model term, it is difficult to determine the shape of the LES filter implied by the numerical scheme, which is associated with several numerical issues.

Theoretically, a good boundary condition for LES should have the following features:[20] (1) it provides accurate information on flow characteristics, i.e. velocity and turbulence; (2) it satisfies the Navier–Stokes equations and other physics; and (3) it is easy to implement and to adapt to different cases.

The synthesized turbulence, however, does not satisfy the physical structure of a fluid flow governed by the Navier–Stokes equations.
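As a rough illustration of the synthesis approach, the sketch below generates streamwise velocity fluctuations on an inlet plane as a sum of random Fourier modes; the mode count, amplitudes, and spectrum are illustrative assumptions rather than any specific published generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_fluctuation(y, z, n_modes=50, u_rms=0.1, k_min=2.0 * np.pi, k_max=64.0 * np.pi):
    """Streamwise velocity fluctuation on an inlet plane built from random Fourier modes."""
    u_prime = np.zeros_like(y)
    for _ in range(n_modes):
        k = rng.uniform(k_min, k_max)            # wavenumber magnitude
        theta = rng.uniform(0.0, 2.0 * np.pi)    # orientation of the mode in the plane
        phase = rng.uniform(0.0, 2.0 * np.pi)    # random phase
        amp = u_rms * np.sqrt(2.0 / n_modes)     # equal energy per mode (illustrative choice)
        u_prime += amp * np.cos(k * (y * np.cos(theta) + z * np.sin(theta)) + phase)
    return u_prime

y, z = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
u_inlet = 1.0 + synthetic_fluctuation(y, z)      # mean inflow plus synthetic fluctuations
```

Fields generated this way reproduce a target intensity and length scale but, as noted above, do not satisfy the Navier–Stokes equations and typically need a development distance downstream of the inlet.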

The second method involves a separate precursor calculation to generate a turbulent database that can be introduced into the main computation at the inlets.

However, generating turbulent inflow by precursor simulations requires substantial computational capacity.

Researchers examining the application of various types of synthetic and precursor calculations have found that the more realistic the inlet turbulence, the more accurately LES predicts the results.

Without a universally valid description of turbulence, empirical information must be utilized when constructing and applying SGS models, supplemented with fundamental physical constraints such as Galilean invariance.[9]
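For concreteness, the classical Smagorinsky model referred to below sets the eddy viscosity from the resolved strain rate; a minimal sketch on a uniform 2D grid (the field layout and the value of the constant are illustrative assumptions) is:

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, c_s=0.17):
    """Eddy viscosity nu_t = (C_s * Delta)^2 |S| from a resolved 2D velocity field,
    stored as arrays indexed [i, j] with x along axis 0 and y along axis 1."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))   # |S| = sqrt(2 S_ij S_ij)
    delta = np.sqrt(dx * dy)        # filter width tied to the local grid spacing
    return (c_s * delta)**2 * s_mag
```

In the dynamic procedure discussed next, the constant c_s is no longer prescribed but is computed from the resolved field itself.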

The significance of the identity is that if one assumes that turbulence is self-similar, so that the SGS stresses at the grid and test-filter levels have the same functional form with the same coefficient, then the Germano identity provides an equation from which that coefficient can be determined from the resolved field rather than prescribed in advance.

Lilly noted that the Germano identity requires the satisfaction of nine equations at each point in space (of which only five are independent) for a single unknown quantity, the model coefficient, so that the problem is overdetermined; he proposed resolving this by minimizing the residual in a least-squares sense.
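A minimal sketch of this least-squares contraction, assuming the Germano-identity tensors L_ij and M_ij have already been evaluated from the resolved field (the array shapes and names are illustrative assumptions):

```python
import numpy as np

def lilly_coefficient(L, M, eps=1e-30):
    """Pointwise least-squares coefficient C = (L_ij M_ij) / (M_ij M_ij).
    L and M are arrays of shape (3, 3, nx, ny, nz); any averaging over
    homogeneous directions is applied afterwards."""
    num = np.einsum('ij...,ij...->...', L, M)
    den = np.einsum('ij...,ij...->...', M, M)
    return num / np.maximum(den, eps)
```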

The latter fact in itself should not be regarded as a shortcoming, as a priori tests using filtered DNS fields have shown that the local subgrid dissipation rate in a turbulent flow is almost as likely to be negative as it is positive.

This so-called "backscatter" of energy from small to large scales indeed corresponds to negative C values in the Smagorinsky model.

Simply setting the negative values to zero (a procedure called "clipping") with or without the averaging also resulted in stable calculations.
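A sketch of these two stabilization strategies, assuming a pointwise coefficient field on a grid whose first two axes are statistically homogeneous (e.g. the wall-parallel directions of a channel flow); the field layout is an assumption for illustration:

```python
import numpy as np

def stabilize_coefficient(c, average=True, clip=True):
    """Stabilize a pointwise dynamic coefficient field c[i, j, k]."""
    if average:
        # Replace the coefficient by its average over the homogeneous directions,
        # keeping only the wall-normal variation.
        c = np.broadcast_to(c.mean(axis=(0, 1), keepdims=True), c.shape).copy()
    if clip:
        # "Clipping": zero out negative values that would imply a negative eddy viscosity.
        c = np.maximum(c, 0.0)
    return c
```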

Lilly's modification of the Germano method followed by a statistical averaging or synthetic removal of negative viscosity regions seems ad hoc, even if it could be made to "work".

An alternate formulation of the least-squares minimization procedure, known as the "Dynamic Localization Model" (DLM), was suggested by Ghosal et al.[27] In this approach one first defines a residual tensor from the tensors appearing in the Germano identity, obtained by substituting the modeled SGS stresses at the grid and test-filter levels.

This tensor then represents the amount by which the subgrid model fails to respect the Germano identity at each spatial location.

Instead, one defines a global error over the entire flow domain as the volume integral of the squared residual tensor, where the integral ranges over the whole fluid volume.
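On a uniform grid this global error can be approximated as a simple sum; the sketch below assumes the pointwise residual tensor has already been assembled for a candidate coefficient field (the array layout is an illustrative assumption):

```python
import numpy as np

def global_error(E_res, cell_volume):
    """Volume integral of E_ij(x) E_ij(x) approximated as a sum over a uniform grid.
    E_res has shape (3, 3, nx, ny, nz)."""
    return float(np.sum(E_res * E_res) * cell_volume)
```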

The integral equation that results from minimizing this global error is solved numerically by an iterative procedure, and convergence was found to be generally rapid when a preconditioning scheme was used.

This DLM(+) model was found to be stable and yielded excellent results for forced and decaying isotropic turbulence, channel flows and a variety of other more complex geometries.

The DLM can be modified in a simple way to take into account this physical fact so as to allow for backscatter while being inherently stable.

This approach, though more expensive to implement than the DLM(+), was found to be stable and resulted in good agreement with experimental data for a variety of flows tested.

Furthermore, it is mathematically impossible for the DLM(k) to result in an unstable computation as the sum of the large scale and SGS energies is non-increasing by construction.

The Dynamic Model originated at the 1990 Summer Program of the Center for Turbulence Research (CTR) at Stanford University.

A series of "CTR-Tea" seminars celebrated the 30th anniversary of this important milestone in turbulence modeling.

Large eddy simulation of a turbulent gas velocity field.
A velocity field produced by a direct numerical simulation (DNS) of homogeneous decaying turbulence.
The same DNS velocity field filtered using a box filter at two different filter widths.