3.8. Discretization

In this chapter, we discretize the governing equations and their boundary conditions in space and time [1]. There are several approaches that can be used to accomplish this. Aria currently first semi-discretizes in space using the Galerkin method, and then discretizes in time using finite differences. Alternative approaches include first semi-discretizing in time using finite differences and then discretizing in space, or discretizing in space and time simultaneously using a space-time finite element formulation. For clarity, we will be using the stationary and transient forms of the general conservation equation.

In Weak Statement of the Stationary Problem, we use the method of weighted residuals [2] to obtain a weak statement for the general conservation equation, given by general conservation equation. We then use this weak statement to form the associated Galerkin approximations in Galerkin Approximation for the Stationary Problem; in Weak Statement of the Transient Problem, we form a similar weak statement for the transient problem; in Galerkin Approximation for the Transient Problem, we discretize the resulting weak statement in space using Galerkin’s method and use finite differences to discretize in time. We continue the analysis in Finite Element Approximation, where we introduce the finite element approximations. The development in this chapter closely follows that of Becker, et al. [22]. The interested reader should consult this reference for a more detailed presentation.

3.8.1. Weak Statement of the Stationary Problem

We begin with the stationary form of the general conservation equation, which we rewrite as

(3.128)\div(\symSlnVar\v) = - \div\f + \symSlnVar_\Vol

which is defined over each subdomain \Omega_{\iota}. Because of the possibility of discontinuities in the material properties at the interface between two subdomains, the second derivatives of \symSlnVar may not exist on such interfaces. This means that it is not possible to simply integrate (3.128) over the entire domain \Omega; indeed, the lack of well-defined second derivatives is precisely the reason why this equation was written over each subdomain in the problem statement given by the domain definition in the first place. Accordingly, we begin by multiplying (3.128) by an arbitrary, admissible test function, w, and integrating the result over a single subdomain \Omega_{\iota}. This integral may be written as

(3.129)\int_{\Omega_{\iota}} w \left[ \v \cdot \grad \symSlnVar + \symSlnVar  \div \v + \div \f - \symSlnVar_\Vol \right]  \,d \Omega = 0

Next, in order to integrate the flux term in (3.129) by parts, we introduce the following identity:

(3.130)\int_{\Omega_{\iota}}
\div \left( w \f \right)  \,d \Omega
    = \int_{\Omega_{\iota}} \grad w \cdot \f \,d\Omega + \int_{\Omega_{\iota}} w \div \f \,d\Omega

Upon substitution of (3.130) into (3.129), we obtain

(3.131)\int_{\Omega_{\iota}}
  \left( w \v \cdot \grad \symSlnVar  -  \grad w \cdot \f - w \symSlnVar_\Vol \right)  \,d \Omega + \int_{\Omega_{\iota}} \div \left( w \f \right) \,d\Omega = 0

where the assumption of a solenoidal velocity field (\div \v = 0) eliminates the term containing \div \v. Next, we introduce the Gauss Divergence Theorem, which converts a volume integral to a surface integral, and may be written as

(3.132)\int_{\Omega_{\iota}} \div \left( w \f \right) \,d\Omega
         = \int_{\partial \Omega_{\iota}} w \f \cdot \n \,d\Gamma

Upon substitution of (3.132) into (3.131) and rearranging terms, we obtain

(3.133)\int_{\Omega_{\iota}} \left( w \v \cdot \grad \symSlnVar  - \grad w \cdot \f - w \symSlnVar_\Vol \right) \,d \Omega
+\int_{\partial \Omega_{\iota}} w \f \cdot \n \,d\Gamma = 0.

The next step in obtaining the weak statement is to sum the contributions from (3.133) over each subdomain \Omega_{\iota} to obtain the integral over the entire domain, \Omega. This is now valid, because no second derivatives appear in (3.133). Hence, we may write

(3.134)\sum \limits_{\iota = 1}^{N}\int_{\Omega_{\iota}} \left( w \v \cdot \grad \symSlnVar  - \grad w \cdot  \f -  w \symSlnVar_\Vol \right) \,d \Omega = - \sum \limits_{\iota = 1}^{N}\int_{\partial \Omega_{\iota}} w f_n \,d\Gamma,

where f_n \equiv \f \cdot \n.

The sum of the volume integrals on the left-hand side of (3.134) may be readily combined into a single integral, namely

(3.135)\sum \limits_{\iota = 1}^{N}\int_{\Omega_{\iota}} \left( w \v \cdot \grad \symSlnVar  - \grad w \cdot \f - w \symSlnVar_\Vol \right)\,d \Omega \\
   = \int_{\Omega} \left( w \v \cdot \grad \symSlnVar  - \grad w \cdot \f - w \symSlnVar_\Vol \right) \,d \Omega

The sum of the surface integrals on the right-hand side of (3.134) requires more care. The surface integral associated with each subdomain \Omega_{\iota} consists of two parts. The first part is over the portion of \partial\Omega_{\iota} which intersects the exterior surface \partial \Omega, which we denote \partial\Omega_{\iota} \cap \partial\Omega. The second part is what remains, consisting of internal surfaces that divide one subdomain from another, which we denote \partial\Omega_{\iota}\setminus \partial\Omega. The sum of surface integrals on the right-hand side of (3.134) is therefore

(3.136)\sum \limits_{\iota = 1}^{N}\int_{\partial \Omega_{\iota}} w f_n \,d\Gamma
=
\sum \limits_{\iota = 1}^{N}\int_{\partial \Omega_{\iota} \cap
  \partial \Omega} w f_n \,d\Gamma +
\sum \limits_{\iota = 1}^{N}\int_{\partial \Omega_{\iota}\setminus
  \partial \Omega} w f_n \,d\Gamma

Now, the sum of the integrals over the exterior surfaces \partial\Omega_{\iota} \cap \partial\Omega is simply the integral over the entire boundary surface \partial \Omega. The second sum in (3.136) reduces to the sum of the integrals of the flux jumps \ljump f_n \rjump over each interior interface [22]. Since the problem statement specifies that these jumps are zero, (3.136) reduces to

(3.137)\sum \limits_{\iota = 1}^{N}\int_{\partial \Omega_{\iota}} w f_n \,d\Gamma = \int_{\partial \Omega} w f_n \,d\Gamma

Upon substitution of (3.137) and (3.135) into (3.134), we have

(3.138)\int_{\Omega} \left( w \v \cdot \grad \symSlnVar - \grad w \cdot  \f \right) \,d \Omega = -\int_{\partial \Omega} w f_n \,d\Gamma + \int_{\Omega}    w \symSlnVar_\Vol \,d\Omega

Finally, we describe the class of admissible test functions w, which appear in (3.138). Clearly, w and its first derivatives must be sufficiently well-behaved that the integrals in (3.138) are well-defined. Furthermore, the value of w should vanish on \Gamma_\symSlnVar, which we previously defined in Statement of the Stationary Problem to be that portion of \partial \Omega on which \symSlnVar is specified.

Now, we can completely replace the set of differential equations and interface conditions given in Governing Equations with the following alternative problem: Find the function \symSlnVar (\bfx), such that \symSlnVar  = \symSlnVar_b on \Gamma_\symSlnVar, w=0 on \Gamma_\symSlnVar, and

(3.139)\int_{\Omega} \left(
  w \v \cdot \grad \symSlnVar  - \grad w \cdot \f \right) \,d \Omega = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar } w f_n \,d\Gamma +
\int_{\Omega}    w \symSlnVar_\Vol \,d\Omega

for all admissible w.

Equation (3.139) is often called a weak statement of the problem given by (3.128) because the second derivatives of the scalar variable \symSlnVar (or, alternatively, first derivatives of the flux \f) do not appear. More specifically, the original differential equation (3.128) requires that the solution \symSlnVar(\bfx) have second derivatives that exist in each subdomain \Omega_{\iota}, whereas (3.139) only requires that the first derivatives of \symSlnVar(\bfx) be integrable over the entire domain \Omega. We remark that a solution to the original problem (3.128) is also a solution to the weak statement (3.139). However, the converse is not necessarily true: a solution to the weak statement is a solution to the original problem only if it is sufficiently smooth.

3.8.2. Well-posed Problems

In order that a unique, well-behaved solution exist, for both the stationary and transient problems, many requirements on the input data must be satisfied. By input data we mean the supplied domain geometry, initial conditions, boundary conditions, and material coefficients. A fully detailed enumeration of the requirements on the input data is beyond the scope of this manual, and some requirements fall in the realm of research. We have, however, hinted at some of the known requirements at various points in the text, and below we mention a few other issues. This is an important topic, and we hope in a future edition to cover it more fully.

As an aside, consider the meaning of the term well-posed problem. We say that a problem is well-posed if a solution exists, is unique, and depends continuously on the data; otherwise the problem is said to be ill-posed. A well-posed problem is not necessarily more physically realistic than an ill-posed one. As for the term stability, it is most often used to mean that the solution depends continuously on the data: if we change the problem slightly, the solution changes only slightly.

Regarding specific requirements on the heat source \symSlnVar_\Vol: for the stationary problem, in order that the solution exist, a compatibility condition between \symSlnVar_\Vol and the applied boundary data must be satisfied. The compatibility condition is a statement of the global conservation principle for \Omega. Its exact form depends upon the individual terms which appear in the differential conservation equations, as well as the applied boundary data [22]. For the transient problem, any function \symSlnVar_\Vol that is sufficiently smooth over each subdomain \Omega_{\iota} is admissible, as long as it is finite and integrable in space and time.

When using a set of boundary conditions that do not specify the temperature at any point, the solution is defined only up to an arbitrary constant, i.e., the solution is not unique. Aria may or may not be able to detect this situation automatically, so the user should check for it.

Regarding the smoothness of input functions, we must assume that the necessary derivatives of input quantities exist in order that the solution, \symSlnVar(\bfx, t), exists. In other words, there are restrictions that must be imposed on the smoothness of the various input data, including material properties and boundary conditions. The smoothness restrictions are stronger in the strong form of the equations, and correspondingly weaker in the weak form of the equations discussed in this chapter. It is the weak form of the equations to which the Aria program actually tries to provide an approximate solution. No matter what the smoothness restrictions, Aria cannot reliably enforce them—especially inside user subroutines.

To summarize, it is primarily up to the user to provide valid admissible input data. Exercise caution to ensure a proper problem formulation.

3.8.3. Galerkin Approximation for the Stationary Problem

Solutions to the weak statement given by (3.139) lie in a certain infinite-dimensional space of functions that have derivatives; we denote this space H. Galerkin’s method seeks approximate solutions to the weak statement represented by a linear combination of a finite set of basis functions \{\symWeightFunc_1, \symWeightFunc_2, \ldots,\symWeightFunc_N\} that defines a finite-dimensional subspace H^h \subset H. We then seek a function \symSlnVar_h \in H^h of the form

(3.140)\symSlnVar_h = \sum \limits_{i=1}^{N}\alpha_i \symWeightFunc_i,

which satisfies the weak form in H^h and where the \alpha_i are unknown constants. If the \symWeightFunc_i are Lagrange basis functions, then the constants \alpha_i = \symSlnVar_i: the constants correspond to the evaluation of the approximation at a node.
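As a concrete illustration of this nodal-value property, the following minimal Python sketch (not part of Aria; all names are illustrative) builds piecewise-linear Lagrange "hat" functions on a 1D mesh and verifies that, since \symWeightFunc_i(x_j) = \delta_{ij}, the expansion with coefficients chosen as nodal values reproduces those values exactly:

```python
import numpy as np

def hat(i, x, nodes):
    """Piecewise-linear Lagrange ("hat") basis function psi_i on a 1D mesh."""
    n = len(nodes)
    y = np.zeros_like(x, dtype=float)
    if i > 0:
        left = (x >= nodes[i - 1]) & (x <= nodes[i])
        y[left] = (x[left] - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    if i < n - 1:
        right = (x >= nodes[i]) & (x <= nodes[i + 1])
        y[right] = (nodes[i + 1] - x[right]) / (nodes[i + 1] - nodes[i])
    return y

nodes = np.linspace(0.0, 1.0, 5)
alpha = nodes**2        # choose coefficients as nodal values of g(x) = x^2
x = nodes.copy()
# evaluate the expansion at the nodes; psi_i(x_j) = delta_ij implies u_h = alpha
u_h = sum(alpha[i] * hat(i, x, nodes) for i in range(len(nodes)))
```

The same property fails for non-interpolatory bases (e.g., hierarchical or spectral bases), where the coefficients have no direct nodal interpretation.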

Upon substitution of (3.140) into (3.139) and allowing the test function w to be each element of H^h, we arrive at the discrete form of our weak statement: Find the function \symSlnVar_h(\bfx) \in H^h, such that \symSlnVar_h = \symSlnVar_b on \Gamma_\symSlnVar and

(3.141)\sum \limits_{j=1}^N \left [ \int_{\Omega} \left( \symWeightFunc_i \v \cdot  \grad \symWeightFunc_j  + \grad \symWeightFunc_i \cdot  k \grad  \symWeightFunc_j \right) \,d \Omega \right] \alpha_j \\ = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar} \symWeightFunc_i f_n \,d\Gamma + \int_{\Omega}  \symWeightFunc_i \symSlnVar_\Vol d\Omega,

for all \symWeightFunc_i, i=1,\dots,N. Here, we have assumed a diffusive-style flux such that \f = -k \grad \symSlnVar.

Here, (3.141) represents N fully discrete equations for the unknown constants \alpha_j. For known flux functions f_n \neq f_n(\symSlnVar), we may write this system of equations as the matrix system

(3.142)\begin{array}{ll}
\sum \limits_{j=1}^N \left( U_{ij} + K_{ij} \right) \alpha_j = f_i, & i = 1,\ldots,N
\end{array}

where

(3.143)U_{ij} & = \int_{\Omega} \symWeightFunc_i \v \cdot  \grad \symWeightFunc_j \,d\Omega \\

 K_{ij} & = \int_{\Omega} \grad \symWeightFunc_i \cdot k \grad \symWeightFunc_j  \,d\Omega \\

    f_i & = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar} \symWeightFunc_i f_n \,d\Gamma
            + \int_{\Omega}  \symWeightFunc_i \symSlnVar_\Vol \,d\Omega

The matrix U_{ij} represents the convection matrix, K_{ij} the diffusion matrix, and f_i the forcing vector. Note that if \v \neq 0, then the matrix system in (3.142) is non-symmetric.

The finite element method provides a convenient, systematic way to construct the basis functions \symWeightFunc_i. We address this important issue in Finite Element Approximation, in which we also discuss the case where f_n is a function of the temperature, wherein the essential difference is that the associated boundary integral has contributions to the matrix entries K_{ij} as well as the forcing function f_i.

3.8.4. Weak Statement of the Transient Problem

The development of the weak statement for the time-dependent case closely follows that for the stationary case, which we developed in Weak Statement of the Stationary Problem. We begin with the time-dependent scalar conservation equation, which we rewrite here as

(3.144)\left( \frac{\partial \symSlnVar}{\partial t} + \v \cdot \grad \symSlnVar \right) + \div \f  - \symSlnVar_\Vol = 0

The key difference between (3.144) and (3.128) is the introduction of the new independent variable, t, and the associated time derivative term. All of the discussion in Weak Statement of the Stationary Problem regarding subdomain interfaces, flux jumps, and solution smoothness is relevant for the present case. If the method of weighted residuals is applied to (3.144), for the Statement of the Transient Problem, the following weak statement is obtained: Find the function \symSlnVar(\bfx, t), such that \symSlnVar(\bfx, t_0) = \symSlnVar_0, \symSlnVar(\bfx,t)=\symSlnVar_b on \Gamma_\symSlnVar, and w=0 on \Gamma_\symSlnVar, which satisfies

(3.145)\int_{\Omega} \left(w \frac{\partial \symSlnVar}{\partial t} + w \v \cdot \grad \symSlnVar  - \grad w \cdot \f \right) \,d \Omega = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar} w f_n \,d\Gamma + \int_{\Omega} w \symSlnVar_\Vol \,d\Omega

for all admissible w.

3.8.5. Galerkin Approximation for the Transient Problem

As in Galerkin Approximation for the Stationary Problem, let the infinite-dimensional space of solution functions be denoted H. We again introduce a finite set of basis functions \{\symWeightFunc_1, \symWeightFunc_2, \ldots,\symWeightFunc_N\}, which defines a finite-dimensional subspace H^h \subset H. Given a value of t, we then seek a function \symSlnVar_h(\bfx, t) \in H^h of the form

(3.146)\symSlnVar_h(\bfx, t) = \sum \limits_{i=1}^{N}\alpha_i(t) \symWeightFunc_i(\bfx)

which satisfies the weak form in H^h, and where the \alpha_i(t) are unknown, time-dependent coefficients. If the \symWeightFunc_i are Lagrange basis functions, then the parameters \alpha_i(t) = \symSlnVar_i(t): the interpolation parameters correspond to the evaluation of the interpolation function at a given node. We assume that t is bounded,

t_0 \leq t \leq t_1 < \infty

Upon substitution of (3.146) into (3.145), and setting w = \symWeightFunc_i, we arrive at the semi-discrete form of our weak statement: Find the function \symSlnVar_h(\bfx, t) \in H^h, where t_0 \leq t \leq t_1, such that \symSlnVar_h(\bfx, t_0) = \symSlnVar_0, \symSlnVar_h(\bfx, t) = \symSlnVar_b on \Gamma_\symSlnVar, and

(3.147)\sum \limits_{j=1}^N \left [ \int_{\Omega} \symWeightFunc_i \symWeightFunc_j \,d \Omega\right]  \dot{\alpha}_j(t) + \sum \limits_{j=1}^N \left [ \int_{\Omega} \left( \symWeightFunc_i \v \cdot  \grad \symWeightFunc_j  + \grad \symWeightFunc_i \cdot k \grad \symWeightFunc_j \right) \,d \Omega \right] \alpha_j(t)  = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar} \symWeightFunc_i f_n \,d\Gamma + \int_{\Omega}  \symWeightFunc_i \symSlnVar_\Vol \,d\Omega

for each \symWeightFunc_i, i=1,\dots,N, and where \dot{\alpha}(t) indicates the time derivative of \alpha(t).

Here, (3.147) represents N ordinary differential equations for the N unknown functions \alpha_j(t). For known flux functions f_n \neq f_n(\symSlnVar), we may write this system of equations as the matrix system

(3.148)\begin{array}{ll}
 \sum \limits_{j=1}^N  M_{ij} \dot{\alpha}_j(t) + \sum \limits_{j=1}^N \left( U_{ij} + K_{ij} \right) \alpha_j(t) = f_i, & i = 1,\ldots,N
 \end{array}

where

(3.149)M_{ij} & = \int_{\Omega} \symWeightFunc_i \symWeightFunc_j d\Omega \\

U_{ij} & = \int_{\Omega} \symWeightFunc_i \v \cdot  \grad \symWeightFunc_j \,d\Omega \\

K_{ij} & = \int_{\Omega} \grad \symWeightFunc_i \cdot k \grad \symWeightFunc_j  \,d\Omega \\

f_i & = -\int_{\partial \Omega\setminus \Gamma_\symSlnVar} \symWeightFunc_i f_n \,d\Gamma
            + \int_{\Omega}  \symWeightFunc_i \symSlnVar_\Vol \,d\Omega

The matrix M_{ij} is often referred to as the mass or capacitance matrix. Note that if \v \neq 0, then the matrix system in (3.148) is non-symmetric. Also, if any of the parameters in (3.149), such as the material properties, applied fluxes, or volumetric heating, are time dependent, then the corresponding matrices or vectors will also be time dependent. The system of ordinary differential equations given in (3.148) must be numerically integrated in time. We discuss this issue in Time Discretization.

3.8.6. Mass Lumping

In contrast with the consistent mass formulation, a lumping procedure replaces each row of the mass matrix by its row sum, effectively “lumping” the contribution at the corresponding node; i.e., the first term in (3.148) becomes

(3.150)\dot{\alpha}_i(t) \sum \limits_{j=1}^N  M_{ij}

Mass lumping can also be applied to source terms or boundary fluxes via an analogous procedure.
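A minimal sketch of row-sum lumping for a 1D consistent mass matrix follows (illustrative, not Aria's implementation); note that the lumped matrix is diagonal and conserves the total "mass" of the consistent matrix:

```python
import numpy as np

# Assemble the consistent mass matrix for linear elements on [0, 1].
n_el = 4
nodes = np.linspace(0.0, 1.0, n_el + 1)
N = len(nodes)
M = np.zeros((N, N))
for e in range(n_el):
    h = nodes[e + 1] - nodes[e]
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # consistent element mass
    idx = [e, e + 1]
    M[np.ix_(idx, idx)] += Me

# Lumped matrix: diagonal entries are the row sums of M, as in (3.150).
# The trace of M_lumped equals the total sum of M (here, the domain length).
M_lumped = np.diag(M.sum(axis=1))
```

Lumping makes the first term of (3.148) trivially invertible, at the cost of some temporal accuracy.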

3.8.7. Time Discretization

There are many ways to integrate the system of ordinary differential equations given in (3.148). The method employed by Aria is usually referred to as the variable-implicit method or theta-method. Aria has the ability to adaptively step the ODE system to reduce the local error with respect to time, but we emphasize that this says nothing about the global error, i.e., the tolerance on the local error is not a tolerance on the total accuracy of the ODE solution. More significantly, we should point out that a small error in the temporal discretization does not necessarily imply that there is a small error in the spatial discretization.

We begin the description of Aria’s time integration strategy by rewriting (3.148) as

(3.151)\bfM(t) \dot{\symSlnVec}(t) + \hat{\bfK}(t) \symSlnVec(t) - \bff(t) = 0,

where \dot{\symSlnVec}(t) and \symSlnVec(t) are the vectors of entries \dot{\alpha}_j(t) and \alpha_j(t), respectively, and where \bfM is the matrix with entries given by (3.149), \hat{\bfK} is the matrix with entries \hat{K}_{ij} = U_{ij} + K_{ij}, and \bff is the vector with entries given by (3.149).

Numerical time integration of a partial differential equation requires both a discretized form of the governing equation and a time integrator. The discrete form of (3.151) for time level n can be written as

(3.152)\bfM^{n} \dot{\symSlnVec}^{n} + \hat{\bfK}^{n} \symSlnVec^{n} - \bff^{n} = 0.

A time integrator requires that the discrete approximation of \symSlnVec(t) be expressed as a function of calculated data and one choice of this approximation is

(3.153)\symSlnVec^{n+1} = f(\symSlnVec^{n+1}, t^{n+1},\symSlnVec^{n}, t^{n},\symSlnVec^{n-1}, t^{n-1},...)

where f is often a function of computed derivatives of \symSlnVec.

One form of (3.153) used in Aria is an implicit method which employs

(3.154)\symSlnVec^{n+1} = \symSlnVec^{n} +
\Delta t ( 1 - \theta ) \dot{\symSlnVec}^{n} +
\theta \Delta t \dot{\symSlnVec}^{n+1}

where \Delta t is the time step size and \theta denotes the evaluation level within the time step: \theta=0 \Rightarrow t^{n} and \theta=1 \Rightarrow t^{n+1}.

The degree of implicitness of a given time discretization of (3.151) is determined by the time level at which \hat{\bfK}(t) \symSlnVec(t) is evaluated—if at the old time level, then the scheme is said to be explicit, if at the new time level, then fully implicit, and if at some average of the two, then semi-implicit. Combining (3.154) with (3.152) at time levels n and n+1 one obtains a variable implicit method

(3.155)\left( \frac{1}{\Delta t^n} \bfM + \theta \hat{\bfK}^{n+1} \right ) \symSlnVec^{n+1} = \theta \bff^{n+1} + (1-\theta) \bfM \dot{\symSlnVec}^n + \frac{1}{\Delta t^n} \bfM \symSlnVec^n

The case of \theta = 1 corresponds to Euler implicit, \theta = 1/2 corresponds to the trapezoid rule, and \theta = 2/3 corresponds to Galerkin implicit. In general, this method is accurate only to first order in time, but the special case of \theta = 1/2 can be shown to be second-order accurate. It is important to note that if \hat{\bfK} or \bff depend upon \symSlnVec, then (3.155) is a nonlinear system that must be solved iteratively at each timestep.
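One step of (3.155), together with the recovery of the new time derivative from (3.154), can be sketched as follows for a linear system with constant \bfM and \hat{\bfK} (illustrative Python, not Aria's implementation); the scalar model problem \dot{u} = -u exercises the trapezoid rule, \theta = 1/2:

```python
import numpy as np

def theta_step(M, Khat, f_new, u_old, udot_old, dt, theta):
    """One step of the variable-implicit (theta) method, eq. (3.155),
    for constant M and Khat. Returns (u_new, udot_new)."""
    A = M / dt + theta * Khat
    b = theta * f_new + (1.0 - theta) * M @ udot_old + (M @ u_old) / dt
    u_new = np.linalg.solve(A, b)
    # recover the time derivative at the new level from (3.154)
    udot_new = (u_new - u_old) / (theta * dt) - (1.0 - theta) / theta * udot_old
    return u_new, udot_new

# scalar model problem: du/dt = -u, u(0) = 1  ->  M = [1], Khat = [1], f = 0
M = np.array([[1.0]]); Khat = np.array([[1.0]]); f = np.array([0.0])
u = np.array([1.0]); udot = -u.copy()   # consistent initial rate
dt, theta = 0.01, 0.5                   # trapezoid rule
for _ in range(100):                    # integrate to t = 1
    u, udot = theta_step(M, Khat, f, u, udot, dt, theta)
# after t = 1, u should be close to exp(-1)
```

For \theta = 1 the udot_old term drops out of the right-hand side, reproducing the fully-implicit form (3.157).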

The time derivative \dot{\symSlnVec}^{n} can be calculated using (3.154) as

(3.156)\dot{\symSlnVec}^n = \frac{\symSlnVec^n - \symSlnVec^{n-1} }{\theta \Delta t^{n-1}} - \frac{1-\theta}{\theta}\dot{\symSlnVec}^{n-1}\;.

Note that for \theta = 1, \dot{\symSlnVec}^{n} is not needed for evaluation of (3.155). If \theta \neq 1, then the recursive nature of (3.156) introduces some difficulty to the solution scheme since \dot{\symSlnVec}^{n-1} is not initially known. To circumvent this potential issue, Aria uses the fully-implicit case (\theta = 1) and (3.155) becomes

(3.157)\left( \frac{1}{\Delta t^n} \bfM + \hat{\bfK}^{n+1} \right ) \symSlnVec^{n+1} = \bff^{n+1} + \frac{1}{\Delta t^n} \bfM \symSlnVec^n \;

or, equivalently,

\bfM \dot{\symSlnVec}^{n+1} + \hat{\bfK}^{n+1}  \symSlnVec^{n+1} -  \bff^{n+1} = 0 \;.

Thus, in the fully-implicit formulation, one solves the discretized set of equations using information from the current time plane with a selected representation of the time derivative.

In the simplest case one begins with an initial guess for \symSlnVec^{n} and uses the first-order accurate time derivative known as BDF1 (first-order backward differentiation formula):

(3.158)\dot{\symSlnVec}^{n+1} = \frac{\symSlnVec^{n+1} - \symSlnVec^{n}}{ \Delta t^{n}}

Another choice for the time derivative \dot{\symSlnVec}^{n+1} is the second-order approximation known as BDF2 (second-order backward differentiation formula), written here for a constant step size \Delta t:

(3.159)\dot{\symSlnVec}^{n+1} = \frac{3 \symSlnVec^{n+1} - 4 \symSlnVec^{n} + \symSlnVec^{n-1}} {2 \Delta t} \;.

Utilization of (3.159) requires that the first two solution steps of (3.157) be obtained using the first-order derivative (3.158). Theoretical time step bounds are available for the BDF2 time integrator, and the scheme is often used in time-accurate simulations.
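The relative accuracy of the two derivative choices can be demonstrated on the scalar model problem \dot{u} = -u integrated to t = 1, with the BDF2 run started by a single BDF1 step as the text requires (a minimal sketch with constant step size; illustrative only):

```python
import numpy as np

def bdf1_step(u_n, dt):
    # BDF1, eq. (3.158): (u_new - u_n)/dt = -u_new  ->  u_new = u_n/(1 + dt)
    return u_n / (1.0 + dt)

def bdf2_step(u_n, u_nm1, dt):
    # BDF2, eq. (3.159): (3 u_new - 4 u_n + u_nm1)/(2 dt) = -u_new
    return (4.0 * u_n - u_nm1) / (3.0 + 2.0 * dt)

dt, steps = 0.1, 10

u1 = 1.0                                   # BDF1 throughout
for _ in range(steps):
    u1 = bdf1_step(u1, dt)

u_prev, u_curr = 1.0, bdf1_step(1.0, dt)   # BDF2, started with one BDF1 step
for _ in range(steps - 1):
    u_prev, u_curr = u_curr, bdf2_step(u_curr, u_prev, dt)

exact = np.exp(-1.0)
# the BDF2 error at t = 1 is markedly smaller than the BDF1 error
```

Halving dt roughly halves the BDF1 error but quarters the BDF2 error, reflecting their first- and second-order convergence.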

While it is possible to use a fixed, constant \Delta t while iterating (3.157), it is often preferable to allow the timestep size to vary. Aria uses an automatic timestep size selection algorithm that allows \Delta t to vary while satisfying several constraints [3]. The approach first calculates a candidate timestep size, using local time truncation error estimates, then adjusts this value using several heuristic rules.

To calculate the candidate timestep size, assume that two time integrators of comparable accuracy are available. If we compare local truncation error estimates for each scheme, and if we have a target value of a truncation error norm in mind, then we can estimate \Delta t. The implicit scheme given by (3.155) is one such integrator, but a second is needed. Since this second scheme is only required for the time step selection algorithm, computational speed is important, and therefore we consider only explicit schemes. If \theta is chosen so that the implicit scheme is first-order accurate in time, then we use forward Euler differencing, which may be written as

(3.160)\symSlnVec_p^{n+1} = \symSlnVec^n + \Delta t^n \dot{\symSlnVec}^n

where \symSlnVec_p indicates the temperature that is predicted as a result of applying (3.160), and the acceleration vector \dot{\symSlnVec}^n is calculated using (3.156). For \theta = 1/2, a second-order accurate explicit scheme is required, and the Adams–Bashforth difference formula is used, namely

(3.161)\symSlnVec_p^{n+1} = \symSlnVec^n + \frac{\Delta t^n}{2}\left[ \left( 2 + \frac{\Delta t^n}{\Delta t^{n-1}} \right) \dot{\symSlnVec}^n - \frac{\Delta t^n}{\Delta t^{n-1}}\dot{\symSlnVec}^{n-1} \right]

where the acceleration vectors are calculated using \theta = 1/2 in (3.156).

For \theta > 1/2, the candidate time step size, (\Delta t)_c, is estimated according to

(3.162)(\Delta t)_c^{n+1} = \Delta t^n \sqrt{\frac{\theta \varepsilon}{(\theta - 1/2) \|\symSlnVec^{n+1} - \symSlnVec_p^{n+1}\|}}

where \varepsilon is the given, allowable value of the truncation error norm [4]. A typical value of \varepsilon is 0.0001. Here, the norm \| \cdot \| indicates the non-dimensional root mean square norm of a vector,

(3.163)\|\mathbf{v}\| =\frac{1}{v_\mathrm{max}} \sqrt{\frac{1}{N} \sum \limits_{i=1}^N v_i^2}

where

v_\mathrm{max} = \max_{1 \leq i \leq N} | v_i |

Note that in (3.162), the timestep grows as the square root of \varepsilon. For \theta = 1/2, the candidate timestep size is calculated according to the second-order formula

(3.164)(\Delta t)_c^{n+1} = \Delta t^n \sqrt[3]{3 \left(1+\frac{\Delta t^{n-1}}{\Delta t^n} \right) \frac{\varepsilon}{\| \symSlnVec^{n+1} - \symSlnVec_p^{n+1}\|} }

Note that in (3.164), the timestep grows with the cube root of \varepsilon. We emphasize that the initialization problem caused by the recursive nature of (3.161) is circumvented by using the initial \Delta t with \theta = 1 for the first two timesteps, then switching to \theta = 1/2 and using (3.164) to calculate \Delta t thereafter.

In practice, it is useful to limit the candidate timestep size calculated by the above procedure. Aria imposes three limits on the timestep. The first limit is enforced only when there is volumetric heating due to chemical reactions, as their effect is not directly included in either (3.162) or (3.164). Aria uses the CHEMEQ library [23] for time integration of chemistry. CHEMEQ uses a stiff ordinary differential equation integrator, which adaptively subcycles species equations according to their stiffness. Usually, the chemical time scales are much shorter than that of conduction. Hence, if (\delta t)_s^n represents the minimum timestep size used to integrate the chemical species at timestep n, then we limit (\Delta t)_c^{n+1} according to

(3.165)(\Delta t)_c^{n+1} = \min{\left( (\Delta t)_c^{n+1}, \chi (\delta t)_s^n\right)}

where \chi is a multiplication factor, typically on the order of 100.

The second limit on the timestep allows control of how fast the solution changes from one timestep to the next. Let |\Delta T|_m denote the maximum change in temperature at a single mesh node from timestep n to n+1, and |\Delta T|_a denote the allowable value of this difference. Then we enforce

(3.166)\text{if} \quad \left( |\Delta T|_m \frac{(\Delta t)_c^{n+1}}{\Delta t^n} > |\Delta T|_a \right) \quad \text{then} \quad (\Delta t)_c^{n+1} = \frac{0.95 |\Delta T|_a \Delta t^n} {|\Delta T|_m}

Equation (3.166) states that if the estimated maximum change in temperature exceeds the allowable change, then the candidate timestep size is set to 5\% less than the value required to prevent this condition.

Finally, the user is allowed to specify minimum and maximum values of the timestep size, \Delta t_{\text{min}} and \Delta t_{\text{max}}, respectively. For the BDF2 method a theoretical limit of \Delta t_{\text{max}} = (1 + \sqrt2) \Delta t^{n} is also imposed. Therefore, we also restrict the candidate timestep size so that

(3.167)\Delta t_{\text{min}} \leq (\Delta t)_c^{n+1} \leq \Delta t_{\text{max}}

In summary, the candidate timestep size is first estimated using (3.162) for \theta > 1/2 and (3.164) for \theta = 1/2. Next, each of the applicable limits given in (3.165) through (3.167) is enforced to determine the timestep size for the next iteration of (3.155), \Delta t^{n+1}.
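The selection logic for \theta > 1/2 can be sketched as follows (illustrative Python; the chemistry limit (3.165) is omitted, and all function and variable names are hypothetical rather than Aria's):

```python
import numpy as np

def rms_norm(v):
    """Non-dimensional root-mean-square norm, eq. (3.163)."""
    vmax = np.max(np.abs(v))
    if vmax == 0.0:
        return 0.0
    return np.sqrt(np.mean(v**2)) / vmax

def candidate_dt(dt_n, theta, eps, u_new, u_old, u_pred,
                 dT_allowed, dt_min, dt_max):
    """Candidate timestep for theta > 1/2: truncation-error estimate (3.162),
    then the solution-change limit (3.166) and the user bounds (3.167)."""
    err = rms_norm(u_new - u_pred)
    if err == 0.0:                  # no estimated error: take the largest step
        return dt_max
    dt_c = dt_n * np.sqrt(theta * eps / ((theta - 0.5) * err))     # (3.162)
    dT_m = np.max(np.abs(u_new - u_old))  # max nodal change over the last step
    if dT_m * dt_c / dt_n > dT_allowed:                            # (3.166)
        dt_c = 0.95 * dT_allowed * dt_n / dT_m
    return float(np.clip(dt_c, dt_min, dt_max))                    # (3.167)

dt_next = candidate_dt(dt_n=0.1, theta=1.0, eps=1e-4,
                       u_new=np.array([1.00, 2.00, 3.00]),
                       u_old=np.array([0.90, 1.90, 2.90]),
                       u_pred=np.array([1.01, 2.02, 2.97]),
                       dT_allowed=5.0, dt_min=1e-6, dt_max=10.0)
```

In this example the error estimate dominates, so the candidate step shrinks well below dt_n before the bounds of (3.167) are applied.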

Footnotes

[1] The development in this chapter closely follows that in Finite Elements: An Introduction, by Becker, et al. [22].

[2] This classical method is treated in depth in The Method of Weighted Residuals and Variational Principles, by Finlayson [24].

[3] This algorithm is based on the approach described by Gresho, et al. [25] and is similar to that implemented in COYOTE [26] [27].

[4] We do not derive the local truncation error estimates here, since the method of using Taylor series analysis for this purpose is well established; see, for example, [25].