3.8. Discretization
In this chapter, we discretize the governing equations and their boundary conditions in space and time [1]. There are several approaches that can be used to accomplish this. Aria currently semi-discretizes first in space using the Galerkin method, and then discretizes in time using finite differences. Alternative approaches include first semi-discretizing in time using finite differences and then discretizing in space; another possibility would be to discretize in space and time simultaneously using a space-time finite element formulation. Throughout, we work with the stationary and transient forms of the general conservation equation.
In Weak Statement of the Stationary Problem, we use the method of weighted residuals [2] to obtain a weak statement for the general conservation equation. We then use this weak statement to form the associated Galerkin approximations in Galerkin Approximation for the Stationary Problem. In Weak Statement of the Transient Problem, we form a similar weak statement for the transient problem; in Galerkin Approximation for the Transient Problem, we discretize the resulting weak statement in space using Galerkin's method and use finite differences to discretize in time. We continue the analysis in Finite Element Approximation, where we introduce the finite element approximations. The development in this chapter closely follows that of Becker, et al. [22]; the interested reader should consult this reference for a more detailed presentation.
3.8.1. Weak Statement of the Stationary Problem
We begin with the stationary form of the general conservation equation, which we rewrite as
(3.128)
and which holds over each subdomain. Because the material properties may be discontinuous at the interface between two subdomains, the second derivatives of the solution may not exist on such interfaces. This means that it is not possible simply to integrate (3.128) over the entire domain; indeed, the lack of well-defined second derivatives is precisely why the equation was written over each subdomain in the problem statement given by the domain definition in the first place. Accordingly, we begin by multiplying (3.128) by an arbitrary, admissible test function and integrating the result over a single subdomain. This integral may be written as
(3.129)
Next, in order to integrate the second term in (3.129) by parts, we introduce the following identity:
(3.130)
Upon substitution of (3.130) into (3.129), we obtain
(3.131)
where terms containing the divergence of the velocity have been dropped under the assumption of a solenoidal velocity field. Next, we introduce the Gauss divergence theorem, which converts a volume integral to a surface integral, and may be written as
(3.132)
Upon substitution of (3.132) into (3.131) and rearranging terms, we obtain
(3.133)
The next step in obtaining the weak statement is to sum the contributions from (3.133) over each subdomain to obtain an integral over the entire domain. This is now valid, because no second derivatives appear in (3.133). Hence, we may write
(3.134)
where the sum runs over all of the subdomains.
The sum of the volume integrals on the left-hand side of (3.134) may be readily combined into a single integral, namely
(3.135)
The sum of the surface integrals on the right-hand side of (3.134) requires more care. The surface integral associated with each subdomain consists of two parts. The first part is taken over the portion of the subdomain boundary which intersects the exterior surface of the domain. The second part is what is left over, and consists of the internal surfaces that divide one subdomain from another. The sum of the surface integrals on the right-hand side of (3.134) is therefore
(3.136)
Now, the sum of the integrals over the exterior surfaces is simply the integral over the entire boundary surface. The second sum in (3.136) reduces to the sum of the integrals of the flux jumps over each interior interface [22]. Since the problem statement specifies that these jumps are zero, (3.136) reduces to
(3.137)
Upon substitution of (3.137) and (3.135) into (3.134), we have
(3.138)
Finally, we describe the class of admissible test functions that appear in (3.138). Clearly, a test function and its first derivatives must exist so that the integrals in (3.138) are well-defined. Furthermore, a test function should vanish on the portion of the boundary on which the solution is specified, which we previously defined in Statement of the Stationary Problem.
Now, we can completely replace the set of differential equations and interface conditions given in Governing Equations with the following alternative problem: Find the function which takes the specified values on the portion of the boundary where the solution is prescribed, satisfies the specified flux condition on the remainder of the boundary, and satisfies
(3.139)
for all admissible test functions.
Equation (3.139) is often called a weak statement of the problem given by (3.128) because second derivatives of the scalar variable (or, alternatively, first derivatives of the flux) do not appear. More specifically, the original differential equation (3.128) requires that the solution have second derivatives that exist in each subdomain, whereas (3.139) only requires that the first derivatives of the solution be integrable over the entire domain. We remark that a solution to the original problem (3.128) is also a solution to the weak statement (3.139). The converse is not necessarily true, however: a solution to the weak statement is a solution to the original problem only if it is sufficiently smooth.
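To make the final result concrete, the weak statement can be sketched in generic notation. The symbols below (scalar T, test function w, flux vector q, velocity u, source s, capacity ρc, domain Ω with boundary Γ and outward normal n) are illustrative assumptions, since this excerpt does not reproduce the manual's own symbols:

```latex
% Strong form, holding in each subdomain (cf. (3.128)):
%   \rho c\,\mathbf{u}\cdot\nabla T + \nabla\cdot\mathbf{q} - s = 0
% Weak statement (cf. (3.139)): find T satisfying the essential boundary
% condition such that, for every admissible test function w,
\int_\Omega \left( w\,\rho c\,\mathbf{u}\cdot\nabla T
                 - \nabla w\cdot\mathbf{q} - w\,s \right) d\Omega
  \;=\; -\oint_\Gamma w\,\mathbf{q}\cdot\mathbf{n}\, d\Gamma .
```

Integrating the flux term by parts in this way is what removes the second derivatives and permits the integral to be taken over the whole domain.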
3.8.2. Well-posed Problems
In order that a unique solution exists and is well behaved, for both the stationary and transient problems, a number of requirements on the input data must be satisfied. By input data we mean the supplied domain geometry, initial conditions, boundary conditions, and material coefficients. A fully detailed enumeration of these requirements is beyond the scope of this manual, and some of them remain the subject of research. We have, however, hinted at some of the known requirements at various points in the text, and we mention a few other issues below. This is an important topic, and we hope to cover it more fully in a future edition.
As an aside, consider the meaning of the term well-posed problem. We say that a problem is well-posed if a solution exists, is unique, and depends continuously on the data; otherwise, the problem is said to be ill-posed. A well-posed problem is not necessarily more physically realistic than an ill-posed one. As for the term stability, it is most often used to mean that the solution depends continuously on the data; that is, if we change the problem slightly, the solution changes only slightly.
Regarding specific requirements on the heat source: for the stationary problem, in order that a solution exist, a compatibility condition between the heat source and the applied boundary data must be satisfied. The compatibility condition is a statement of the global conservation principle for the conserved quantity. Its exact form depends upon the individual terms which appear in the differential conservation equations, as well as upon the applied boundary data [22]. For the transient problem, any heat source that is sufficiently smooth over each subdomain is admissible, as long as it is finite and integrable in space and time.
When using a set of boundary conditions that does not specify the temperature at any point, the solution is only defined up to an arbitrary constant; that is, the solution is not unique. Aria may or may not be able to check for this condition automatically.
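A small numerical sketch illustrates this non-uniqueness. The 1D linear-element mesh and sizes below are hypothetical, not taken from Aria:

```python
import numpy as np

# Hypothetical 1D mesh: assemble the linear-element diffusion matrix with
# flux (Neumann) conditions at both ends, i.e. no temperature specified.
n = 6                      # number of nodes
h = 1.0 / (n - 1)          # uniform element size
K = np.zeros((n, n))
for e in range(n - 1):     # element-by-element assembly
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke

# The constant vector lies in the null space, so any solution can be
# shifted by an arbitrary constant: the system is singular.
null_residual = float(np.linalg.norm(K @ np.ones(n)))
rank = int(np.linalg.matrix_rank(K))
```

Specifying the temperature at even a single node removes the constant from the null space and restores uniqueness.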
Regarding the smoothness of input functions: we must assume that the necessary derivatives of the input quantities exist in order that the solution exist. In other words, there are restrictions that must be imposed on the smoothness of the various input data, including material properties and boundary conditions. The smoothness restrictions are stronger for the strong form of the equations, and correspondingly weaker for the weak form discussed in this chapter; it is the weak form of the equations to which Aria actually computes an approximate solution. Whatever the smoothness restrictions, Aria cannot reliably enforce them, especially inside user subroutines.
To summarize, it is primarily up to the user to provide valid, admissible input data. Exercise caution to ensure a proper problem formulation.
3.8.3. Galerkin Approximation for the Stationary Problem
Solutions to the weak statement given by (3.139) lie in a certain infinite-dimensional space of functions with integrable derivatives. Galerkin's method seeks approximate solutions to the weak statement represented by a linear combination of a finite set of basis functions that span a finite-dimensional subspace of this space. We then seek a function of the form
(3.140)
which satisfies the weak form on the subspace, and where the coefficients are unknown constants. If the basis functions are Lagrange basis functions, then the constants have a simple interpretation: each constant is the value of the approximation at a node.
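The nodal interpretation of Lagrange coefficients can be checked directly. The 1D hat-function mesh below is a hypothetical illustration, not Aria's basis implementation:

```python
import numpy as np

# Hypothetical uniform 1D mesh with piecewise-linear Lagrange ("hat") bases.
nodes = np.linspace(0.0, 1.0, 5)

def hat(j, x):
    """Piecewise-linear Lagrange basis function for node j."""
    phi = np.zeros_like(x, dtype=float)
    if j > 0:
        left = (x >= nodes[j - 1]) & (x <= nodes[j])
        phi[left] = (x[left] - nodes[j - 1]) / (nodes[j] - nodes[j - 1])
    if j < len(nodes) - 1:
        right = (x >= nodes[j]) & (x <= nodes[j + 1])
        phi[right] = (nodes[j + 1] - x[right]) / (nodes[j + 1] - nodes[j])
    return phi

coeffs = nodes**2            # arbitrary nodal data, e.g. samples of x^2

def u_h(x):
    """The Galerkin-style expansion of the form (3.140)."""
    return sum(c * hat(j, x) for j, c in enumerate(coeffs))

# Evaluating the expansion at the nodes recovers the coefficients exactly,
# because each hat function is one at its own node and zero at the others.
vals = u_h(nodes)
```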
Upon substitution of (3.140) into (3.139), and letting the test function range over each of the basis functions, we arrive at the discrete form of our weak statement: Find the function in the finite-dimensional subspace which takes the specified values on the portion of the boundary where the solution is prescribed and satisfies
(3.141)
for each basis function. Here we have assumed a diffusive-style flux, i.e., one proportional to the negative of the gradient of the solution.
Here, (3.141) represents fully discrete equations for the unknown constants. For known flux functions, we may write this system of equations as the matrix system
(3.142)
where
(3.143)
The first matrix in (3.143) represents convection, the second represents diffusion, and the vector on the right-hand side is the forcing vector. Note that when the convection matrix is nonzero, the matrix system in (3.142) is non-symmetric.
The finite element method provides a convenient, systematic way to construct the basis functions. We address this important issue in Finite Element Approximation, in which we also discuss the case where the boundary flux is a function of the temperature; the essential difference is that the associated boundary integral then contributes to the matrix entries as well as to the forcing vector.
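As a concrete sketch of the non-symmetry introduced by convection, the following assembles generic 1D linear-element convection and diffusion matrices; the mesh, conductivity, and velocity values are assumptions for illustration, not Aria's element library:

```python
import numpy as np

# Hypothetical 1D problem: n nodes, element size h, conductivity k,
# constant advection velocity u.
n, h, k, u = 5, 0.25, 1.0, 2.0
K = np.zeros((n, n))   # diffusion matrix
C = np.zeros((n, n))   # convection matrix
for e in range(n - 1):
    # Exact element matrices for linear elements:
    ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    ce = (u / 2.0) * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke
    C[e:e + 2, e:e + 2] += ce

# K is symmetric, but C is not; hence C + K is non-symmetric whenever
# the velocity u is nonzero, as the text observes for (3.142).
sym_K = bool(np.allclose(K, K.T))
sym_CK = bool(np.allclose(C + K, (C + K).T))
```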
3.8.4. Weak Statement of the Transient Problem
The development of the weak statement for the time-dependent case closely follows that for the stationary case, which we developed in Weak Statement of the Stationary Problem. We begin with the time-dependent scalar conservation equation, which we rewrite here as
(3.144)
The key difference between (3.144) and (3.128) is the introduction of the new independent variable, t, and the associated time-derivative term. All of the discussion in Weak Statement of the Stationary Problem regarding subdomain interfaces, flux jumps, and solution smoothness carries over to the present case. Applying the method of weighted residuals to (3.144), for the Statement of the Transient Problem, we obtain the following weak statement: Find the function which satisfies the initial condition, takes the specified values on the portion of the boundary where the solution is prescribed, satisfies the specified flux condition on the remainder of the boundary, and satisfies
(3.145)
for all admissible test functions.
3.8.5. Galerkin Approximation for the Transient Problem
As in Galerkin Approximation for the Stationary Problem, let the infinite-dimensional space of solution functions be given. We again introduce a finite set of basis functions that spans a finite-dimensional subspace. At each instant in time, we seek a function of the form
(3.146)
which satisfies the weak form on the subspace, and where the coefficients are unknown, time-dependent functions. If the basis functions are Lagrange basis functions, then the coefficients have a simple interpretation: each coefficient is the value of the interpolant at a node at the given time. We assume that each coefficient remains bounded in time.
Upon substitution of (3.146) into (3.145), and letting the test function range over each of the basis functions, we arrive at the semi-discrete form of our weak statement: Find the function of the form (3.146) which satisfies the initial condition, takes the specified values on the boundary, and satisfies
(3.147)
for each basis function, where the time derivatives of the unknown coefficients now appear.
Here, (3.147) represents ordinary differential equations for the unknown time-dependent coefficients. For known flux functions, we may write this system of equations as the matrix system
(3.148)
where
(3.149)
The matrix multiplying the time derivatives is often referred to as the mass or capacitance matrix. Note that when convection is present, the matrix system in (3.148) is non-symmetric. Also, if any of the parameters in (3.149), such as the material properties, applied fluxes, or volumetric heating, are time dependent, then the corresponding matrices or vectors will also be time dependent. The system of ordinary differential equations given in (3.148) must be numerically integrated in time. We discuss this issue in Time Discretization.
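A minimal sketch of the mass matrix that multiplies the time derivatives in (3.148), assembled for generic 1D linear elements (an assumption for illustration; Aria's element library is not shown here):

```python
import numpy as np

# Hypothetical uniform 1D mesh: n nodes, element size h.
n, h = 5, 0.25
M = np.zeros((n, n))
for e in range(n - 1):
    # Exact consistent mass matrix for a linear element of length h.
    me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    M[e:e + 2, e:e + 2] += me

# M is symmetric positive definite, so the semi-discrete system
# M * (dT/dt) = rhs is always solvable for the time derivatives, and the
# total "mass" equals the domain length (here (n - 1) * h = 1).
eigs = np.linalg.eigvalsh(M)
```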
3.8.6. Mass Lumping
In contrast with the consistent mass formulation, a lumping procedure instead uses nodally evaluated quantities, effectively "lumping" each contribution at a node; i.e., the first term in (3.148) becomes
(3.150)
Mass lumping can also be applied to source terms or boundary fluxes via an analogous procedure.
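One common lumping procedure, row-sum lumping, illustrates the idea; whether Aria uses exactly this rule is not stated in the text, so treat this as a sketch:

```python
import numpy as np

# One linear element of (hypothetical) length h: replace the consistent
# mass matrix by a diagonal matrix whose entries are the row sums.
h = 0.25
M_consistent = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
M_lumped = np.diag(M_consistent.sum(axis=1))   # row-sum lumping

# Lumping preserves the total "mass" (the element length h) while
# decoupling the nodal time-derivative terms, as in (3.150).
```

The diagonal lumped matrix makes the time-derivative term trivially invertible, which is the practical motivation for the procedure.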
3.8.7. Time Discretization
There are many ways to integrate the system of ordinary differential equations given in (3.148). The method employed by Aria is usually referred to as the variable-implicit method, or theta-method. Aria has the ability to adaptively step the ODE to reduce the local error in time, but we emphasize that this says nothing about the global error of the ODE solution; i.e., the tolerance on the local error is not a tolerance on the total accuracy of the ODE solution. More significantly, we point out that a small error in the temporal discretization does not necessarily imply that there is a small error in the spatial discretization.
We begin the description of Aria’s time integration strategy by rewriting (3.148) as
(3.151)
where the two vectors collect the unknown coefficients and their time derivatives, respectively, the mass matrix has entries given by (3.149), the second matrix combines the convection and diffusion entries, and the forcing vector has entries given by (3.149).
Numerical time integration of a partial differential equation requires both a discretized form of the governing equation and a time integrator. The discrete form of (3.151) at a given time level can be written as
(3.152)
A time integrator requires that the discrete approximation of the solution at the new time level be expressed as a function of calculated data, and one choice of this approximation is
(3.153)
where the increment is often a function of computed time derivatives of the solution.
One form of (3.153) used in Aria is an implicit method which employs
(3.154)
where the time step size multiplies a weighted average of the time derivatives at the old and new time levels, and the weight θ denotes the evaluation level within the time step, with 0 ≤ θ ≤ 1.
The degree of implicitness of a given time discretization of (3.151) is determined by the time level at which the right-hand side is evaluated: if at the old time level, the scheme is said to be explicit; if at the new time level, fully implicit; and if at some average of the two, semi-implicit. Combining (3.154) with (3.152) at the old and new time levels, one obtains the variable-implicit method
(3.155)
The case θ = 1 corresponds to Euler implicit (backward Euler), θ = 1/2 corresponds to the trapezoid rule, and θ = 2/3 corresponds to Galerkin implicit. In general, this method is accurate only to first order in time, but the special case θ = 1/2 can be shown to be second-order accurate. It is important to note that if the matrices or the forcing vector in (3.151) depend upon the solution, then (3.155) is a nonlinear system of algebraic equations.
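The accuracy claims can be checked on a scalar model problem; the ODE dy/dt = −y below is a hypothetical test case, not one of Aria's equations:

```python
import math

# Theta-method for dy/dt = -y, y(0) = 1, exact solution exp(-t).
# theta = 1 is backward Euler (first order); theta = 1/2 is the
# trapezoid rule (second order).
def theta_step(y, dt, theta):
    # From  y_new = y + dt * ((1 - theta) * (-y) + theta * (-y_new)):
    return (1.0 - (1.0 - theta) * dt) * y / (1.0 + theta * dt)

def integrate(theta, nsteps, t_end=1.0):
    """Return the global error at t_end for a fixed-step run."""
    dt, y = t_end / nsteps, 1.0
    for _ in range(nsteps):
        y = theta_step(y, dt, theta)
    return abs(y - math.exp(-t_end))

# Halving dt cuts the backward Euler error by ~2x (first order) and the
# trapezoid error by ~4x (second order).
ratio_be = integrate(1.0, 50) / integrate(1.0, 100)
ratio_tr = integrate(0.5, 50) / integrate(0.5, 100)
```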
The time derivative can be calculated using (3.154) as
(3.156)
Note that for θ = 1, the time derivative at the old time level is not needed for the evaluation of (3.155). If θ ≠ 1, then the recursive nature of (3.156) introduces some difficulty into the solution scheme, since the time derivative is not initially known. To circumvent this potential issue, Aria uses the fully implicit case (θ = 1), and (3.155) becomes
(3.157)
Thus, in the fully implicit formulation, one solves the discretized set of equations using information from the current time plane together with a selected representation of the time derivative.
In the simplest case, one begins with an initial guess for the solution and uses the first-order accurate time derivative known as BDF1 (the first-order backward differentiation formula):
(3.158)
Another choice for the time derivative is the second-order approximation known as BDF2 (the second-order backward differentiation formula):
(3.159)
Use of (3.159) requires that the first two solution steps of (3.157) be obtained using the first-order derivative (3.158). Theoretical time step bounds are available for the BDF2 time integrator, and the scheme is often used in time-accurate simulations.
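The BDF1/BDF2 bootstrap described above can be sketched on a scalar model problem; the ODE and fixed steps are illustrative assumptions, not Aria's discretization:

```python
import math

def bdf_integrate(order, nsteps, t_end=1.0):
    """Integrate dy/dt = -y, y(0) = 1, and return |error| at t_end."""
    dt, y_prev, y = t_end / nsteps, None, 1.0
    for _ in range(nsteps):
        if order == 1 or y_prev is None:
            # BDF1:  (y_new - y)/dt = -y_new  (also bootstraps BDF2)
            y_prev, y = y, y / (1.0 + dt)
        else:
            # BDF2:  (3*y_new - 4*y + y_prev)/(2*dt) = -y_new
            y_prev, y = y, (4.0 * y - y_prev) / (3.0 + 2.0 * dt)
    return abs(y - math.exp(-t_end))

# Halving dt cuts the BDF1 error ~2x (first order) and the BDF2 error
# ~4x (second order), despite the single BDF1 startup step.
ratio_bdf1 = bdf_integrate(1, 50) / bdf_integrate(1, 100)
ratio_bdf2 = bdf_integrate(2, 50) / bdf_integrate(2, 100)
```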
While it is possible to use a fixed, constant timestep while iterating (3.157), it is often preferable to allow the timestep size to vary. Aria uses an automatic timestep size selection algorithm that allows the timestep to vary while satisfying several constraints [3]. The approach first calculates a candidate timestep size using local time truncation error estimates, then adjusts this value using several heuristic rules.
To calculate the candidate timestep size, assume that two time integrators of comparable accuracy are available. If we compare local truncation error estimates for each scheme, and if we have a target value of a truncation error norm in mind, then we can estimate a suitable timestep size. The implicit scheme given by (3.155) is one such integrator, but a second is needed. Since this second scheme is only required for the timestep selection algorithm, computational speed is important, and therefore we consider only explicit schemes. If θ is chosen so that the implicit scheme is first-order accurate in time, then we use forward Euler differencing, which may be written as
(3.160)
where the predicted temperature results from applying (3.160), and the acceleration vector is calculated using (3.156). For θ = 1/2, a second-order accurate explicit scheme is required, and the Adams–Bashforth difference formula is used, namely
(3.161)
where the acceleration vectors are calculated using (3.156).
For θ = 1, the candidate timestep size is estimated according to
(3.162)
where the tolerance appearing in (3.162) is the given, allowable value of the truncation error norm [4]; a typical value of this tolerance is much smaller than one. Here, the norm indicates the non-dimensional root-mean-square norm of a vector,
(3.163)
where each entry is normalized so that the norm is non-dimensional. Note that in (3.162) the timestep grows as the square root of the ratio of the allowable to the estimated truncation error. For θ = 1/2, the candidate timestep size is calculated according to the second-order formula
(3.164)
Note that in (3.164) the timestep grows with the cube root of the same ratio. We emphasize that the initialization problem caused by the recursive nature of (3.161) is circumvented by using θ = 1 with (3.162) for the first two timesteps, then switching to θ = 1/2 and using (3.164) to calculate the timestep thereafter.
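The predictor-based step selection for the θ = 1 case can be sketched as follows; the model problem, tolerance, and safeguard constants are illustrative assumptions rather than Aria's actual defaults:

```python
import math

def adaptive_backward_euler(y0, t_end, dt0, tol):
    """Integrate dy/dt = -y with backward Euler, using a forward Euler
    predictor (cf. (3.160)) only to estimate the local truncation error."""
    t, y, dt, steps = 0.0, y0, dt0, 0
    ydot = -y0                      # time derivative, cf. (3.156) with theta = 1
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)     # do not step past the end time
        y_pred = y + dt * ydot      # explicit predictor
        y = y / (1.0 + dt)          # implicit (backward Euler) step
        err = abs(y - y_pred)       # local truncation error estimate
        t += dt
        ydot = -y
        steps += 1
        # Square-root growth rule, cf. (3.162); the floor avoids dividing
        # by a vanishing error estimate.
        dt *= math.sqrt(tol / max(err, 1e-30))
    return y, steps

y_end, steps = adaptive_backward_euler(1.0, 1.0, 0.01, 1e-4)
```

The same structure carries over to θ = 1/2 with the Adams-Bashforth predictor (3.161) and the cube-root rule (3.164).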
In practice, it is useful to limit the candidate timestep size calculated by the above procedure. Aria imposes three limits on the timestep. The first limit is enforced only when there is volumetric heating due to chemical reactions, since their effect is not directly included in either (3.162) or (3.164). Aria uses the CHEMEQ library [23] for time integration of the chemistry. CHEMEQ uses a stiff ordinary differential equation integrator, which adaptively subcycles the species equations according to their stiffness. Usually, the chemical time scales are much shorter than that of conduction. Hence, if we consider the minimum timestep size used to integrate the chemical species at the current timestep, then we limit the candidate timestep according to
(3.165)
where the right-hand side scales the chemical timestep by a multiplication factor.
The second limit on the timestep allows control of how fast the solution changes from one timestep to the next. Let the estimated maximum change in temperature at any single mesh node over the timestep be compared with a given allowable value of this change. Then we enforce
(3.166)
Equation (3.166) states that if the estimated maximum change in temperature exceeds the allowable change, then the candidate timestep size is reduced so as to prevent this condition.
Finally, the user is allowed to specify minimum and maximum values of the timestep size. For the BDF2 method, a theoretical limit on the growth of the timestep is also imposed. Therefore, we also restrict the candidate timestep size so that
(3.167)
In summary, the candidate timestep size is first estimated using (3.162) for θ = 1 or (3.164) for θ = 1/2. Next, each of the applicable limits given in (3.165)–(3.167) is enforced to determine the timestep size for the next iteration of (3.155).
Footnotes
- 1 The development in this chapter closely follows that in Finite Elements: An Introduction, by Becker, et al. [22].
- 2 This classical method is treated in depth in The Method of Weighted Residuals and Variational Principles, by Finlayson [24].
- 3 This algorithm is based on the approach described by Gresho, et al. [25] and is similar to that implemented in COYOTE [26], [27].
- 4 We do not derive the local truncation error estimates here, since the method of using Taylor series analysis for this purpose is well established; see, for example, [25].