3.11. Solution Strategy

In Governing Equations, the residual was defined for each equation supported by Aria. Discretization then described how each equation is discretized over the problem domain. The present section discusses how the solution is determined. First, we define the solution vector \symSlnVec as the collection of conserved quantities for each equation solved on each mesh node. Similarly, the residual vector \symResVec is defined as the corresponding residual for each equation and mesh node. The goal of this section is to find the solution vector \symSlnVec that satisfies

\symResVec(\symSlnVec,t) = \vector{0} \qquad (3.191)

to an acceptable tolerance. We first define the Jacobian as the tangent sensitivity of the residual with respect to the solution vector, evaluated at the point (\symSlnVec, t)

\symJac_{ij} = \frac{\partial r_i}{\partial \symSlnVar_j}\bigg|_{\symSlnVec,t}

To solve (3.191), two approaches are available. The first is Newton's method, an iterative root-finding algorithm. At each iteration k, the Jacobian is used to determine the unique update that drives the residual of the linearized problem to zero

\symJac(\symSlnVec^{k},t^k) \symSlnIncr^{k} = - \symResVec(\symSlnVec^{k},t^k) \\
\symSlnVec^{k+1} = \symSlnVec^{k} + \alpha \symSlnIncr^{k}

Since the problem is typically nonlinear, the root of this linear approximation does not coincide with the root of the full problem. The iteration is therefore repeated, with each pass producing a better approximation of the exact solution, and the algorithm terminates once a sufficiently close approximation is found.
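To make the iteration concrete, the following is a minimal Python sketch of the undamped (\alpha = 1) Newton iteration applied to a small hypothetical 2x2 system; the function names and the example system are illustrative stand-ins for Aria's assembled residual and Jacobian, not Aria's implementation.

\begin{verbatim}
import numpy as np

def newton(residual, jacobian, u, tol=1e-10, max_iter=25):
    # Undamped Newton iteration (alpha = 1) for R(u) = 0
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) <= tol:
            break
        du = np.linalg.solve(jacobian(u), -r)  # solve J du = -R
        u = u + du                             # u^{k+1} = u^k + du^k
    return u

# Hypothetical system with root (1, 2): R(u) = [u0^2 + u1 - 3, u0 + u1^2 - 5]
residual = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
jacobian = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]])
print(newton(residual, jacobian, np.array([2.0, 3.0])))
\end{verbatim}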

For the standard Newton's method, \alpha = 1. For non-convex problems, however, Newton's method can struggle to converge. In these cases, the second option adds a backtracking line search along the Newton direction \symSlnIncr^{k}. This approach finds an under-relaxation factor \alpha that ensures progress in reducing the problem's residual: starting from \alpha = 1, the relaxation is reduced by a factor \tau at each sub-iteration until the Armijo-Goldstein condition is met

\| \symResVec(\symSlnVec^{k} + \alpha \symSlnIncr^{k}) \| \leq (1 - \tau  \alpha) \, \| \symResVec(\symSlnVec^{k}) \|

For a steady problem, the nonlinear system needs to be solved only once. Alg. 3.1 below outlines the solution of a single nonlinear system.

Alg. 3.1 Strategy for solving the coupled nonlinear system

\begin{algorithmic}[1]
\REQUIRE{$\symSlnVec^*,t,\epsilon, I$}
\ENSURE{$\symSlnVec$}
\STATE Pre-step work (contact search, initialization, etc.)
\STATE Assemble $\symResVec^* = \symResVec(\symSlnVec^*,t)$
\STATE $i = 0$
\STATE $\symSlnVec = \symSlnVec^*$
\WHILE{$\left(\| \symResVec^* \| > \epsilon\right)$ \textbf{and} $\left( i < I\right) $}
  \STATE Assemble $\symJac^* = \symJac(\symSlnVec^*,t)$
  \STATE Solve $\symJac^* \symSlnIncr = -\symResVec^*$ for $\symSlnIncr$
  \STATE $\alpha = 1$
  \STATE $r_p = \| \symResVec^* \|$
  \WHILE{\textbf{not} finished}
    \STATE $\symSlnVec^* = \symSlnVec + \alpha \symSlnIncr$
    \STATE Assemble $\symResVec^* = \symResVec(\symSlnVec^*,t)$
    \IF{line search not enabled \textbf{or} $\| \symResVec^* \| < (1 - \tau \alpha) r_p$}
      \STATE finished = \textbf{true}
    \ELSE
      \STATE $\alpha = \tau \alpha$
    \ENDIF
  \ENDWHILE
  \STATE Increment $i$
  \STATE $\symSlnVec = \symSlnVec^*$
\ENDWHILE
\STATE Post-step work (postprocessors, element death, etc.)
\end{algorithmic}
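A compact Python sketch of Alg. 3.1 is given below, assuming a dense linear system and omitting the pre-step and post-step work; the cap on line-search sub-iterations is an added safeguard that the algorithm above leaves implicit.

\begin{verbatim}
import numpy as np

def solve_nonlinear(residual, jacobian, u0, t, eps=1e-8, max_iter=50,
                    tau=0.5, line_search=True):
    # Sketch of Alg. 3.1: Newton iteration with backtracking line search
    u = u0.copy()
    r = residual(u, t)
    i = 0
    while np.linalg.norm(r) > eps and i < max_iter:
        du = np.linalg.solve(jacobian(u, t), -r)  # solve J* du = -R*
        alpha, r_prev = 1.0, np.linalg.norm(r)
        for _ in range(30):                       # cap the sub-iterations
            u_trial = u + alpha * du
            r = residual(u_trial, t)
            # Accept the step once the Armijo-Goldstein condition holds
            if not line_search or np.linalg.norm(r) < (1.0 - tau * alpha) * r_prev:
                break
            alpha *= tau                          # back off along du
        u = u_trial
        i += 1
    return u
\end{verbatim}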

The transient problem can be viewed simply as a series of nonlinear solves in which the transient information is embedded in the residuals. The transient version of this algorithm (Alg. 3.2) is given below; each timestep solves the nonlinear system at the new time with Alg. 3.1. The initial guess for each step is a prediction of the new solution based on its current rate of change, which Alg. 3.1 then corrects. The Predictor-Corrector section below discusses this strategy in more detail, and Time Integration describes how the transient information is included in the system of equations.

Alg. 3.2 Nonlinear solution strategy for single equation-system transient problem

\begin{algorithmic}[1]
\REQUIRE{$\symSlnVec_0, \Delta t_0, t_0, t_\text{end},\epsilon, I$ }
\ENSURE{$\symSlnVec$}
\STATE $t = t_0$
\STATE $\symSlnVec = \symSlnVec_0$
\STATE $\Delta t = \Delta t_0$
\WHILE {$t < t_\text{end}$}
  \STATE Calculate the predicted solution $t \rightarrow t + \Delta t$, $\symSlnVec \rightarrow \symSlnVec^*$
  \STATE Solve coupled system at time $t + \Delta t$ for $\symSlnVec$ with initial guess $\symSlnVec^*$
  \STATE $t = t + \Delta t$
  \STATE Calculate new $\Delta t$
\ENDWHILE
\end{algorithmic}
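A corresponding transient driver might look as follows, reusing solve_nonlinear from the previous sketch; the predict and new_dt callbacks are hypothetical stand-ins for the predictor of Predictor-Corrector and the user's timestep control.

\begin{verbatim}
def solve_transient(residual, jacobian, u0, dt0, t0, t_end, predict, new_dt):
    # Sketch of Alg. 3.2: a transient solve as a series of nonlinear solves
    t, u, dt = t0, u0, dt0
    while t < t_end:
        u_star = predict(u, dt)          # predicted solution (Sec. 3.11.2)
        u = solve_nonlinear(residual, jacobian, u_star, t + dt)
        t += dt
        dt = new_dt(dt)                  # calculate the new timestep
    return u
\end{verbatim}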

In more complicated problems, it is common to split the full set of equations into multiple equation systems (see Simulation Setup for more discussion of how a segregated solve is set up). In this case, each step must achieve convergence of all segregated equation systems. Subject to the user-specified convergence criteria, Alg. 3.1 is applied to each equation system in turn, and the solution is accepted only when the problem is globally converged.

Alg. 3.3 Segregated nonlinear solution strategy for transient problem

\begin{algorithmic}[1]
\REQUIRE{$\symSlnVec_0, \Delta t_0, t_0, t_\text{end},\epsilon, I$ }
\ENSURE{$\symSlnVec$}
\STATE $t = t_0$
\STATE $\symSlnVec = \symSlnVec_0$ for each equation system
\STATE $\Delta t = \Delta t_0$
\WHILE {$t < t_\text{end}$}
  \WHILE {\textbf{not} globally converged}
    \FOR {\textbf{each} equation system}
      \IF {is first iteration}
        \STATE Calculate the predicted solution $t \rightarrow t + \Delta t$, $\symSlnVec \rightarrow \symSlnVec^*$
      \ELSE
        \STATE Reuse previous solution $\symSlnVec^* = \symSlnVec$
      \ENDIF
      \STATE Solve coupled system at time $t + \Delta t$ for $\symSlnVec$ with initial guess $\symSlnVec^*$
    \ENDFOR
  \ENDWHILE
  \STATE $t = t + \Delta t$
  \STATE Negotiate new $\Delta t$ across all equation systems
\ENDWHILE
\end{algorithmic}
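One possible shape for the outer loop of Alg. 3.3 over a single timestep is sketched below; the per-system interface (predict, solve, converged, solution) is hypothetical and exists only to show the control flow.

\begin{verbatim}
def solve_segregated_step(systems, t, dt, max_outer=20):
    # Sketch of the per-timestep outer loop of Alg. 3.3
    for outer in range(max_outer):
        for eqs in systems:
            # First pass uses the predictor; later passes reuse the solution
            guess = eqs.predict(dt) if outer == 0 else eqs.solution
            eqs.solve(t + dt, guess)
        if all(eqs.converged() for eqs in systems):
            break                        # globally converged
\end{verbatim}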

3.11.1. Time Integration

As was mentioned above, time information is embedded in the forcing vector of the problem via the MASS term (see Equation Specification for more information on specifying a transient equation). When this term is active, assembly of the MASS term considers not only the new time state n+1 but also any older states required by the time integration method. For each available method, the rate of change of the solution vector at the new time is given below

BACKWARD_EULER, FIRST_ORDER

The backward Euler method is an implicit method that advances the solution using only the old and new solution vectors. It is also known as BDF1.

\dot{\symSlnVec}_{n+1} = \frac{ \symSlnVec_{n+1} - \symSlnVec_n }{\Delta t_{n}}

BDF2

Similar to BACKWARD_EULER, but uses one additional old time state to achieve second-order accuracy.

\dot{\symSlnVec}_{n+1} = c_{n+1} \, \symSlnVec_{n+1} + c_{n} \, \symSlnVec_n + c_{n-1} \, \symSlnVec_{n-1}

where

\gamma = \frac{\Delta t_n}{\Delta t_{n-1}} \\
c_{n+1} = \frac{1 + 2 \gamma}{ (1 + \gamma) \, \Delta t_n } \\
c_{n} = -\frac{1 + \gamma}{\Delta t_n} \\
c_{n-1} = \frac{\gamma^2}{ (1 + \gamma) \, \Delta t_n }
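
For a constant timestep (\gamma = 1), these coefficients reduce to the familiar fixed-step BDF2 formula

\dot{\symSlnVec}_{n+1} = \frac{3 \, \symSlnVec_{n+1} - 4 \, \symSlnVec_n + \symSlnVec_{n-1}}{2 \, \Delta t_n}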

MIDPOINT_RULE, SECOND_ORDER

This method achieves second-order accuracy using the old and new solution vectors as well as the old rate of change. It is also known as Crank-Nicolson.

\dot{\symSlnVec}_{n+1} = \frac{2 \left( \symSlnVec_{n+1} - \symSlnVec_n \right)}{\Delta t_{n}} - \dot{\symSlnVec}_n

AVERAGE_ACCELERATION

This method uses the extended mean value theorem to find the averaged acceleration; it is also known as the Newmark-beta method.

\dot{\symSlnVec}_{n+1} = \frac{\symSlnVec_{n+1} - \symSlnVec_n}{\gamma \Delta t_n} - \frac{1 - \gamma}{\gamma} \dot{\symSlnVec}_n

with \gamma = 0.5 and \beta = 0.25. Note that with \gamma = 0.5, this rate expression is identical to the MIDPOINT_RULE formula above.
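
The rate expressions above can be collected into a single helper for reference; the sketch below is illustrative only, with the method names mirroring the headings in this section.

\begin{verbatim}
def rate_new(method, u_np1, u_n, dt_n, u_nm1=None, dt_nm1=None, udot_n=None):
    # Rate of change of the solution at the new time, per integrator
    if method in ("BACKWARD_EULER", "FIRST_ORDER"):
        return (u_np1 - u_n) / dt_n
    if method == "BDF2":
        g = dt_n / dt_nm1
        c_np1 = (1.0 + 2.0 * g) / ((1.0 + g) * dt_n)
        c_n = -(1.0 + g) / dt_n
        c_nm1 = g * g / ((1.0 + g) * dt_n)
        return c_np1 * u_np1 + c_n * u_n + c_nm1 * u_nm1
    if method in ("MIDPOINT_RULE", "SECOND_ORDER", "AVERAGE_ACCELERATION"):
        return 2.0 * (u_np1 - u_n) / dt_n - udot_n   # gamma = 0.5
    raise ValueError(f"unknown method: {method}")
\end{verbatim}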

3.11.2. Predictor-Corrector

To decrease the number of iterations required in the nonlinear solve, and to avoid unphysical initial guesses, each transient step begins with a predicted solution. The predictor defaults to the same order as the chosen Time Integration method but can also be set manually. The first-order prediction of the new solution is given by

\symSlnVec^*_{n+1} = \symSlnVec_{n} + \Delta t_n \, \dot{\symSlnVec}_n

Similarly, the second-order prediction of the new solution is

\symSlnVec^*_{n+1} = \symSlnVec_{n} + \frac{\Delta t_n}{2} \left[ \left( 2 + \frac{\Delta t_n}{\Delta t_{n-1}}\right) \dot{\symSlnVec}_n - \left( \frac{\Delta t_n}{\Delta t_{n-1}} \right) \dot{\symSlnVec}_{n-1} \right]
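
Both predictors can be written as a single function that falls back to first order when no older rate is available; this is a minimal sketch with illustrative names, not Aria's implementation.

\begin{verbatim}
def predict(u_n, udot_n, dt_n, udot_nm1=None, dt_nm1=None):
    # First- or second-order prediction of the new solution (Sec. 3.11.2)
    if udot_nm1 is None or dt_nm1 is None:
        return u_n + dt_n * udot_n               # first-order predictor
    ratio = dt_n / dt_nm1                        # dt_n / dt_{n-1}
    return u_n + 0.5 * dt_n * ((2.0 + ratio) * udot_n - ratio * udot_nm1)
\end{verbatim}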