ALEGRA is a multiphysics finite-element shock hydrodynamics code that has been under development at Sandia National Laboratories since 1990. Fully coupled multiphysics capabilities include transient magnetics, magnetohydrodynamics, electromechanics, and radiation transport. ALEGRA is used to study hypervelocity impact, pulsed power devices, and radiation effects. The breadth of physics represented in ALEGRA is outlined here, along with simulation results for a selected hypervelocity impact experiment.
Lagrangian shock hydrodynamics simulations fail to proceed past a certain time when the mesh approaches tangling. A common solution is an Arbitrary Lagrangian-Eulerian (ALE) formulation, in which the mesh is improved (remeshing) and the solution is remapped onto the improved mesh. The simplest remeshing techniques move only the nodes of the mesh. More advanced techniques alter the mesh connectivity in portions of the domain in order to prevent tangling. Prior work has used Voronoi-based polygonal mesh generators and 2D quad/triangle mesh adaptation. This paper presents the use of tetrahedral mesh adaptation methods as the remeshing step in an otherwise Lagrangian finite element shock hydrodynamics code called Alexa.
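As a concrete illustration of the tangling indicator that would trigger such a remesh, the sketch below computes the mean-ratio quality of a tetrahedron (1 for a regular element, approaching 0 as the element degenerates, negative once inverted). The metric choice and the threshold mentioned in the comments are illustrative assumptions, not the actual criterion used in Alexa.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Signed volume of tet (a,b,c,d): det([b-a, c-a, d-a]) / 6.
double tet_volume(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
  const double u[3] = {b[0]-a[0], b[1]-a[1], b[2]-a[2]};
  const double v[3] = {c[0]-a[0], c[1]-a[1], c[2]-a[2]};
  const double w[3] = {d[0]-a[0], d[1]-a[1], d[2]-a[2]};
  return (u[0]*(v[1]*w[2] - v[2]*w[1])
        - u[1]*(v[0]*w[2] - v[2]*w[0])
        + u[2]*(v[0]*w[1] - v[1]*w[0])) / 6.0;
}

// Mean-ratio quality: 12 * (3|V|)^(2/3) / (sum of squared edge lengths).
double tet_quality(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
  const double V = tet_volume(a, b, c, d);
  const Vec3 p[4] = {a, b, c, d};
  double sum_l2 = 0.0;
  for (int i = 0; i < 4; ++i)
    for (int j = i + 1; j < 4; ++j)
      for (int k = 0; k < 3; ++k) {
        const double e = p[j][k] - p[i][k];
        sum_l2 += e * e;
      }
  const double s = std::cbrt(3.0 * std::fabs(V));   // (3|V|)^(1/3)
  return std::copysign(12.0 * s * s / sum_l2, V);   // negative if inverted
}

int main() {
  // Regular tetrahedron: quality is exactly 1.
  const Vec3 a{1, 1, 1}, b{1, -1, -1}, c{-1, -1, 1}, d{-1, 1, -1};
  std::printf("quality = %f\n", tet_quality(a, b, c, d));
  // A remesh/remap step would be triggered when the minimum quality
  // over all elements drops below some threshold, e.g. 0.2
  // (an illustrative value, not Alexa's setting).
  return 0;
}
```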
A discrete De Rham complex enables compatible, structure-preserving discretizations for a broad range of partial differential equation problems. Such discretizations can correctly reproduce the physics of interface problems, provided the grid conforms to the interface. However, large deformations, complex geometries, and evolving interfaces make generation of such grids difficult. We develop and demonstrate two formally equivalent approaches that, for a given background mesh, dynamically construct an interface-conforming discrete De Rham complex. Both approaches start by dividing cut elements into interface-conforming subelements but differ in how they build the finite element basis on these subelements. The first approach discards the existing non-conforming basis of the parent element and replaces it with a dynamic set of degrees of freedom of the same kind. The second approach defines the interface-conforming degrees of freedom on the subelements as superpositions of the basis functions of the parent element. These approaches generalize the Conformal Decomposition Finite Element Method (CDFEM) and the extended finite element method with algebraic constraints (XFEM-AC), respectively, across the De Rham complex.
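For reference, the three-dimensional complex in question is the exact sequence

```latex
\[
H^{1}(\Omega) \xrightarrow{\;\nabla\;}
H(\mathrm{curl};\Omega) \xrightarrow{\;\nabla\times\;}
H(\mathrm{div};\Omega) \xrightarrow{\;\nabla\cdot\;}
L^{2}(\Omega),
\]
```

discretized by nodal, edge (Nedelec), face (Raviart-Thomas), and volume elements, respectively. Compatibility means the discrete spaces preserve the exactness of this sequence, which is the property both cut-element constructions must maintain on the interface-conforming subelements.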
This paper describes an approach to parallelizing the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to distribute computational work efficiently. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development toward machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
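A minimal Kokkos sketch of the first step, the per-face bounding-box computation, is shown below. It assumes four-node (quadrilateral) faces and uses illustrative View names; it is not the production ACME kernel. The same lambda compiles for CUDA or OpenMP back ends without source changes, which is the portability property the paper relies on.

```cpp
#include <Kokkos_Core.hpp>
#include <cfloat>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int num_faces      = 1000;  // placeholder problem size
    const int nodes_per_face = 4;     // quadrilateral faces assumed

    // Face-node coordinates and per-face box extents (default device layout).
    Kokkos::View<double**[3]> face_nodes("face_nodes", num_faces, nodes_per_face);
    Kokkos::View<double*[3]>  box_min("box_min", num_faces);
    Kokkos::View<double*[3]>  box_max("box_max", num_faces);

    // ... fill face_nodes from the decomposed surface mesh (omitted) ...

    // One thread per face: reduce each face's node coordinates to an
    // axis-aligned bounding box.
    Kokkos::parallel_for("compute_face_aabbs", num_faces,
                         KOKKOS_LAMBDA(const int f) {
      for (int d = 0; d < 3; ++d) {
        double lo = DBL_MAX, hi = -DBL_MAX;
        for (int n = 0; n < nodes_per_face; ++n) {
          const double x = face_nodes(f, n, d);
          lo = (x < lo) ? x : lo;
          hi = (x > hi) ? x : hi;
        }
        box_min(f, d) = lo;
        box_max(f, d) = hi;
      }
    });
    Kokkos::fence();

    // The per-face boxes would next feed search-tree construction and the
    // inter-rank exchange of entities whose boxes overlap.
  }
  Kokkos::finalize();
  return 0;
}
```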
This paper presents an end-to-end design process for compliance-minimization-based topology optimization of cellular structures, through to the realization of a final printed product. Homogenization is used to derive properties representative of these structures through direct numerical simulation of unit cell models of the underlying periodic structure. The resulting homogenized properties are then used, assuming a uniform distribution of the cellular structure, to compute the final macro-scale structure. A new method is then presented for generating an STL representation of the final optimized part that is suitable for printing on typical industrial machines. Considerably finer cellular structures are shown to be possible with this method than with other approaches that use NURBS-based CAD representations of the geometry. Finally, results are presented that illustrate the fine-scale stresses developed in the final macro-scale optimized part, and suggestions are made as to how to incorporate these features into the overall optimization process.
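The macro-scale problem is the standard discrete compliance-minimization statement, written here with illustrative notation; the distinguishing feature of the approach is that the stiffness is assembled from the homogenized unit-cell moduli rather than from a solid-material interpolation:

```latex
\[
\min_{\rho}\; c(\rho) = \mathbf{f}^{\mathsf{T}} \mathbf{u}
\quad \text{s.t.} \quad
\mathbf{K}(\rho)\,\mathbf{u} = \mathbf{f}, \qquad
\int_{\Omega} \rho \, d\Omega \le V^{*}, \qquad
0 < \rho_{\min} \le \rho \le 1,
\]
```

where \(\mathbf{K}(\rho)\) is built from the homogenized elasticity tensor of the unit cell evaluated at the local cell density \(\rho\), and \(V^{*}\) is the allowed material volume.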
Surface effects are critical to the accurate simulation of electromagnetics (EM), as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large-deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) that satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge, and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.
Material response to dynamic loading is often dominated by microstructure (grain structure, porosity, inclusions, defects). An example critically important to Sandia's mission is the dynamic strength of polycrystalline metals, where heterogeneities lead to localization of deformation and loss of shear strength. Microstructural effects are of broad importance to the scientific community and to several institutions within the DoD and DOE; however, current models rely on inaccurate assumptions about mechanisms at the sub-continuum or mesoscale. Consequently, there is a critical need for accurate and robust methods for modeling heterogeneous material response at this lower length scale. This report summarizes work performed as part of an LDRD effort (FY11 to FY13; project number 151364) to meet these needs.
Aerospace designers seek lightweight, high-strength structures to lower launch weight while creating structures that are capable of withstanding launch loadings. Most 'light-weighting' is done through an expensive, time-consuming, iterative method requiring experience and a repeated design/test/redesign sequence until an adequate solution is obtained. Little successful work has been done in the application of generalized 3D optimization due to the difficulty of analytical solutions, the large computational requirements of computerized solutions, and the inability to manufacture many optimized structures with conventional machining processes. The Titanium Cholla LDRD team set out to create generalized 3D optimization routines, a set of analytically optimized 3D structures for testing the solutions, and a method of manufacturing these complex optimized structures. The team developed two new computer optimization solutions: Advanced Topological Optimization (ATO) and FlexFEM, an optimization package utilizing the eXtended Finite Element Method (XFEM) software for stress analysis. The team also developed several new analytically defined classes of optimized structures. Finally, the team developed a 3D capability for the Laser Engineered Net Shaping™ (LENS®) additive manufacturing process, including process planning for 3D optimized structures. This report gives individual examples as well as one generalized example showing the optimized solutions and an optimized metal part.
We examine the coupling of the patterned-interface-reconstruction (PIR) algorithm with the extended finite element method (X-FEM) for general multi-material problems over structured and unstructured meshes. The coupled method allows for local, element-based reconstructions of the interface and facilitates the imposition of discrete conservation laws. Of particular note is the use of a volume-of-fluid-based interface representation, giving rise to a segmented interface that is not continuous across element boundaries. In conjunction with such a representation, we employ enrichment with the ridge function for treating material interfaces and an analog to Heaviside enrichment for treating free surfaces. We examine a series of benchmark problems that quantify the convergence properties of the coupled method, and we examine the sensitivity to noise in the interface reconstruction. Finally, the fidelity of a remapping strategy is examined for a moving interface problem.
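The ridge enrichment referred to here is commonly written (in one standard form, due to Moës et al.) as

```latex
\[
\psi(\mathbf{x}) \;=\; \sum_{i} N_i(\mathbf{x})\,\lvert \phi_i \rvert
\;-\; \Bigl\lvert \sum_{i} N_i(\mathbf{x})\,\phi_i \Bigr\rvert,
\]
```

where the \(N_i\) are the standard nodal shape functions and the \(\phi_i\) are nodal signed distances to the interface. The function \(\psi\) is continuous, has a gradient kink (the "ridge") along the interface, vanishes at the nodes, and is identically zero on uncut elements, which keeps the enrichment local to cut elements.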
The success of Lagrangian contact modeling leads one to believe that important aspects of this capability may be used for multi-material modeling when only a portion of the simulation can be represented in a Lagrangian frame. We review current experience with two dual-mesh technologies in which one mesh is Lagrangian and the other is an Arbitrary Lagrangian/Eulerian (ALE) mesh. These methods are cast in the framework of an operator-split ALE algorithm in which a Lagrangian step is followed by a remesh/remap step. An interface-coupled methodology is considered first. This technique is applicable to problems involving contact between materials of dissimilar compliance. The technique models the more compliant (soft) material as ALE, while the less compliant (hard) material and associated interface are modeled in a Lagrangian fashion. Loads are transferred between the hard and soft materials via explicit transient dynamics contact algorithms. The use of these contact algorithms removes the requirement of node-to-node matching at the soft-hard interface. In the context of the operator-split ALE algorithm, a single Lagrangian step is performed using a mesh-to-mesh contact algorithm. At the end of the Lagrangian step the meshes will be slightly offset at the interface but non-interpenetrating. The ALE mesh nodes at the interface are then remeshed to their initial location relative to the Lagrangian body faces, and the ALE mesh is smoothed, translated, and rotated to follow the Lagrangian body. Robust remeshing in the ALE region is required for the success of this algorithm, and we describe current work in this area. The second method is an overlapping-grid methodology that requires mapping of information between a Lagrangian mesh and an ALE mesh. The Lagrangian mesh describes a relatively hard body that interacts with softer material contained in the ALE mesh. A predicted solution for the velocity field is computed independently on both meshes. Element-centered velocity and momentum are transferred between the meshes using the volume transfer capability implemented in contact algorithms. Data from the ALE mesh is mapped to a phantom mesh that surrounds the Lagrangian mesh, providing for the reaction to the predicted motion of the Lagrangian material. Data from the Lagrangian mesh is mapped directly to the ALE mesh. A momentum balance is performed on both meshes to adjust the velocity field to account for the interaction of the material from the other mesh. Subsequent remeshing and remapping of the ALE mesh are performed to allow large deformation of the softer material. We review current progress using this approach and discuss avenues for future research and development.
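One momentum-conserving choice for this velocity adjustment, written here with illustrative notation for an overlapping Lagrangian/ALE pair (subscripts \(L\) and \(A\)), blends the two predicted fields into a common velocity:

```latex
\[
\mathbf{v}^{*} \;=\; \frac{m_L\,\mathbf{v}_L + m_A\,\mathbf{v}_A}{m_L + m_A},
\]
```

which preserves the combined momentum \(m_L\mathbf{v}_L + m_A\mathbf{v}_A\) exactly. The balance actually used may distribute the correction between the meshes differently, but any admissible choice must conserve this sum.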
ALEGRA is an arbitrary Lagrangian-Eulerian multi-material finite element code used for modeling solid dynamics problems involving large distortion and shock propagation. This document describes the basic user input language and instructions for using the software.
An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.
This report presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element, and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speeds, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis (also known as von Neumann analysis) provides an automatic process for separating the spectral behavior of the discrete advective operator into its symmetric dissipative and skew-symmetric advective components. Further, it is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, streamline upwind control-volume, produce both an artificial diffusivity and an artificial phase speed in addition to the usual semi-discrete artifacts observed in the discrete phase speed, group speed, and diffusivity. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behavior in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the control-volume finite element method (CVFEM) and its streamline upwind derivatives yield strictly second-order behavior. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework.
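In brief, and with illustrative notation: for the one-dimensional model equation \(u_t + a u_x = \nu u_{xx}\), substituting a single Fourier mode into a linear semi-discretization reduces it to a scalar ODE,

```latex
\[
u_j(t) = \hat{u}(t)\, e^{\mathrm{i} k x_j}
\quad\Longrightarrow\quad
\frac{d\hat{u}}{dt}
= -\Bigl(\mathrm{i}\,a\,k^{*}(k) \;+\; \nu\,\bigl[k^{**}(k)\bigr]^{2}\Bigr)\hat{u}.
\]
```

The skew-symmetric (advective) part of the discrete operator defines the modified wavenumber \(k^{*}\), and hence the discrete phase speed \(c_p = a k^{*}/k\) and group speed \(c_g = a\, dk^{*}/dk\); the symmetric (dissipative) part defines the discrete diffusivity \(\nu (k^{**}/k)^2\). A real contribution arising from a nominally advective term (as in streamline upwinding) registers as artificial diffusivity, and an imaginary contribution from the diffusive side as artificial phase speed.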
ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation. This document describes the user input language for the code.
This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved scalar--laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate, (2) the development and testing of statistical analysis and visualization utility software for Exodus II unstructured-grid LES, and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.
The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory, and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The results of the von Neumann analyses are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
The Reproducing Kernel Particle Method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a mesh-full or a mesh-free manner and provides the ability to tune the method, via the selection of a dilation parameter and window function, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations, making it an ideal candidate for simulating multi-scale problems. Although RKPM has many appealing attributes, the method is quite new, and its numerical performance is still being quantified with respect to more traditional discretization methods. In order to assess the numerical performance of RKPM, detailed studies of RKPM on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one- and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, the influence of row-sum lumping the mass matrix is shown to introduce severe lagging phase errors. A higher-order mass matrix improves the dispersion characteristics relative to the lumped mass matrix but still delivers severe lagging phase errors relative to the fully integrated, consistent mass matrix.
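As a point of reference for the mass-matrix effect described in this and the preceding study, the classical linear finite element result for \(u_{tt} = c^2 u_{xx}\) on uniform node spacing \(h\) makes the direction of the effect concrete (a textbook dispersion result, quoted here for orientation rather than as the RKPM dispersion relation itself):

```latex
\[
\text{consistent mass:}\quad
\omega_h^2 = \frac{6 c^2}{h^2}\,\frac{1-\cos kh}{2+\cos kh}
\;\Rightarrow\;
\frac{c_p}{c} \approx 1 + \frac{(kh)^2}{24}\quad\text{(leading)},
\]
\[
\text{row-sum lumped mass:}\quad
\omega_h^2 = \frac{2 c^2}{h^2}\,\bigl(1-\cos kh\bigr)
\;\Rightarrow\;
\frac{c_p}{c} \approx 1 - \frac{(kh)^2}{24}\quad\text{(lagging)},
\]
```

with phase speed \(c_p = \omega_h/k\) and group speed \(c_g = d\omega_h/dk\). In the RKPM analyses, the dilation parameter and window function modulate this same competition between the mass and stiffness Fourier symbols.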
In this work, high-energy electron beam brazing of a ceramic part is modeled numerically. The part considered consists of a ceramic cylinder and disk, between which is sandwiched an annular washer of braze material. An electron beam impinges on the disk, melting the braze metal. The resulting coupled electron-photon and thermal transport equations are solved using Monte Carlo and finite element techniques, respectively. Results indicate that increased electron beam current decreases the time required to melt the braze while increasing temperature gradients in the ceramic near the braze. Furnace brazing was also simulated, and the predicted results indicate increased processing times relative to electron beam brazing.
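The thermal side of this coupling can be written, assuming a one-way transfer in which the Monte Carlo transport supplies the source term (notation illustrative):

```latex
\[
\rho\, c_p\, \frac{\partial T}{\partial t}
= \nabla \cdot \bigl( \kappa\, \nabla T \bigr) + \dot{q}_{\mathrm{EB}}(\mathbf{x}),
\]
```

where \(\dot{q}_{\mathrm{EB}}\) is the volumetric energy deposition rate computed by the electron-photon transport and mapped onto the finite element thermal mesh.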