The SPARC (Sandia Parallel Aerodynamics and Reentry Code) will provide nuclear weapon qualification evidence for the random vibration and thermal environments created by reentry of a warhead into the earth’s atmosphere. SPARC incorporates the innovative approaches of ATDM projects on several fronts, including: effective harnessing of heterogeneous compute nodes using Kokkos, exascale-ready parallel scalability through asynchronous multi-tasking, uncertainty quantification through Sacado integration, implementation of state-of-the-art reentry physics and multiscale models, use of advanced verification and validation methods, and enabling of improved workflows for users. SPARC is being developed primarily for the Department of Energy nuclear weapon program, with additional development and use of the code supported by the Department of Defense for conventional weapons programs.
The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower-fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component-scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. Beyond case studies in concurrent multiscale modeling, we explore progress in crystal plasticity through modular designs, solution methodologies, model verification, and extensions to Sierra/SM and manycore applications. Advances in conformal microstructure meshing, with both hexahedral and tetrahedral workflows in Sculpt and Cubit, are highlighted. A structure-property case study in two-phase metallic composites applies the Materials Knowledge System to local metrics for void evolution. Discussion includes lessons learned, future work, and a summary of funded efforts and proposed work. Finally, an appendix illustrates the need for two-way coupling through a single-degree-of-freedom example.
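As an illustrative aside, the Schwarz alternating coupling described above can be sketched as a fixed-point iteration between two subdomain solvers that exchange Dirichlet data on the shared boundary. The sketch below is a minimal schematic, not the implementation in the report; the solver and transfer-operator names are hypothetical placeholders.

```python
# Minimal sketch of a Schwarz alternating iteration between a finely
# resolved microstructural subdomain and a coarse component-scale subdomain.
# solve_micro/solve_macro and the interpolation operators are hypothetical.
from numpy.linalg import norm

def schwarz_alternating(solve_micro, solve_macro, interp_to_micro,
                        interp_to_macro, u_micro, u_macro,
                        tol=1e-8, max_iters=50):
    """Alternate subdomain solves, exchanging Dirichlet data at the overlap."""
    for k in range(max_iters):
        # Solve the microstructural BVP with Dirichlet data taken from the
        # current component-scale solution on the shared boundary.
        u_micro_new = solve_micro(bc=interp_to_micro(u_macro))

        # Solve the component-scale BVP with Dirichlet data taken from the
        # updated microstructural solution.
        u_macro_new = solve_macro(bc=interp_to_macro(u_micro_new))

        # Stop when successive iterates no longer change appreciably.
        delta = max(norm(u_micro_new - u_micro), norm(u_macro_new - u_macro))
        u_micro, u_macro = u_micro_new, u_macro_new
        if delta < tol:
            break
    return u_micro, u_macro
```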
The ability to simulate wireless networks at large scale for meaningful amounts of time is considerably lacking in today's network simulators. For this reason, much published work in this area limits simulation studies to fewer than 1,000 nodes and either over-simplifies channel characteristics or covers time scales much shorter than a day. In this report, we show that one can overcome these limitations and study problems of high practical consequence. This work presents two key contributions to high-fidelity simulation of large-scale wireless networks: (a) wireless simulations can be sped up by more than 100X in runtime using ideas from spatial indexing algorithms and clipping of negligible signals, and (b) clustering and a task-oriented programming paradigm can be used to reduce inter-process communication in a parallel discrete-event simulation, resulting in better scaling efficiency.
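To make contribution (a) concrete, the sketch below shows one possible combination of a uniform-grid spatial index with a signal-clipping cutoff derived from a simple log-distance path-loss model. This is an illustrative reconstruction of the idea, not the report's implementation; all names, parameters, and the path-loss constants are assumptions.

```python
# Illustrative "spatial indexing + clipping" sketch: a uniform grid limits
# interference calculations to nodes within a cutoff radius beyond which the
# received signal falls below a negligibility threshold. Parameters are
# placeholders, not values from the report.
import math
from collections import defaultdict

def cutoff_radius(tx_power_dbm, noise_floor_dbm=-101.0,
                  path_loss_exponent=3.0, pl_at_1m_db=40.0):
    """Distance beyond which received power drops below the noise floor."""
    margin_db = tx_power_dbm - pl_at_1m_db - noise_floor_dbm
    return 10.0 ** (margin_db / (10.0 * path_loss_exponent))

def build_grid(nodes, cell):
    """Index node positions {id: (x, y)} into uniform grid cells."""
    grid = defaultdict(list)
    for nid, (x, y) in nodes.items():
        grid[(int(x // cell), int(y // cell))].append(nid)
    return grid

def nonnegligible_neighbors(grid, nodes, x, y, radius, cell):
    """Return ids of nodes whose signal is non-negligible at (x, y)."""
    r_cells = int(math.ceil(radius / cell))
    cx, cy = int(x // cell), int(y // cell)
    found = []
    for i in range(cx - r_cells, cx + r_cells + 1):
        for j in range(cy - r_cells, cy + r_cells + 1):
            for nid in grid.get((i, j), ()):
                nx, ny = nodes[nid]
                if math.hypot(nx - x, ny - y) <= radius:
                    found.append(nid)
    return found
```

Restricting the neighbor search to grid cells within the cutoff radius is what turns an all-pairs interference computation into a near-linear one as the node count grows.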
For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The payoff from this win-win-win scenario enabled reinvestment in semiconductor fabrication technology that could make even smaller, more densely-packed transistors. And so this virtuous cycle continued, decade after decade. Now though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 10 years or so, at best. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors forced clock speeds to cease getting faster more than a decade ago, which drove the industry to start building chips with multiple cores. But even multi-core architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.
This LDRD project was developed around the ambitious goal of applying PDE-constrained optimization approaches to design Z-machine components whose performance is governed by electromagnetic and plasma models. This report documents the results of this LDRD project. Our differentiating approach was to use topology optimization methods developed for structural design and extend them for application to electromagnetic systems pertinent to the Z-machine. To achieve this objective, a suite of optimization algorithms was implemented in the ROL library, part of the Trilinos framework. These methods were applied to standalone demonstration problems and the Drekar multi-physics research application. Out of this exploration, a new augmented Lagrangian approach to structural design problems was developed. We demonstrate that this approach has favorable mesh-independent performance: both the final design and the algorithmic performance were independent of the size of the mesh. In addition, topology optimization formulations for the design of conducting networks were developed and demonstrated. Of note, this formulation was used to develop a design for the inner magnetically insulated transmission line on the Z-machine. The resulting electromagnetic device is compared with theoretically postulated designs.
Most previous development of the peridynamic theory has assumed a Lagrangian formulation, in which the material model refers to an undeformed reference configuration. In the present work, an Eulerian form of material modeling is developed, in which bond forces depend only on the positions of material points in the deformed configuration. The formulation is consistent with the thermodynamic form of the peridynamic model and is derivable from a suitable expression for the free energy of a material. It is shown that the resulting formulation of peridynamic material models can be used to simulate strong shock waves and fluid response in which very large deformations make the Lagrangian form unsuitable. The Eulerian capability is demonstrated in numerical simulations of ejecta from a wavy free surface on a metal subjected to strong shock wave loading. The Eulerian and Lagrangian contributions to bond force can be combined in a single material model, allowing strength and fracture under tensile or shear loading to be modeled consistently with high compressive stresses. This capability is demonstrated in numerical simulation of bird strike against an aircraft, in which both tensile fracture and high pressure response are important.
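For orientation, the Lagrangian/Eulerian distinction above can be sketched schematically in the peridynamic equation of motion, in which the bond force density is integrated over the neighborhood of each material point. The notation below is ours and is only an illustrative summary, not the formulation derived in the paper.

```latex
\[
  \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\,\mathrm{d}V_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t),
  \qquad
  \mathbf{f} =
  \begin{cases}
    \mathbf{f}_L(\mathbf{u}'-\mathbf{u},\,\mathbf{x}'-\mathbf{x}) &
      \text{Lagrangian: depends on the reference bond } \mathbf{x}'-\mathbf{x},\\[2pt]
    \mathbf{f}_E(\mathbf{y}'-\mathbf{y}),\ \ \mathbf{y}=\mathbf{x}+\mathbf{u} &
      \text{Eulerian: depends only on deformed positions.}
  \end{cases}
\]
```

In the combined model mentioned in the abstract, the Lagrangian and Eulerian contributions to the bond force are summed within a single material model.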
In this paper, a nonlocal convection-diffusion model is introduced for the master equation of Markov jump processes in bounded domains. With minimal assumptions on the model parameters, the nonlocal steady and unsteady state master equations are shown to be well-posed in a weak sense. Finally, the nonlocal operator is shown to be the generator of finite-range nonsymmetric jump processes and, when certain conditions on the model parameters hold, the generators of finite- and infinite-activity Lévy and Lévy-type jump processes are shown to be special instances of the nonlocal operator.
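For readers unfamiliar with master equations, a generic form on a bounded domain $\Omega$ with jump-rate kernel $\gamma$ is sketched below; the notation is ours, and the paper's specific operator, kernel assumptions, and boundary treatment may differ.

```latex
\[
  \frac{\partial u}{\partial t}(\mathbf{x},t)
  = \int_{\Omega} \bigl[\gamma(\mathbf{x},\mathbf{y})\,u(\mathbf{y},t)
                       - \gamma(\mathbf{y},\mathbf{x})\,u(\mathbf{x},t)\bigr]
    \,\mathrm{d}\mathbf{y},
\]
```

Here $u$ is the probability density and $\gamma(\mathbf{x},\mathbf{y})$ is the rate density of jumps from $\mathbf{y}$ to $\mathbf{x}$; a nonsymmetric kernel, $\gamma(\mathbf{x},\mathbf{y}) \neq \gamma(\mathbf{y},\mathbf{x})$, is what gives the operator its convective character.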
Current practice for mitigating DRAM hardware faults is to simply discard the entire faulty DIMM. However, this becomes increasingly expensive and wasteful as the price of memory hardware increases and memory moves physically closer to processing units. Accurately characterizing memory faults in real time in order to pre-empt potentially catastrophic future failures is crucial to conserving resources by blacklisting small affected regions of memory rather than discarding an entire hardware component. We further evaluate and extend a machine learning method for DRAM fault characterization introduced in prior work by Baseman et al. at Los Alamos National Laboratory. We report on the usefulness of a variety of training sets, using a set of production-relevant metrics to evaluate the method on data from a leadership-class supercomputing facility. We observe an increase in the percentage of faults successfully mitigated as well as a decrease in the percentage of wasted blacklisted pages, regardless of training set, when using the learned algorithm as compared to a human-expert, deterministic, rule-based approach.
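The two metrics quoted above can be made concrete with a small sketch: the fraction of later faults that landed on pages blacklisted ahead of time ("mitigated") and the fraction of blacklisted pages that never faulted again ("wasted"). This is an illustration of the metric definitions as we read them, not code from the study; the data structures are hypothetical.

```python
# Hypothetical sketch of the two blacklisting metrics: percent of faults
# mitigated and percent of wasted blacklisted pages. Inputs are simple
# collections of page identifiers.

def blacklist_metrics(blacklisted_pages, faulted_pages):
    """Return (percent faults mitigated, percent blacklisted pages wasted)."""
    blacklisted = set(blacklisted_pages)
    faulted = set(faulted_pages)

    mitigated = len(faulted & blacklisted) / len(faulted) if faulted else 0.0
    wasted = len(blacklisted - faulted) / len(blacklisted) if blacklisted else 0.0
    return 100.0 * mitigated, 100.0 * wasted

# A learned policy and a rule-based policy can then be compared on the same
# fault log: higher mitigation with less waste is the desired outcome.
```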
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
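One standard way to compare a model's saliency map against eye-tracking data is the Pearson correlation coefficient (CC) between the predicted map and an empirical fixation-density map. The sketch below illustrates that general benchmark metric; it is not necessarily the exact evaluation procedure used in the paper.

```python
# Illustrative saliency-evaluation metric: Pearson correlation (CC) between
# a predicted saliency map and a fixation-density map built from eye-tracking
# data. Both inputs are 2D arrays of the same shape.
import numpy as np

def correlation_coefficient(saliency_map, fixation_map):
    """CC between a predicted saliency map and a fixation-density map."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-12)
    return float((s * f).mean())

# Higher CC indicates that the regions the model predicts as salient better
# match where human viewers actually looked in the visualization.
```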
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification, and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather uses user-input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
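To show how a Bézier curve can parameterize a pool profile, the sketch below evaluates a cubic Bézier curve from four control points and contrasts a narrow/deep pool with a wide/shallow one. The control points and function names are illustrative only; this is not SPPARKS input syntax.

```python
# Illustrative cubic Bezier evaluation for a weld pool cross-section profile.
# Control points (x, depth) are placeholders chosen only to contrast a deep
# pool with a shallow one.
import numpy as np

def bezier(control_points, n=100):
    """Evaluate a cubic Bezier curve at n parameter values in [0, 1]."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in control_points)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Narrow/deep vs. wide/shallow pools differ only in control-point placement:
deep_pool = bezier([(0.0, 0.0), (0.2, -1.5), (0.8, -1.5), (1.0, 0.0)])
shallow_pool = bezier([(0.0, 0.0), (0.2, -0.4), (0.8, -0.4), (1.0, 0.0)])
```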
Additive manufacturing enables the rapid, cost-effective production of customized structural components. To fully capitalize on the agility of additive manufacturing, it is necessary to develop complementary high-throughput materials evaluation techniques. In this study, over 1000 nominally identical tensile tests are used to explore the effect of process variability on the mechanical property distributions of a precipitation-hardened stainless steel produced by a laser powder bed fusion process, also known as direct metal laser sintering or selective laser melting. With this large dataset, rare defects are revealed that affect only ≈2% of the population, stemming from a single build lot of material. The rare defects cause a substantial loss in ductility and are associated with an interconnected network of porosity. The adoption of streamlined test methods will be paramount to diagnosing and mitigating such dangerous anomalies in future structural components.
The geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative, estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
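As a minimal sketch of the general flavor of bounded slip-rate estimation (not the published COBE or COBLE algorithms), one can fit geodetic velocities with slip rates constrained between zero and the plate rate and then sum the implied moment deficit over fault patches. The design matrix G, data vector d, patch areas, plate rate, and rigidity below are all placeholder inputs.

```python
# Minimal sketch of a bounded slip-rate inversion and the implied moment
# deficit rate (MDR). This is an illustration only, not COBE/COBLE; all
# inputs are hypothetical.
import numpy as np
from scipy.optimize import lsq_linear

def moment_deficit_rate(G, d, areas, v_plate, mu=3e10):
    """Bounded least-squares slip rates and the implied MDR (N·m per unit time)."""
    # Slip rates constrained to lie between 0 and the plate rate.
    result = lsq_linear(G, d, bounds=(0.0, v_plate))
    slip_rates = result.x
    # MDR = rigidity * sum over patches of area * (plate rate - slip rate).
    mdr = mu * np.sum(areas * (v_plate - slip_rates))
    return mdr, slip_rates
```

The published estimators go further by propagating data noise and prior bounds into a probability density for MDR rather than a single value.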