Publications

Results 1951–2000 of 9,998

Search results

Dragonfly-Inspired Algorithms for Intercept Trajectory Planning

Chance, Frances S.

Dragonflies are known to be highly successful hunters (achieving a 90-95% success rate in nature) that implement a guidance law resembling proportional navigation to intercept their prey. This project tested the hypothesis that dragonflies are able to implement proportional navigation using prey-image translation on their eyes. The model dragonfly presented here calculates changes in pitch and yaw to maintain the prey's image at a designated location (the fovea) on a two-dimensional screen (the model's eyes). When the model also uses self-knowledge of its own maneuvers as an error signal to adjust the location of the fovea, its interception trajectory becomes equivalent to proportional navigation. I also show that this model can be applied successfully, in a limited number of scenarios, against maneuvering prey. My results provide a proof-of-concept demonstration of the potential of using the dragonfly nervous system to design a robust interception algorithm for implementation on a man-made system.
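As an illustration of the guidance law referenced above, the following is a minimal Python sketch of textbook two-dimensional proportional navigation (not the paper's image-based dragonfly model; the function name and the navigation constant N = 3 are illustrative assumptions):

    import numpy as np

    def pn_acceleration(p_pursuer, v_pursuer, p_target, v_target, N=3.0):
        # Proportional navigation: commanded acceleration = N * closing speed * LOS rate,
        # applied perpendicular to the line of sight (2-D geometry).
        r = p_target - p_pursuer                      # relative position (line of sight)
        v = v_target - v_pursuer                      # relative velocity
        r2 = np.dot(r, r)
        los_rate = (r[0] * v[1] - r[1] * v[0]) / r2   # rotation rate of the line of sight
        closing_speed = -np.dot(r, v) / np.sqrt(r2)
        los_hat = r / np.sqrt(r2)
        normal = np.array([-los_hat[1], los_hat[0]])  # unit vector normal to the line of sight
        return N * closing_speed * los_rate * normal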

More Details

On-line Generation and Error Handling for Surrogate Models within Multifidelity Uncertainty Quantification

Blonigan, Patrick J.; Geraci, Gianluca G.; Rizzi, Francesco N.; Eldred, Michael S.; Carlberg, Kevin

Uncertainty quantification is recognized as a fundamental task for obtaining predictive numerical simulations. However, many realistic engineering applications require complex and computationally expensive high-fidelity numerical simulations to accurately characterize the system responses. Moreover, complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity approach, i.e. a workflow that uses only high-fidelity simulations to perform the uncertainty quantification task, is infeasible due to the prohibitive overall computational cost. In recent years, multifidelity strategies have been introduced to overcome this issue. The core idea of this family of methods is to combine simulations with varying levels of fidelity/accuracy in order to obtain multifidelity estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical realization and thus its computational cost. However, less attention has been dedicated to low-fidelity models that can be built directly from the small number of high-fidelity simulations available. In this work we focus our attention on reduced-order models, which can be considered a particular class of data-driven approaches. Our main goal is to explore the combination of multifidelity uncertainty quantification and reduced-order models to obtain an efficient framework for propagating uncertainties through expensive numerical codes.
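To make the multifidelity idea concrete, here is a minimal Python sketch of a generic two-fidelity control-variate mean estimator (a textbook construction, not the specific estimators or reduced-order models developed in this work; variable names are illustrative):

    import numpy as np

    def two_fidelity_mean(q_hi, q_lo_shared, q_lo_extra):
        # q_hi        : high-fidelity outputs on N shared input samples
        # q_lo_shared : low-fidelity outputs on the same N samples
        # q_lo_extra  : low-fidelity outputs on many additional (cheap) samples
        alpha = np.cov(q_hi, q_lo_shared)[0, 1] / np.var(q_lo_shared, ddof=1)
        mu_lo = np.mean(np.concatenate([q_lo_shared, q_lo_extra]))
        # Correct the high-fidelity sample mean using the low-fidelity model as a control variate.
        return np.mean(q_hi) + alpha * (mu_lo - np.mean(q_lo_shared))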

More Details

Hybridizing Classifiers and Collection Systems to Maximize Intelligence and Minimize Uncertainty in National Security Data Analytics Applications

Staid, Andrea S.; Valicka, Christopher G.

There are numerous applications that combine data collected from sensors with machine-learning-based classification models to predict the type of event or object observed. Both the collection of the data itself and the classification models can be tuned for optimal performance, but we hypothesize that additional gains can be realized by jointly assessing both factors together. Through this research, we used a seismic event dataset and two neural network classification models that issued probabilistic predictions on each event to determine whether it was an earthquake or a quarry blast. Real-world applications will have constraints on data collection, perhaps in terms of a budget for the number of sensors or on where, when, or how data can be collected. We mimicked such constraints by creating subnetworks of sensors with both size and locational constraints. We compare different methods of determining the set of sensors in each subnetwork in terms of their predictive accuracy and the number of events that they observe overall. Additionally, we take the classifiers into account, treating them both as black-box models and testing various ways of combining predictions among models and among the set of sensors that observe any given event. We find that comparable overall performance can be achieved with less than half the number of sensors in the full network. Additionally, a voting scheme that uses the average confidence across the sensors for a given event shows improved predictive accuracy across nearly all subnetworks. Lastly, locational constraints matter, but sometimes in unintuitive ways, as the sensors chosen in place of those excluded by location may in fact perform better. This being a short-term research effort, we offer a lengthy discussion of interesting next steps and ties to other ongoing research efforts that we did not have time to pursue. These include a detailed analysis of subnetwork performance broken down by event type, specific location, and model confidence. This project also included a Campus Executive research partnership with Texas A&M University. Through this partnership, we worked with a professor and a student to study information gain for UAV routing, an alternative way of looking at a similar problem space that includes sensor operation for data collection and the resulting benefit to be gained from it. This work is described in an appendix.
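A minimal Python sketch of the average-confidence voting scheme described above, under the assumption that each sensor's classifier returns a probability vector over the event classes (array shapes and values are illustrative):

    import numpy as np

    def average_confidence_vote(sensor_probs):
        # sensor_probs: shape (n_sensors, n_classes); each row is one sensor's
        # predicted class probabilities for the same event.
        mean_confidence = np.mean(sensor_probs, axis=0)
        return int(np.argmax(mean_confidence))   # index of the winning class

    # Example: three sensors voting between class 0 (earthquake) and class 1 (quarry blast).
    probs = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]])
    print(average_confidence_vote(probs))        # -> 0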

More Details

Shortening the Design and Certification Cycle for Additively Manufactured Materials by Improved Mesoscale Simulations and Validation Experiments: Fiscal Year 2019 Status Report

Specht, Paul E.; Mitchell, John A.; Adams, David P.; Brown, Justin L.; Silling, Stewart A.; Wise, Jack L.; Palmer, Todd

This report outlines the fiscal year (FY) 2019 status of an ongoing multi-year effort to develop a general, microstructurally aware, continuum-level model for representing the dynamic response of materials with complex microstructures. This work has focused on accurately representing the response of both conventionally wrought-processed and additively manufactured (AM) 304L stainless steel (SS) as a test case. Additive manufacturing, or 3D printing, is an emerging technology capable of enabling shortened design and certification cycles for stockpile components through rapid prototyping. However, how the complex and unique microstructures of AM materials affect their mechanical response at high strain rates is not yet well understood. To achieve our project goal, an upscaling technique was developed to bridge the gap between the microstructural and continuum scales and represent AM microstructures on a finite element (FE) mesh. This process involves simulating the additive process using the Sandia-developed kinetic Monte Carlo (KMC) code SPPARKS. The resulting SPPARKS microstructures are characterized using clustering algorithms from machine learning and used to populate the quadrature points of an FE mesh. Additionally, a spall kinetic model (SKM) was developed to more accurately represent the dynamic failure of AM materials. Validation experiments were performed using both pulsed-power machines and projectile launchers. These experiments have provided equation of state (EOS) and flow strength measurements of both wrought and AM 304L SS to pressures above 1 Mbar. In some experiments, multi-point interferometry was used to quantify the variation in the observed material response of the AM 304L SS. Analysis of these experiments is ongoing, but preliminary comparisons of our upscaling technique and SKM to experimental data were performed as a validation exercise. Moving forward, this project will advance and further validate our computational framework using advanced theory and additional high-fidelity experiments.
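The clustering step of such an upscaling workflow might look schematically like the following Python sketch (a hypothetical illustration only: the random feature matrices, the number of clusters, and the mapping to quadrature points stand in for the actual SPPARKS output and FE mesh used in this project):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-site microstructural descriptors extracted from a SPPARKS
    # realization (e.g., grain size and orientation components); placeholder data.
    site_features = np.random.rand(10000, 4)

    kmeans = KMeans(n_clusters=8, random_state=0).fit(site_features)

    # Hypothetical descriptors evaluated at the quadrature points of the FE mesh;
    # each point inherits the material parameters of its assigned cluster.
    quad_features = np.random.rand(500, 4)
    quad_cluster_ids = kmeans.predict(quad_features)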

More Details

Designer quantum materials

Misra, Shashank M.; Ward, Daniel R.; Baczewski, Andrew D.; Campbell, Quinn C.; Schmucker, Scott W.; Mounce, Andrew M.; Tracy, Lisa A.; Lu, Tzu-Ming L.; Marshall, Michael T.; Campbell, DeAnna M.

Quantum materials have long promised to revolutionize everything from energy transmission (high-temperature superconductors) to both quantum and classical information systems (topological materials). However, their discovery and application have proceeded in an Edisonian fashion, owing to both an incomplete theoretical understanding and the difficulty of growing and purifying new materials. This project leverages Sandia's unique atomic precision advanced manufacturing (APAM) capability to design small-scale tunable arrays (designer materials) made of donors in silicon. Their low-energy electronic behavior can mimic quantum materials and can be tuned by changing the fabrication parameters of the array, thereby enabling the discovery of materials systems that cannot yet be synthesized. In this report, we detail three key advances we have made toward the development of designer quantum materials. First are advances in both APAM technique and the underlying mechanisms required to realize high-yielding donor arrays. Second is the first-ever observation of distinct phases in this material system, manifest in disordered 2D sheets of donors. Third are advances in modeling the electronic structure of donor clusters and of regular structures incorporating them, which are critical to understanding whether an array is expected to show interesting physics. Combined, these establish the baseline knowledge required to manifest the strongly correlated phases of the Mott-Hubbard model in donor arrays, the first step toward deploying APAM donor arrays as analogues of quantum materials.
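For reference, the single-band Hubbard Hamiltonian whose strongly correlated phases such donor arrays are intended to emulate can be written in standard notation (the donor-specific parameterization used in this work is not given in the abstract) as

    H = -t \sum_{\langle i,j \rangle, \sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},

where t is the nearest-neighbor hopping amplitude (set by the donor spacing) and U is the on-site Coulomb repulsion.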

More Details

Higher-moment buffered probability

Optimization Letters

Kouri, Drew P.

In stochastic optimization, probabilities naturally arise as cost functionals and chance constraints. Unfortunately, these functions are difficult to handle both theoretically and computationally. The buffered probability of failure and its subsequent extensions were developed as numerically tractable, conservative surrogates for probabilistic computations. In this manuscript, we introduce the higher-moment buffered probability. Whereas the buffered probability is defined using the conditional value-at-risk, the higher-moment buffered probability is defined using higher-moment coherent risk measures. In this way, the higher-moment buffered probability encodes information about the magnitude of tail moments, not simply the tail average. We prove that the higher-moment buffered probability is closed, monotonic, quasi-convex and can be computed by solving a smooth one-dimensional convex optimization problem. These properties enable smooth reformulations of both higher-moment buffered probability cost functionals and constraints.
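For context, the CVaR-based buffered probability of exceedance at a threshold z admits the well-known one-dimensional convex representation

    \bar{p}_z(X) = \min_{a \ge 0} \mathbb{E}\!\left[ \left( a (X - z) + 1 \right)_+ \right],

while the higher-moment coherent risk measures of order p >= 1 take the form

    \mathrm{HMCR}_{p,\alpha}(X) = \min_{t \in \mathbb{R}} \left\{ t + (1 - \alpha)^{-1} \, \| (X - t)_+ \|_p \right\}.

The higher-moment buffered probability introduced in the paper replaces the conditional value-at-risk (the p = 1 case) in the first construction with these L^p tail norms; the precise definition and its properties are given in the manuscript.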

More Details

Gaussian-Process-Driven Adaptive Sampling for Reduced-Order Modeling of Texture Effects in Polycrystalline Alpha-Ti

JOM

Tallman, Aaron E.; Stopka, Krzysztof S.; Swiler, Laura P.; Wang, Yan; Kalidindi, Surya R.; Mcdowell, David L.

Data-driven tools for finding structure–property (S–P) relations, such as the Materials Knowledge System (MKS) framework, can accelerate materials design once the costly and technical calibration process has been completed. A three-model method is proposed to reduce the expense of calibrating S–P relation models: (1) direct simulations are performed according to (2) a Gaussian-process-based data collection model in order to calibrate (3) an MKS homogenization model, in an application to α-Ti. The new method compares favorably with expert texture selection in terms of the performance of the resulting calibrated MKS models. Benefits for the development of new and improved materials are discussed.
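A minimal Python sketch of Gaussian-process-driven adaptive sampling using a maximum-predictive-variance acquisition rule (one common choice; the data collection model in the paper may use a different criterion, and the kernel and function names here are illustrative):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def next_simulation_input(X_observed, y_observed, X_candidates):
        # Fit a GP surrogate to the simulations run so far, then request the
        # candidate input where the surrogate is least certain.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
        gp.fit(X_observed, y_observed)
        _, std = gp.predict(X_candidates, return_std=True)
        return X_candidates[np.argmax(std)]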

More Details

EMPIRE-PIC Code Verification of a Cold Diode

Smith, Thomas M.; Pointon, T.D.; Cartwright, K.L.; Rider, W.J.

This report presents the code verification of EMPIRE-PIC against the analytic solution for a cold diode first derived by Jaffe. The cold diode was simulated using EMPIRE-PIC, and the error norms were computed with respect to the Jaffe solution. The diode geometry is one-dimensional, and the simulations use the EMPIRE electrostatic field solver. After a transient start-up phase, during which the electrons first cross the anode-cathode gap, the simulations reach an equilibrium in which the electric potential and electric field are approximately steady. The expected spatial orders of convergence for the potential, electric field, and particle velocity are observed.
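The observed order of convergence reported in such verification studies is typically computed from error norms on successively refined meshes; a minimal Python sketch of generic verification practice (not EMPIRE-specific code):

    import numpy as np

    def observed_order(h, err):
        # h   : mesh spacings for a sequence of refinements (coarsest first)
        # err : corresponding error norms against the analytic (Jaffe) solution
        h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float)
        return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])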

More Details

A parallel graph algorithm for detecting mesh singularities in distributed memory ice sheet simulations

ACM International Conference Proceeding Series

Bogle, Ian A.; Devine, Karen D.; Perego, Mauro P.; Rajamanickam, Sivasankaran R.; Slota, George M.

We present a new distributed-memory parallel algorithm for detecting degenerate mesh features that can cause singularities in ice sheet simulations. Identifying and removing mesh features such as disconnected components (icebergs) or hinge vertices (peninsulas of ice detached from the land) can significantly improve the convergence of iterative solvers. Because the ice sheet evolves during the course of a simulation, it is important that the detection algorithm can run in situ with the simulation, running in parallel and taking a negligible amount of computation time, so that degenerate features (e.g., calving icebergs) can be detected as they develop. We present a distributed-memory, BFS-based label-propagation approach to degenerate feature detection that is efficient enough to be called at each step of an ice sheet simulation while correctly identifying all degenerate features of an ice sheet mesh. Our method finds all degenerate features in a mesh with 13 million vertices in 0.0561 seconds on 1536 cores in the MPAS Albany Land Ice (MALI) model. Compared to the previously used serial pre-processing approach, we observe a 46,000x speedup for our algorithm, which also provides the additional capability of dynamically detecting degenerate features during the simulation.
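A serial Python sketch of the BFS-based label-propagation idea for finding disconnected components (the paper's algorithm is distributed-memory and also detects hinge vertices, which this illustration omits):

    from collections import deque

    def label_components(adjacency):
        # adjacency: dict mapping each mesh vertex to an iterable of neighboring vertices.
        # Returns a dict mapping each vertex to a component label; components other
        # than the one containing grounded ice would be flagged as icebergs.
        labels, next_label = {}, 0
        for start in adjacency:
            if start in labels:
                continue
            labels[start] = next_label
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adjacency[u]:
                    if v not in labels:
                        labels[v] = next_label
                        queue.append(v)
            next_label += 1
        return labels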

More Details

TATB Sensitivity to Shocks from Electrical Arcs

Propellants, Explosives, Pyrotechnics

Chen, Kenneth C.; Warne, Larry K.; Jorgenson, Roy E.; Niederhaus, John H.

Use of insensitive high explosives (IHEs) has significantly improved ammunition safety because of their remarkable insensitivity to violent cook-off, shock, and impact. Triamino-trinitrobenzene (TATB) is the IHE used in many modern munitions. Previously, lightning simulations in different test configurations have shown that the required detonation threshold for standard-density TATB at ambient and elevated temperatures (250 °C) has a sufficient margin over the shock caused by an arc from the most severe lightning. In this paper, the Braginskii model with the Lee-More channel conductivity prescription is used to demonstrate how electrical arcs from lightning could cause detonation in TATB. The steep rise and slow decay of a typical lightning pulse are used to demonstrate that the shock pressure from an electrical arc, after reaching its peak, falls off faster than the inverse of the arc radius. For detonation to occur, two necessary conditions must be met: the Pop-Plot criterion and a minimum spot size requirement. The relevant Pop-Plot for TATB at 250 °C was converted into an empirical detonation criterion applicable to explosives subjected to shocks of variable pressure. The arc cross-section was required to meet the minimum detonation spot size reported in the literature. One caveat is that when the shock pressure exceeds the detonation pressure, the Pop-Plot may not be applicable, and the minimum spot size requirement may be smaller.
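For reference, the Pop-Plot relation expresses the run distance to detonation x* as a power law in the sustained input shock pressure P, commonly written as

    \log_{10} x^{*} = a - b \, \log_{10} P,

with material- and temperature-specific coefficients a and b. The empirical criterion used in this paper generalizes such a relation to shocks of time-varying pressure; the specific form and the 250 °C TATB coefficients are given in the paper, not in the abstract.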

More Details