Publications

Results 1751–1800 of 99,299

Modeling Workloads of a Linear Electromagnetic Code for Load Balancing Matrix Assembly

Lifflander, Jonathan J.; Pebay, Pierre L.; Mcgovern, Sean T.; Slattengren, Nicole L.

This report presents our work to model the workloads of a linear electromagnetic application, based on the method of moments in the frequency domain, in order to effectively load balance the matrix assembly. This application is particularly challenging to load balance due to its lack of persistent iterative behavior, its operation under tight memory constraints (the matrix may fill 80% of memory on each node), and the algorithmic complexity of the computational method. This report describes the first step in our work to apply an inspector-executor approach to load balancing, in which key parameters are exposed during the inspector phase and a pre-trained model predicts relative task weights for the load balancer.
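
The inspector-executor pattern described above can be sketched as follows. This is an illustrative sketch only: the per-task features, the stand-in linear weight predictor, and the greedy assignment heuristic are assumptions for demonstration, not the report's actual model or balancer.

```python
# Sketch of an inspector-executor load-balancing loop (illustrative only).
# predict_weight is a stand-in linear cost model; the report instead uses
# a pre-trained model over parameters exposed during the inspector phase.

def inspect(tasks):
    """Inspector phase: expose cheap-to-compute features per task."""
    return [{"rows": t[0], "cols": t[1]} for t in tasks]

def predict_weight(features, coeffs=(1.0, 0.001)):
    """Stand-in model: relative cost grows with rows*cols (dense assembly)."""
    base, per_entry = coeffs
    return base + per_entry * features["rows"] * features["cols"]

def balance(tasks, n_ranks):
    """Greedy longest-processing-time assignment using predicted weights."""
    weights = [predict_weight(f) for f in inspect(tasks)]
    order = sorted(range(len(tasks)), key=lambda i: -weights[i])
    loads = [0.0] * n_ranks
    assignment = {}
    for i in order:
        r = loads.index(min(loads))  # place on the least-loaded rank
        assignment[i] = r
        loads[r] += weights[i]
    return assignment, loads

tasks = [(100, 100), (500, 500), (200, 300), (50, 50), (400, 100)]
assignment, loads = balance(tasks, n_ranks=2)
```

In an executor phase, each rank would then assemble only the matrix blocks assigned to it, with the predicted weights keeping per-rank work roughly even.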

Electrochemical aptamer-based sensors: leveraging the sensing platform for minimally-invasive microneedle measurements and fundamental exploration of sensor biofouling dynamics

Downs, Alexandra M.; Miller, Philip R.; Bolotsky, Adam; Staats, Amelia M.; Weaver, Bryan M.; Bennett, Haley L.; Tiwari, Sidhant; Kolker, Stephanie; Wolff, Nathan P.; Polsky, Ronen; Larson, Steven R.; Coombes, Kenneth R.; Sawyer, Patricia S.

The ability to track the concentrations of specific molecules in the body in real time would significantly improve our ability to study, monitor, and respond to diseases. To achieve this, we require sensors that can withstand the complex environment inside the body. Electrochemical aptamer-based sensors are particularly promising for in vivo sensing, as they are among the only generalizable sensing technologies that can achieve real-time molecular monitoring directly in blood and the living body. In this project, we first focused on extending the application space of aptamer sensors to support minimally-invasive wearable measurements. To achieve this, we developed individually-addressable sensors with commercial off-the-shelf microneedles. We demonstrated sensor function in buffer, blood, and porcine skin (a common proxy for human skin). In addition to the applied sensing project, we also worked to improve fundamental understanding of the aptamer sensing platform and how it responds to biomolecular interferents. Specifically, we explored the interfacial dynamics of biofouling – a process impacting sensors placed in complex fluids, such as blood.

Rapid Fabrication of High Frame Rate Multichannel FTIR Spectrometers

Reneker, Joseph; Wermer, Lydia R.; Kaehr, Bryan J.; Meiser, Daniel; Huntley, Emily; Shields, Eric A.

Spectrally resolved signals in the short- to mid-wave infrared (SWIR/MWIR) bands at high temporal resolution are critical for many national security remote sensing missions. Currently available off-the-shelf technology can achieve either high temporal resolution or high spectral resolution, but rugged instruments that achieve both simultaneously remain mostly in the realm of one-off R&D projects. This report documents efforts to demonstrate a new technique for designing and building high-resolution, high-frame-rate multichannel FTIR (MC-FTIR) spectrometers that operate in the SWIR/MWIR bands. The core optical element in an MC-FTIR spectrometer is an array of statically tuned lamellar grating interferometers (LGIs). In the original MC-FTIR work, these arrays were fabricated using a synchrotron x-ray lithography method. We proposed to instead fabricate these LGI arrays using multiphoton lithography (MPL), a 3D printing technique that can produce meso-scale structures with sub-micron precision. Although we were able to fabricate LGI arrays of sufficient size using MPL, the realized optical surfaces had unsuitably high optical form errors, precluding their use in a fieldable instrument. Further advancement in MPL technology may eventually enable fabrication of interferometer-grade LGI arrays.

Computing dissipation for molecular-level turbulence simulations

Mcmullen, Ryan M.

A major difficulty in the analysis of molecular-level simulations is that macroscopic flow quantities are inherently noisy due to molecular fluctuations. An important example for turbulent flows is the kinetic energy dissipation rate. Traditionally, this quantity is calculated from gradients of the macroscopic velocity field, which exacerbates the noise problem. The inability to accurately compute the dissipation rate makes meaningful comparison of molecular-level and continuum simulation results a serious challenge. Herein, we extend previously developed coarse-graining theories to derive an exact molecular-level expression for the dissipation rate, which would circumvent the need to compute gradients of noisy fields. Although the exact expression cannot feasibly be implemented in Sandia’s direct simulation Monte Carlo (DSMC) code SPARTA, we utilize an approximate “hybrid” approach and compare it to the conventional gradient-based approach for planar Couette flow and the two-dimensional Taylor-Green vortex, demonstrating that the hybrid approach is significantly more accurate. Finally, we explore the possibility of adopting a Lagrangian approach to calculate the energy dissipation rate.

Machine learning methods for particle stress development in suspension Poiseuille flows

Rheologica Acta

Howard, Amanda A.; Dong, Justin; Patel, Ravi; D'Elia, Marta; Yeo, Kyongmin; Maxey, Martin R.; Stinis, Panos

Numerical simulations are used to study the dynamics of a developing suspension Poiseuille flow with monodispersed and bidispersed neutrally buoyant particles in a planar channel, and machine learning is applied to learn the evolving stresses of the developing suspension. The particle stresses and pressure develop on a slower time scale than the volume fraction, indicating that once the particles reach a steady volume fraction profile, they rearrange to minimize the contact pressure on each particle. We consider the timescale for stress development and how the stress development connects to particle migration. For developing monodisperse suspensions, we present a new physics-informed Galerkin neural network that allows for learning the particle stresses when direct measurements are not possible. We show that when a training set of stress measurements is available, the MOR-physics operator learning method can also capture the particle stresses accurately.

Barriers and Alternatives to Encryption in Critical Nuclear Systems

Lamb, Christopher; Sandoval, Daniel R.

Over the past decade, cybersecurity researchers have released multiple studies highlighting the insecure nature of I&C system communication protocols. In response, standards bodies have addressed the issue by adding the ability to encrypt communications to some protocols in some cases, while control system engineers have argued that encryption within these kinds of high-consequence systems is in fact dangerous. Certainly, control system information exchanged between systems should be protected, but encrypting that information may not be the best way to do so. While IT system vendors are concerned with confidentiality, integrity, and availability, frequently in that order, OT system engineers are much more concerned with availability and integrity than confidentiality. In this paper, we counter specific arguments against encrypting control system traffic and present potential alternatives to encryption that support nuclear OT system needs more strongly than commodity IT system needs while still providing robust integrity and availability guarantees.
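
One such alternative to encryption is message authentication without confidentiality: traffic stays readable for troubleshooting, but tampering is detectable. The sketch below is illustrative and is not the paper's proposal; the shared key, message format, and framing are assumptions, and key management is omitted entirely.

```python
# Integrity without confidentiality: authenticate control-system traffic
# with an HMAC tag instead of encrypting it. The payload stays plaintext
# (engineers can still inspect traffic on the wire), but any modification
# is detected. Illustrative sketch; key provisioning is out of scope.
import hashlib
import hmac

KEY = b"shared-secret-provisioned-out-of-band"  # illustrative key

def tag_message(payload: bytes) -> bytes:
    """Append a 32-byte HMAC-SHA256 tag to a plaintext control message."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_message(framed: bytes) -> bytes:
    """Check and strip the tag; raise if the message was altered."""
    payload, tag = framed[:-32], framed[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return payload

framed = tag_message(b"SETPOINT valve_7 42.0")
recovered = verify_message(framed)
```

The constant-time `compare_digest` comparison avoids leaking tag bytes through timing, and the receiver rejects altered frames while availability is unaffected for untampered traffic.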

High Energy Arcing Fault (HEAF) Photometrics 2022 Test Report

Glover, Austin M.; Cruz-Cabrera, Alvaro A.; Flanagan, Ryan

High Energy Arcing Faults (HEAFs) are hazardous events in which an electrical arc leads to the rapid release of energy in the form of heat, vaporized metal, and mechanical force. In nuclear power plants, these events are often accompanied by loss of essential power and complicated shutdowns. To confirm the probabilistic risk analysis (PRA) methodology in NUREG/CR-6850, which was formulated from limited observational data, the NRC led an international experimental campaign from 2014 to 2016. The results of these experiments uncovered an unexpected hazard posed by aluminum components in or near electrical equipment and the potential for unanalyzed equipment failures. In support of this work, Sandia National Laboratories (SNL) collaborated with NIST, BSI, KEMA, and the NRC on the full-scale HEAF test campaign in 2022. SNL provided high-speed visible and infrared video and data for ten tests of HEAFs originating on copper and aluminum buses inside switchgear and bus ducts. Part of the SNL scope was to place high-speed cameras at different vantage points within the test facility to provide the NRC with a more complete and granular view of the test events.

Comparison of Tritium Dose Calculations from MACCS, UFOTRI, and ETMOD

Foulk, James W.; Clavier, Kyle A.

Tritium exhibits unique environmental behavior because of its potential interactions with water and organic substances. Modeling the environmental consequences of tritium releases can be relatively complex, so an evaluation is needed to understand what updates, if any, are required in MACCS to account for the behavior of tritium. We examine documented tritium releases and previous benchmarking assessments to perform a model intercomparison between MACCS and the state-of-practice tritium-specific codes UFOTRI and ETMOD, quantifying the differences among these models for assessing tritium consequences. Additionally, we provide information to assist an analyst in judging whether a postulated tritium release is likely to lead to significant doses.

Deep Deception: Exemplars of Adversarial Machine Learning and Countermeasures Applicable to International Safeguards

Farley, David R.; Katinas, Christopher M.

As a follow-up to our more comprehensive report on Adversarial Machine Learning (AML), here we provide demonstrations of AML attacks against the Limbo image database of UF6 cylinders in a variety of orientations and amongst a variety of distractor images. We demonstrate the Carlini & Wagner AML attack against a subset of Limbo images with a 100% attack success rate, meaning all attacked images were misclassified by a highly accurate trained model even though the image changes were imperceptible to the human eye. We also demonstrate successful attacks against segmented images (images with more than one targeted object). Finally, we demonstrate a Fast Fourier Transform countermeasure that can be used to detect AML attacks on images. The intent of this and our previous report is to inform the IAEA and stakeholders of both the promise of machine learning, which could greatly improve the efficiency of surveillance monitoring, and the real threat of AML and its potential defenses.

Technology Integration through Additive Manufacturing for Wind Turbine Blade Tips

Houchens, Brent C.; Berg, Jonathan C.; Caserta, Paolo G.; Hernandez, Miguel L.; Houck, Daniel R.; Lopez, Helio; Maniaci, David C.; Monroe, Graham; Motes, Austin G.; Paquette, Joshua A.; Rodriguez, Salvador B.; Sproul, Evan G.; Tilles, Julia N.; Develder, Nathaniel; Williams, Michelle; Westergaard, Carsten H.; Payant, James A.; Wetzel, Kyle

Abstract not provided.

An investigation into the effects of state of charge and heating rate on propagating thermal runaway in Li-ion batteries with experiments and simulations

Fire Safety Journal

Kurzawski, John C.; Gray, Lucas; Torres-Castro, Loraine; Hewson, John C.

As large systems of Li-ion batteries are being increasingly deployed, the safety of such systems must be assessed. Due to the high cost of testing large systems, it is important to extract key safety information from any available experiments. Developing validated predictive models that can be exercised at larger scales offers an opportunity to augment experimental data. In this work, experiments were conducted on packs of three Li-ion pouch cells with different heating rates and states of charge (SOC) to assess the propagation behavior of a module undergoing thermal runaway. The variable heating rates represent slow or fast heating that a module may experience in a system. As the SOC decreases, propagation slows down and is eventually mitigated. It was found that the SOC boundary between propagation and mitigation was higher at a heating rate of 50 °C/min than at 10 °C/min for these cells. However, due to increased pre-heating at the lower heating rate, the propagation speed increased. Simulations were conducted with a new intra-particle diffusion-limited reaction model for a range of anode particle sizes. Propagation speeds and onset times were generally well predicted, and the variability in the propagation/mitigation boundary highlighted the need for greater uncertainty quantification of the predictions.

Trust-Enhancing Probabilistic Transfer Learning for Sparse and Noisy Data Environments

Bridgman, Wyatt; Balakrishnan, Uma; Soriano, Bruno S.; Jung, Kisung; Wang, Fulton; Jacobs, Justin W.; Jones, Reese E.; Rushdi, Ahmad; Chen, Jacqueline H.; Khalil, Mohammad

There is an increasing aspiration to utilize machine learning (ML) for various tasks of relevance to national security. ML models have thus far been mostly applied to tasks and domains that, while impactful, have a sufficient volume of data. For predictive tasks of national security relevance, ML models of great capacity (ability to approximate nonlinear trends in input-output maps) are often needed to capture the complex underlying physics. However, scientific problems of relevance to national security are often accompanied by various sources of sparse and/or incomplete data, including experiments and simulations across different regimes of operation and of varying degrees of fidelity, with noise of differing characteristics and/or intensity. State-of-the-art ML models, despite exhibiting superior performance on the task and domain they were trained on, may suffer a detrimental loss in performance in such sparse data environments. This report summarizes the results of the Laboratory Directed Research and Development project entitled Trust-Enhancing Probabilistic Transfer Learning for Sparse and Noisy Data Environments. The objective of the project was to develop a new transfer learning (TL) framework that adaptively blends data across different sources in tackling one task of interest, resulting in enhanced trustworthiness of ML models for mission- and safety-critical systems. The proposed framework determines when it is worth applying TL and how much knowledge is to be transferred, despite uncontrollable uncertainties. The framework accomplishes this by leveraging concepts and techniques from the fields of Bayesian inverse modeling and uncertainty quantification, relying on the strong mathematical foundations of probability and measure theory to devise new uncertainty-aware TL workflows.

Explicit solvent machine-learned coarse-grained model of sodium polystyrene sulfonate to capture polymer structure and dynamics

European Physical Journal E

Taylor, Phillip A.; Stevens, Mark J.

Strongly charged polyelectrolytes (PEs) demonstrate complex solution behavior as a function of chain length, concentration, and ionic strength. The viscosity is a core quantity for many applications and is important to understand, but aspects of its behavior remain a challenge. Molecular dynamics simulations using implicit solvent coarse-grained (CG) models successfully reproduce structure, but are often inappropriate for calculating viscosities. To address the need for CG models that reproduce the viscoelastic properties of one of the most studied PEs, sodium polystyrene sulfonate (NaPSS), we report our recent efforts in using Bayesian optimization to develop CG models of NaPSS that capture both polymer structure and dynamics in aqueous solutions with explicit solvent. We demonstrate that our explicit solvent CG NaPSS model with the ML-BOP water model [Chan et al. Nat Commun 10, 379 (2019)] quantitatively reproduces NaPSS chain statistics and solution structure. The new explicit solvent CG model is benchmarked against diffusivities from atomistic simulations and experimental specific viscosities for short chains. We also show that our Bayesian-optimized CG model is transferable to larger chain lengths across a range of concentrations. Overall, this work provides a machine-learned model to probe the structural, dynamic, and rheological properties of polyelectrolytes such as NaPSS and aids in the design of novel, strongly charged polymers with tunable structural and viscoelastic properties.

Multifidelity uncertainty quantification with models based on dissimilar parameters

Computer Methods in Applied Mechanics and Engineering

Zeng, Xiaoshu; Geraci, Gianluca; Eldred, Michael; Jakeman, John D.; Gorodetsky, Alex A.; Ghanem, Roger

Multifidelity uncertainty quantification (MF UQ) sampling approaches have been shown to significantly reduce the variance of statistical estimators while preserving the bias of the highest-fidelity model, provided that the low-fidelity models are well correlated. However, maintaining a high level of correlation can be challenging, especially when models depend on different uncertain input parameters, which drastically reduces the correlation. Existing MF UQ approaches do not adequately address this issue. In this work, we propose a new sampling strategy that exploits a shared space to improve the correlation among models with dissimilar parameterization. We achieve this by transforming the original coordinates onto an auxiliary manifold using the adaptive basis (AB) method (Tipireddy and Ghanem, 2014). The AB method has two main benefits: (1) it provides an effective tool to identify the low-dimensional manifold on which each model can be represented, and (2) it enables easy transformation of polynomial chaos representations from high- to low-dimensional spaces. This latter feature is used to identify a shared manifold among models without requiring additional evaluations. We present two algorithmic flavors of the new estimator to cover different analysis scenarios, including those with legacy and non-legacy high-fidelity (HF) data. We provide numerical results for analytical examples, a direct field acoustic test, and a finite element model of a nuclear fuel assembly. For all examples, we compare the proposed strategy against both single-fidelity and MF estimators based on the original model parameterization.
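
For context, the basic variance-reduction idea behind MF UQ estimators can be sketched with a two-fidelity control-variate Monte Carlo estimator. This is a generic textbook construction, not the article's shared-manifold method; the toy high- and low-fidelity models below are assumptions for illustration.

```python
# Generic two-fidelity control-variate Monte Carlo estimator (illustrative;
# the article's contribution is the shared-manifold correlation construction,
# not this basic estimator). HF and LF models share the same random inputs,
# and the LF model is evaluated many more times than the HF model.
import random

random.seed(0)

def hf(x):
    """Toy 'high-fidelity' model: E[hf(X)] = 1 for X ~ N(0, 1)."""
    return x * x

def lf(x):
    """Correlated toy 'low-fidelity' model."""
    return x * x + 0.1 * x

def mf_estimate(n_hf, n_lf):
    xs_hf = [random.gauss(0, 1) for _ in range(n_hf)]
    xs_lf = xs_hf + [random.gauss(0, 1) for _ in range(n_lf - n_hf)]
    mean = lambda vals: sum(vals) / len(vals)
    # Control-variate form:
    #   E[hf] ~= mean_hf(hf) + alpha * (mean_lf(lf) - mean_hf(lf))
    alpha = 1.0  # the optimal alpha would be estimated from covariances
    return (mean([hf(x) for x in xs_hf])
            + alpha * (mean([lf(x) for x in xs_lf])
                       - mean([lf(x) for x in xs_hf])))

est = mf_estimate(n_hf=200, n_lf=5000)
```

The estimator stays unbiased for the HF mean, while the cheap LF samples shrink the variance; the correlation between `hf` and `lf` is exactly what the article's shared-space transformation is designed to preserve when the models have dissimilar parameters.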

Finite Element Analysis System Workflow Tools

Spencer, Nathan A.

A collection of MATLAB functions and class definitions called System Workflow Tools (SWFT) is available to semi-automate steps in the simulation process. Some of these steps are often simple and routine for smaller finite element models, but if done directly by an analyst can quickly become labor intensive, cumbersome, and error prone for larger, system-level models. Some of the SWFT capabilities demonstrated in this report include writing Sierra input decks and processing Quantities of Interest (QOI) from results files. SWFT also writes scripts to utilize other software programs such as Cubit (separating system-level CAD into subassemblies and components, creating nodesets and sidesets), DAKOTA (ensemble management), and ParaView (contour plots and animations). Detailed commands and workflows from mesh generation to report generation are provided as examples for analysts to utilize SWFT capabilities.

Holistic fleet optimization incorporating system design considerations

Naval Research Logistics

Henry, Stephen M.; Hoffman, Matthew; Waddell, Lucas A.; Muldoon, Frank M.

The methodology described in this article enables a type of holistic fleet optimization that simultaneously considers the composition and activity of a fleet through time as well as the design of individual systems within the fleet. Often, real-world system design optimization and fleet-level acquisition optimization are treated separately due to the prohibitive scale and complexity of each problem. This means that fleet-level schedules are typically limited to the inclusion of predefined system configurations and are blind to a rich spectrum of system design alternatives. Similarly, system design optimization often considers a system in isolation from the fleet and is blind to numerous, complex portfolio-level considerations. In reality, these two problems are highly interconnected. To properly address this system-fleet design interdependence, we present a general method for efficiently incorporating multi-objective system design trade-off information into a mixed-integer linear programming (MILP) fleet-level optimization. This work is motivated by the authors' experience with large-scale DOD acquisition portfolios. However, the methodology is general to any application where the fleet-level problem is a MILP and there exists at least one system having a design trade space in which two or more design objectives are parameters in the fleet-level MILP.

Integral Experiment Request 523 CED – 1 Report

Cook, William M.; Foulk, James W.; Lutz, Elijah; Cole, James; Raster, Ashley R.; Miller, John; Harms, Gary A.; Marshall, William J.; Zerkle, Michael

This report documents the preliminary design phase of the Critical Experiment Design (CED-1) conducted as part of integral experiment request (IER) 523. The purpose of IER-523 is to determine critical configurations of 35 weight percent (wt%) enriched uranium dioxide-beryllium oxide (UO2-BeO) material with Seven Percent Critical Experiment (7uPCX) fuels at Sandia National Laboratories (Sandia). Preliminary experiment design concepts, neutronic analysis results, and proposed paths for continuing the CED process are presented. This report builds on the feasibility and justification of experimental need report (CED-0) completed in December 2021.

Stress-strain and work hardening relationships of 304L AM alloy

Jankowski, Alan F.; Yee, Joshua K.

A new approach to analytically derive constitutive stress-strain relationships from modeling the work hardening behavior of alloys was developed for assessing the strength and ductility of the Ti-6Al-4V alloy. This approach is now successfully applied to assess the quasi-static stress-strain behavior of an additively manufactured 304L sample. The predictive capability of this modeling approach may then be extended to material stress-strain behavior at higher strain rates of loading.

Chaconne: A Statistical Approach to Nonlocal Compression for Supervised Learning, Semi-Supervised Learning, and Anomaly Detection

Foss, Alexander; Field, Richard V.; Ting, Christina; Shuler, Kurtis; Bauer, Travis L.; Zhao, Sihai D.; Cardenas-Torres, Eduardo

This project developed a novel statistical understanding of compression analytics (CA), which has challenged and clarified some core assumptions about CA and enabled the development of novel techniques that address vital challenges of national security. Specifically, this project has yielded the following capabilities:

1. Principled metrics for model selection in CA.
2. Techniques for deriving and applying optimal classification rules and decision theory to supervised CA, including how to properly handle class imbalance and differing costs of misclassification.
3. Two techniques for handling nonlocal information in CA.
4. A novel technique for unsupervised CA that is agnostic with regard to the underlying compression algorithm.
5. A framework for semi-supervised CA when a small number of labels are known in an otherwise large unlabeled dataset.
6. Through the academic alliance component of this project, a novel exemplar-based Bayesian technique for estimating variable-length Markov models (closely related to PPM [prediction by partial matching] compression techniques).

We have developed examples illustrating the application of our work to text, video, genetic sequences, and unstructured cybersecurity log files.

How Good Is Your Location? Comparing and Understanding the Uncertainties in Location for the 1993 Rock Valley Sequence

Seismic Record

Pyle, Moira L.; Chen, Ting; Preston, Leiph; Scalise, Michelle; Zeiler, Cleat

Accurate event locations are important for many endeavors in seismology, and understanding the factors that contribute to uncertainties in those locations is complex. In this article, we present a case study that takes an in-depth look at the accuracy and precision possible for locating nine shallow earthquakes in the Rock Valley fault zone in southern Nevada. These events are targeted by the Rock Valley Direct Comparison phase of the Source Physics Experiment, as candidates for the colocation of a chemical explosion with an earthquake hypocenter to directly compare earthquake and explosion sources. For this comparison, it is necessary to determine earthquake hypocenters as accurately as possible so that different source types have nearly identical locations. Our investigations include uncertainty analysis from different sets of phase arrivals, stations, velocity models, and location algorithms. For a common set of phase arrivals and stations, we find that epicentral locations from different combinations of velocity models and algorithms are within 600 m of one another in most cases. Event depths exhibit greater uncertainties, but focusing on the S-P times at the nearest station allows for estimates within approximately 500 m.
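
The single-station depth constraint mentioned at the end of the abstract rests on the standard S-P time relation, which under a constant-velocity assumption gives the hypocentral distance as d = t_SP * Vp * Vs / (Vp - Vs). The sketch below illustrates that relation; the velocity values are generic crustal assumptions, not the study's velocity models.

```python
# Hypocentral distance from a single-station S-P arrival-time difference,
# using the constant-velocity relation d = t_SP * Vp * Vs / (Vp - Vs).
# The default velocities are generic crustal values chosen for illustration,
# not the velocity models used in the Rock Valley study.

def sp_distance_km(t_sp_s, vp_km_s=6.0, vs_km_s=3.46):
    """Hypocentral distance (km) implied by an S-P time (s)."""
    return t_sp_s * vp_km_s * vs_km_s / (vp_km_s - vs_km_s)

d = sp_distance_km(1.2)  # distance implied by a 1.2 s S-P time
```

For a station nearly above a shallow event, this distance is dominated by depth, which is why S-P times at the nearest station tighten the depth estimate more than the epicentral solution does.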

Test and Evaluation of Systems with Embedded Machine Learning Components

ITEA Journal of Test and Evaluation

Smith, Michael R.; Cuellar, Christopher R.; Jose, Deepu; Ingram, Joe B.; Martinez, Carianne; Debonis, Mark

As Machine Learning (ML) continues to advance, it is being integrated into more systems. Often, the ML component represents a significant portion of the system that reduces the burden on the end user or significantly improves task performance. However, the ML component represents an unknown complex phenomenon that is learned from collected data without being explicitly programmed. Despite the improvement in task performance, the models are often black boxes. Evaluating the credibility and vulnerabilities of ML models represents a gap in current test and evaluation practice. For high-consequence applications, the lack of testing and evaluation procedures represents a significant source of uncertainty and risk. To help reduce that risk, we present considerations for evaluating systems embedded with an ML component within a red-teaming inspired methodology. We focus on (1) cyber vulnerabilities of an ML model, (2) evaluating performance gaps, and (3) adversarial ML vulnerabilities.

Porosity, roughness, and passive film morphology influence the corrosion behavior of 316L stainless steel manufactured by laser powder bed fusion

Journal of Manufacturing Processes

Delrio, F.W.; Khan, Ryan M.; Heiden, Michael J.; Kotula, Paul G.; Renner, Peter A.; Karasz, Erin K.; Melia, Michael A.

The development of additively-manufactured (AM) 316L stainless steel (SS) using laser powder bed fusion (LPBF) has enabled near net shape components from a corrosion-resistant structural material. In this article, we present a multiscale study on the effects of processing parameters on the corrosion behavior of as-printed surfaces of AM 316L SS formed via LPBF. Laser power and scan speed of the LPBF process were varied across the instrument range known to produce parts with >99 % density, and the macroscale corrosion trends were interpreted via microscale and nanoscale measurements of porosity, roughness, microstructure, and chemistry. Porosity and roughness data showed that porosity φ decreased as volumetric energy density Ev increased, due to a shift in the pore formation mechanism, and that roughness Sq arose from melt track morphology and partially fused powder features. Cross-sectional and plan-view maps of chemistry and work function ϕs revealed an amorphous Mn-silicate phase enriched with Cr and Al that varied in both thickness and density depending on Ev. Finally, macroscale potentiodynamic polarization experiments under full immersion in quiescent 0.6 M NaCl showed significant differences in breakdown potential Eb and metastable pitting. In general, samples with smaller φ and Sq values, larger ϕs values, and greater homogeneity in the Mn-silicate exhibited larger Eb. The porosity and roughness effects stemmed from an increase in the overall number of initiation sites for pitting, and the oxide phase contributed to passive film breakdown by acting as a crevice former or creating a galvanic couple with the SS.

Analysis of Transient Postclosure Criticality

Price, Laura L.; Alsaed, Halim; Jones, Philip G.; Sanders, Charlotta; Prouty, Jeralyn

The United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign seeks to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. For an evaluation of the long-term performance of the repository, one of the events that may need to be considered is the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal.

Compressive strength improvements from noncircular carbon fibers: A numerical study

Composites Science and Technology

Camarena, Ernesto; Clarke, Ryan J.; Ennis, Brandon L.

The benefits of high-performance unidirectional carbon fiber composites are limited in many cost-driven industries due to their high cost relative to alternative reinforcement fibers. Low-cost carbon fibers have been previously proposed, but the longitudinal compressive strength continues to be a limiting factor, or studies are based on simplifications that warrant further analysis. A micromechanical model is used to (1) determine if the longitudinal compressive strength of composites can be improved with noncircular carbon fiber shapes and (2) characterize why some shapes are stronger than others in compression. In comparison to circular fibers, the results suggest that the strength can be increased by 10%–13% with a specific six-lobe fiber shape and by 6%–9% with a three-lobe fiber shape. A slight increase is predicted in the compressive strength of the studied two-lobe fiber, but this result has the highest uncertainty and sensitivity to fiber orientation and misalignment direction. The underlying mechanism governing the compressive failure of the composites was linked to the unique stress fields created by the lobes, particularly the pressure stress in the matrix. This work provides mechanics-based evidence of strength improvements from noncircular fiber shapes and insight into how matrix yielding is altered with alternative fiber shapes.

Sources of error and methods to improve accuracy in interface state density analysis using quasi-static capacitance-voltage measurements in wide bandgap semiconductors

Journal of Applied Physics

Rummel, Brian D.; Cooper, J.A.; Morisette, D.T.; Yates, Luke; Glaser, Caleb E.; Binder, Andrew; Ramadoss, K.; Kaplar, Robert

Characterizing interface trap states in commercial wide bandgap devices using frequency-based measurements requires unconventionally high probing frequencies to account for both fast and slow traps associated with wide bandgap materials. The C–ψ_S technique has been suggested as a viable quasi-static method for determining the interface trap state densities in wide bandgap systems, but the results are shown to be susceptible to errors in the analysis procedure. This work explores the primary sources of error present in the C–ψ_S technique using an analytical model that describes the apparent response of wide bandgap MOS capacitor devices. Measurement noise is shown to greatly impact the linear fitting routine applied to the 1/C_S*² vs. ψ_S plot to calibrate the additive constant in the surface potential/gate voltage relationship, and inexact knowledge of the oxide capacitance is also shown to impede interface trap state analysis near the band edge. In addition, a slight nonlinearity that is typically present throughout the 1/C_S*² vs. ψ_S plot hinders the accurate estimation of interface trap densities, which is demonstrated for a fabricated n-SiC MOS capacitor device. Methods are suggested to improve quasi-static analysis, including a novel method to determine an approximate integration constant without relying on a linear fitting routine.
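The linear-fit calibration described above can be illustrated with a toy calculation. This is a hedged sketch, not the paper's actual procedure: in depletion, the semiconductor capacitance of an MOS capacitor follows 1/C_S² = (2/(q·ε_s·Nd))·(ψ_S + δ), so a straight-line fit of 1/C_S² vs. ψ_S recovers the additive constant δ from the ratio intercept/slope. All values below are synthetic and merely SiC-flavored.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's measurements):
# recover the additive constant delta from a linear fit of 1/C_S^2 vs psi_S.
q = 1.602e-19            # elementary charge (C)
eps_s = 8.55e-11         # permittivity, ~9.7 * eps0, SiC-like (F/m)
Nd = 1e22                # donor density (m^-3), illustrative
delta_true = 0.15        # true additive constant (V), illustrative

psi = np.linspace(0.5, 2.0, 40)                        # surface potential (V)
inv_C2 = 2.0 * (psi + delta_true) / (q * eps_s * Nd)   # ideal depletion response
rng = np.random.default_rng(3)
inv_C2 *= 1.0 + rng.normal(0.0, 0.01, psi.size)        # 1% measurement noise

# Linear fit: 1/C_S^2 = slope * psi + intercept, so delta = intercept / slope.
slope, intercept = np.polyfit(psi, inv_C2, 1)
delta_fit = intercept / slope
```

With only 1% noise the recovered δ is close to the true 0.15 V; increasing the noise level in the sketch shows how quickly the intercept-based calibration degrades, which is the sensitivity the abstract discusses.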

More Details

Tuning the magnetic properties of the CrMnFeCoNi Cantor alloy

Physical Review. B

Dingreville, Remi; Startt, Jacob K.; Elmslie, Timothy A.; Yang, Yang; Soto-Medina, Sujeily; Zappala, Emma; Meisel, Mark W.; Manuel, Michele V.; Frandsen, Benjamin A.; Hamlin, James J.

Magnetic properties of more than 20 Cantor alloy samples of varying composition were investigated over a temperature range of 5 K to 300 K and in fields of up to 70 kOe using magnetometry and muon spin relaxation. Two transitions are identified: a spin-glass-like transition that appears between 55 K and 190 K, depending on composition, and a ferrimagnetic transition that occurs at approximately 43 K in multiple samples with widely varying compositions. The magnetic signatures at 43 K are remarkably insensitive to chemical composition. A modified Curie-Weiss model was used to fit the susceptibility data and to extract the net effective magnetic moment for each sample. The resulting net effective moments were diminished with increasing Cr or Mn concentrations and enhanced with decreasing Fe, Co, or Ni concentrations. Beyond a sufficiently large effective moment, the magnetic ground state transitions from ferrimagnetism to ferromagnetism. The effective magnetic moments, together with the corresponding compositions, are used in a global linear regression analysis to extract element-specific effective magnetic moments, which are compared to values obtained from ab initio density functional theory calculations. Finally, these moments provide the information necessary to controllably tune the magnetic properties of Cantor alloy variants.
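The Curie-Weiss extraction step above can be sketched as follows. This is a generic illustration on synthetic data, assuming the common modified form χ(T) = χ₀ + C/(T − θ) and the standard relation μ_eff ≈ √(8C) for C in emu·K/mol/Oe; the parameter values are invented, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_curie_weiss(T, chi0, C, theta):
    """Modified Curie-Weiss susceptibility: temperature-independent term plus Curie term."""
    return chi0 + C / (T - theta)

# Synthetic susceptibility data in the paramagnetic regime (200-300 K).
rng = np.random.default_rng(0)
T = np.linspace(200.0, 300.0, 50)                  # temperature (K)
chi = modified_curie_weiss(T, 1e-4, 0.5, 40.0)     # chi0, C, theta (illustrative)
chi += rng.normal(0.0, 1e-6, T.size)               # measurement noise

popt, _ = curve_fit(modified_curie_weiss, T, chi, p0=[0.0, 1.0, 0.0])
chi0_fit, C_fit, theta_fit = popt

# Net effective moment in Bohr magnetons from the Curie constant.
mu_eff = np.sqrt(8.0 * C_fit)
```

Repeating such a fit per sample yields one net effective moment per composition, which is the input the abstract's global linear regression uses to extract element-specific moments.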

More Details

Enhancing Early Systems R&D Capabilities with Systems-Theoretic Process Analysis

INSIGHT

Williams, Adam D.

Systems engineering today faces a wide array of challenges, ranging from new operational environments to disruptive technologies, necessitating approaches to improve research and development (R&D) efforts. The Aristotelian argument that the “whole is greater than the sum of its parts” offers a conceptual foundation for creating new R&D solutions. Invoking the systems-theoretic concepts of emergence and hierarchy and the analytic characteristics of traceability, rigor, and comprehensiveness can help guide R&D strategy and development to bridge the gap between theoretical problem spaces and engineering-based solutions. In response, this article describes systems-theoretic process analysis (STPA) as an example of one such approach to aid early-systems R&D discussions. STPA, a ‘top-down’ process that abstracts real complex system operations into hierarchical control structures, functional control loops, and control actions, uses control loop logic to analyze how control actions (designed for desired system behaviors) may be violated and drive the complex system toward states of higher risk. By analyzing how needed controls are not provided (or are out of sequence or stopped too soon) and unneeded controls are provided (or engaged too long), STPA can support early-system R&D discussions by exploring how requirements and desired actions interact to either mitigate or potentially increase states of risk that can lead to unacceptable losses. This article demonstrates STPA's benefit for early-system R&D strategy and development discussions through use cases as diverse as cyber security, nuclear fuel transportation, and US electric grid performance. Together, the traceability, rigor, and comprehensiveness of STPA serve as useful tools for improving R&D strategy and development discussions.
In conclusion, leveraging STPA and related systems engineering techniques in early R&D planning and strategy development can help triangulate deeper theoretical meaning and evaluate empirical results, better informing systems engineering solutions.

More Details

Comparison of reactive burn equilibrium closure assumptions in CTH

AIP Conference Proceedings

Ruggirello, Kevin P.; Tuttle, Leah; Kittell, David E.

For reactive burn models in hydrocodes, an equilibrium closure assumption is typically made between the unreacted and product equations of state. In the CTH [1] hydrocode (CTH is not an acronym) the assumption of density and temperature equilibrium is made by default, while other codes make a pressure and temperature equilibrium assumption. The main reason for this difference is the computational efficiency of the density-temperature assumption over the pressure-temperature one. With fitting to data, both assumptions can accurately predict reactive flow response using the various models, but the model parameters from one code cannot necessarily be used directly in a different code with a different closure assumption. A new framework is introduced in CTH to allow this assumption to be changed independently for each reactive material. Comparisons of the response and computational cost of the History Variable Reactive Burn (HVRB) reactive flow model with the different equilibrium assumptions are presented.
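The closure idea can be illustrated with a toy pressure-temperature equilibrium solve. This is a hedged sketch under strong assumptions: ideal-gas-like stand-ins with invented constants, not CTH's actual equations of state or the HVRB model. In a mixed cell where an unreacted phase and a product phase share a volume at a common temperature, the closure finds the volume split at which both phases see the same pressure.

```python
from scipy.optimize import brentq

# Illustrative stand-in EOS: P = rho * R * T (not a real HE or product EOS).
R1, R2 = 300.0, 450.0        # specific gas constants, J/(kg K), invented

def pressure(rho, T, R):
    return rho * R * T

# Mixed cell: masses m1, m2 (kg) share total volume V (m^3) at temperature T (K).
m1, m2, V, T = 0.4, 0.6, 1.0e-3, 2000.0

# Solve for the volume fraction alpha of material 1 such that
# P1(m1/(alpha*V), T) = P2(m2/((1-alpha)*V), T).
def pressure_mismatch(alpha):
    return (pressure(m1 / (alpha * V), T, R1)
            - pressure(m2 / ((1.0 - alpha) * V), T, R2))

alpha = brentq(pressure_mismatch, 1e-6, 1.0 - 1e-6)
P_eq = pressure(m1 / (alpha * V), T, R1)
```

With real, nonlinear equations of state this root solve must be iterated per mixed cell per time step, which is the computational-cost difference between the pressure-temperature closure and the cheaper density-temperature default that the abstract describes.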

More Details

Extension of the XHVrB reactive burn model for graded density explosives

AIP Conference Proceedings

Damm, David L.; Tuttle, Leah

A new capability for modeling graded density reactive flow materials in the shock physics hydrocode, CTH, is demonstrated here. Previously, materials could be inserted in CTH with graded material properties, but the sensitivity of the material was not adjusted based on these properties. Of particular interest are materials that are graded in density, sometimes due to pressing or other assembly operations. The sensitivity of explosives to both density and temperature has been well demonstrated in the literature, but to date the material parameters used in a simulation were fit to a single condition and applied to the entire material, or the material had to be inserted in sections with each section assigned a condition. The reactive flow model xHVRB has been extended to shift explosive sensitivity with initial density, so that sensitivity is also graded in the material. This capability is demonstrated in three examples. The first models detonation transfer in a graded density pellet of HNS, the second is a shaped charge with density gradients in the explosive, and the third is an explosively formed projectile.

More Details

Maximizing microbial bioproduction from sustainable carbon sources using iterative systems engineering

Cell Reports

Eng, Thomas; Banerjee, Deepanwita; Menasalvas, Javier; Chen, Yan; Gin, Jennifer; Choudhary, Hemant; Baidoo, Edward; Chen, Jian H.; Ekman, Axel; Kakumanu, Ramu; Diercks, Yuzhong L.; Codik, Alex; Larabell, Carolyn; Gladden, John M.; Simmons, Blake A.; Keasling, Jay D.; Petzold, Christopher J.; Mukhopadhyay, Aindrila

Maximizing the production of heterologous biomolecules is a complex problem that can be addressed with a systems-level understanding of cellular metabolism and regulation. Specifically, growth-coupling approaches can increase product titers and yields and also enhance production rates. However, implementing these methods for non-canonical carbon streams is challenging due to gaps in metabolic models. Over four design-build-test-learn cycles, we rewire Pseudomonas putida KT2440 for growth-coupled production of indigoidine from para-coumarate. We explore 4,114 potential growth-coupling solutions and refine one design through laboratory evolution and ensemble data-driven methods. The final growth-coupled strain produces 7.3 g/L indigoidine at 77% maximum theoretical yield in para-coumarate minimal medium. The iterative use of growth-coupling designs and functional genomics with experimental validation was highly effective and agnostic to specific hosts, carbon streams, and final products and thus generalizable across many systems.

More Details

An Analysis of FPGA LUT Bias and Entropy for Physical Unclonable Functions

Journal of Hardware and Systems Security (Online)

Paskaleva, Biliana S.; Wilcox, Ian Z.; Bochev, Pavel B.; Plusquellic, Jim; Jao, Jenilee; Chan, Calvin; Thotakura, Sriram

Process variations within Field Programmable Gate Arrays (FPGAs) provide a rich source of entropy and are therefore well suited for the implementation of Physical Unclonable Functions (PUFs). However, careful consideration must be given to the design of the PUF architecture to avoid undesirable localized bias effects that adversely impact randomness, an important statistical quality characteristic of a PUF. In this paper, we investigate a ring-oscillator (RO) PUF that leverages localized entropy from individual look-up table (LUT) primitives. A novel RO construction is presented that enables the individual paths through the LUT primitive to be measured and isolated at high precision, and an analysis is presented that demonstrates significant levels of localized design bias. The analysis demonstrates that delay-based PUFs that utilize LUTs as a source of entropy should avoid using FPGA primitives that are localized to specific regions of the FPGA; instead, a more robust PUF architecture can be constructed by distributing path delay components over a wider region of the FPGA fabric. Compact RO PUF architectures that utilize multiple configurations within a small group of LUTs are particularly susceptible to these types of design-level bias effects. The analysis is carried out on data collected from a set of identically designed, hard macro instantiations of the RO implemented on 30 copies of a Zynq 7010 SoC.
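The bias effect discussed above can be illustrated with a standard randomness check on synthetic data (this is not the paper's measurement method or dataset). The sketch models each RO frequency as a shared design-bias component plus device-unique variation, derives response bits by pairwise frequency comparison, and measures per-bit bias across devices; an ideal PUF bit is 1 on half the devices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_devices, n_ros = 30, 64    # 30 devices, matching the paper's sample size

# Synthetic RO frequencies (arbitrary units): a systematic design-bias term
# identical on every chip, plus per-device process variation of similar scale.
design_bias = rng.normal(0.0, 1.0, n_ros)
freqs = design_bias + rng.normal(0.0, 1.0, (n_devices, n_ros))

# Response bits from comparing adjacent RO pairs on each device.
bits = (freqs[:, 0::2] > freqs[:, 1::2]).astype(int)

# Per-bit-position bias across devices; the ideal value is 0.5 everywhere.
bias = bits.mean(axis=0)
worst = np.max(np.abs(bias - 0.5))
```

Because the design-bias term is shared by all devices, many bit positions come out strongly skewed toward 0 or 1 (large `worst`), mirroring the paper's conclusion that localized design bias degrades randomness; setting `design_bias` to zero in the sketch drives every position back toward 0.5.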

More Details

Comparing the structures and photophysical properties of two charge transfer co-crystals

Physical Chemistry Chemical Physics

Abou Taka, Ali; Foulk, James W.; Cole-Filipiak, Neil C.; Shivanna, Mohana; Yu, Christine J.; Feng, Patrick L.; Allendorf, Mark; Ramasesha, Krupa; Stavila, Vitalie; Mccaslin, Laura M.

Organic co-crystals have emerged as a promising class of semiconductors for next-generation optoelectronic devices due to their unique photophysical properties. This paper presents a joint experimental-theoretical study comparing the crystal structure, spectroscopy, and electronic structure of two charge transfer co-crystals. Reported herein is a novel co-crystal Npe:TCNQ, formed from 4-(1-naphthylvinyl)pyridine (Npe) and 7,7,8,8-tetracyanoquinodimethane (TCNQ) via molecular self-assembly. This work also presents a revised study of the co-crystal composed of Npe and 1,2,4,5-tetracyanobenzene (TCNB) molecules, Npe:TCNB, herein reported with a higher-symmetry (monoclinic) crystal structure than previously published. Npe:TCNB and Npe:TCNQ dimer clusters are used as theoretical model systems for the co-crystals; the geometries of the dimers are compared to geometries of the extended solids, which are computed with density functional theory under periodic boundary conditions. UV-Vis absorption spectra of the dimers are computed with time-dependent density functional theory and compared to experimental UV-Vis diffuse reflectance spectra. Both Npe:TCNB and Npe:TCNQ are found to exhibit neutral character in the S0 state and ionic character in the S1 state. The high degree of charge transfer in the S1 state of both Npe:TCNB and Npe:TCNQ is rationalized by analyzing the changes in orbital localization associated with the S1 transitions.

More Details

Improved melt model for power flow

Bennett, Nichelle L.; Thoma, Carsten; Welch, Dale; Cochrane, Kyle

Accelerators that drive z-pinch experiments transport current densities in excess of 1 MA/cm² in order to melt or ionize the target and implode it on axis. These high current densities stress the transmission lines upstream from the target, where rapid electrode heating causes plasma formation, melt, and possibly vaporization. These plasmas negatively impact accelerator efficiency by diverting some portion of the current away from the target, referred to as “current loss”. Simulations that are able to reproduce this behavior may be applied to improving the efficiency of existing accelerators and to designing systems operating at ever higher current densities. The relativistic particle-in-cell code CHICAGO® is the primary code for modeling power flow on Sandia National Laboratories’ Z accelerator. We report here on new algorithms that incorporate vaporization and melt into the standard power-flow simulation framework. Taking a hybrid approach, the CHICAGO® kinetic/multi-fluid treatment has been expanded to include vaporization while the quasi-neutral equation of motion has been updated for melt at high current densities. For vaporization, a new one-dimensional substrate model provides a more accurate calculation of electrode thermal, mass, and magnetic field diffusion as well as a means of emitting absorbed contaminants and vaporized metal ions. A quasi-fluid model has been implemented expressly to mimic the motion of imploding liners for accurate inductance histories. For melt, a multi-ion Hall-MHD option has been implemented and benchmarked against Alegra MHD. This new model is described with sufficient detail to reproduce these algorithms in any hybrid kinetic code. Physics results from the new code are also presented. A CHICAGO® Hall-MHD simulation of a radial transmission line demonstrates that Hall physics, not included in Alegra, has no significant impact on the diffusion of electrode material.
When surface contaminant desorption is mocked in as a hydrogen surface plasma, both the surface and bulk-material plasmas largely compress under the influence of the j × B force. Similar results are seen in Alegra, which also shows magnetic and material diffusion scaling with peak current. Test vaporization simulations using MagLIF and a power-flow experimental geometry show Fe+ ions diffuse only a few hundred µm from the electrodes, so present models of Z power flow remain valid.

More Details

Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations

Probabilistic Engineering Mechanics

Bridgman, Wyatt; Jones, Reese E.; Khalil, Mohammad

Predictive modeling typically relies on Bayesian model calibration to provide uncertainty quantification. Variational inference utilizing fully independent (“mean-field”) Gaussian distributions is often used to form approximate probability density functions. This simplification is attractive since the number of variational parameters grows only linearly with the number of unknown model parameters. However, the resulting diagonal covariance structure and unimodal behavior can be too restrictive to provide useful approximations of intractable Bayesian posteriors that exhibit highly non-Gaussian behavior, including multimodality. High-fidelity surrogate posteriors for these problems can be obtained by considering the family of Gaussian mixtures. Gaussian mixtures are capable of capturing multiple modes and approximating any distribution to an arbitrary degree of accuracy, while maintaining some analytical tractability. Unfortunately, variational inference using Gaussian mixtures with full-covariance structures suffers from a quadratic growth in variational parameters with the number of model parameters. The existence of multiple local minima, due to strong nonconvex trends in the loss functions associated with variational inference, presents additional complications. These challenges motivate the need for robust initialization procedures to improve the performance and computational scalability of variational inference with mixture models. In this work, we propose a method for constructing an initial Gaussian mixture model approximation that can be used to warm-start the iterative solvers for variational inference. The procedure begins with a global optimization stage in model parameter space. In this step, local gradient-based optimization, globalized through multistart, is used to determine a set of local maxima, which we take to approximate the mixture component centers. Around each mode, a local Gaussian approximation is constructed via the Laplace approximation.
Finally, the mixture weights are determined through constrained least squares regression. The robustness and scalability of the proposed methodology are demonstrated through application to an ensemble of synthetic tests using high-dimensional, multimodal probability density functions. The practical aspects of the approach are then demonstrated on inversion problems in structural dynamics.
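The three-stage initialization described above (multistart optimization, per-mode Laplace approximation, least-squares weights) can be sketched on a toy 2-D bimodal density. This is a hedged, generic illustration, not the paper's implementation; the target density, thresholds, and finite-difference Hessian are all choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

# Toy bimodal target: an equal-weight two-component Gaussian mixture in 2-D.
centers = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]

def log_p(x):
    return np.logaddexp(
        multivariate_normal.logpdf(x, centers[0], 0.3 * np.eye(2)),
        multivariate_normal.logpdf(x, centers[1], 0.3 * np.eye(2)))

# Stage 1: multistart local optimization to locate modes (mixture centers).
rng = np.random.default_rng(1)
modes = []
for s in rng.uniform(-4.0, 4.0, size=(20, 2)):
    res = minimize(lambda x: -log_p(x), s)
    if not any(np.linalg.norm(res.x - m) < 0.5 for m in modes):
        modes.append(res.x)

# Stage 2: Laplace approximation at each mode; covariance = inverse Hessian
# of -log p, estimated here with central finite differences.
def hessian(f, x, h=1e-4):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

covs = [np.linalg.inv(hessian(lambda x: -log_p(x), m)) for m in modes]

# Stage 3: mixture weights by least squares on density evaluations,
# with nonnegativity and normalization enforced afterward.
grid = rng.uniform(-4.0, 4.0, size=(500, 2))
Phi = np.column_stack(
    [multivariate_normal.pdf(grid, m, S) for m, S in zip(modes, covs)])
target = np.exp(log_p(grid))
w = np.linalg.lstsq(Phi, target, rcond=None)[0]
w = np.clip(w, 0.0, None)
w /= w.sum()
```

The resulting (modes, covs, w) triple defines a Gaussian mixture that can warm-start an iterative variational solver; on this symmetric toy target the recovered weights come out near 0.5 each and the Laplace covariances near the true component covariance.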

More Details

Sierra/SD – User’s Manual (V.5.16)

Crane, Nathan K.; Foulk, James W.; Bunting, Gregory; Day, David M.; Dohrmann, Clark R.; Joshi, Sidharth S.; Lindsay, Payton; Plews, Julia A.; Vo, Johnathan; Pepe, Justin; Manktelow, Kevin

Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user’s guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.

More Details

FY23 Status Report: SNF Interim Storage Canister Corrosion and Surface Environment Investigations

Bryan, C.R.; Knight, A.W.; Katona, Ryan M.; Smith, Elizabeth D.S.; Schaller, Rebecca S.

Work evaluating spent nuclear fuel (SNF) dry storage canister surface environments and canister corrosion progressed significantly in FY23, with the goal of developing a scientific understanding of the processes controlling initiation and growth of stress corrosion cracking (SCC) cracks in stainless steel canisters in relevant storage environments. The results of the work performed at Sandia National Laboratories (SNL) will guide future work and will contribute to the development of better tools for predicting potential canister penetration by SCC.

More Details

11-th order of accuracy for numerical solution of 3-D Poisson equation with irregular interfaces on unfitted Cartesian meshes

Computer Methods in Applied Mechanics and Engineering

Idesman, Alexander; Bishop, Joseph E.

For the first time, the optimal local truncation error method (OLTEM) with 125-point stencils and unfitted Cartesian meshes has been developed in the general 3-D case for the Poisson equation for heterogeneous materials with smooth irregular interfaces. The 125-point stencil equations, similar to those for quadratic finite elements, are used for OLTEM. The interface conditions for OLTEM are imposed as constraints at a small number of interface points and do not require the introduction of additional unknowns; i.e., the sparse structure of the global discrete equations of OLTEM is the same for homogeneous and heterogeneous materials. The stencil coefficients of OLTEM are calculated by minimizing the local truncation error of the stencil equations. These derivations use the Poisson equation to relate the different spatial derivatives. Such a procedure provides the maximum possible accuracy of the discrete equations of OLTEM. In contrast to known numerical techniques with quadratic elements and third order of accuracy on conforming and unfitted meshes, OLTEM with the 125-point stencils provides 11th order of accuracy, i.e., an extremely large increase in accuracy of 8 orders for similar stencils. The numerical results show that OLTEM yields much more accurate results than high-order finite elements with much wider stencils. The increased numerical accuracy of OLTEM leads to an extremely large increase in computational efficiency. Additionally, a new post-processing procedure with the 125-point stencil has been developed for the calculation of the spatial derivatives of the primary function. The post-processing procedure includes the minimization of the local truncation error and the use of the Poisson equation.
It is demonstrated that the use of the partial differential equation (PDE) for the 125-point stencils improves the accuracy of the spatial derivatives by 6 orders compared to post-processing without the use of the PDE, as in existing numerical techniques. At an accuracy of 0.1% for the spatial derivatives, OLTEM reduces the number of degrees of freedom by a factor of 900 to 4·10⁶ compared to quadratic finite elements. The developed post-processing procedure can be easily extended to unstructured meshes and can be used independently with existing post-processing techniques (e.g., with finite elements).

More Details
Results 1751–1800 of 99,299