Publications


Photonic design parameters for AWG-based RF channelized receivers

Optics InfoBase Conference Papers

Davis, Kyle; Stark, Andrew; Yang, Benjamin; Lentine, Anthony L.; Derose, Christopher; Gehl, Michael

An 11-channel, 1-GHz-bandwidth silicon photonic AWG was fabricated and characterized in the lab. Two photonic architectures are presented: (1) an RF-envelope detector and (2) an RF downconverter for digital systems. The RF-envelope detector architecture was modeled using the demonstrated AWG characteristics to estimate system-level RF receiver performance.


Estimation of transport and kinetic parameters of vanadium redox batteries using static cells

ECS Transactions

Lee, Seong B.; Foulk, James W.; Anderson, Travis M.; Mitra, Kishalay; Chalamala, Babu C.; Subramanian, Venkat R.

Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among the many available models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. For these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the required parameters in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Moreover, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. The paper also demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena in the RFB system.
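
To make the parameter-estimation step concrete, the following is a minimal sketch of fitting transport and kinetic parameters to measured charge-discharge data by nonlinear least squares. The toy voltage model, parameter names, and numbers are hypothetical placeholders, not the formulation proposed in the paper.

```python
# Minimal sketch of parameter estimation for a physics-based cell model.
# The model, data, and parameter values here are hypothetical placeholders,
# not the paper's formulation.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def cell_voltage(params, t):
    """Toy stand-in for a physics-based discharge model.
    params = (D, k): an effective diffusion coefficient and rate constant."""
    D, k = params
    return 1.4 - 0.1 * np.exp(-k * t) - 2.0e3 * np.sqrt(D * t)

t_data = np.linspace(1.0, 3600.0, 50)          # time points [s]
v_data = cell_voltage((1e-10, 1e-3), t_data)   # synthetic "measurements"
v_data += rng.normal(0.0, 1e-3, t_data.size)   # measurement noise

def residuals(params):
    return cell_voltage(params, t_data) - v_data

# Bounded nonlinear least squares recovers (D, k) from the voltage curve;
# x_scale handles the very different magnitudes of the two parameters.
fit = least_squares(residuals, x0=(1e-9, 1e-2),
                    bounds=([1e-12, 1e-5], [1e-8, 1.0]),
                    x_scale=(1e-10, 1e-3))
print("estimated D, k:", fit.x)
```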


Footprint placement for mosaic imaging by sampling and optimization

Proceedings International Conference on Automated Planning and Scheduling, ICAPS

Mitchell, Scott A.; Valicka, Christopher G.; Rowe, Stephen; Zou, Simon

We consider the problem of selecting a small set (mosaic) of sensor images (footprints) whose union covers a two-dimensional Region Of Interest (ROI) on Earth. We model the mosaic problem as a Mixed-Integer Linear Program (MILP), which allows solutions to this subproblem to feed into a larger remote-sensor collection-scheduling MILP. This enables the scheduler to dynamically consider alternative mosaics without having to perform any new geometric computations. Our approach to setting up the optimization problem uses maximal disk sampling and point-in-polygon geometric calculations. Footprints may be of any shape, even non-convex, and we show examples using a variety of shapes that may occur in practice. The general integer optimization problem can become computationally expensive for large problems. In practice, the number of placed footprints is on the order of tens, making the time to solve to optimality on the order of minutes. This is fast enough to make the approach relevant for near-real-time mission applications. We provide open-source software, "GeoPlace," implementing all of our methods.
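
For illustration, once point-in-polygon tests have produced a coverage matrix, the footprint-selection subproblem has the structure of a set-cover MILP. The sketch below (in Pyomo, with a toy coverage matrix) shows that structure; it is not the GeoPlace formulation itself.

```python
# Minimal set-cover MILP sketch: choose the fewest footprints whose union
# covers every ROI sample point. covers[i, j] (precomputed by geometric
# point-in-polygon tests) is 1 if footprint j covers sample point i.
# This is an illustration, not the GeoPlace formulation itself.
import pyomo.environ as pyo

covers = {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 1, (2, 0): 0, (2, 1): 1}
points = [0, 1, 2]
footprints = [0, 1]

m = pyo.ConcreteModel()
m.x = pyo.Var(footprints, domain=pyo.Binary)  # 1 if footprint j is placed

# Every sample point must be covered by at least one selected footprint.
m.cover = pyo.Constraint(
    points, rule=lambda m, i: sum(covers[i, j] * m.x[j] for j in footprints) >= 1)

# Minimize the number of footprints in the mosaic.
m.obj = pyo.Objective(expr=sum(m.x[j] for j in footprints), sense=pyo.minimize)

pyo.SolverFactory("glpk").solve(m)
print({j: pyo.value(m.x[j]) for j in footprints})
```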


Chemo-mechanical coupling in kerogen gas adsorption/desorption

Physical Chemistry Chemical Physics

Ho, Tuan A.; Wang, Yifeng; Criscenti, Louise

Kerogen plays a central role in hydrocarbon generation in an oil/gas reservoir. In a subsurface environment, kerogen is constantly subjected to stress confinement or relaxation. The interplay between mechanical deformation and gas adsorption of the materials could be an important process for shale gas production but unfortunately is poorly understood. Using a hybrid Monte Carlo/molecular dynamics simulation, we show here that a strong chemo-mechanical coupling may exist between gas adsorption and mechanical strain of a kerogen matrix. The results indicate that the kerogen volume can expand by up to 5.4% and 11% upon CH4 and CO2 adsorption at 192 atm, respectively. The kerogen volume increases with gas pressure and eventually approaches a plateau as the kerogen becomes saturated. The volume expansion appears to quadratically increase with the amount of gas adsorbed, indicating a critical role of the surface layer of gas adsorbed in the bulk strain of the material. Furthermore, gas uptake is greatly enhanced by kerogen swelling. Swelling also increases the surface area, porosity, and pore size of kerogen. Our results illustrate the dynamic nature of kerogen, thus questioning the validity of the current assumption of a rigid kerogen molecular structure in the estimation of gas-in-place for a shale gas reservoir or gas storage capacity for subsurface carbon sequestration. The coupling between gas adsorption and kerogen matrix deformation should be taken into consideration.


Collaborative analytics for biological facility characterization

Proceedings of SPIE - The International Society for Optical Engineering

Caswell, Jacob; Cairns, Kelsey; Ting, Christina; Hansberger, Mark W.; Stoebner, Matthew A.; Brounstein, Tom R.; Cuellar, Christopher R.; Jurrus, Elizabeth R.

Thousands of facilities worldwide are engaged in biological research activities. One of DTRA's missions is to fully understand the types of facilities involved in collecting, investigating, and storing biological materials. This characterization enables DTRA to increase situational awareness and identify potential partners focused on biodefense and biosecurity. As a result of this mission, DTRA created a database to identify biological facilities from publicly available, open-source information. This paper describes an ongoing effort to automate data collection and entry of facilities into this database. To frame our analysis more concretely, we consider the following motivating question: How would a decision maker respond to a pathogen outbreak during the 2018 Winter Olympics in South Korea? To address this question, we aim to further characterize the existing South Korean facilities in DTRA's database, and to identify new candidate facilities for entry, so that decision makers can identify local facilities properly equipped to assist and respond to an event. We employ text and social analytics on bibliometric data from South Korean facilities and a list of select pathogen agents to identify patterns and relationships within scientific publication graphs.


Polysulfide speciation in the bulk electrolyte of a lithium sulfur battery

Journal of the Electrochemical Society

Mcbrayer, Josefine D.; Foulk, James W.; Perdue, Brian R.; Garzon, Fernando H.; Apblett, Christopher A.

In situ Raman microscopy was used to study polysulfide speciation in the bulk ether electrolyte during the discharge and charge of a Li-S electrochemical cell to assess the complex interplay between chemical and electrochemical reactions in solution. During discharge, long-chain polysulfides and the S3− radical appear in the electrolyte at 2.4 V, indicating that the dissociation reaction forming S3− equilibrates rapidly. When charging, however, an increase in the concentration of all polysulfide species was observed. This highlights the importance of the electrolyte-to-sulfur ratio and suggests a loss of useful sulfur inventory from the cathode to the electrolyte.


LES soot-radiation predictions of buoyant fire plumes

2018 Spring Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2018

Koo, Heeseok; Hewson, John C.; Knaus, Robert C.

This study addresses predicting the internal thermochemical state in buoyant fire plumes using large-eddy simulations (LES) with a tabular flamelet library for the underlying flame chemistry. Buoyant fire plumes are characterized by moderate turbulent mixing, soot growth and oxidation, and radiation transport. Soot moments, mixture fraction, and enthalpy evolve in the LES with soot source terms given by the non-adiabatic flamelet library. Participating-media radiation transport is predicted using the discrete ordinates method with source terms also from the flamelet library, and the LES subgrid-scale modeling is based on a one-equation kinetic-energy sub-filter model. The library is generated with flamelet states that include unsteady heat loss through extinction, nominally representing radiative quenching. We describe the performance of this model both in a laminar coflow configuration, where extensive measurements are available, and in buoyant turbulent fire plumes, where measurements are more global.


The effect of oxygen penetration on apparent pulverized coal char combustion kinetics

2018 Spring Technical Meeting of the Western States Section of the Combustion Institute, WSSCI 2018

Shaddix, Christopher R.; Hecht, Ethan S.; Gonzalo-Tirado, Cristina

Apparent char kinetic rates are commonly used to predict pulverized coal char burning rates. These kinetic rates quantify the char burning rate based on the temperature of the particle and the oxygen concentration at the particle surface, thereby inherently neglecting the impact of variations in the penetration of oxygen into the char on the predicted burning rate. To investigate the impact of variable extents of penetration during Zone II burning conditions, experimental measurements of char particle combustion temperature and burnout were performed for a common U.S. subbituminous coal burning in an optical laminar entrained flow reactor with either helium or nitrogen diluent. The combination of much higher thermal conductivity and mass diffusivity in the helium environments resulted in substantially cooler char combustion temperatures than in equivalent N2 environments. Measured char burnout was similar in the two environments for a given bulk oxygen concentration but was approximately 60% higher in helium environments for a given char combustion temperature. Detailed particle simulations of the experimental conditions confirmed a 60% higher burning rate in the helium environments as a function of char temperature, whereas classical porous-catalyst theory predicts that the burning rate in helium could be as much as 90% greater than in nitrogen in the limit of large Thiele modulus (i.e., near the diffusion limit). For combustion applications in CO2 environments (e.g., oxy-fuel combustion), these results demonstrate that, due to differences in oxygen diffusivity, the apparent char oxidation rates will be lower than rates measured in nitrogen environments, but by no more than 9%.
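
The ~90% figure is consistent with the classical large-Thiele-modulus scaling, sketched below under the assumptions of a first-order reaction and an O2-in-He to O2-in-N2 diffusivity ratio of roughly 3.5 (an assumed representative value):

```latex
% Large-Thiele-modulus (Zone II) limit for a first-order reaction:
% effectiveness factor eta ~ 1/phi, so the apparent rate scales as
% sqrt(k D_e).
\[
  \phi = L \sqrt{\frac{k}{D_e}}, \qquad
  \eta \approx \frac{1}{\phi} \;\; (\phi \gg 1), \qquad
  r_{\mathrm{app}} \propto \eta\, k \propto \sqrt{k\, D_e},
\]
% With an O2 diffusivity ratio D_He/D_N2 of roughly 3.5:
\[
  \frac{r_{\mathrm{He}}}{r_{\mathrm{N_2}}}
  \approx \sqrt{\frac{D_{e,\mathrm{He}}}{D_{e,\mathrm{N_2}}}}
  \approx \sqrt{3.5} \approx 1.9 .
\]
```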


Turbulent Combustion Simulations with High-Performance Computing

Energy, Environment, and Sustainability

Kolla, Hemanth; Chen, Jacqueline H.

Considering that simulations of turbulent combustion are computationally expensive, this chapter takes a decidedly different perspective: that of high-performance computing (HPC). The cost-scaling arguments for non-reacting turbulence simulations are revisited, and it is shown that the cost scaling for reacting flows is much more stringent for comparable conditions, making parallel computing and HPC indispensable. Hardware abstractions of typical parallel supercomputers are presented, which show that designing an efficient and optimal program requires exploiting both distributed-memory and shared-memory parallelism, i.e., hierarchical parallelism. Principles of efficient programming at the various levels of parallelism are illustrated using archetypal code examples. The vast array of numerical methods, particularly schemes for spatial and temporal discretization, is examined in terms of the tradeoffs they present from an HPC perspective. Aspects of data analytics that invariably arise from the large, feature-rich data sets generated by combustion simulations are covered briefly.
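
The chapter's archetypal examples are not reproduced here; as a minimal stand-in for the distributed-memory level of the hierarchy, the sketch below shows a 1D domain decomposition with halo exchange using mpi4py. The script name is hypothetical, and in a production combustion code the on-node update would additionally be threaded or vectorized for shared-memory parallelism.

```python
# Minimal distributed-memory sketch: 1D domain decomposition with halo
# exchange, the pattern underlying parallel finite-difference solvers.
# Illustrative only; not code from the chapter. Run with, e.g.,
#   mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 64                  # interior cells owned by this rank
u = np.zeros(n_local + 2)     # +2 ghost cells for neighbor data
u[1:-1] = rank                # dummy initial condition

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost cells with neighbors (the distributed-memory level).
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# Second-order diffusion stencil applied to owned cells; this vectorized
# update is where shared-memory/on-node parallelism would apply.
u_new = u[1:-1] + 0.1 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
```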



Overview of geological carbon storage (GCS)

Science of Carbon Storage in Deep Saline Formations: Process Coupling across Time and Spatial Scales

Newell, Pania; Ilgen, Anastasia G.

Geological carbon storage (GCS) is a promising technology for mitigating increasing concentrations of carbon dioxide (CO2) in the atmosphere. The injection of supercritical CO2 into geological formations perturbs the physical and chemical state of the subsurface. The reservoir rock, as well as the overlying caprock, can experience changes in pore fluid pressure, thermal state, chemical reactivity, and stress distribution. These changes can cause mechanical deformation of the rock mass and opening/closure of preexisting fractures and/or initiation of new fractures, which can influence the integrity of the overall GCS system over the thousands of years required for successful carbon storage. GCS sites are inherently unified systems; however, given the scientific framework, these systems are usually divided based on the physics and temporal/spatial scales during scientific investigations. For many applications, decoupling the physics by treating the adjacent system as a boundary condition works well. Unfortunately, in the case of water and gas flow in porous media, because of the complexity of geological subsurface systems, the decoupling approach does not accurately capture the behavior of the larger relevant system. The coupled processes include various combinations of thermal (T), hydrological (H), chemical (C), mechanical (M), and biological (B) effects. These coupled processes are time- and length-scale-dependent, and can manifest as one- or two-way coupled behavior. There is an undeniable need to understand the coupling of processes during GCS, and how these coupled phenomena can result in emergent behaviors arising from the interplay of physics and chemistry, including self-focusing of flow, porosity collapse, and changes in fracture networks. In this chapter, the first section addresses the subsurface system response to the injection of CO2, examined at field and laboratory scales, as well as in model systems, from the perspective of single disciplines. The second section reviews coupling between processes during GCS observed either in the field or anticipated based on laboratory results.


Hierarchical material property representation in finite element analysis: Convergence behavior and the electrostatic response of vertical fracture sets

2018 SEG International Exposition and Annual Meeting, SEG 2018

Weiss, Chester J.; Beskardes, Gungor D.; Van Bloemen Waanders, Bart

Methods for the efficient representation of fracture response in geoelectric models impact an impressively broad range of problems in applied geophysics. We adopt the recently developed hierarchical material property representation in finite element analysis (Weiss, 2017) to model the electrostatic response of a discrete set of vertical fractures in the near surface and compare these results to those from anisotropic continuum models. We also examine the power-law behavior of these results and compare it to continuum theory. We find that in measurement profiles from a single point source, in directions both parallel and perpendicular to the fracture set, the fracture signature persists over all distances. Furthermore, the homogenization limit (the distance at which the individual fracture anomalies are too small to be either measured or of interest) is not strictly a function of the geometric distribution of the fractures, but also of their conductivity relative to the background. Hence, we show that the definition of "representative elementary volume", the distance over which the statistics of the underlying heterogeneities are stationary, is incomplete as it pertains to the applicability of an equivalent continuum model. We also show that detailed interrogation of such intrinsically heterogeneous models may reveal power-law behavior that appears anomalous, thus suggesting a possible mechanism to reconcile emerging theories in fractional calculus with classical electromagnetic theory.


Imaging radar performance analysis using product dark regions

Proceedings of SPIE - The International Society for Optical Engineering

Raynal, Ann M.; Bickel, Douglas L.

Many types of dark regions occur naturally or artificially in Synthetic Aperture Radar (SAR) and Coherent Change Detection (CCD) products. Occluded regions in SAR imagery, known as shadows, are created when incident radar energy is obstructed by a target with height, preventing illumination of resolution cells immediately behind the target in the ground plane. No-return areas are also created by objects or terrain that produce little scattering in the direction of the receiver, such as still water or flat plates for monostatic systems. Depending on the size of the dark region, additive and multiplicative noise levels are commonly measured for SAR performance testing. However, techniques for radar performance testing of CCD using dark regions are not common in the literature. While dark regions in SAR imagery also produce dark regions in CCD products, additional dark regions in CCD may further arise from decorrelation of bright regions in SAR imagery due to clutter or terrain with poor wide-sense stationarity (such as foliage in wind), man-made disturbances of the scene, or unintended artifacts introduced by the radar and image processing. By comparing dark regions in CCD imagery over multiple passes, one can identify unintended decorrelation introduced by poor radar performance rather than phenomenology. This paper addresses select automated dark-region measurement techniques for the evaluation of radar performance during SAR and CCD field testing.


Discrete-Direct Model Calibration and Propagation Approach Addressing Sparse Replicate Tests and Material, Geometric, and Measurement Uncertainties

SAE Technical Papers

Romero, Vicente J.

This paper introduces the "Discrete Direct" (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying systems. The DD approach generates and propagates various discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This is in contrast to model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters, which adds unjustified information to the calibration and propagation problem. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties in system properties and behaviors, in initial and boundary conditions, and in measurement uncertainties in the experiments. The approach appears to have several advantages over Bayesian and other calibration approaches for capturing and utilizing the information obtained from the typically small number of experiments in model calibration situations. In particular, the DD methodology better preserves the fundamental information from the experimental data in a way that enables model predictions to be more directly traced back to the supporting experimental data. The approach is also presently more viable for calibration involving sparse realizations of random function data (e.g., stress-strain curves) and random field data. The DD methodology is conceptually simpler than Bayesian calibration approaches and is straightforward to implement. The methodology is demonstrated and analyzed on several illustrative calibration and uncertainty propagation problems.
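
As a schematic of the idea described above (not the paper's implementation), the sketch below maps each experimental realization, sparse replicates crossed with sampled measurement uncertainty, to the discrete parameter value that reproduces it exactly, then propagates that discrete set to a new prediction. The one-parameter model and all numbers are hypothetical.

```python
# Schematic of the Discrete-Direct idea for one calibration parameter:
# each experimental realization is inverted to the discrete parameter
# value that reproduces it exactly, and the resulting discrete set is
# propagated to a new prediction condition. Toy model; not the paper's.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

def model(theta, load):
    """Toy response model: output for a given parameter and input load."""
    return theta * load**1.5

replicates = [5.1, 4.7, 5.4]   # sparse replicate test outputs at load = 2.0
meas_sigma = 0.1               # measurement uncertainty (std. dev.)

thetas = []
for y in replicates:
    for y_real in y + meas_sigma * rng.standard_normal(20):
        # Invert the model: find theta such that model(theta, 2.0) = y_real.
        thetas.append(brentq(lambda th: model(th, 2.0) - y_real, 0.1, 10.0))

# Propagate the discrete parameter realizations to a new condition.
predictions = [model(th, 3.0) for th in thetas]
print(np.percentile(predictions, [5, 50, 95]))
```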


Sensor operators as technology consumers: What do users really think about that radar?

Proceedings of SPIE - The International Society for Optical Engineering

Mcnamara, Laura A.; Divis, Kristin M.; Morrow, James D.

Many companies rely on user experience metrics, such as Net Promoter scores, to monitor changes in customer attitudes toward their products. This paper suggests that similar metrics can be used to assess the user experience of the pilots and sensor operators who are tasked with using our radar, EO/IR, and other remote sensing technologies. As we have previously discussed, the problem of making our national security remote sensing systems useful, usable, and adoptable is a human-system integration problem that does not get the sustained attention it deserves, particularly given the high-throughput, information-dense task environments common to military operations. In previous papers, we have demonstrated how engineering teams can adopt well-established human-computer interaction principles to fix significant usability problems in radar operational interfaces. In this paper, we describe how we are using a combination of Situation Awareness design methods, along with techniques from the consumer sector, to identify opportunities for improving human-system interactions. We explain why we believe that all stakeholders in remote sensing, including program managers, engineers, and operational users, can benefit from systematically incorporating some of these measures into the evaluation of our national security sensor systems. We also provide examples of our own experience adapting consumer user experience metrics in operator-focused evaluation of currently deployed radar interfaces.


Data-driven uncertainty quantification for multisensor analytics

Proceedings of SPIE - The International Society for Optical Engineering

Stracuzzi, David J.; Darling, Michael C.; Chen, Maximillian G.; Peterson, Matthew G.

We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes "interesting" activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.


Distinguishing one from many using super-resolution compressive sensing

Proceedings of SPIE - The International Society for Optical Engineering

Anthony, Stephen M.; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.; Woodbury, Drew P.

Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. As a result, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.


Two-channel wakeup system employing aluminum nitride based MEMS resonant accelerometers for near-zero power applications

2018 Solid-State Sensors, Actuators and Microsystems Workshop, Hilton Head 2018

Reger, Robert W.; Yen, Sean; Barney, Bryson; Satches, Michael R.; Young, Andrew I.; Pluym, Tammy; Wiwi, Michael; Delaney, Matthew A.; Griffin, Benjamin

The Defense Advanced Research Projects Agency has identified a need for low-standby-power systems that react to physical environmental signals by producing an electrical wakeup signal. To address this need, we design piezoelectric aluminum nitride based microelectromechanical resonant accelerometers that couple with a near-zero-power, complementary metal-oxide-semiconductor application-specific integrated circuit. The piezoelectric accelerometer operates near resonance to form a passive mechanical filter of the vibration spectrum that targets a specific frequency signature. Resonant vibration sensitivities as large as 490 V/g (in air) are obtained at frequencies as low as 43 Hz. The integrated circuit operates in the subthreshold regime, employing current starvation to minimize power consumption. Two accelerometers are coupled with the circuit to form the wakeup system, which requires only 5.25 nW before wakeup and 6.75 nW after wakeup. The system is shown to wake up to a generator signal and reject confusers in the form of other vehicles and background noise.


Catastrophic depolymerization of microtubules driven by subunit shape change

Soft Matter

Bollinger, Jonathan; Stevens, Mark J.

Microtubules exhibit a dynamic instability between growth and catastrophic depolymerization. GTP-tubulin (αβ-dimer bound to GTP) self-assembles, but dephosphorylation of GTP- to GDP-tubulin within the tubule results in destabilization. While the mechanical basis for destabilization is not fully understood, one hypothesis is that dephosphorylation causes tubulin to change shape, frustrating bonds and generating stress. To test this idea, we perform molecular dynamics simulations of microtubules built from coarse-grained models of tubulin, incorporating a small compression of α-subunits associated with dephosphorylation in experiments. We find that this shape change induces depolymerization of otherwise stable systems via unpeeling "ram's horns" characteristic of microtubules. Depolymerization can be averted by caps with uncompressed α-subunits, i.e., GTP-rich end regions. Thus, the shape change is sufficient to yield microtubule behavior.


Compressive sensing with cross-validation and stop-sampling for sparse polynomial chaos expansions

SIAM-ASA Journal on Uncertainty Quantification

Huan, Xun H.; Safta, Cosmin; Sargsyan, Khachik; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, a situation often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse-recovery performance under the structures induced by polynomial chaos, accuracy and computational trade-offs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find that ADMM demonstrates empirical advantages through consistently lower errors and faster computational times.
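
The named solvers are specialized LASSO codes; as a generic, self-contained illustration of selecting the regularization constant by cross-validation on an underdetermined sparse-recovery problem, the sketch below uses scikit-learn's LassoCV as a stand-in for the solvers studied in the paper.

```python
# Generic illustration of cross-validated selection of the LASSO
# regularization constant for sparse recovery from an underdetermined
# system (scikit-learn as a stand-in for the paper's solvers).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_samples, n_basis = 80, 400                    # underdetermined system
A = rng.standard_normal((n_samples, n_basis))   # e.g., PC basis evaluations

c_true = np.zeros(n_basis)                      # sparse "true" coefficients
c_true[rng.choice(n_basis, 10, replace=False)] = rng.standard_normal(10)
y = A @ c_true + 0.01 * rng.standard_normal(n_samples)

# 5-fold cross-validation scans a path of regularization constants and
# picks the one minimizing held-out error, mitigating overfitting.
fit = LassoCV(cv=5, fit_intercept=False).fit(A, y)
print("selected alpha:", fit.alpha_)
print("nonzeros recovered:", np.count_nonzero(np.abs(fit.coef_) > 1e-6))
```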


Influence of the fluctuating velocity field on the surface pressures in a jet/fin interaction

Journal of Spacecraft and Rockets

Beresh, Steven J.; Henfling, John F.; Spillers, Russell; Pruett, Brian

The mechanism by which aerodynamic effects of jet/fin interaction arise from the flow structure of a jet in crossflow is explored using particle image velocimetry measurements of the crossplane velocity field as it impinges on a downstream fin instrumented with high-frequency pressure sensors. A Mach 3.7 jet issues into a Mach 0.8 crossflow from either a normal or inclined nozzle, and three lateral fin locations are tested. Conditional ensemble-averaged velocity fields are generated based upon the simultaneous pressure condition. Additional analysis relates instantaneous velocity vectors to pressure fluctuations. The pressure differential across the fin is driven by variations in the spanwise velocity component, which substitutes for the induced angle of attack on the fin. Pressure changes at the fin tip are strongly related to fluctuations in the streamwise velocity deficit, wherein lower pressure is associated with higher velocity and vice versa. The normal nozzle produces a counter-rotating vortex pair that passes above the fin, and pressure fluctuations are principally driven by the wall horseshoe vortex and the jet wake deficit. The inclined nozzle produces a vortex pair that impinges the fin and yields stronger pressure fluctuations driven more directly by turbulence originating from the jet mixing.


Scalable preconditioners for structure-preserving discretizations of Maxwell equations in first-order form

SIAM Journal on Scientific Computing

Phillips, Edward; Shadid, John N.; Cyr, Eric C.

Multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first-order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell-type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves, and it compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Results demonstrate parallel scalability at large electromagnetic-wave CFL numbers on a variety of test problems.


Thermal conductivity measurements of ceramic fiber insulation materials

Proceedings of the Thermal and Fluids Engineering Summer Conference

Headley, Alexander; Hileman, Michael B.; Robbins, Aron; Roberts, Christine

Ceramic fiber insulation materials, such as Fiberfrax and Min-K products, are used in a number of applications (e.g. aerospace, fire protection, and military) for their stability and performance in extreme conditions. However, the thermal properties of these materials have not been thoroughly characterized for many of the conditions that they will be exposed to, such as high temperatures and pressures. This complicates the design of systems using these insulations as the uncertainty in the thermal properties is high. In this study, the thermal conductivity of three ceramic fiber insulations, Fiberfrax T-30LR laminate, Fiberfrax 970-H paper, and Min-K TE1400 board, was measured as a function of atmospheric temperature and compression. Measurements were taken using the transient plane source technique. The results of this study are compared against three published data sets.


An overview of the water network tool for resilience (WNTR)

1st International WDSA / CCWI 2018 Joint Conference

Klise, Katherine A.; Murray, Regan; Haxton, Terranna

Drinking water systems face multiple challenges, including aging infrastructure, water quality concerns, uncertainty in supply and demand, natural disasters, environmental emergencies, and cyber and terrorist attacks. All of these incidents have the potential to disrupt a large portion of a water system, causing damage to critical infrastructure, threatening human health, and interrupting service to customers. Recent incidents, including the floods and winter storms in the southern United States, highlight vulnerabilities in water systems and the need to minimize service loss. Simulation and analysis tools can help water utilities better understand how their system would respond to a wide range of disruptive incidents and inform planning to make systems more resilient over time. The Water Network Tool for Resilience (WNTR) is a new open source Python package designed to meet this need. WNTR integrates hydraulic and water quality simulation, a wide range of damage and response options, and resilience metrics into a single software framework, allowing for end-to-end evaluation of water network resilience. WNTR includes capabilities to 1) generate and modify water network structure and operations, 2) simulate disaster scenarios, 3) model response and repair strategies, 4) simulate pressure-dependent demand and demand-driven hydraulics, 5) simulate water quality, 6) calculate resilience metrics, and 7) visualize results. These capabilities can be used to evaluate resilience of water distribution systems to a wide range of hazards and to prioritize resilience-enhancing actions. Furthermore, the flexibility of the Python environment allows the user to easily customize analysis. For example, utilities can simulate a specific incident or run stochastic analysis for a range of probabilistic scenarios. The U.S. Environmental Protection Agency and Sandia National Laboratories are working with water utilities to ensure that WNTR can be used to efficiently evaluate resilience under different use cases. The software has been used to evaluate resilience under earthquake and power outage scenarios, run fire-fighting capacity and pipe criticality analysis, evaluate sampling and flushing locations, and prioritize repair strategies. This paper includes discussion of WNTR capabilities, use cases, and resources to help new users get started with the software. WNTR can be downloaded from the U.S. Environmental Protection Agency GitHub site at https://github.com/USEPA/WNTR. The GitHub site includes links to software documentation, software testing results, and contact information.
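
A minimal usage sketch follows WNTR's public documentation; the input file name is a placeholder, and option spellings may vary between releases.

```python
# Minimal WNTR workflow sketch: load a network model, switch to
# pressure-dependent demand, simulate hydraulics, and extract pressures.
# 'Net3.inp' is a placeholder EPANET input file; consult the WNTR
# documentation on GitHub for the current API.
import wntr

wn = wntr.network.WaterNetworkModel('Net3.inp')
wn.options.hydraulic.demand_model = 'PDD'    # pressure-dependent demand

sim = wntr.sim.WNTRSimulator(wn)
results = sim.run_sim()

pressure = results.node['pressure']          # pandas DataFrame, time x node
print(pressure.min().sort_values().head())   # lowest-pressure nodes
```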


The pitfalls of provisioning exascale networks: A trace replay analysis for understanding communication performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Kenny, Joseph; Sargsyan, Khachik; Knight, Samuel; Michelogiannakis, George; Wilke, Jeremiah

Data movement is considered the main performance concern for exascale, including both on-node memory and off-node network communication. Indeed, many application traces show significant time spent in MPI calls, potentially indicating that faster networks must be provisioned for scalability. However, equating MPI times with network communication delays ignores synchronization delays and software overheads independent of network hardware. Using point-to-point protocol details, we explore the decomposition of MPI time into communication, synchronization and software stack components using architecture simulation. Detailed validation using Bayesian inference is used to identify the sensitivity of performance to specific latency/bandwidth parameters for different network protocols and to quantify associated uncertainties. The inference combined with trace replay shows that synchronization and MPI software stack overhead are at least as important as the network itself in determining time spent in communication routines.


Strengthened SOCP Relaxations for ACOPF with McCormick Envelopes and Bounds Tightening

Computer Aided Chemical Engineering

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

The solution of the Optimal Power Flow (OPF) and Unit Commitment (UC) problems (i.e., determining generator schedules and set points that satisfy demands) is critical for efficient and reliable operation of the electricity grid. For computational efficiency, the alternating current OPF (ACOPF) problem is usually formulated with a linearized transmission model, often referred to as the DCOPF problem. However, these linear approximations do not guarantee global optimality or even feasibility for the true nonlinear alternating current (AC) system. Nonlinear AC power flow models can and should be used to improve model fidelity, but successful global solution of problems with these models requires the availability of strong relaxations of the AC optimal power flow constraints. In this paper, we use McCormick envelopes to strengthen the well-known second-order cone (SOC) relaxation of the ACOPF problem. With this improved relaxation, we can further include tight bounds on the voltages at the reference bus, and this paper demonstrates the effectiveness of this for improved bounds tightening. We present results on the optimality gap of both the base SOC relaxation and our Strengthened SOC (SSOC) relaxation for the National Information and Communications Technology Australia (NICTA) Energy System Test Case Archive (NESTA). For the cases where the SOC relaxation yields an optimality gap of more than 0.1%, the SSOC relaxation with bounds tightening further reduces the optimality gap by an average of 67% and ultimately reduces the optimality gap to less than 0.1% for 58% of all the NESTA cases considered. Stronger relaxations enable more efficient global solution of the ACOPF problem and can improve computational efficiency of MINLP problems with AC power flow constraints, e.g., unit commitment.
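
For reference, the McCormick envelope of a single bilinear term w = x*y over box bounds consists of the four standard linear inequalities below, sketched in Pyomo with hypothetical bounds; this is an illustration of the technique, not the paper's full SSOC model.

```python
# McCormick envelope for a bilinear term w = x*y with bounds
# xL <= x <= xU and yL <= y <= yU: the standard four linear inequalities
# used to strengthen the SOC relaxation. Illustrative sketch only.
import pyomo.environ as pyo

xL, xU, yL, yU = 0.9, 1.1, 0.9, 1.1   # e.g., voltage-magnitude bounds

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(xL, xU))
m.y = pyo.Var(bounds=(yL, yU))
m.w = pyo.Var()                        # relaxation of the product x*y

m.mc1 = pyo.Constraint(expr=m.w >= xL * m.y + yL * m.x - xL * yL)
m.mc2 = pyo.Constraint(expr=m.w >= xU * m.y + yU * m.x - xU * yU)
m.mc3 = pyo.Constraint(expr=m.w <= xU * m.y + yL * m.x - xU * yL)
m.mc4 = pyo.Constraint(expr=m.w <= xL * m.y + yU * m.x - xL * yU)
```

Tightening the variable bounds (for example, the reference-bus voltage bounds) directly shrinks this envelope, which is why bounds tightening strengthens the relaxation.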


Uncovering New Opportunities from Frequency Regulation Markets with Dynamic Optimization and Pyomo.DAE

Computer Aided Chemical Engineering

Dowling, Alexander W.; Nicholson, Bethany L.

Real-time energy pricing has caused a paradigm shift for process operations, with flexibility becoming a critical driver of economics. As such, incorporating real-time pricing into planning and scheduling optimization formulations has received much attention over the past two decades (Zhang and Grossmann, 2016). These formulations, however, focus on 1-hour or longer time discretizations and neglect process dynamics. Recent analysis of historical price data from the California electricity market (CAISO) reveals that a majority of economic opportunities come from fast market layers, i.e., the real-time energy market and ancillary services (Dowling et al., 2017). We present a dynamic optimization framework to quantify the revenue opportunities of chemical manufacturing systems providing frequency regulation (FR). Recent analysis of first-order systems finds that slow process dynamics naturally dampen high-frequency harmonics in FR signals (Dowling and Zavala, 2017). As a consequence, traditional chemical processes with long time constants may be able to provide fast flexibility without disrupting product quality, performance of downstream unit operations, etc. This study quantifies the ability of a distillation system to provide sufficient dynamic flexibility to adjust energy demands every 4 seconds in response to market signals. Using a detailed differential algebraic equation (DAE) model (Hahn and Edgar, 2002) and historic data from the Texas electricity market (ERCOT), we estimate revenue opportunities for different column designs. We implement our model using the algebraic modeling language Pyomo (Hart et al., 2011) and its dynamic optimization extension Pyomo.DAE (Nicholson et al., 2017). These software packages enable rapid development of complex optimization models using high-level modeling constructs and provide flexible tools for initializing and discretizing DAE models.
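
A minimal Pyomo.DAE sketch of the workflow, declaring dynamics on a continuous time set and discretizing by collocation, is shown below; the toy first-order dynamics and all parameter values stand in for the detailed column DAE model used in the study.

```python
# Minimal Pyomo.DAE sketch: declare a dynamic model on a ContinuousSet
# and discretize by collocation. Toy first-order dynamics, not the
# paper's distillation column model.
import pyomo.environ as pyo
from pyomo.dae import ContinuousSet, DerivativeVar

m = pyo.ConcreteModel()
m.t = ContinuousSet(bounds=(0.0, 60.0))      # horizon [s]
m.x = pyo.Var(m.t)                           # process state
m.u = pyo.Var(m.t, bounds=(0.0, 1.0))        # energy-demand adjustment
m.dxdt = DerivativeVar(m.x, wrt=m.t)

# First-order response: slow dynamics naturally filter fast FR harmonics.
m.ode = pyo.Constraint(
    m.t, rule=lambda m, t: m.dxdt[t] == -0.1 * m.x[t] + m.u[t])
m.x[m.t.first()].fix(0.0)

# Discretize, then build an objective over the full set of time points.
pyo.TransformationFactory('dae.collocation').apply_to(m, nfe=15, ncp=3)
m.obj = pyo.Objective(expr=sum((m.x[t] - 0.5)**2 for t in m.t))

pyo.SolverFactory('ipopt').solve(m)
```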


Coupled multiphase flow and geomechanical modeling of injection-induced seismicity on the basement fault

52nd U.S. Rock Mechanics/Geomechanics Symposium

Chang, Kyung W.; Yoon, Hongkyu; Martinez, Mario J.; Newell, Pania

Fluid injection into deep geological formations alters the state of pore pressure and stress on faults, potentially causing earthquakes. In a multiphase flow system, the interaction between fluid flow and mechanical deformation in porous media is critical in determining the spatio-temporal distribution of pore pressure and stress. The contrast in fluid and rock properties between different structures produces changes in pressure gradients and, subsequently, stress fields. Assuming two-phase fluid flow (a gas-water system), we simulate a two-dimensional reservoir containing a basement fault, in which injection-induced pressure encounters the fault directly under the given injection scenarios. A single-phase flow model with the same setting is also run to evaluate multiphase flow effects on the mechanical response of the fault to gas injection. A series of sensitivity tests is performed by varying the fault permeability. The presence of a gaseous phase reduces the pressure buildup within the gas-saturated region, causing less Coulomb stress change. The low-permeability fault initially prevents pressure diffusion, as observed in the single-phase flow system. Once the gaseous phase approaches, the fault acts as a capillary barrier that causes pressure increases within the fault zone, potentially inducing earthquakes even without direct diffusion.


Practical challenges in the calculation of turbulent viscosity from PIV data

2018 Aerodynamic Measurement Technology and Ground Testing Conference

Beresh, Steven J.; Miller, Nathan E.; Smith, Barton L.

Turbulent viscosities have been calculated from stereoscopic particle image velocimetry (PIV) data for a supersonic jet exhausting into a transonic crossflow. Image interrogation must be optimized to produce useful turbulent viscosity fields. High-accuracy image reconstruction should be used for the final iteration, whereas efficient algorithms produce spatial artifacts in derivative fields. Mean strain rates should be calculated from large (128-pixel) windows with 75% overlap. Turbulent stresses are optimally computed using multiple (more than two) iterations of image interrogation and 75% overlap, both of which increase the signal bandwidth. However, the improvement is modest and may not justify the considerable increase in computational expense. The turbulent viscosity may be expressed in tensor notation to include all three axes of velocity data. In this formulation, a least-squares fit to the multiple equations comprising the tensor generated a scalar turbulent viscosity that eliminated many of the artifacts and noise present in the single-component formulation. The resulting experimental turbulent viscosity fields will be used to develop data-driven turbulence models that can improve the fidelity of predictive computations.
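
The tensor least-squares fit is presumably of the standard form under the Boussinesq closure; a sketch of that contraction follows (the paper's exact formulation may differ):

```latex
% Boussinesq closure: a_ij = 2 nu_t S_ij, where
% a_ij = -\overline{u_i' u_j'} + (2/3) k delta_ij is the deviatoric
% Reynolds stress and S_ij the mean strain rate. A least-squares fit
% over all tensor components gives a single scalar viscosity:
\[
  \nu_t = \arg\min_{\nu} \sum_{i,j} \left( a_{ij} - 2\nu S_{ij} \right)^2
        = \frac{a_{ij} S_{ij}}{2\, S_{kl} S_{kl}} .
\]
```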


Scale dependence of material response at extreme incident radiative heat flux

2018 Joint Thermophysics and Heat Transfer Conference

Brown, Alexander L.; Engerer, Jeffrey D.; Ricks, Allen J.; Christian, Josh

The thermal environment generated during an intense radiation event like a nuclear weapon airburst, lightning strike, or directed energy weaponry has a devastating effect on many exposed materials. Natural and engineered materials can be damaged and ignite from the intense thermal radiation, potentially resulting in sustained fires. Understanding material behavior in such an event is essential for mitigating the damage to a variety of defense systems, such as aircraft and weaponry. Flammability and ignition studies in this regime (very high heat flux, short duration) are less plentiful than in the heat flux regimes representative of typical fires. The flammability and ignition behavior of a material may differ at extreme heat flux due to the balance of the heat conduction into the material compared to other processes. Length scale effects may also be important in flammability and ignition behavior, especially in the high heat flux regime. A variety of materials have recently been subjected to intense thermal loads (~100–1000 kW/m2) in testing at both the Solar Furnace and the Solar Tower at the National Solar Thermal Test Facility at Sandia National Laboratories. The Solar Furnace, operating at a smaller scale (≈30 cm2 area), provides the ability to test a wide range of materials under controlled radiative flux conditions. The Solar Tower exposes objects and materials to the same flux on a much larger scale (≈4 m2 area), integrating complex geometry and scale effects. Results for a variety of materials tested in both facilities are presented and compared. Material response often differs depending on scale, suggesting a significant scale effect. Mass loss per unit energy tends to decrease as scale increases, while ignition probability tends to increase with scale.


Ignition and damage thresholds of materials at extreme incident radiative heat flux

2018 Joint Thermophysics and Heat Transfer Conference

Engerer, Jeffrey D.; Brown, Alexander L.; Christian, Josh

Intense, dynamic radiant heat loads damage and ignite many common materials but are outside the scope of typical fire studies. Explosive, directed-energy, and nuclear-weapon environments subject materials to this regime of extreme heating. The Solar Furnace at the National Solar Thermal Test Facility simulated this environment for an extensive experimental study of the response of many natural and engineered materials. Solar energy was focused onto a spot (∼10 cm2 area) in the center of the tested materials, generating an intense radiant load (∼100–1000 kW m−2) for approximately 3 seconds. The response of each material to the extreme heat flux was carefully monitored with video photography. The initiation times of various events were recorded, including charring, pyrolysis, ignition, and melting. These ignition and damage thresholds are compared to historical ignition results, predominantly for black, α-cellulose papers. Reexamination of the historical data indicates that ignition behavior can be predicted from simplified empirical models based on thermal diffusion. When normalized by the thickness and the thermal properties, ignition and damage thresholds exhibit comparable trends across a wide range of materials. This technique substantially reduces the complexity of the ignition problem, improving ignition models and experimental validation.
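
The simplified thermal-diffusion models referenced are presumably of the classical constant-flux form; as a sketch for thermally thin and thermally thick solids (the coefficients in the paper's correlations may differ):

```latex
% Classical constant-flux ignition-time estimates (sketch only).
% Thermally thin solid of thickness delta:
\[
  t_{\mathrm{ig}} \approx
  \frac{\rho c\, \delta\, (T_{\mathrm{ig}} - T_0)}{\dot{q}''}
\]
% Thermally thick solid:
\[
  t_{\mathrm{ig}} \approx \frac{\pi}{4}\, k \rho c
  \left( \frac{T_{\mathrm{ig}} - T_0}{\dot{q}''} \right)^{2}
\]
```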


Spatially resolved analysis of material response to destructive environments utilizing three-dimensional scans

2018 Joint Thermophysics and Heat Transfer Conference

Engerer, Jeffrey D.; Brown, Alexander L.

The surface topology of a solid subjected to destructive environments is often difficult to quantify. In thermal environments, the size and shape of the solid changes as it pyrolyzes, ablates, warps, or chars. Quantitative descriptions of such responses are valuable for data reporting and model validation. In this work, a three-dimensional scanner is evaluated for non-destructive material analysis. The scans spatially resolve the response of materials to a high-heat-flux environment. To account for the effect of distortion induced in thin materials, back-side scans of the sample are used to characterize the displacement of the bulk material. Data spanning the area of the sample, rather than using a net or average quantity, enhances the evaluation of the crater formed by the incident flux. The 3D reconstruction of the sample also provides the ability to perform volumetric calculations. The data obtained from this methodology may be useful for characterizing materials exposed to a variety of destructive environments.


Design load analysis for wave energy converters

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Van Rij, Jennifer; Yu, Yi H.; Coe, Ryan G.

This study demonstrates a systematic methodology for establishing the design loads of a wave energy converter. The proposed design load methodology incorporates existing design guidelines, where they exist, and follows a typical design progression; namely, advancing from many quick, order-of-magnitude-accurate, conceptual-stage design computations to a few computationally intensive, high-fidelity design-validation simulations. The goal of the study is to streamline and document this process based on quantitative evaluations of the design loads' accuracy at each design step and consideration of the computational efficiency of the entire design process. For the wave energy converter, loads, and site conditions considered, this study demonstrates an efficient and accurate methodology for evaluating the design loads.


Evaluation of alternative designs for a high temperature particle-to-SCO2 heat exchanger

ASME 2018 12th International Conference on Energy Sustainability, ES 2018, collocated with the ASME 2018 Power Conference and the ASME 2018 Nuclear Forum

Ho, Clifford K.; Carlson, Matthew; Albrecht, Kevin; Ma, Zhiwen; Jeter, Sheldon; Nguyen, Clayton M.

This paper presents an evaluation of alternative particle heat-exchanger designs, including moving packed-bed and fluidized-bed designs, for high-temperature heating of a solar-driven supercritical CO2 (sCO2) Brayton power cycle. The design requirements for high-pressure (> 20 MPa) and high-temperature (> 700 °C) operation associated with sCO2 posed several challenges, requiring high-strength materials for piping and/or diffusion bonding for plates. Designs from several vendors for a 100 kW-thermal particle-to-sCO2 heat exchanger were evaluated as part of this project. Cost, heat-transfer coefficient, structural reliability, manufacturability, parasitics and heat losses, scalability, compatibility, erosion and corrosion, transient operation, and ease of inspection were considered in the evaluation. An analytical hierarchy process was used to weight and compare the criteria for the different design options. The fluidized-bed design fared best on heat-transfer coefficient, structural reliability, scalability, and ease of inspection, while the moving packed-bed designs fared best on cost, parasitics and heat losses, manufacturability, compatibility, erosion and corrosion, and transient operation. A 100 kWt shell-and-plate design was ultimately selected for construction and integration with Sandia's falling particle receiver system.


Strategies for improving the laser-induced damage thresholds of dichroic coatings developed for high-transmission at 527 nm and high reflection at 1054 nm

Proceedings of SPIE - The International Society for Optical Engineering

Field, Ella; Kletecka, Damon

We report on progress for increasing the laser-induced damage threshold of dichroic beam combiner coatings for high transmission at 527 nm and high reflection at 1054 nm (22.5° angle of incidence, S-polarization). The initial coating consisted of HfO2 and SiO2 layers deposited with electron beam evaporation, and the laser-induced damage threshold was 7 J/cm2 at 532 nm with 3.5 ns pulses. This study introduces different coating strategies that were utilized to increase the laser damage threshold of this coating to 12.5 J/cm2.


Flash ignition tests at the National Solar Thermal Test Facility

2018 Joint Thermophysics and Heat Transfer Conference

Ricks, Allen J.; Brown, Alexander L.; Christian, Josh

Nuclear weapon airbursts can create extreme radiative heat fluxes for a short duration. The radiative heat transfer from the fireball can damage and ignite materials in a region that extends beyond the zone damaged by the blast wave itself. Directed energy weapons also create extreme radiative heat fluxes. These scenarios involve radiative fluxes much greater than the environments typically studied in flammability and ignition tests. Furthermore, the vast majority of controlled experiments designed to obtain material response and flammability data at high radiative fluxes have been performed at relatively small scales (order 10 cm2 area). A recent series of tests performed on the Solar Tower at the National Solar Thermal Test Facility exposed objects and materials to fluxes of 100–2,400 kW/m2 at a much larger scale (≈1 m2 area). This paper provides an overview of testing performed at the Solar Tower for a variety of materials including aluminum, fabric, and two types of plastics. Tests with meter-scale objects such as tires and chairs are also reported, highlighting some potential effects of geometry that are difficult to capture in small-scale tests. The aluminum sheet melted at the highest heat flux tested. At the same flux, the tire ignited but the flames were not sustained when the external heat flux was removed; the damage appeared to be limited to the outer portion of the tire, and internal pressure was maintained.

More Details

Enhanced second-harmonic generation in broken symmetry III-V semiconductor metasurfaces driven by Fano resonance

Optics InfoBase Conference Papers

Vabishchevich, P.P.; Liu, Sheng; Sinclair, Michael B.; Keeler, Gordon A.; Peake, Gregory M.; Brener, Igal

We use broken symmetry III-V semiconductor Fano metasurfaces to substantially improve the efficiency of second-harmonic generation (SHG) in the near infrared, compared to SHG obtained from metasurfaces created using symmetrical Mie resonators.

More Details

III-V semiconductor metasurface as the optical metamixer

Optics InfoBase Conference Papers

Vabishchevich, P.P.; Liu, S.; Vaskin, A.; Reno, John L.; Keeler, G.A.; Sinclair, Michael B.; Staude, I.; Brener, Igal

In this work, we experimentally demonstrate the simultaneous occurrence of second-, third-, and fourth-harmonic generation, sum-frequency generation, four-wave mixing, and six-wave mixing processes in III-V semiconductor metasurfaces, with spectra spanning from the UV to the near-IR.

More Details

Importance of treating correlations in the uncertainty quantification of radiation damage metrics

20th Topical Meeting of the Radiation Protection and Shielding Division, RPSD 2018

Griffin, Patrick J.; Koning, Arjan; Rochman, Dimitri

The radiation effects community embraces the importance of quantifying uncertainty in model predictions and of propagating this uncertainty into the integral metrics used to validate models, but it is not always aware of the importance of addressing the energy- and reaction-dependent correlations in the underlying uncertainty contributors. This paper presents a rigorous high-fidelity Total Monte Carlo approach that addresses the correlation in the underlying uncertainty components and quantifies the role of both energy- and reaction-dependent correlations in a sample application that addresses the damage metrics relevant to silicon semiconductors.
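
As a minimal illustration of why such correlations matter (a generic sketch, not the paper's Total Monte Carlo implementation), consider propagating two correlated uncertain inputs into a weighted integral metric:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical group-wise damage cross sections with 10% absolute
    # uncertainty; the integral metric is a fluence-weighted sum of the two.
    mean = np.array([1.0, 1.0])
    sigma = np.array([0.1, 0.1])
    weights = np.array([0.5, 0.5])

    for rho in (0.0, 0.9):  # uncorrelated vs strongly correlated inputs
        cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])
        samples = rng.multivariate_normal(mean, cov, size=100_000)
        metric = samples @ weights
        print(f"rho = {rho}: metric std = {metric.std():.4f}")

    # Positive correlation nearly doubles the spread of the summed metric
    # relative to the uncorrelated case, which is the effect the paper
    # quantifies with reaction- and energy-dependent correlations.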

More Details

Use of comparative vacuum monitoring sensors for automated, wireless health monitoring of bridges and infrastructure

Maintenance, Safety, Risk, Management and Life-Cycle Performance of Bridges - Proceedings of the 9th International Conference on Bridge Maintenance, Safety and Management, IABMAS 2018

Roach, Dennis P.

Economic barriers to the replacement of bridges and other civil structures have created an aging infrastructure and placed greater demands on the deployment of effective and rapid health monitoring methods. To gain access for inspections, structural elements and sealant must be removed, disassembly processes must be completed, and personnel must be transported to remote locations. Reliable Structural Health Monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for specific maintenance actions. They can reduce the costs associated with the increasing maintenance and surveillance needs of aging structures. The use of in-situ sensors, coupled with remote interrogation, can be employed to overcome a myriad of inspection impediments stemming from accessibility limitations, complex geometries, the location of hidden damage, and the isolated location of the structure. Furthermore, prevention of unexpected flaw growth and structural failure could be improved if on-board SHM systems were used to regularly, or even continuously, assess structural integrity. A research program was completed to develop and validate Comparative Vacuum Monitoring (CVM) sensors for crack detection. Sandia National Labs, in conjunction with private industry and the U.S. Department of Transportation, completed a series of CVM validation and certification programs aimed at establishing the overall viability of these sensors for monitoring bridge structures. Factors that affect SHM sensitivity include flaw size, shape, orientation, and location relative to the sensors, along with operational environments. Statistical methods using one-sided tolerance intervals were employed to derive Probability of Flaw Detection (POD) levels for typical application scenarios. Complementary multi-year field tests were also conducted to study the deployment and long-term operation of CVM sensors on aircraft and bridges. This paper presents the quantitative crack detection capabilities of the CVM sensor, its performance in actual operating environments, and the prospects for structural health monitoring applications on a wide array of civil structures.
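
As a simplified stand-in for the one-sided statistical bounds described above (the program used one-sided tolerance intervals; this hypothetical sketch uses a one-sided binomial confidence bound on hit/miss data):

    from scipy.stats import beta

    def pod_lower_bound(detections: int, trials: int, confidence: float = 0.95) -> float:
        """One-sided Clopper-Pearson lower confidence bound on probability
        of detection from hit/miss trials; 29 of 29 detections recovers the
        classic 90/95 POD demonstration point."""
        if detections == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

    print(pod_lower_bound(29, 29))  # ~0.90 at 95% confidence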

More Details

Microchannel heat exchanger flow validation study

Proceedings of the ASME Turbo Expo

Lance, Blake; Carlson, Matthew

Flow maldistribution in microchannel heat exchangers (MCHEs) can negatively impact heat exchanger effectiveness. Several rules of thumb exist about designing for uniform flow, but very little data are published to support these claims. In this work, complementary experiments and computational fluid dynamics (CFD) simulations of MCHEs enable a solid understanding of flow uniformity to a higher level of detail than previously seen. Experiments provide a validation data source to assess CFD predictive capability. The traditional semi-circular header geometry is tested. Experiments are carried out in a clear acrylic MCHE, and water flow is measured optically with particle image velocimetry. CFD boundary conditions are matched to those in the experiment, and the outputs, specifically velocity and turbulent kinetic energy profiles, are compared.

More Details

Self-assembly/disassembly of giant double-hydrophilic polymersomes at biologically-relevant pH

Chemical Communications

Shin, Sun H.; Mcaninch, Patrick T.; Henderson, Ian M.; Gomez, Andrew G.; Greene, Adrienne C.; Carnes, Eric C.; Paxton, Walter F.

Self-assembled giant polymer vesicles prepared from the double-hydrophilic diblock copolymer poly(ethylene oxide)-b-poly(acrylic acid) (PEO-PAA) show significant degradation in response to pH changes. Because of the switching behavior of the diblock copolymers in biologically-relevant pH environments (2 to 9), these polymer vesicles have potential biomedical applications as smart delivery vehicles.

More Details

Compiler-assisted source-to-source skeletonization of application models for system simulation

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Wilke, Jeremiah; Kenny, Joseph; Knight, Samuel; Rumley, Sebastien

Performance modeling of networks through simulation requires application endpoint models that inject traffic into the simulation models. Endpoint models today for system-scale studies consist mainly of post-mortem trace replay, but these off-line simulations may lack flexibility and scalability. On-line simulations running so-called skeleton applications run reduced versions of an application that generate traffic that is the same or similar to the full application. These skeleton apps have advantages for flexibility and scalability, but they often must be custom written for the simulator itself. Auto-skeletonization of existing application source code via compiler tools would provide endpoint models with minimal development effort. These source-to-source transformations have been only narrowly explored. We introduce a pragma language and corresponding Clang-driven source-to-source compiler that performs auto-skeletonization based on provided pragma annotations. We describe the compiler toolchain, validate the generated skeletons, and show scalability of the generated simulation models beyond 100 K endpoints for example MPI applications. Overall, we assert that our proposed auto-skeletonization approach and the flexible skeletons it produces can be an important tool in realizing balanced exascale interconnect designs.

More Details

Implementation and comparison of advanced friction representations within finite element models

Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics

Mathis, A.T.; Brink, Adam R.; Quinn, D.D.

Advanced friction models are often mathematically defined as nonlinear differential equations or complicated algebraic operations acting in single degree-of-freedom systems; however, such simplified conditions are not relevant to most design applications. As a result, current designers of practical structures typically simplify friction modeling to classical, Coulomb-like descriptions. In order to be viable for design purposes, friction models must be applicable to realistic structures and available in standard commercial codes. The goal of this work is to implement several different friction models into the commercial code, Abaqus, as user-defined contact models and to explore their properties in a dynamic simulation. A verification problem of interest to the joints community is utilized to evaluate efficacy. Several output quantities of the model will be presented and discussed, including frictional energy dissipation, amplitude, and frequency. The selected results are comparable to commonly observed experimental phenomena in mechanics of jointed structures.

More Details

Numerical model development and validation for the WECCCOMP control competition

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Tom, Nathan; Ruehl, Kelley M.; Ferri, Francesco

This paper details the development and validation of a numerical model of the Wavestar wave energy converter (WEC) developed in WEC-Sim. This numerical model was developed in support of the WEC Control Competition (WECCCOMP), a competition with the objective of maximizing WEC performance over costs through innovative control strategies. WECCCOMP has two stages: numerical implementation of control strategies, and experimental implementation. The work presented in this paper is for support of the numerical implementation, where contestants are provided a WEC-Sim model of the 1:20 scale Wavestar device to develop their control algorithms. This paper details the development of the numerical model in WEC-Sim and of its validation through comparison to experimental data.

More Details

Taxonomist: Application Detection Through Rich Monitoring Data

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Ates, Emre; Tuncer, Ozan; Turk, Ata; Leung, Vitus J.; Brandt, James M.; Egele, Manuel; Coskun, Ayse K.

Modern supercomputers are shared among thousands of users running a variety of applications. Knowing which applications are running in the system can bring substantial benefits: knowledge of applications that intensively use shared resources can aid scheduling; unwanted applications such as cryptocurrency mining or password cracking can be blocked; system architects can make design decisions based on system usage. However, identifying applications on supercomputers is challenging because applications are executed using esoteric scripts along with binaries that are compiled and named by users. This paper introduces a novel technique to identify applications running on supercomputers. Our technique, Taxonomist, is based on the empirical evidence that applications have different and characteristic resource utilization patterns. Taxonomist uses machine learning to classify known applications and also detect unknown applications. We test our technique with a variety of benchmarks and cryptocurrency miners, and also with applications that users of a production supercomputer ran during a 6-month period. We show that our technique achieves nearly perfect classification for this challenging data set.
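
A minimal sketch of the classify-and-flag idea, assuming hypothetical per-job summary features; this is a generic illustration, not the Taxonomist pipeline itself:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Hypothetical per-job feature vectors summarizing monitoring time series
    # (e.g., statistics of CPU, memory, and network counters per node).
    X_train = rng.normal(size=(200, 8))
    y_train = rng.integers(0, 4, size=200)  # four known application labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Flag a job as "unknown" when the classifier is not confident in any
    # known label, one simple way to surface novel applications.
    proba = clf.predict_proba(rng.normal(size=(1, 8)))[0]
    label = clf.classes_[proba.argmax()] if proba.max() >= 0.75 else "unknown"
    print(label)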

More Details

Assessing task-to-data affinity in the LLVM OpenMP runtime

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Klinkenberg, Jannis; Samfass, Philipp; Terboven, Christian; Duran, Alejandro; Klemm, Michael; Teruel, Xavier; Mateo, Sergi; Olivier, Stephen L.; Muller, Matthias S.

In modern shared-memory NUMA systems which typically consist of two or more multi-core processor packages with local memory, affinity of data to computation is crucial for achieving high performance with an OpenMP program. OpenMP 3.0 introduced support for task-parallel programs in 2008 and has continued to extend its applicability and expressiveness. However, the ability to support data affinity of tasks is missing. In this paper, we investigate several approaches for task-to-data affinity that combine locality-aware task distribution and task stealing. We introduce the task affinity clause that will be part of OpenMP 5.0 and provide the reasoning behind its design. Evaluation with our experimental implementation in the LLVM OpenMP runtime shows that task affinity improves execution performance up to 4.5x on an 8-socket NUMA machine and significantly reduces runtime variability of OpenMP tasks. Our results demonstrate that a variety of applications can benefit from task affinity and that the presented clause is closing the gap of task-to-data affinity in OpenMP 5.0.

More Details

Investigations of fluid flow in fractured crystalline rocks at the Mizunami Underground Research Laboratory

2nd International Discrete Fracture Network Engineering Conference, DFNE 2018

Hadgu, Teklu; Kalinina, Elena A.; Wang, Yifeng; Ozaki, Y.; Iwatsuki, T.

Experimental hydrology data from the Mizunami Underground Research Laboratory in Central Japan have been used to develop a site-scale fracture model and a flow model for the study area. The discrete fracture network model was upscaled to a continuum model for use in flow simulations. A flow model was developed centered on the research tunnel, using a highly refined regular mesh. In this study, the development and utilization of the model are presented. The modeling analysis used permeability and porosity fields from the discrete fracture network model, as well as a homogeneous model using fixed values of permeability and porosity. The simulations were designed to reproduce the hydrology of the modeling area and to predict inflow of water into the research tunnel during excavation. Modeling results were compared with the project hydrology data. Successful matching of the experimental data was obtained for simulations based on the discrete fracture network model.

More Details

Offshore wind sediment stability evaluation framework

Proceedings of the Annual Offshore Technology Conference

Jones, Craig; Mcwilliams, Sam; Engelmann, Georg; Thurlow, Aimee; Roberts, Jesse D.

Developing sound methods to evaluate risk of seabed mobility and alteration of sediment transport patterns in the near-shore coastal regions due to the presence of Offshore Wind (OW) infrastructure is critical to project planning, permitting, and operations. OW systems may include seafloor foundations, cabling, floating structures with gravity anchors, or a combination of several of these systems. Installation of these structures may affect the integrity of the sediment bed, thus affecting seabed dynamics and stability. It is therefore necessary to evaluate hydrodynamics and seabed dynamics and the effects of OW subsea foundations and cables on sediment transport. A methodology is presented here to map a site's sediment (seabed) stability, which can in turn support the evaluation of the potential for these processes to affect OW deployments and the local ecology. Sediment stability risk maps are developed for a site offshore of Central Oregon. A combination of geophysical site characterization, metocean analysis, and numerical modeling is used to develop a quantitative assessment of local scour and overall seabed stability. The findings generally show the presence of structures reduces the sediment transport in the lee area of the array by altering current and wave fields. The results illustrate how the overall regional patterns of currents and waves influence local scour near pilings and cables.

More Details

Development and validation of a fracture model for the granite rocks at Mizunami Underground Research Laboratory, Japan

2nd International Discrete Fracture Network Engineering Conference, DFNE 2018

Kalinina, Elena A.; Hadgu, Teklu; Wang, Yifeng; Ozaki, Y.; Iwatsuki, T.

The Mizunami Underground Research Laboratory is located in the Tono area (Central Japan). Its main purpose is providing a scientific basis for the research and development of technologies needed for deep geological disposal of radioactive waste in fractured crystalline rocks. The current work is focused on the experiments in the research tunnel (500 m depth). The collected tunnel and borehole data were shared with the participants of DEvelopment of COupled models and their VALidation against EXperiments (DECOVALEX) project. This study describes how these data were used to (1) develop the fracture model of the granite rocks around the research tunnel and (2) validate the model.

More Details

A Stable Low Frequency Time Domain EFIE with Weighted Continuity Equation

2018 IEEE Antennas and Propagation Society International Symposium and USNC/URSI National Radio Science Meeting, APSURSI 2018 - Proceedings

Roth, Thomas E.; Chew, Weng C.

A new time domain electric field integral equation is proposed to solve low frequency problems. This new formulation uses the current and charge densities as unknowns, with a form of the continuity equation that is weighted by a Green's function as a second constraining equation. This equation can be derived from a scalar potential equivalence principle integral equation, which is in contrast to the traditional strong form of the continuity equation that has been used in an ad-hoc manner in the augmented EFIE. Numerical results demonstrate the improved stability of this approach, as well as the accuracy at low frequencies.

More Details

Time-resolved digital in-line holography and pyrometry for aluminized solid rocket propellants

Optics InfoBase Conference Papers

Mazumdar, Yi C.; Heyborne, Jeffery D.; Guildenbecher, Daniel

Combustion of aluminum droplets in solid rocket propellants is studied using laser diagnostic techniques. The time-resolved droplet velocity, temperature, and size are measured using high speed digital in-line holography and imaging pyrometry at 20 kHz.

More Details

Ignition and damage thresholds of materials at extreme incident radiative heat flux

2018 Joint Thermophysics and Heat Transfer Conference

Engerer, Jeffrey D.; Brown, Alexander L.; Christian, Josh

Intense, dynamic radiant heat loads damage and ignite many common materials, but are outside the scope of typical fire studies. Explosive, directed-energy, and nuclear-weapon environments subject materials to this regime of extreme heating. The Solar Furnace at the National Solar Thermal Test Facility simulated this environment for an extensive experimental study on the response of many natural and engineered materials. Solar energy was focused onto a spot (∼10 cm2 area) in the center of the tested materials, generating an intense radiant load (∼100–1,000 kW m−2) for approximately 3 seconds. Using video photography, the response of the material to the extreme heat flux was carefully monitored. The initiation times of various events were recorded, including charring, pyrolysis, ignition, and melting. These ignition and damage thresholds are compared to historical ignition results, predominantly for black, α-cellulose papers. Reexamination of the historical data indicates ignition behavior is predicted from simplified empirical models based on thermal diffusion. When normalized by the thickness and the thermal properties, ignition and damage thresholds exhibit comparable trends across a wide range of materials. This technique substantially reduces the complexity of the ignition problem, improving ignition models and experimental validation.

More Details

Survey of sensitivity analysis methods during the simulation of residual stresses in simple composite structures

33rd Technical Conference of the American Society for Composites 2018

Nelson, Stacy M.; Hanson, Alexander A.; Werner, Brian T.; Nelson, Kevin; Briggs, Timothy

Process-induced residual stresses occur in composite structures composed of dissimilar materials. As these residual stresses could result in fracture, their consideration when designing composite parts is necessary. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. Alternatively, it is possible for computational tools to predict potential residual stresses. Therefore, a process modeling methodology was developed and implemented into Sandia National Laboratories' SIERRA/Solid Mechanics code. This method requires the specification of many model parameters to form accurate predictions. These parameters, which are related to the mechanical and thermal behaviors of the modeled composite material, can be determined experimentally, but at a potentially prohibitive cost. Furthermore, depending upon a composite part's specific geometric and manufacturing process details, it is possible that certain model parameters may have an insignificant effect on the simulated prediction. Therefore, to streamline the material characterization process, formal parameter sensitivity studies can be applied to determine which of the required input parameters are truly relevant to the simulated prediction. Then, only those model parameters found to be critical will require rigorous experimental characterization. Numerous sensitivity analysis methods exist in the literature, each offering specific strengths and weaknesses. Therefore, the objective of this study is to compare the performance of several accepted sensitivity analysis methods during the simulation of a bi-material composite strip's manufacturing process. The examined sensitivity analysis methods include both simple techniques, such as Monte Carlo and Latin Hypercube sampling, as well as more sophisticated approaches, such as the determination of Sobol indices via a polynomial chaos expansion or a Gaussian process. The relative computational cost and critical parameter list are assessed for each of the examined methods, and conclusions are drawn regarding the ideal sensitivity analysis approach for future residual stress investigations.
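
As a toy illustration of sampling-based sensitivity screening (the inputs, bounds, and model below are hypothetical, and a simple correlation measure stands in for the Sobol-index machinery discussed above):

    import numpy as np
    from scipy.stats import qmc

    # Toy stand-in for the residual-stress simulation: three hypothetical
    # inputs (CTE mismatch, cure temperature change, modulus ratio).
    def model(x):
        return 2.0 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

    sampler = qmc.LatinHypercube(d=3, seed=0)
    x = qmc.scale(sampler.random(n=500), [0.5, 50.0, 0.8], [1.5, 150.0, 1.2])
    y = model(x)

    # Rank inputs by a simple sampling-based sensitivity measure; the paper
    # compares this kind of screening against Sobol indices from polynomial
    # chaos or Gaussian-process surrogates.
    for i, name in enumerate(["cte_mismatch", "delta_T_cure", "modulus_ratio"]):
        r = np.corrcoef(x[:, i], y)[0, 1]
        print(f"{name}: |r| = {abs(r):.2f}")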

More Details

A dynamic coupled-code assessment of mitigation actions in an interfacing system loss of coolant accident

PSAM 2018 - Probabilistic Safety Assessment and Management

Jankovsky, Zachary K.; Denman, Matthew R.; Aldemir, Tunc

Containment bypass scenarios in nuclear power plants can lead to large early release of radionuclides. A residual heat removal (RHR) system interfacing system loss of coolant accident (ISLOCA) has the potential to cause a hazardous environment in the auxiliary building, a loss of coolant from the primary system, a pathway for early release of radionuclides, and the failure of a system important to safely shutting down the plant. Prevention of this accident sequence relies on active systems that may be vulnerable to cyber failures in new or retrofitted plants with digital instrumentation and control systems. RHR ISLOCA in a hypothetical pressurized water reactor is analyzed in a dynamic framework to evaluate the time-dependent effects of various uncertainties on the state of the nuclear fuel, the auxiliary building environment, and the release of radionuclides. The ADAPT dynamic event tree code is used to drive both the MELCOR severe accident analysis code and the RADTRAD dose calculation code to track the progression of the accident from the initiating event to its end states. The resulting data set is then mined for insights into key events and their impacts on the final state of the plant and radionuclide releases.

More Details

Sodium valve performance in the NaSCoRD database

PSAM 2018 - Probabilistic Safety Assessment and Management

Denman, Matthew R.; Stuart, Zacharia W.; Jankovsky, Zachary K.

Sodium Fast Reactors (SFRs) have an extensive operational history that can be leveraged to accelerate the licensing process for modern designs. Sandia National Laboratories (SNL) has recently reconstituted the United States SFR data from the Centralized Reliability Database Organization (CREDO) into a new modern database called the Sodium (Na) System Component Reliability Database (NaSCoRD). This new database is currently undergoing validation and usability testing to better understand the strengths and limitations of the historical data. The most common class of equipment found in the NaSCoRD database is valves. NaSCoRD contains a record of over 4,000 valves that have operated in EBR-II, FFTF, and test loops including those operated by Westinghouse and the Energy Technology Engineering Center. Valve failure events in NaSCoRD can be categorized by working fluid (e.g., sodium, water, gas), valve type (e.g., butterfly, check, throttle, block), failure mode (e.g., failure to open, failure to close, rupture), operating facility, operating temperature, or other user-defined categories. Sodium valve reliability estimates will be presented in comparison to estimates provided in historical studies. The impacts of EG&G Idaho’s suggested corrections and various prior distributions on these reliability estimates will also be presented.
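
A minimal sketch of the kind of prior-sensitivity check described above, using a conjugate gamma-Poisson model with made-up failure counts (not actual NaSCoRD data):

    from scipy.stats import gamma

    # Hypothetical record: 12 failure events over 3.0e6 valve-hours.
    failures, exposure = 12, 3.0e6

    # With a Gamma(a0, rate b0) prior on the failure rate, the posterior is
    # Gamma(a0 + failures, rate b0 + exposure); compare two priors.
    for a0, b0, label in [(0.5, 0.0, "Jeffreys"), (1.0, 1.0e5, "informative")]:
        a, b = a0 + failures, b0 + exposure
        post = gamma(a, scale=1.0 / b)
        print(f"{label}: mean = {post.mean():.2e}/hr, 90% interval = {post.interval(0.90)}")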

More Details

Parametric Analysis of Vertically Oriented Metamaterials for Wideband Omnidirectional Perfect Absorption

2018 IEEE Antennas and Propagation Society International Symposium and USNC/URSI National Radio Science Meeting, APSURSI 2018 - Proceedings

Pung, Aaron J.; Goldflam, Michael; Burckel, David B.; Brener, Igal; Sinclair, Michael B.; Campione, Salvatore

Metamaterials provide a means to tailor the spectral response of a surface. Given the periodic nature of the metamaterial, proper design of the unit cell requires intimate knowledge of the parameter space for each design variable. We present a detailed study of the parameter space surrounding vertical split-ring resonators and planar split-ring resonators, and demonstrate widening of the perfect-absorption bandwidth based on this understanding.

More Details

Nuclear facility safety enhancement using Sandia National Laboratories’ computer codes

International Conference on Nuclear Engineering, Proceedings, ICONE

Foulk, James W.

This paper describes an ongoing study of nuclear facility safety enhancement using Sandia National Laboratories’ (SNL) computer codes, supported by the U.S. Department of Energy (DOE) Nuclear Safety Research and Development (NSR&D) Program. Continued DOE NSR&D support since 2014 has allowed the use of the SNL engineering code suite (SIERRA Mechanics) to further substantiate data in the DOE Handbook published in 1994: DOE-HDBK-3010-94, “Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities.” The use of SIERRA codes allows for a better understanding of the mechanics, dynamics, chemistry, and overall physics of airborne release scenarios. SIERRA codes provide insights into the contributing phenomena of source term releases from events such as liquid fires. The 1994 Handbook documents small-scale, bench-top, and limited experiments involving liquid fires, powder spills, pressurized releases, and mechanical insult-induced fragmentation scenarios. Data recorded from these scenarios have been substantiated using SIERRA solid mechanics and fluid mechanics codes. Data passed among coupled multi-physics SIERRA codes predicted the contaminant release from a drum rupture due to fire, even though no experimental data are available for that scenario. In the anticipated DOE revision of the Handbook, these computational capabilities could broaden the applicability of the data and provide greater confidence in safety analyses. SIERRA codes can provide the initial source term (ST) for leak path factor (LPF) analyses, which predict the ST release out of the facility. Typical LPF analysis is done using the MELCOR code, developed at SNL for the U.S. Nuclear Regulatory Commission. Widely used in nuclear reactor applications, MELCOR is a toolbox safety code in DOE’s central registry for LPF applications. A recent LPF guidance study by SNL indicated that MELCOR 2.1, along with updated guidance, should replace the obsolete MELCOR 1.8.5 guidance. The new guidance is significantly improved over the previous guidance, drawing on extensive MELCOR validation, including applicable reactor experiments and the experiments described in DOE-HDBK-3010-94 for LPF applications. The latest version of MELCOR should be included in DOE’s central registry and should be used by safety analysts for LPF analyses.

More Details

Pyomo.GDP: Disjunctive Models in Python

Computer Aided Chemical Engineering

Chen, Qi; Johnson, Emma S.; Siirola, John D.; Grossmann, Ignacio E.

In this work, we describe new capabilities for the Pyomo.GDP modeling environment, moving beyond classical reformulation approaches to include non-standard reformulations and a new logic-based solver, GDPopt. Generalized Disjunctive Programs (GDPs) address optimization problems involving both discrete and continuous decision variables. For difficult problems, advanced reformulations such as the disjunctive “basic step” to intersect multiple disjunctions or the use of procedural reformulations may be necessary. Complex nonlinear GDP models may also be tackled using logic-based outer approximation. These expanded capabilities highlight the flexibility that Pyomo.GDP offers modelers in applying novel strategies to solve difficult optimization problems.
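
A minimal example of the disjunction-then-reformulate workflow that Pyomo.GDP supports; the toy model and the commented solver choice are illustrative:

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               TransformationFactory, maximize)
    from pyomo.gdp import Disjunct, Disjunction

    m = ConcreteModel()
    m.x = Var(bounds=(0, 10))
    m.y = Var(bounds=(0, 10))

    # Either operating mode A (tight coupling of x and y) or mode B applies.
    m.mode_a = Disjunct()
    m.mode_a.c = Constraint(expr=m.x + m.y <= 4)
    m.mode_b = Disjunct()
    m.mode_b.c = Constraint(expr=m.x - m.y >= 2)
    m.modes = Disjunction(expr=[m.mode_a, m.mode_b])

    m.obj = Objective(expr=m.x + 0.5 * m.y, sense=maximize)

    # Classical big-M reformulation to a MILP; 'gdp.hull' is the tighter alternative.
    TransformationFactory("gdp.bigm").apply_to(m)
    # SolverFactory("glpk").solve(m)  # any MILP solver can now be applied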

More Details

Energy implications of torque feedback control and series elastic actuators for mobile robots

ASME 2018 Dynamic Systems and Control Conference, DSCC 2018

Buerger, Stephen P.; Mazumdar, Anirban; Spencer, Steven J.

Torque feedback control and series elastic actuators are widely used to enable compact, highly-geared electric motors to provide low and controllable mechanical impedance. While these approaches provide certain benefits for control, their impact on system energy consumption is not widely understood. This paper presents a model for examining the energy consumption of drivetrains implementing various target dynamic behaviors in the presence of gear reductions and torque feedback. Analysis of this model reveals that under cyclical motions for many conditions, increasing the gear ratio results in greater energy loss. A similar model is presented for series elastic actuators and used to determine the energy consequences of various spring stiffness values. Both models enable the computation and optimization of power based on specific hardware manifestations, and illustrate how energy consumption sometimes defies conventional best-practices. Results of evaluating these two topologies as part of a drivetrain design optimization for two energy-efficient electrically driven humanoids are summarized. The model presented enables robot designers to predict the energy consequences of gearing and series elasticity for future robot designs, helping to avoid substantial energy sinks that may be inadvertently introduced if these issues are not properly analyzed.
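
The core energy argument can be illustrated with a generic geared-motor model; all parameter values below are hypothetical, and this is a sketch of the effect rather than the paper's model:

    import numpy as np

    # Sinusoidal joint motion driven through a gearbox of ratio N; winding
    # (I^2 R) loss is computed from the motor-side torque.
    t = np.linspace(0.0, 2.0, 2000)
    w = 2.0 * np.pi                           # 1 Hz cyclic motion
    qdd = -0.5 * w**2 * np.sin(w * t)         # joint acceleration, rad/s^2
    tau_load = 5.0 * np.sin(w * t)            # load torque at the joint, N*m

    J_m, R, kt = 1e-4, 0.5, 0.05              # rotor inertia, resistance, torque constant

    for N in (10, 50, 200):
        tau_m = J_m * N * qdd + tau_load / N  # motor-side torque
        current = tau_m / kt
        loss = (R * current**2).mean() * (t[-1] - t[0])  # Joule energy over the cycle
        print(f"N = {N:3d}: winding loss = {loss:6.1f} J")

    # Losses are minimized near the ratio where reflected rotor-inertia torque
    # balances the reflected load; pushing N higher increases energy loss,
    # consistent with the trend discussed above.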

More Details

A General Framework for Sensitivity-Based Optimal Control and State Estimation

Computer Aided Chemical Engineering

Thierry, David; Nicholson, Bethany L.; Biegler, Lorenz

New modelling and optimization platforms have enabled the creation of frameworks for solution strategies that are based on solving sequences of dynamic optimization problems. This study demonstrates the application of the Python-based Pyomo platform as a basis for formulating and solving Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE) problems, which enables fast on-line computations through large-scale nonlinear optimization and Nonlinear Programming (NLP) sensitivity. We describe these underlying approaches and sensitivity computations, and showcase the implementation of the framework with large DAE case studies including tray-by-tray distillation models and Bubbling Fluidized Bed Reactors (BFB).
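
A minimal Pyomo/Pyomo.DAE sketch of the dynamic-optimization pattern that such NMPC/MHE formulations build on; the dynamics here are a toy first-order system, not the paper's distillation or BFB models:

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               TransformationFactory)
    from pyomo.dae import ContinuousSet, DerivativeVar

    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0.0, 1.0))
    m.x = Var(m.t)                        # state
    m.u = Var(m.t, bounds=(-1.0, 1.0))    # control
    m.dxdt = DerivativeVar(m.x, wrt=m.t)

    m.x[0].fix(1.0)  # initial condition

    # Toy dynamics: dx/dt = -x + u
    m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])

    # Drive the terminal state toward zero.
    m.obj = Objective(expr=m.x[1.0] ** 2)

    # Discretize with orthogonal collocation, then hand off to an NLP solver.
    TransformationFactory("dae.collocation").apply_to(m, nfe=20, ncp=3)
    # SolverFactory("ipopt").solve(m)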

More Details

A Compact Beam-Forming Network for Switched-Beam Arrays

2018 IEEE Antennas and Propagation Society International Symposium and USNC/URSI National Radio Science Meeting, APSURSI 2018 - Proceedings

Young, Matthew W.

A Butler-matrix-inspired beam-forming network has been developed to provide phasing for a switched-beam 2×2 element antenna array. The network uses an arrangement of double-box quadrature hybrids to achieve wide instantaneous bandwidth in a small, planar form factor. The planar feed structure has been designed to integrate with an array aperture to form a low-profile array stackup.

More Details

Results and correlations from analyses of the ENSA ENUN 32P cask transport tests

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Kalinina, Elena A.; Gordon, Natalie; Ammerman, Douglas; Uncapher, William L.; Saltzstein, Sylvia J.; Wright, Catherine

An ENUN 32P cask supplied by Equipos Nucleares S.A. (ENSA) was transported 9,600 miles by road, sea, and rail in 2017 in order to collect shock and vibration data on the cask system and surrogate spent fuel assemblies within the cask. The task of examining 101,857 ASCII data files – 6.002 terabytes of data (this includes binary and ASCII files) – has begun. Some results of preliminary analyses are presented in this paper. A total of seventy-seven accelerometers and strain gauges were attached by Sandia National Laboratories (SNL) to three surrogate spent fuel assemblies, the cask basket, the cask body, the transport cradle, and the transport platforms. The assemblies were provided by SNL, Empresa Nacional de Residuos Radiactivos, S.A. (ENRESA), and a collaboration of Korean institutions. The cask system was first subjected to cask handling operations at the ENSA facility. The cask was then transported by heavy-haul truck in northern Spain and shipped from Spain to Belgium and subsequently to Baltimore on two roll-on/roll-off ships. From Baltimore, the cask was transported by rail using a 12-axle railcar to the American Association of Railroads’ Transportation Technology Center, Inc. (TTCI) near Pueblo, Colorado, where a series of special rail tests were performed. Data were continuously collected during this entire sequence of multi-modal transportation events. (We did not collect data on the transfer between modes of transportation.) Of particular interest – indeed the original motivation for these tests – are the strains measured on the zirconium-alloy tubes in the assemblies. The strains for each of the transport modes are compared to the yield strength of irradiated Zircaloy to illustrate the margin against rod failure during normal conditions of transport. The accelerometer data provides essential comparisons of the accelerations on the different components of the cask system, exhibiting both amplification and attenuation of the accelerations from the transport platforms through the cradle and cask and up to the interior of the cask. These data are essential for modeling cask systems. This paper concentrates on analyses of the testing of the cask on a 12-axle railcar at TTCI.

More Details

Blind prediction of the response of an additively manufactured tensile test coupon loaded to failure

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Gilkey, Lindsay N.; Bignell, John; Dingreville, Remi; Sanborn, Scott E.; Jones, Chris A.

Sandia National Laboratories (SNL) conducted in the summer of 2017 its third fracture challenge (i.e., the Third Sandia Fracture Challenge or SFC3). The challenge, which was open to the public, asked participants to predict, without foreknowledge of the outcome, the fracture response of an additively manufactured tensile test coupon of moderate geometric complexity when loaded to failure. This paper outlines the approach taken by our team, one of the SNL teams that participated in the challenge, to make a prediction. To do so, we employed a traditional finite element approach coupled with a continuum damage mechanics constitutive model. Constitutive model parameters were determined through a calibration process of the model response with the provided longitudinal and transverse tensile test coupon data. Comparisons of model predictions with the challenge coupon test results are presented, and general observations gleaned from the exercise are provided.

More Details

Multi-objective optimization for coupled mechanics-dynamics analyses of composite structures

33rd Technical Conference of the American Society for Composites 2018

Skulborstad, Alyssa J.; Nelson, Stacy M.

Fiber reinforced composites are increasingly used in advanced applications due to advantageous qualities including high strength-to-weight ratio. The ability to tailor composite structures to meet specific performance criteria is particularly desirable. In practice, designs must often balance multiple objectives with conflicting behavior. The objectives of this work were to optimize the lamina orientations of a three-ply carbon fiber reinforced composite structure for the coupled solid mechanics and dynamics considerations of minimizing maximum principal stress while maximizing fundamental frequency. Two approaches were investigated: Pareto set optimization (PSO) and a multi-objective genetic algorithm (MOGA). In PSO, a single objective function is constructed as a weighted sum of multiple objective terms. Multiple weighting sets are evaluated to determine a Pareto set of solutions. MOGA mimics evolutionary principles, where the best design points populate subsequent generations. Instead of weight factors, MOGA uses a domination count that ranks population members. Results showed both methods converged to solutions along the same Pareto front. The PSO method required fewer function evaluations, but provided many fewer final data points. Above a certain threshold, MOGA provides more solutions with fewer calculations. The PSO method requires more user intervention, which may introduce bias, but can largely be run in parallel. In contrast, MOGA generations are evaluated in series. The Pareto front showed the trend of increasing frequency with increasing stress. At the low stress and frequency extreme, the stacking sequence tended toward (45°/90°/45°), with maximum principal stress located in the inner ply in the hoop direction. At high stress and frequency, the stacking sequences (90°/∗/90°) indicated that the middle ply orientation was less significant. A mesh convergence study and dynamic validation experiments gave confidence to the computational model. Future work will include an uncertainty quantification about selected solutions. The final selected solution will be fabricated and experimental validation testing will be conducted.

More Details

Denoising 400-kHz “postage-stamp PIV” using uncertainty quantification

AIAA Aerospace Sciences Meeting, 2018

Beresh, Steven J.

A new approach to denoising Time-Resolved Particle Image Velocimetry data is proposed by incorporating measurement uncertainties estimated using the correlation statistics method. The denoising algorithm of Oxlade et al. (Experiments in Fluids, 2012) has been modified to add the frequency dependence of PIV noise by obtaining it from the uncertainty estimates, including the correlated term between velocity and uncertainty that is zero only if white noise is assumed. Although the present approach was only partially effective in denoising the 400-kHz “postage-stamp PIV” data, important and novel insights were obtained into the behavior of PIV uncertainty. The belief that PIV noise is white noise has been shown to be inaccurate, though it may serve as a reasonable approximation for measurements with a high dynamic range. Noise spectra take a similar shape to the velocity spectra because increased velocity fluctuations correspond to higher shear and therefore increased uncertainty. Coherence functions show that correlation between velocity fluctuations and uncertainty is strongest at low and mid frequencies, tapering to a much weaker correlation at high frequencies where turbulent scales are small with lower shear magnitudes.

More Details

Solution Approaches to Stochastic Programming Problems under Endogenous and/or Exogenous Uncertainties

Computer Aided Chemical Engineering

Cremaschi, Selen; Siirola, John D.

Optimization problems under uncertainty involve making decisions without the full knowledge of the impact the decisions will have and before all the facts relevant to those decisions are known. These problems are common, for example, in process synthesis and design, planning and scheduling, supply chain management, and generation and distribution of electric power. The sources of uncertainty in optimization problems fall into two broad categories: endogenous and exogenous. Exogenous uncertain parameters are realized at a known stage (e.g., time period or decision point) in the problem irrespective of the values of the decision variables. For example, demand is generally considered to be independent of any capacity expansion decisions in process industries, and hence, is regarded as an exogenous uncertain parameter. In contrast, decisions impact endogenous uncertain parameters. The impact can either be in the resolution or in the distribution of the uncertain parameter. The realized values of a Type-I endogenous uncertain parameter are affected by the decisions. An example of this type of uncertainty would be a facility protection problem, where the likelihood of a facility failing to deliver goods or services after a disruptive event depends on the level of resources allocated as protection to that facility. On the other hand, only the realization times of Type-II endogenous uncertain parameters are affected by decisions. For example, in a clinical trial planning problem, whether a clinical trial is successful or not is only realized after the clinical trial has been completed, and whether the clinical trial is successful or not is not impacted by when the clinical trial is started. There are numerous approaches to modelling and solving optimization problems with exogenous and/or endogenous uncertainty, including (adjustable) robust optimization, (approximate) dynamic programming, model predictive control, and stochastic programming. Stochastic programming is a particularly attractive approach, as there is a straightforward translation from the deterministic model to the stochastic equivalent. The challenge with stochastic programming arises through the rapid, sometimes exponential, growth in the program size as we sample the uncertainty space or increase the number of recourse stages.

In this talk, we will give an overview of our research activities developing practical stochastic programming approaches to problems with exogenous and/or endogenous uncertainty. We will highlight several examples from power systems planning and operations, process modelling, synthesis and design optimization, artificial lift infrastructure planning for shale gas production, and clinical trial planning. We will begin by discussing the straightforward case of exogenous uncertainty. In this situation, the stochastic program can be expressed completely by a deterministic model, a scenario tree, and the scenario-specific parameterizations of the deterministic model. Beginning with the deterministic model, modelers create instances of the deterministic model for each scenario using the scenario-specific data. Coupling the scenario models occurs through the addition of nonanticipativity constraints, equating the stage decision variables across all scenarios that pass through the same stage node in the scenario tree.
Modelling tools like PySP (Watson, 2012) greatly simplify the process of composing large stochastic programs by beginning either with an abstract representation of the deterministic model written in Pyomo (Hart et al., 2017) and scenario data, or a function that will return the deterministic Pyomo model for a specific scenario. PySP can automatically create the extensive form (deterministic equivalent) model from a general representation of the scenario tree. The challenge with large-scale stochastic programs with exogenous uncertainty arises through managing the growth of the problem size. Fortunately, there are several well-known approaches to decomposing the problem, both stage-wise (e.g., Benders’ decomposition) and scenario-based (e.g., Lagrangian relaxation or Progressive Hedging), enabling the direct solution of stochastic programs with hundreds or thousands of scenarios. We will then discuss developments in modelling and solving stochastic programs with endogenous uncertainty. These problems are significantly more challenging both to pose and to solve, due to the exponential growth in scenarios required to cover the decision-dependent uncertainties relative to the number of stages in the problem. In this situation, standardized frameworks for expressing stochastic programs do not exist, requiring a modeler to explicitly generate the representations and nonanticipativity constraints. Further, the size of the resulting scenario space (frequently exceeding millions of scenarios) precludes the direct solution of the resulting program. In this case, numerous decomposition algorithms and heuristics have been developed (e.g., Lagrangean decomposition-based algorithms (Tarhan et al., 2013) or knapsack-based decomposition algorithms (Christian and Cremaschi, 2015)).
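
As a concrete, toy illustration of the extensive form with nonanticipativity described above, assuming a two-scenario capacity problem written directly in Pyomo rather than through PySP:

    from pyomo.environ import ConcreteModel, Var, Objective, Constraint, NonNegativeReals

    demand = {"low": 6.0, "high": 10.0}   # exogenous demand scenarios
    prob = {"low": 0.5, "high": 0.5}

    m = ConcreteModel()
    S = list(demand)
    m.cap = Var(S, within=NonNegativeReals)    # first-stage copy per scenario
    m.sell = Var(S, within=NonNegativeReals)   # second-stage recourse

    m.limit = Constraint(S, rule=lambda m, s: m.sell[s] <= m.cap[s])
    m.market = Constraint(S, rule=lambda m, s: m.sell[s] <= demand[s])

    # Nonanticipativity: the first-stage decision is identical across scenarios.
    m.nonant = Constraint(expr=m.cap["low"] == m.cap["high"])

    # Minimize expected cost: capacity costs 3/unit, sales earn 5/unit.
    m.obj = Objective(expr=sum(prob[s] * (3.0 * m.cap[s] - 5.0 * m.sell[s]) for s in S))
    # SolverFactory("glpk").solve(m)  # optimal here: cap = 6 in both scenarios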

More Details

Bulk Handling Facility Modeling and Simulation for Safeguards Analysis

Science and Technology of Nuclear Installations

Cipiti, Benjamin B.

The Separation and Safeguards Performance Model (SSPM) uses MATLAB/Simulink to provide a tool for safeguards analysis of bulk handling nuclear processing facilities. Models of aqueous and electrochemical reprocessing, enrichment, fuel fabrication, and molten salt reactor facilities have been developed to date. These models are used for designing the overall safeguards system, examining new safeguards approaches, virtually testing new measurement instrumentation, and analyzing diversion scenarios. The key metrics generated by the models include overall measurement uncertainty and detection probability for various material diversion or facility misuse scenarios. Safeguards modeling allows for rapid and cost-effective analysis for Safeguards by Design. The models are currently being used to explore alternative safeguards approaches, including more reliance on process monitoring data to reduce the need for destructive analysis that adds considerable burden to international safeguards. Machine learning techniques are being applied, but these techniques need large amounts of data for training and testing the algorithms. The SSPM can provide that training data. This paper will describe the SSPM and its use for applying both traditional nuclear material accountancy and newer machine learning options.

More Details

An overview of the water network tool for resilience (WNTR)

1st International Wdsa Ccwi 2018 Joint Conference

Klise, Katherine A.; Murray, Regan; Haxton, Terranna

Drinking water systems face multiple challenges, including aging infrastructure, water quality concerns, uncertainty in supply and demand, natural disasters, environmental emergencies, and cyber and terrorist attacks. All of these incidents have the potential to disrupt a large portion of a water system, causing damage to critical infrastructure, threatening human health, and interrupting service to customers. Recent incidents, including the floods and winter storms in the southern United States, highlight vulnerabilities in water systems and the need to minimize service loss. Simulation and analysis tools can help water utilities better understand how their system would respond to a wide range of disruptive incidents and inform planning to make systems more resilient over time. The Water Network Tool for Resilience (WNTR) is a new open source Python package designed to meet this need. WNTR integrates hydraulic and water quality simulation, a wide range of damage and response options, and resilience metrics into a single software framework, allowing for end-to-end evaluation of water network resilience. WNTR includes capabilities to 1) generate and modify water network structure and operations, 2) simulate disaster scenarios, 3) model response and repair strategies, 4) simulate pressure-dependent demand and demand-driven hydraulics, 5) simulate water quality, 6) calculate resilience metrics, and 7) visualize results. These capabilities can be used to evaluate resilience of water distribution systems to a wide range of hazards and to prioritize resilience-enhancing actions. Furthermore, the flexibility of the Python environment allows the user to easily customize analysis. For example, utilities can simulate a specific incident or run stochastic analysis for a range of probabilistic scenarios. The U.S. Environmental Protection Agency and Sandia National Laboratories are working with water utilities to ensure that WNTR can be used to efficiently evaluate resilience under different use cases. The software has been used to evaluate resilience under earthquake and power outage scenarios, run fire-fighting capacity and pipe criticality analysis, evaluate sampling and flushing locations, and prioritize repair strategies. This paper includes discussion on WNTR capabilities, use cases, and resources to help get new users started using the software. WNTR can be downloaded from the U.S. Environmental Protection Agency GitHub site at https://github.com/USEPA/WNTR. The GitHub site includes links to software documentation, software testing results, and contact information.
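
Getting started is short in practice; a minimal sketch (the .inp file path and the 20 m service threshold are illustrative) loads a network, runs a hydraulic simulation, and computes a simple service metric:

    import wntr

    # Load an EPANET-format network model and run a hydraulic simulation.
    wn = wntr.network.WaterNetworkModel("Net3.inp")  # path is illustrative
    sim = wntr.sim.EpanetSimulator(wn)
    results = sim.run_sim()

    # Fraction of junctions at or above a 20 m pressure threshold over time,
    # one simple serviceability measure among the metrics WNTR offers.
    pressure = results.node["pressure"][wn.junction_name_list]
    served = (pressure >= 20.0).mean(axis=1)
    print(served.head())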

More Details

Neural-Inspired Anomaly Detection

Springer Proceedings in Complexity

Verzi, Stephen J.; Vineyard, Craig M.; Aimone, James B.

Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.

More Details

Formation of low-volatility products in reactions of carbonyl oxide Criegee intermediates

15th Conference of the International Society of Indoor Air Quality and Climate, INDOOR AIR 2018

Caravan, Rebecca L.; Eskola, Arkke J.; Antonov, Ivan O.; Winiberg, Frank A.F.; Rotavera, Brandon; Ramasesha, Krupa; Sheps, Leonid; Osborn, David L.; Percival, Carl J.; Shallcross, Dudley E.; Taatjes, Craig A.

Direct kinetic and product studies of Criegee intermediates reveal insertion and addition mechanisms for multiple co-reactant species. Observation of these highly oxygenated, low-volatility products indicates the potential role of Criegee intermediate chemistry in molecular-weight growth and, subsequently, secondary organic aerosol formation.

More Details

Experimental and computational investigations of process-induced stress effects on the interlaminar fracture toughness of hybrid composites

33rd Technical Conference of the American Society for Composites 2018

Nelson, Stacy M.; Werner, Brian T.

Hybrid composites allow designers to develop efficient structures, which strategically exploit a material's strengths while mitigating possible weaknesses. However, elevated temperature curing processes and exposure to thermally-extreme service environments lead to the development of residual stresses. These stresses form at the hybrid composite's bi-material interfaces, significantly impacting the stress state at the crack tip of any pre-existing flaw within the structure and affecting the probability that small defects will grow into large-scale delaminations. Therefore, in this study, a carbon fiber reinforced composite (CFRP) is co-cured with a glass fiber reinforced composite (GFRP), and the mixed-mode fracture toughness is measured across a wide temperature range (-54°C to +71°C). Upon completion of the testing, the measured results and observations are used to develop high-fidelity finite element models simulating both the formation of residual stresses throughout the composite manufacturing process, as well as the mixed-mode testing of the hybrid composite. The stress fields predicted through simulation assist in understanding the trends observed during the completed experiments. Furthermore, the modeled predictions indicate that failure to account for residual stress effects during the analysis of composite structures could lead to non-conservative structural designs and premature failure.

More Details

Unimolecular decomposition kinetics of the stabilised Criegee intermediates CH2OO and CD2OO

Physical Chemistry Chemical Physics

Stone, Daniel; Au, Kendrew; Sime, Samantha; Medeiros, Diogo J.; Blitz, Mark; Seakins, Paul W.; Decker, Zachary; Sheps, Leonid

Decomposition kinetics of stabilised CH2OO and CD2OO Criegee intermediates have been investigated as a function of temperature (450-650 K) and pressure (2-350 Torr) using flash photolysis coupled with time-resolved cavity-enhanced broadband UV absorption spectroscopy. Decomposition of CD2OO was observed to be faster than CH2OO under equivalent conditions. Production of OH radicals following CH2OO decomposition was also monitored using flash photolysis with laser-induced fluorescence (LIF), with results indicating direct production of OH in the v = 0 and v = 1 states in low yields. Master equation calculations performed using the Master Equation Solver for Multi-Energy well Reactions (MESMER) enabled fitting of the barriers for the decomposition of CH2OO and CD2OO to the experimental data. Parameterisations of the decomposition rate coefficients, calculated by MESMER, are provided for use in atmospheric models, and implications of the results are discussed. For CH2OO, the MESMER fits require an increase in the calculated barrier height from 78.2 kJ mol^-1 to 81.8 kJ mol^-1 using a temperature-dependent exponential-down model for collisional energy transfer with 〈ΔE〉_down = 32.6 (T/298 K)^1.7 cm^-1 in He. The low- and high-pressure-limit rate coefficients are k_1,0 = 3.2 × 10^-4 (T/298)^-5.81 exp(-12770/T) cm^3 s^-1 and k_1,∞ = 1.4 × 10^13 (T/298)^0.06 exp(-10010/T) s^-1, with a median uncertainty of ∼12% over the range of experimental conditions used here. Extrapolation to atmospheric conditions yields k_1(298 K, 760 Torr) = (1.1 +1.5/-1.1) × 10^-3 s^-1. For CD2OO, MESMER calculations result in 〈ΔE〉_down = 39.6 (T/298 K)^1.3 cm^-1 in He and a small decrease in the calculated barrier to decomposition from 81.0 kJ mol^-1 to 80.1 kJ mol^-1. The fitted rate coefficients for CD2OO are k_2,0 = 5.2 × 10^-5 (T/298)^-5.28 exp(-11610/T) cm^3 s^-1 and k_2,∞ = 1.2 × 10^13 (T/298)^0.06 exp(-9800/T) s^-1, with an overall error of ∼6% over the present range of temperature and pressure. The extrapolated k_2(298 K, 760 Torr) = (5.5 +9.2/-5.5) × 10^-3 s^-1. The master equation calculations for CH2OO indicate decomposition yields of 63.7% for H2 + CO2, 36.0% for H2O + CO, and 0.3% for OH + HCO, with no significant dependence on temperature between 400 and 1200 K or pressure between 1 and 3000 Torr.
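
For orientation, the quoted parameterizations can be evaluated directly; a quick numerical check of the CH2OO high-pressure-limit expression at a mid-range experimental temperature:

    import math

    # k1_inf(T) = 1.4e13 * (T/298)^0.06 * exp(-10010/T) s^-1, from the fit above.
    T = 550.0  # K, mid-range of the 450-650 K experiments
    k1_inf = 1.4e13 * (T / 298.0) ** 0.06 * math.exp(-10010.0 / T)
    print(f"k1_inf({T:.0f} K) = {k1_inf:.2e} s^-1")  # roughly 1.8e5 s^-1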

More Details

Addressing technical and regulatory requirements to deploy structural health monitoring systems on commercial aircraft

31st Congress of the International Council of the Aeronautical Sciences, ICAS 2018

Roach, Dennis P.; Rice, Thomas M.

Multi-site fatigue damage, hidden cracks in hard-to-reach locations, disbonded joints, erosion, impact, and corrosion are among the major flaws encountered in today's extensive fleet of aging aircraft. The use of in-situ sensors for real-time health monitoring of aircraft structures, coupled with remote interrogation, provides a viable option to overcome inspection impediments stemming from accessibility limitations, complex geometries, and the location and depth of hidden damage. Reliable Structural Health Monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for human intervention. Prevention of unexpected flaw growth and structural failure can be improved if on-board health monitoring systems are used to continuously assess structural integrity. Such systems can detect incipient damage before catastrophic failure occurs. Other advantages of on-board distributed sensor systems are that they can eliminate costly and potentially damaging disassembly, improve sensitivity through optimum sensor placement, and decrease maintenance costs by eliminating time-consuming manual inspections. This paper presents the results from successful SHM technology validation efforts that established the performance of sensor systems for aircraft fatigue crack detection. Validation tasks were designed to address the SHM equipment, the health monitoring task, the resolution required, the sensor interrogation procedures, the conditions under which the monitoring will occur, and the potential inspector population. All factors that affect SHM sensitivity were included in this program, including flaw size, shape, orientation and location relative to the sensors, operational and environmental variables, and issues related to the presence of multiple flaws within a sensor network. This paper will also present the formal certification tasks, including formal adoption of SHM systems into aircraft manuals and the release of an Alternate Means of Compliance and a modified Service Bulletin to allow for routine use of SHM sensors on commercial aircraft. This program also established a regulatory approval process that includes FAR Part 25 (Transport Category Aircraft) and shows compliance with 25.571 (fatigue) and 25.1529 (Instructions for Continued Airworthiness).

More Details

Specializations in the sceptre code for charged-particle transport

20th Topical Meeting of the Radiation Protection and Shielding Division, RPSD 2018

Drumm, Clifton R.; Fan, Wesley C.; Pautz, Shawn D.

Charged particles present some unique challenges for radiation transport codes: their cross sections are extremely forward peaked and become very large in the limit of small energy transfer, and the resulting highly scattering transport causes slow convergence of the source iterations. The primary application of SCEPTRE is modeling radiation-driven electrical effects, so substantial effort has been invested in SCEPTRE for the efficient modeling of electron transport. This paper summarizes recent and ongoing activities involving the accurate deterministic-transport modeling of charged particles and the methods implemented to improve iterative convergence.

More Details

Compressive hyperspectral imaging using total variation minimization

Proceedings of SPIE - The International Society for Optical Engineering

Lee, Dennis J.; Shields, Eric A.

Compressive sensing shows promise for sensors that collect fewer samples than required by traditional Shannon-Nyquist sampling theory. Recent sensor designs for hyperspectral imaging encode light using spectral modulators such as spatial light modulators, liquid crystal phase retarders, and Fabry-Perot resonators. The hyperspectral imager consists of a filter array followed by a detector array. It encodes spectra with fewer measurements than the number of bands in the signal, making reconstruction an underdetermined problem. We propose a reconstruction algorithm for hyperspectral images encoded through spectral modulators. Our approach constrains pixels to be similar to their neighbors in space and wavelength, as natural images tend to vary smoothly, and it increases robustness to noise. It combines L1 minimization in the wavelet domain to enforce sparsity and total variation in the image domain for smoothness. The alternating direction method of multipliers (ADMM) simplifies the optimization procedure. Our algorithm constrains encoded, compressed hyperspectral images to be smooth in their reconstruction, and we present simulation results to illustrate our technique. This work improves the reconstruction of hyperspectral images from encoded, multiplexed, and sparse measurements.
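
As a rough illustration of sparsity-regularized reconstruction of this kind, the sketch below solves a toy underdetermined problem with a simple proximal-gradient (ISTA) loop standing in for the paper's ADMM solver, and with an identity sparsifying transform instead of wavelets plus total variation; all sizes and values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1(A, b, lam, n_iter=200):
    """Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient
    descent (ISTA); a simplified stand-in for the paper's ADMM solver."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy underdetermined problem: 20 encoded measurements of a 50-band spectrum
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.0, -0.5, 2.0]
x = ista_l1(A, A @ x_true, lam=0.01)
print(np.round(x[[5, 17, 33]], 2))   # approximately recovered spike amplitudes
```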

More Details

Orientation dependence of hydrogen accelerated fatigue crack growth rates in pipeline steels

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Song, Eun J.; Ronevich, Joseph

One of the most efficient methods for supplying gaseous hydrogen over long distances is by steel pipeline. However, steel pipelines exhibit accelerated fatigue crack growth rates in gaseous hydrogen relative to air. Despite conventional expectations that higher strength steels would be more susceptible to hydrogen embrittlement, recent testing on a variety of pipeline steel grades has shown a notable independence between strength and hydrogen-assisted fatigue crack growth rate. It is thought that microstructure may play a more defining role than strength in determining hydrogen susceptibility. Among the many factors that could affect hydrogen-accelerated fatigue crack growth rates, this study was conducted with an emphasis on orientation dependence. The orientation dependence of toughness in hot rolled steels is a well-researched area; however, few studies have been conducted to reveal the relationship between fatigue crack growth rate in hydrogen and orientation. In this work, fatigue crack growth rates were measured in hydrogen for a high-strength pipeline steel in different orientations. A significant reduction in fatigue crack growth rates was measured when cracks propagated perpendicular to the rolling direction. A detailed microstructural investigation was performed in an effort to understand the orientation dependence of the fatigue crack growth performance of pipeline steels in hydrogen environments.

More Details

Profiling and Debugging Support for the Kokkos Programming Model

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hammond, Simon; Trott, Christian R.; Ibanez-Granados, Daniel A.; Sunderland, Daniel

Supercomputing hardware is undergoing a period of significant change. In order to cope with the rapid pace of hardware and, in many cases, programming model innovation, we have developed the Kokkos Programming Model – a C++-based abstraction that permits performance portability across diverse architectures. Our experience has shown that the abstractions developed can significantly frustrate debugging and profiling activities because they break expected code proximity and layout assumptions. In this paper we present the Kokkos Profiling interface, a lightweight suite of hooks to which debugging and profiling tools can attach to gain deep insights into the execution and data-structure behavior of parallel programs written to the Kokkos interface.

More Details

Unsupervised learning methods to perform material identification tasks on spectral computed tomography data

Proceedings of SPIE - The International Society for Optical Engineering

Gallegos, Isabel; Koundinyan, Srivathsan; Suknot, April; Jimenez, Edward S.; Thompson, Kyle; Goodner, Ryan N.

Sandia National Laboratories has developed a method that applies machine learning to high-energy spectral X-ray computed tomography data to identify material composition for every reconstructed voxel in the field-of-view. While initial experiments led by Koundinyan et al. demonstrated that supervised machine learning techniques perform well in identifying a variety of classes of materials, this work presents an unsupervised approach that differentiates isolated materials with highly similar properties and can be applied to spectral computed tomography data to identify materials more accurately than traditional approaches. Additionally, if regions of the spectrum for multiple voxels become unusable due to artifacts, this method can still perform material identification reliably. This enhanced capability can tremendously impact fields in security, industry, and medicine that leverage non-destructive evaluation for detection, verification, and validation applications.
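
The abstract does not name its algorithm, so as a generic stand-in the following sketch clusters per-voxel attenuation spectra with k-means, the simplest unsupervised grouping one might try on spectral CT data; the data below are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: spectral CT reconstruction as (n_voxels, n_energy_bins)
rng = np.random.default_rng(1)
material_a = rng.normal(1.0, 0.05, size=(500, 16))   # attenuation spectra
material_b = rng.normal(1.3, 0.05, size=(500, 16))   # a similar material
voxels = np.vstack([material_a, material_b])

# Unsupervised grouping of voxels by spectral signature
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(voxels)
print(np.bincount(labels))   # roughly 500 voxels per material cluster
```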

More Details

Predicting polymer degradation and mechanical property changes for combined radiation-thermal aging environments

Rubber Chemistry and Technology

Celina, Mathew C.; Gillen, Kenneth T.

A new approach is presented for conducting and extrapolating combined environment (radiation plus thermal) accelerated aging experiments. The method involves a novel way of applying the time-temperature-dose rate (t-T-R) approach derived many years ago, which assumes that by simultaneously accelerating the thermal-initiation rate (from Arrhenius T-only analysis) and the radiation dose rate R by the same factor x, the overall degradation rate will increase by the factor x. The dose rate assumption implies that equal dose yields equal damage, which is equivalent to assuming the absence of dose-rate effects (DRE). A plot of inverse absolute temperature versus the log of the dose rate is used to indicate experimental conditions consistent with the model assumptions, which can be derived along lines encompassing so-called matched accelerated conditions (MAC lines). Aging trends taken along MAC lines for several elastomers confirm the underlying model assumption and therefore indicate, contrary to many past published results, that DRE are typically not present. In addition, the MAC approach easily accommodates the observation that substantial degradation chemistry changes occur as aging conditions traverse R-T space from radiation domination (high R, low T) to temperature domination (low R, high T). The MAC-line approach also suggests an avenue for gaining more confidence in extrapolations of accelerated MAC-line data to ambient aging conditions by using ultrasensitive oxygen consumption (UOC) measurements taken along the MAC line both under the accelerated conditions and at ambient. From UOC data generated under combined R-T conditions, this approach is tested and quantitatively confirmed for one of the materials. In analogy to the wear-out approach developed previously for thermo-oxidative aging, the MAC-line concept can also be used to predict the remaining lifetimes of samples extracted periodically from ambient environments.
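
A minimal sketch of the MAC-line idea, assuming a single Arrhenius activation energy Ea (a value not given in the abstract): for an acceleration factor x, the dose rate is scaled by x and the temperature is raised so the thermal initiation rate also scales by x.

```python
import numpy as np

def mac_line(T_ambient, R_ambient, Ea, x):
    """Matched accelerated condition for acceleration factor x: scale the
    dose rate by x and raise T so exp(-Ea/(Rgas*T)) also scales by x.
    Ea in J/mol, temperatures in K; Ea is an assumed illustrative value."""
    Rgas = 8.314
    dose_rate = x * R_ambient
    # exp(-Ea/(Rgas*T)) = x * exp(-Ea/(Rgas*T_ambient))  ->  solve for T
    T = 1.0 / (1.0 / T_ambient - Rgas * np.log(x) / Ea)
    return T, dose_rate

# e.g. 100x acceleration from 25 C ambient, assuming Ea = 90 kJ/mol
print(mac_line(298.15, 0.01, 90e3, 100.0))   # (~341 K, 1.0) -- units illustrative
```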

More Details

ACES: Automatic compartments for embedded systems

Proceedings of the 27th USENIX Security Symposium

Clements, Abraham; Almakhdhub, Naif S.; Bagchi, Saurabh; Payer, Mathias

Securing the rapidly expanding Internet of Things (IoT) is critical. Many of these "things" are vulnerable bare-metal embedded systems where the application executes directly on hardware without an operating system. Unfortunately, the integrity of current systems may be compromised by a single vulnerability, as recently shown by Google's P0 team against Broadcom's WiFi SoC. We present ACES (Automatic Compartments for Embedded Systems), an LLVM-based compiler that automatically infers and enforces inter-component isolation on bare-metal systems, thus applying the principle of least privilege. ACES takes a developer-specified compartmentalization policy and then automatically creates an instrumented binary that isolates compartments at runtime, while handling the hardware limitations of bare-metal embedded devices. We demonstrate ACES' ability to implement arbitrary compartmentalization policies by implementing three policies and comparing the compartment isolation, runtime overhead, and memory overhead. Our results show that ACES' compartments can have low runtime overheads (13% on our largest test application), while using 59% less Flash and 84% less RAM than the Mbed μVisor, the current state-of-the-art compartmentalization technique for bare-metal systems. ACES' compartments protect the integrity of privileged data, provide control-flow integrity between compartments, and reduce exposure to ROP attacks by 94.3% compared to μVisor.

More Details

Optical systems for task-specific compressive classification

Proceedings of SPIE - The International Society for Optical Engineering

Birch, Gabriel C.; Quach, Tu T.; Sahakian, Meghan A.; Lacasse, Charles F.; Dagel, Amber

Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to enable the ML and DL algorithms to influence the sensing device itself, and treats the optimization of the sensor and algorithm as separate, sequential elements. Additionally, this paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system, from the imaging hardware to the algorithms. Techniques to find optimal compressive representations of training data are discussed, and the most useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth are achievable as well, since the data representations are compressed measurements, which is especially important for high-data-volume systems.

More Details

Investigation of interfacial impurities in m-plane GaN regrown p-n junctions for high-power vertical electronic devices

Proceedings of SPIE - The International Society for Optical Engineering

Stricklin, Isaac; Monavarian, Morteza; Aragon, Andrew; Pickrell, Gregory W.; Crawford, Mary H.; Allerman, A.A.; Armstrong, Andrew A.; Feezell, Daniel

GaN is an attractive material for high-power electronics due to its wide bandgap and large breakdown field. Vertical-geometry devices are of interest due to their high blocking voltage and small form factor. One challenge for realizing complex vertical devices is the regrowth of low-leakage-current p-n junctions within selectively defined regions of the wafer. Presently, regrown p-n junctions exhibit higher leakage current than continuously grown p-n junctions, possibly due to impurity incorporation at the regrowth interfaces, which consist of c-plane and non-basal planes. Here, we study the interfacial impurity incorporation induced by various growth interruptions and regrowth conditions on m-plane p-n junctions on free-standing GaN substrates. The following interruption types were investigated: (1) sample in the main MOCVD chamber for 10 min, (2) sample in the MOCVD load lock for 10 min, (3) sample outside the MOCVD for 10 min, and (4) sample outside the MOCVD for one week. Regrowth after the interruptions was performed on two different samples under n-GaN and p-GaN growth conditions, respectively. Secondary ion mass spectrometry (SIMS) analysis indicated interfacial silicon spikes with concentrations ranging from 5e16 cm^-3 to 2e18 cm^-3 for the n-GaN growth conditions and 2e16 cm^-3 to 5e18 cm^-3 for the p-GaN growth conditions. Oxygen spikes with concentrations of ∼1e17 cm^-3 were observed at the regrowth interfaces. Carbon impurity levels did not spike at the regrowth interfaces under either set of growth conditions. We have correlated the effects of these interfacial impurities with the reverse leakage current and breakdown voltage of regrown m-plane p-n junctions.

More Details

Measuring Multithreaded Message Matching Misery

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Schonbein, William W.; Dosanjh, Matthew G.; Grant, Ryan; Bridges, Patrick G.

MPI usage patterns are changing as applications move towards fully-multithreaded runtimes. However, the impact of these patterns on MPI message matching is not well studied. In particular, MPI's mechanism for receiver-side data placement, message matching, can be impacted by the increased message volume and nondeterminism incurred by multithreading. While there has been significant developer interest and work to provide an efficient MPI interface for multithreaded access, there has not been a study showing how these usage patterns affect messaging and matching behavior. In this paper, we present a framework for studying the effects of multithreading on MPI message matching. This framework allows us to explore the implications of different common communication patterns and thread-level decompositions. We present a study of these impacts on the architectures of two of the Top 10 supercomputers (NERSC's Cori and LANL's Trinity). This data provides a baseline to evaluate reasonable matching engine queue lengths, search depths, and queue drain times under the multithreaded model. Furthermore, the study highlights surprising results on the challenge posed by message matching for multithreaded application performance.

More Details

Electron transport algorithms in the integrated tiger series (ITS) codes

20th Topical Meeting of the Radiation Protection and Shielding Division, RPSD 2018

Franke, Brian C.; Kensek, Ronald P.

We describe the three electron-transport algorithms that have been implemented in the ITS Monte Carlo codes. While the underlying cross-section data is similar, each algorithm uses a fundamentally different method; at a high level, these are best characterized as condensed history, multigroup, and single scatter. Through a set of comparisons with experimental data and some comparisons of purely numerical results, we discuss various attributes of each of the algorithms and show some of the defects that can affect results.

More Details

Ambient Temperature Thermally Induced Voltage Alteration (TIVA) for Identification of Defects in Superconducting Electronics

Conference Proceedings from the International Symposium for Testing and Failure Analysis

Jenkins, Mark W.; Tangyunyong, Paiboon; Missert, Nancy; Vernik, Igor; Kirichhenko, Alex; Mukhanov, Oleg; Wynn, Alex; Bolkhovsky, Vladimir; Johnson, Leonard

As research in superconducting electronics matures, it is necessary to have failure analysis techniques to identify parameters that impact yield and failure modes in the fabricated product. However, there has been significant skepticism regarding the ability of laser-based failure analysis techniques to detect defects at room temperature in superconducting electronics designed to operate at cryogenic temperatures. In this paper, we describe preliminary data showing the use of Thermally Induced Voltage Alteration (TIVA) [1] at ambient temperature to locate defects in known defective circuits fabricated using state-of-the-art techniques for superconducting electronics.

More Details

Adjoint-enabled multidimensional optimization of satellite electron/proton shields

20th Topical Meeting of the Radiation Protection and Shielding Division, RPSD 2018

Pautz, Shawn D.; Bruss, Donald E.; Adams, Brian M.; Franke, Brian C.; Blansett, Ethan

The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is complicated by the need to protect sensitive electronics from the space radiation environment. There is growing interest in automated design optimization techniques to help achieve that objective. Traditional optimization approaches that rely exclusively on response functions (e.g. dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron/proton shields in one-dimensional slab geometries. In this paper we extend that work to two-dimensional Cartesian geometries. This consists primarily of deriving the sensitivities to geometric changes, given a particular prescription for parametrizing the shield geometry. We incorporate these sensitivities into our optimization process and demonstrate their effectiveness in such design calculations.
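
As a schematic of gradient-based shield design with adjoint-style sensitivities (the transport model, numbers, and function names below are toy stand-ins, not from the paper), one might minimize shield mass subject to a dose ceiling:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins: dose(t) is the transport response for two shield-
# layer thicknesses t, and dose_grad(t) its adjoint-computed sensitivity;
# here a toy exponential-attenuation model replaces the real transport solve.
def dose(t):
    return 10.0 * np.exp(-1.5 * t[0] - 0.8 * t[1])

def dose_grad(t):
    d = dose(t)
    return np.array([-1.5 * d, -0.8 * d])

def mass(t):
    return 2.7 * t[0] + 11.3 * t[1]   # areal densities of two materials

# Minimize mass subject to dose <= 1, supplying the adjoint gradient
res = minimize(mass, x0=[1.0, 1.0],
               jac=lambda t: np.array([2.7, 11.3]),
               constraints={"type": "ineq",
                            "fun": lambda t: 1.0 - dose(t),
                            "jac": lambda t: -dose_grad(t)},
               bounds=[(0.0, None), (0.0, None)], method="SLSQP")
print(res.x, dose(res.x))   # all thickness goes to the mass-efficient layer
```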

More Details

Deriving specifications for coupling through dual-wound generators

Proceedings of the International Ship Control Systems Symposium

Rashkin, Lee J.; Neely, Jason C.; Wilson, David G.; Glover, Steven F.; Doerry, N.; Mccoy, T.J.

Many candidate power system architectures are being evaluated for the Navy's next generation all-electric warship. One proposed power system concept involves the use of dual-wound generators to power both the Port and Starboard side buses using different 3-phase sets from the same machine (Doerry, 2015). This offers the benefit of improved efficiency through reduced engine light-loading and improved dispatch flexibility, but the approach couples the two buses through a common generator, making one bus vulnerable to faults and other dynamic events on the other bus. Thus, understanding the dynamics of cross-bus coupling is imperative to the successful implementation of a dual-wound generator system. In (Rashkin, 2017), a kilowatt-scale system was analysed that considered the use of a dual-wound permanent magnet machine, two passive rectifiers, and two DC buses with resistive loads. For this system, dc voltage variation on one bus was evaluated in the time domain as a function of load changes on the other bus. Therein, substantive cross-bus coupling was demonstrated in simulation and hardware experiments. The voltage disturbances were attributed to electromechanical (i.e. speed disturbances) as well as electromagnetic coupling mechanisms. In this work, a 25 MVA dual-wound generator was considered, and active rectifier models were implemented in Matlab using both average value modelling and switching (space vector modulation) simulation models. The frequency dynamics of the system between the load on one side and the dc voltage on the other side were studied. The coupling is depicted in the frequency domain as a transfer function with amplitude and phase and is shown to have distinct characteristics (i.e. frequency regimes) associated with physical coupling mechanisms such as electromechanical and electromagnetic coupling, as well as response characteristics associated with control action by the active rectifiers. In addition, based on requirements outlined in draft Military Standard 1399-MVDC, an approach to derive specifications is discussed and presented. This method will aid in quantifying the allowable coupling of energy from one bus to another in various frequency regimes as a function of other power system parameters. Finally, design and control strategies are discussed to mitigate cross-bus coupling. The findings of this work will inform the design, control, and operation of future naval warship power systems.

More Details

Hydrophilic domain structure in polymer exchange membranes: Simulations of NMR spin diffusion experiments to address ability for model discrimination

Journal of Polymer Science, Part B: Polymer Physics

Sorte, Eric; Abbott, Lauren J.; Frischknecht, Amalie L.; Wilson, Mark A.; Alam, Todd M.

We detail the development of a flexible simulation program (NMR_DIFFSIM) that solves the nuclear magnetic resonance (NMR) spin diffusion equation for arbitrary polymer architectures. The program was used to explore the proton (1H) NMR spin diffusion behavior predicted for a range of geometrical models describing polymer exchange membranes. These results were also directly compared with the NMR spin diffusion behavior predicted for more complex domain structures obtained from molecular dynamics (MD) simulations. The numerical implementation and capabilities of NMR_DIFFSIM were demonstrated by evaluating the experimental NMR spin diffusion behavior for the hydrophilic domain structure in sulfonated Diels-Alder Poly(Phenylene) (SDAPP) polymer membranes. The impact of morphology variations, as a function of sulfonation and hydration level, on the resulting NMR spin diffusion behavior was determined. These simulations allowed us to critically address the ability of NMR spin diffusion to discriminate between different structural models, and to highlight the extremely high fidelity experimental data required to accomplish this. A direct comparison of experimental double-quantum-filtered 1H NMR spin diffusion in SDAPP membranes to the spin diffusion behavior predicted for MD-proposed morphologies revealed excellent agreement, providing experimental support for the MD structures at low to moderate hydration levels.
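
The spin diffusion equation solved by NMR_DIFFSIM is a diffusion equation over the membrane geometry; as a highly simplified, hypothetical analog (not the program itself), the sketch below advances a 1D explicit finite-difference version with reflecting boundaries.

```python
import numpy as np

def spin_diffusion_1d(m0, D, dx, dt, n_steps):
    """Explicit finite-difference solution of dm/dt = D d2m/dx2 for the
    1H magnetization m(x, t), with zero-flux (reflecting) boundaries."""
    m = m0.astype(float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(n_steps):
        m[1:-1] += r * (m[2:] - 2.0 * m[1:-1] + m[:-2])
        m[0], m[-1] = m[1], m[-2]   # reflecting boundary copy
    return m

# Magnetization initially localized in a 10-node 'source' domain; D, dx,
# dt are illustrative order-of-magnitude choices (D ~ 0.1 nm^2/ms)
m0 = np.zeros(100)
m0[45:55] = 1.0
m = spin_diffusion_1d(m0, D=1e-16, dx=1e-9, dt=2e-3, n_steps=2000)
print(m0.max(), m.max().round(3))   # the peak decays as magnetization spreads
```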

More Details

The future of scientific workflows

International Journal of High Performance Computing Applications

Deelman, Ewa; Peterka, Tom; Altintas, Ilkay; Carothers, Christopher D.; Van Dam, Kerstin K.; Moreland, Kenneth D.; Parashar, Manish; Ramakrishnan, Lavanya; Taufer, Michela; Vetter, Jeffrey

Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science and the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, and workflow needs, and conclude by summarizing the remaining challenges that, in this community’s view, inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

More Details

Fast Quasi-Static Time-Series (QSTS) for yearlong PV impact studies using vector quantization

Solar Energy

Deboever, Jeremiah; Grijalva, Santiago; Reno, Matthew J.; Broderick, Robert J.

The rapidly growing penetration of distributed photovoltaic (PV) systems requires more comprehensive studies to understand its impact on distribution feeders. IEEE P1547 highlights the need for Quasi-Static Time Series (QSTS) simulation in conducting distribution impact studies for distributed resource interconnection. Unlike conventional scenario-based simulation, time series simulation can realistically assess time-dependent impacts such as the operation of various controllable elements (e.g. voltage regulating tap changers) or the impact of power fluctuations. However, QSTS simulations are still not widely used in industry because of the computational burden associated with running yearlong simulations at a 1-s granularity, which is needed to capture device controller effects responding to PV variability. This paper presents a novel algorithm that reduces the number of times the non-linear 3-phase unbalanced AC power flow must be solved by storing and reassigning power flow solutions as the simulation progresses. Each unique power flow solution is defined by a set of factors affecting the solution that can easily be queried. We demonstrate a computational time reduction of 98.9% for a yearlong simulation at 1-s resolution, with minimal errors in metrics including the number of tap changes, capacitor actions, highest and lowest voltage on the feeder, line losses, and ANSI voltage violations. The key contribution of this work is the formulation of an algorithm capable of: (i) drastically reducing the computational time of QSTS simulations, (ii) accurately modeling distribution system voltage-control elements with hysteresis, and (iii) efficiently compressing result time series data for post-simulation analysis.
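
A minimal sketch of the caching idea described above (all names hypothetical): key each power flow by the quantized factors that determine its solution, and reuse stored solutions instead of re-solving at every 1-s step.

```python
# Cache of power flow solutions keyed by the factors that define them
cache = {}

def qsts_step(load_kw, pv_kw, tap_position, solve_power_flow):
    """Return a power flow solution, solving only for unseen states."""
    key = (round(load_kw, 1), round(pv_kw, 1), tap_position)  # quantization
    if key not in cache:
        cache[key] = solve_power_flow(load_kw, pv_kw, tap_position)
    return cache[key]

# Toy stand-in solver; in practice this is the full unbalanced AC solve
solves = []
def toy_solver(load_kw, pv_kw, tap):
    solves.append(1)
    return {"v_min": 0.98 - 1e-5 * load_kw + 1e-3 * tap}

for t in range(10_000):                    # 10,000 simulated "seconds"
    qsts_step(100 + (t % 7), 40 + (t % 3), 2, toy_solver)
print(f"{len(solves)} solves for 10000 steps")   # only 21 unique states solved
```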

More Details

Agglomerate sizing in aluminized propellants using digital inline holography and traditional diagnostics

Journal of Propulsion and Power

Powell, Michael S.; Gunduz, Ibrahim W.; Shang, Weixiao; Chen, Jun; Son, Steven F.; Mazumdar, Yi C.; Guildenbecher, Daniel

Aluminized ammonium perchlorate composite propellants can form large molten agglomerated particles that may result in poor combustion performance, slag accumulation, and increased two-phase flow losses. Quantified agglomerate size distributions are needed to gain an understanding of agglomeration dynamics and ultimately to design new propellants for improved performance. Due to the complexities of the reacting multiphase environment, agglomerate size diagnostics are difficult and measurement accuracies are poorly understood. To address this, the current work compares three agglomerate sizing techniques applied to two propellant formulations. Particle collection on a quench plate and backlit videography are two relatively common techniques, whereas digital inline holography is an emerging alternative for three-dimensional measurements. Atmospheric pressure combustion results show that all three techniques are able to capture the qualitative trends; however, significant differences exist in the quantitative size distributions and mean diameters. For digital inline holography, methods are proposed that combine temporally resolved high-speed recording with lower-speed but higher spatial resolution measurements to correct for size-velocity correlation biases while extending the measurable size dynamic range. The results from this work provide new guidance for improved agglomerate size measurements along with statistically resolved datasets for validation of agglomerate models.

More Details

Operando spectromicroscopy of sulfur species in lithium-sulfur batteries

Journal of the Electrochemical Society

Miller, Elizabeth C.; Kasse, Robert M.; Heath, Khloe N.; Perdue, Brian R.; Toney, Michael F.

In this study, a novel cross-sectional battery cell was developed to characterize lithium-sulfur batteries using X-ray spectromicroscopy. Chemically sensitive X-ray maps were collected operando at energies relevant to the expected sulfur species and were used to correlate changes in sulfur species with electrochemistry. Significant changes in the sulfur/carbon composite electrode were observed from cycle to cycle, including rearrangement of the elemental sulfur matrix and PEO10LiTFSI binder. Polysulfide concentration and area of spatial diffusion increased with cycling, indicating that some polysulfide dissolution is irreversible, leading to polysulfide shuttle. Fitting of the maps using standard sulfur and polysulfide XANES spectra indicated that upon subsequent discharge/charge cycles, the initial sulfur concentration was not fully recovered; polysulfides and lithium sulfide remained at the cathodes with higher order polysulfides as the primary species in the region of interest. Quantification of the polysulfide concentration across the electrolyte and electrode interfaces shows that the polysulfide concentration before the first discharge and after the third charge is constant within the electrolyte, but while cycling, a significant increase in polysulfides and a gradient toward the lithium metal anode forms. This chemically and spatially sensitive characterization and analysis provides a foundation for further operando spectromicroscopy of lithium-sulfur batteries.

More Details

Response predictions of reduced models with whole joints

Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics

Kuether, Robert J.; Najera-Flores, David A.

Structural dynamic models of mechanical, aerospace, and civil structures often involve connections of multiple subcomponents with rivets, bolts, press fits, or other joining processes. Recent model order reduction advances have been made for jointed structures using appropriately defined whole joint models in combination with linear substructuring techniques. A whole joint model condenses the interface nodes onto a single node with multi-point constraints, resulting in drastic increases in computational speed for predicting transient responses. One drawback to this strategy is that the whole joint models are empirical and require calibration with test or high-fidelity model data. A new framework is proposed to calibrate whole joint models by computing global responses from high-fidelity finite element models and utilizing global optimization to determine the optimal joint parameters. The method matches the amplitude-dependent damping and natural frequencies predicted for each vibration mode using quasi-static modal analysis.

More Details

Adsorption of copper (II) on mesoporous silica: The effect of nano-scale confinement

Geochemical Transactions

Knight, A.W.; Tigges, Austen B.; Ilgen, Anastasia G.

Nano-scale spatial confinement can alter chemistry at mineral-water interfaces. These nano-scale confinement effects can lead to anomalous fate and transport behavior of aqueous metal species. When a fluid resides in a nano-porous environment (pore size under 100 nm), the observed density, surface tension, and dielectric constant diverge from those measured in the bulk. To evaluate the impact of nano-scale confinement on the adsorption of copper (Cu2+), we performed batch adsorption studies using mesoporous silica. Mesoporous silica with a narrow distribution of pore diameters (SBA-15; 8, 6, and 4 nm pore diameters) was chosen since its silanol functional groups are typical of surface environments. Batch adsorption isotherms were fit with adsorption models (Langmuir, Freundlich, and Dubinin-Radushkevich), and adsorption kinetic data were fit to a pseudo-first-order reaction model. We found that with decreasing pore size, the maximum surface-area-normalized uptake of Cu2+ increased. The pseudo-first-order kinetic model demonstrates that adsorption is faster as the pore size decreases from 8 to 4 nm. We attribute these effects to deviations in fundamental water properties as pore diameter decreases. In particular, these effects are most notable in SBA-15 with 4-nm pores, where the changes in water properties may be responsible for the enhanced Cu mobility and, therefore, faster Cu adsorption kinetics.
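
For illustration, a Langmuir fit of the kind described can be done in a few lines; the data points below are invented stand-ins, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, K):
    """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
    return q_max * K * c_eq / (1.0 + K * c_eq)

# Hypothetical batch-adsorption data: Cu2+ equilibrium concentration
# (mmol/L) vs. surface-area-normalized uptake (umol/m^2)
c = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
q = np.array([0.21, 0.38, 0.70, 1.02, 1.35, 1.58, 1.75])

(q_max, K), _ = curve_fit(langmuir, c, q, p0=[2.0, 1.0])
print(f"q_max = {q_max:.2f}, K = {K:.2f}")   # fitted capacity and affinity
```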

More Details

Efficient random vibration analysis of nonlinear systems with long short-term memory networks for uncertainty quantification

Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics

Najera-Flores, David A.; Brink, Adam R.

Complex mechanical structures are often subjected to random vibration environments. One strategy to analyze these nonlinear structures numerically is to use finite element analysis with an explicit solver to resolve interactions in the time domain. However, this approach is impractical because the solver is conditionally stable and requires thousands of iterations to resolve the contact algorithms. As a result, only short runs can be performed practically because of the extremely long runtime needed to obtain sufficient sampling for long-time statistics. The proposed approach uses a machine learning algorithm known as the Long Short-Term Memory (LSTM) network to model the response of the nonlinear system to random input. The LSTM extends the capability of the explicit solver approach by taking short samples and extending them to arbitrarily long signals. The efficient LSTM algorithm enables the capability to perform Monte Carlo simulations to quantify model-form and aleatoric uncertainty due to the random input.
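
A minimal sketch of the proposed surrogate, assuming a PyTorch implementation (the paper does not specify one): train an LSTM on short force/response records from the explicit solver, then roll it forward on long random inputs.

```python
import torch
import torch.nn as nn

class ResponseLSTM(nn.Module):
    """Maps an input force time series to a predicted response series."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, u):                  # u: (batch, time, 1)
        h, _ = self.lstm(u)
        return self.out(h)                 # (batch, time, 1)

# Train on short records (synthetic stand-ins for explicit-solver output)
model = ResponseLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
u = torch.randn(8, 200, 1)                 # 8 short random-input histories
y = torch.cumsum(0.01 * u, dim=1)          # toy "solver" response
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
# ...then roll the trained model forward on arbitrarily long inputs.
```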

More Details

The effect of aluminum and platinum additives on hydrogen adsorption on mesoporous silicates

Physical Chemistry Chemical Physics

Melaet, Gerome; Stavila, Vitalie; Klebanoff, Lennie

Recent theoretical predictions indicate that functional groups and additives could have a favorable impact on the hydrogen adsorption characteristics of sorbents; however, no definite evidence has been obtained to date and little is known about the impact of such modifications on the thermodynamics of hydrogen uptake and overall capacity. In this work, we investigate the effect of two types of additives on the cryoadsorption of hydrogen to mesoporous silica. First, Lewis and Brønsted acid sites were evaluated by grafting aluminum to the surface of mesoporous silica (MCF-17) and characterizing the resulting silicate materials' surface area and the concentration of Brønsted and Lewis acid sites created. Heat of adsorption measurements found little influence of surface acidity on the enthalpy of hydrogen cryoadsorption. Secondly, platinum nanoparticles of 1.5 nm and 7.1 nm in diameter were loaded into MCF-17, and characterized by TEM. Hydrogen absorption measurements revealed that the addition of small amounts of metallic platinum nanoparticles increases by up to two-fold the amount of hydrogen adsorbed at liquid nitrogen temperature. Moreover, we found a direct correlation between the size of platinum particles and the amount of hydrogen stored, in favor of smaller particles.

More Details

Physical foundations of Landauer’s principle

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Frank, Michael P.

We review the physical foundations of Landauer’s Principle, which relates the loss of information from a computational process to an increase in thermodynamic entropy. Despite the long history of the Principle, its fundamental rationale and proper interpretation remain frequently misunderstood. Contrary to some misinterpretations of the Principle, the mere transfer of entropy between computational and non-computational subsystems can occur in a thermodynamically reversible way without increasing total entropy. However, Landauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. Any uncontrolled thermal system will, by definition, continually re-randomize the physical information in its thermal state, from our perspective as observers who cannot predict the exact dynamical evolution of the microstates of such environments. Thus, any correlations involving information that is ejected into and subsequently thermalized by the environment will be lost from our perspective, resulting directly in an irreversible increase in thermodynamic entropy. Avoiding the ejection and thermalization of correlated computational information motivates the reversible computing paradigm, although the requirements for computations to be thermodynamically reversible are less restrictive than frequently described, particularly in the case of stochastic computational operations. There remain interesting, not yet fully explored possibilities for designing computational processes that utilize stochastic, many-to-one computational operations while nevertheless avoiding net entropy increase.
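
For reference, the standard quantitative content of the Principle (not restated in the abstract) is the lower bound of k_B T ln 2 of dissipation per bit of correlated information lost to the environment:

```python
from math import log

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
E_min = k_B * T * log(2)    # minimum dissipation per bit erased
print(f"{E_min:.2e} J")     # ~2.87e-21 J
```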

More Details

Simultaneous PSP and DIC measurements for fluid-structure interactions in a shock tube

2018 Fluid Dynamics Conference

Lynch, Kyle P.; Jones, E.M.C.; Wagner, Justin L.

Simultaneous pressure sensitive paint (PSP) and stereo digital image correlation (DIC) measurements on a jointed beam structure are presented. Tests are conducted in a shock tube, providing an impulsive starting condition followed by approximately uniform high-speed flow for 5.0 ms. The unsteady pressure loading generated by shock waves and vortex shedding results in the excitation of various structural modes in the beam. The combined data characterize the structural loading input (pressure) and the resulting structural behavior output (deformation). Time-series filtering is used to remove external bias errors such as shock tube motion, and proper orthogonal decomposition (POD) is used to extract mode shapes from the deformation data. This demonstrates the utility of fast-response PSP used together with stereo DIC, a combination that provides a valuable capability for validating structural dynamics simulations.
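
POD of deformation snapshots reduces to a singular value decomposition; the following sketch (synthetic data, not the experiment's) extracts the dominant spatial modes and their energy fractions from a snapshot matrix.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix whose columns
    are deformation fields at successive times; returns the dominant
    spatial modes and their fluctuation-energy fractions."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U[:, :n_modes], energy[:n_modes]

# Toy beam data: two spatial modes with distinct temporal oscillations
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 1, 500)[None, :]
data = (np.sin(np.pi * x) * np.sin(40 * t)
        + 0.2 * np.sin(2 * np.pi * x) * np.sin(90 * t))
modes, energy = pod_modes(data, 2)
print(energy.round(3))   # the first mode dominates the fluctuation energy
```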

More Details

BDDC algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

Mathematics of Computation

Oh, Duk S.; Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.

A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced. Under the assumption that the subdomains are all built from elements of a coarse triangulation of the given domain, that the meshes of each subdomain are quasi-uniform, and that the material parameters are constant in each subdomain, a bound is obtained for the condition number of the preconditioned linear system which is independent of the values and the jumps of these parameters across the interface between the subdomains, as well as of the number of subdomains. Numerical experiments, using the PETSc library, are also presented which support the theory and show the effectiveness of the algorithms even for problems not covered by the theory. Also included are experiments with Brezzi-Douglas-Marini finite element approximations.

More Details

Design of continuously graded elastic acoustic cloaks

Journal of the Acoustical Society of America

Walsh, Timothy; Aquino, Wilkins; Sanders, Clay

This letter demonstrates the design of continuously graded elastic cylinders to achieve passive cloaking from harmonic acoustic excitation, both at single frequencies and over extended bandwidths. The constitutive parameters in a multilayered, constant-density cylinder are selected in a partial differential equation-constrained optimization problem, such that the residual between the pressure field from an unobstructed spreading wave in a fluid and the pressure field produced by the cylindrical inclusion is minimized. The radial variation in bulk modulus appears fundamental to the cloaking behavior, while the shear modulus distribution plays a secondary role. Such structures could be realized with functionally-graded elastic materials.

More Details

In situ TEM observations of corrosion in nanocrystalline Fe thin films

Ceramic Transactions

Gross, David; Kacher, Josh; Hattar, Khalid M.; Robertson, Ian M.

The corrosion of pulsed-laser deposited Fe thin films by aqueous acetic acid solution was explored in real time by performing dynamic microfluidic experiments in situ in a transmission electron microscope. The films were examined in both the as-deposited condition and after annealing. In the as-deposited films, discrete events featuring the localized dissolution of grains were observed, with the dissolved volumes ranging in size from ~1.5 × 10^-5 μm^3 to 3.4 × 10^-7 μm^3. The annealed samples had larger grains than the as-deposited samples, were more resistant to corrosion, and did not show similar discrete dissolution events. The electron beam was observed to accelerate the corrosion, especially on the as-deposited samples. The effects of grain surface energy, grain boundary energy, and electron beam-specimen interactions are discussed in relation to the observed behavior.

More Details

A model for simulating adaptive, dynamic flows on networks: Application to petroleum infrastructure

Reliability Engineering and System Safety

Corbet Jr., Thomas F.; Beyeler, Walter E.; Wilson, Michael L.; Flanagan, Tatiana P.

Simulation models can improve decisions meant to control the consequences of disruptions to critical infrastructures. We describe a dynamic flow model on networks designed to inform analyses by those concerned with the consequences of disruptions to infrastructures and to help policy makers design robust mitigations. We conceptualize the adaptive responses of infrastructure networks to perturbations as market transactions and business decisions of operators. We approximate commodity flows in these networks by a diffusion equation, with nonlinearities introduced to model capacity limits. To illustrate the behavior and scalability of the model, we show its application first on two simple networks, then on the petroleum infrastructure of the United States, where we analyze the effects of a hypothesized earthquake.

More Details

Stochastic least-squares petrov-galerkin method for parameterized linear system

SIAM-ASA Journal on Uncertainty Quantification

Lee, Kookjin; Carlberg, Kevin T.; Elman, Howard C.

We consider the numerical solution of parameterized linear systems in which the system matrix, the solution, and the right-hand side depend on a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error [A. Mugler and H.-J. Starkloff, ESAIM Math. Model. Numer. Anal., 47 (2013), pp. 1237-1263]. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
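
A minimal sketch of the weighted least-squares projection at a single parameter sample (sizes and weights illustrative, not from the paper): given a trial basis Phi and a weighting W, the LSPG-style solution is the subspace element minimizing the weighted residual norm.

```python
import numpy as np

def lspg_solve(A, b, Phi, W):
    """Least-squares Petrov-Galerkin-style solve: find coefficients y
    minimizing || W (A Phi y - b) ||_2 over the span of Phi."""
    M = W @ A @ Phi
    y, *_ = np.linalg.lstsq(M, W @ b, rcond=None)
    return Phi @ y                       # approximate solution in full space

# Toy parameterized system at one parameter sample, 3-dim trial subspace
rng = np.random.default_rng(2)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
Phi = np.linalg.qr(rng.standard_normal((50, 3)))[0]
W = np.diag(rng.uniform(0.5, 2.0, 50))  # the weighting sets the target norm
x = lspg_solve(A, b, Phi, W)
print(np.linalg.norm(W @ (A @ x - b)))  # minimized weighted residual norm
```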

More Details

Modeling actinide solubilities in alkaline to hyperalkaline solutions: Solubility of Am(OH)3(s) in KOH solutions

Solution Chemistry: Advances in Research and Applications

Xiong, Yongliang

In this work, a Pitzer model is developed for the K+(Na+)-Am(OH)4−-Cl−-OH− system based on Am(OH)3(s) solubility data in highly alkaline KOH solutions. Under highly alkaline conditions, the solubility reaction of Am(OH)3(s) is expressed as Am(OH)3(s) + OH− ⇌ Am(OH)4−. Solubilities of Am(OH)3(s) based on the above reaction are modeled as a function of KOH concentration. The stability constant for Am(OH)4− is evaluated using Am(OH)3(s) solubility data in KOH solutions up to 12 mol·kg^-1 taken from the literature. The Pitzer interaction parameters for Al(OH)4− are used as analogs for the interaction parameters involving Am(OH)4− to obtain the stability constant for Am(OH)4−. The resulting log10 K for the reaction is -11.34 ± 0.15 (2σ).

More Details

Nonlocal and mixed-locality multiscale finite element methods

Multiscale Modeling and Simulation

Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

More Details

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

IEEE Transactions on Visualization and Computer Graphics

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; Wang, Zhiyuan; Wilson, Andrew T.

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.

More Details

Source detection at 100 meter standoff with a time-encoded imaging system

Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

Brennan, J.; Brubaker, E.; Gerling, Mark; Marleau, P.; Monterial, M.; Nowack, A.; Schuster, P.; Sturm, B.; Sweany, Melinda D.

We present the design, characterization, and testing of a laboratory prototype radiological search and localization system. The system, based on time-encoded imaging, uses the attenuation signature of neutrons in time, induced by the geometrical layout and motion of the system. We have demonstrated the ability to detect a ∼1 mCi 252Cf radiological source at 100 m standoff with 90% detection efficiency and 10% false positives against background in 12 min. This same detection efficiency is met at 15 s for a 40 m standoff, and 1.2 s for a 20 m standoff.

More Details

Full long-term design response analysis of a wave energy converter

Renewable Energy

Coe, Ryan G.; Michelen, Carlos; Eckert, Aubrey; Sallaberry, Cedric

Efficient design of wave energy converters requires an accurate understanding of expected loads and responses during the deployment lifetime of a device. A study has been conducted to better understand best practices for predicting design responses in a wave energy converter. A case study was performed in which a simplified wave energy converter was analyzed to predict several important device design responses. The application and performance of a full long-term analysis, in which numerical simulations were used to predict the device response for a large number of distinct sea states, was studied. Environmental characterization and selection of sea states for this analysis at the intended deployment site were performed using principal component analysis. The full long-term analysis applied here was shown to be stable when implemented with a relatively low number of sea states and convergent with an increasing number of sea states. As the number of sea states utilized in the analysis was increased, predicted response levels did not change appreciably; however, uncertainty in the response levels was reduced as more sea states were utilized.

More Details

Experimental-Analytical Substructuring of a Complicated Jointed Structure Using Nonlinear Modal Models

Conference Proceedings of the Society for Experimental Mechanics Series

Roettgen, Daniel R.; Pacini, Benjamin R.; Mayes, Randall L.; Schoenherr, Tyler F.

This work extends recent methods to calculate dynamic substructuring predictions of a weakly nonlinear structure using nonlinear pseudo-modal models. In previous works, constitutive joint models (such as the modal Iwan element) were used to capture the nonlinearity of each subcomponent on a mode-by-mode basis. This work uses simpler polynomial stiffness and damping elements to capture nonlinear dynamics from more diverse jointed connections including large continuous interfaces. The proposed method requires that the modes of the system remain distinct and uncoupled in the amplitude range of interest. A windowed sinusoidal loading is used to excite each experimental subcomponent mode in order to identify the nonlinear pseudo-modal models. This allows for a higher modal amplitude to be achieved when fitting these models and extends the applicable amplitude range of this method. Once subcomponent modal models have been experimentally extracted for each mode, the Transmission Simulator method is implemented to assemble the subcomponent models into a nonlinear assembled prediction. Numerical integration methods are used to evaluate this prediction compared to a truth test of the nonlinear assembly.

More Details

Thermally activated delayed fluorescence of a Zr-based metal-organic framework

Chemical Communications

Mieno, H.; Kabe, R.; Allendorf, Mark; Adachi, C.

The first metal-organic framework exhibiting thermally activated delayed fluorescence (TADF) was developed. The zirconium-based framework (UiO-68-dpa) uses a newly designed linker composed of a terphenyl backbone, an electron-accepting carboxyl group, and an electron-donating diphenylamine, and it exhibits green TADF emission with a photoluminescence quantum yield of 30% and high thermal stability.

More Details

A class of simple and effective UQ methods for sparse replicate data applied to the cantilever beam end-to-end UQ problem

AIAA Non-Deterministic Approaches Conference, 2018

Romero, Vicente J.; Weirs, Gregory

When very few samples of a random quantity are available from a source distribution or probability density function (PDF) of unknown shape, it is usually not possible to accurately infer the PDF from which the data samples come. A significant component of epistemic uncertainty then exists concerning the source distribution of random or aleatory variability. For many engineering purposes, including design and risk analysis, one would normally want to avoid inference-related under-estimation of important quantities such as response variance and failure probabilities. Recent research has established the practicality and effectiveness of a class of simple and inexpensive UQ methods for reasonably conservative estimation of such quantities when only sparse samples of a random quantity are available. This class of UQ methods is explained, demonstrated, and analyzed in this paper within the context of the Sandia Cantilever Beam End-to-End UQ Problem, Part A.1. Several sets of sparse replicate data are involved, and several representative uncertainty quantities are to be estimated: A) beam deflection variability, in particular the 2.5 to 97.5 percentile “central 95%” range of the sparsely sampled PDF of deflection; and B) a small exceedance probability associated with a tail of the PDF integrated beyond a specified deflection tolerance.
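
A small numerical illustration of why sparse samples under-estimate such quantities (synthetic data; this is the motivating phenomenon, not the paper's method): naive central-95% estimates from n = 10 replicates are biased low relative to the true range.

```python
import numpy as np

rng = np.random.default_rng(3)
true = rng.normal(0.0, 1.0, size=1_000_000)
true_width = np.diff(np.percentile(true, [2.5, 97.5]))[0]   # ~3.92

# Naive central-95% widths from many sparse replicate sets of n = 10
widths = [np.diff(np.percentile(rng.normal(0, 1, 10), [2.5, 97.5]))[0]
          for _ in range(2000)]
print(true_width, np.mean(widths))   # the sparse estimate is biased low
```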

More Details

Characterization of freestream disturbances in conventional hypersonic wind tunnels

AIAA Aerospace Sciences Meeting, 2018

Duan, Lian; Choudhari, Meelan M.; Chou, Amanda; Munoz, Federico; Ali, Syed R.C.; Radespiel, Rolf; Schilden, Thomas; Schroder, Wolfgang; Marineau, Eric C.; Casper, Katya M.; Chaudhry, Ross S.; Candler, Graham V.; Gray, Kathryn; Sweeney, Cameron J.; Schneider, Steven P.

While low disturbance (“quiet”) hypersonic wind tunnels are believed to provide more reliable extrapolation of boundary layer transition behavior from ground to flight, the presently available quiet facilities are limited to Mach 6, moderate Reynolds numbers, low freestream enthalpy, and subscale models. As a result, only conventional (“noisy”) wind tunnels can reproduce both Reynolds numbers and enthalpies of hypersonic flight configurations, and must therefore be used for flight vehicle test and evaluation involving high Mach number, high enthalpy, and larger models. This article outlines the recent progress and achievements in the characterization of tunnel noise that have resulted from the coordinated effort within the AVT-240 specialists group on hypersonic boundary layer transition prediction. New Direct Numerical Simulation (DNS) datasets elucidate the physics of noise generation inside the turbulent nozzle wall boundary layer, characterize the spatiotemporal structure of the freestream noise, and account for the propagation and transfer of the freestream disturbances to a pitot-mounted sensor. The new experimental measurements cover a range of conventional wind tunnels with different sizes and Mach numbers from 6 to 14 and extend the database of freestream fluctuations within the spectral range of boundary layer instability waves over commonly tested models. Prospects for applying the computational and measurement datasets for developing mechanism-based transition prediction models are discussed.

More Details

A bond-order potential for the Al-Cu-H ternary system

New Journal of Chemistry

Zhou, Xiaowang; Ward, Donald K.; Foster, Michael E.

Al-based Al-Cu alloys have a very high strength-to-density ratio, and are therefore important materials for transportation systems including vehicles and aircraft. These alloys also appear to have a high resistance to hydrogen embrittlement, and as a result, are being explored for hydrogen-related applications. To enable fundamental studies of the mechanical behavior of Al-Cu alloys under hydrogen environments, we have developed an Al-Cu-H bond-order potential according to the formalism implemented in the molecular dynamics code LAMMPS. Our potential not only fits well to properties of a variety of elemental and compound configurations (with coordination varying from 1 to 12) including small clusters, bulk lattices, defects, and surfaces, but also passes stringent molecular dynamics simulation tests that sample chaotic configurations. Careful studies verified that this Al-Cu-H potential predicts structural property trends close to experimental results and quantum-mechanical calculations; in addition, it properly captures the Al-Cu, Al-H, and Cu-H phase diagrams and enables simulations of H2 dissociation, chemisorption, and absorption on Al-Cu surfaces.

More Details

Evaluating the performance of fasteners subjected to multiple loadings and loading rates and identifying sensitivities of the modeling process

AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2018

Mersch, John; Smith, Jeffrey A.; Johnson, Evan; Bosiljevac, Thomas B.

This study details a complementary testing and finite element analysis effort to model threaded fasteners subjected to multiple loadings and loading rates while identifying modeling sensitivities that impact this process. NAS1352-06-6P fasteners were tested in tension at quasistatic loading rates and in tension and shear at dynamic loading rates. The quasistatic tension tests provided calibration and validation data for constitutive model fitting, but this process was complicated by the difference between the conventional (global) and novel (local) displacement measurements. The consequences of these differences are investigated in detail by obtaining calibrated models from both displacement measurements and assessing their performance when extended to the dynamic tension and shear applications. Common quantities of interest are explored, including failure load, time-to-failure, and displacement-at-failure. Finally, the mesh sensitivities of both dynamic analysis models are investigated to assess robustness and inform modeling fidelity. This study is performed in the context of applying these fastener models to large-scale, full-system finite element analyses of complex structures, and therefore the models chosen are relatively basic to accommodate this goal and reflect typical modeling approaches. The quasistatic tension results reveal the sensitivity and importance of displacement measurement techniques in the testing procedure, especially when performing experiments involving multiple components that inhibit local specimen measurements. Additional compliance from test fixturing and load frames has an increasingly significant effect on displacement data as the measurement becomes more global, and models must necessarily capture these effects to accurately reproduce the test data. Analysis difficulties were also discovered in the modeling of shear loadings, as the results were very sensitive to mesh discretization, further complicating the ability to analyze joints subjected to diverse loadings. These variables can significantly contribute to the error and uncertainty associated with the model, and this study begins to quantify this behavior and provide guidance on mitigating these effects. When attempting to capture multiple loadings and loading rates in fasteners through simulation, it becomes necessary to thoroughly exercise and explore test and analysis procedures to ensure the final model is appropriate for the desired application.

More Details

Consistent turbulent boundary layer wall pressure spectra and coherence functions

AIAA Aerospace Sciences Meeting 2018

Dechant, Lawrence; Smith, Justin

Fluctuating boundary layer pressures are an important loading component for high-speed reentry vehicles. Characterization of the unsteady time series requires access to longitudinal and lateral coherence expressions as well as spatial correlation and frequency power-spectral density (PSD) models. Coherence, spatial correlation, and frequency PSD are related through their cross-spectral density definitions. However, the frequency PSD and the spatial correlation are often based upon measurements or approximate models, which may introduce bias in the associated derived coherence function. Here, we examine the effect of measurement and model form associated with the frequency spectrum and correlation on the longitudinal and lateral coherence for supersonic pressure fluctuation flow fields. The widely utilized Corcos separable coherence model functional form has been employed in this study. The associated integral equations which relate coherence and correlation are solved using a simple iterative approach. To minimize distortion in the results due to computational issues, a high-accuracy numerical integration procedure is utilized. Despite the more robust computational approach, solution accuracy is limited for some problems by the functional form of the longitudinal coherence model. These limitations are discussed in detail. The overall approach is applied to Mach 5 and Mach 8 seven-degree sharp cone pressure fluctuation measurements. Estimates for the parameters associated with the Corcos coherence expressions are typically larger than more traditional values, especially for the longitudinal coherence. These larger values suggest that the streamwise correlation length of the fluctuations is small. The limited longitudinal correlation can be associated with shock influence, which is explored as a possible cause.
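
For reference, the separable Corcos form referred to above models the longitudinal and lateral coherence as exponential decays in the similarity variable \omega\xi/U_c (a standard statement of the model, with \alpha_x and \alpha_y the empirical decay parameters and U_c the convection velocity):

\gamma_x(\xi,\omega) = e^{-\alpha_x \omega |\xi| / U_c}, \qquad \gamma_y(\eta,\omega) = e^{-\alpha_y \omega |\eta| / U_c},

so that the cross-spectral density factors as \Phi(\xi,\eta,\omega) = \Phi_{pp}(\omega)\,\gamma_x\,\gamma_y\,e^{-i\omega\xi/U_c}. A larger fitted \alpha_x, as reported here, corresponds directly to a shorter streamwise correlation length.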

More Details

Unstructured grid adaptation and solver technology for turbulent flows

AIAA Aerospace Sciences Meeting, 2018

Park, Michael A.; Barral, Nicolas; Ibanez-Granados, Daniel A.; Kamenetskiy, Dmitry S.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien

Unstructured grid adaptation is a tool to control Computational Fluid Dynamics (CFD) discretization error. However, adaptive grid techniques have made limited impact on production analysis workflows where the control of discretization error is critical to obtaining reliable simulation results. Issues that prevent the use of adaptive grid methods are identified by applying unstructured grid adaptation methods to a series of benchmark cases. Once identified, these challenges to existing adaptive workflows can be addressed. Unstructured grid adaptation is evaluated for test cases described on the Turbulence Modeling Resource (TMR) web site, which documents uniform grid refinement of multiple schemes. The cases are turbulent flow over a Hemisphere Cylinder and an ONERA M6 Wing. Adaptive grid force and moment trajectories are shown for three integrated grid adaptation processes with Mach interpolation control and output error based metrics. The integrated grid adaptation process with a finite element (FE) discretization produced results consistent with uniform grid refinement of fixed grids. The integrated grid adaptation processes with finite volume schemes were slower to converge to the reference solution than the FE method. Metric conformity is documented on grid/metric snapshots for five grid adaptation mechanics implementations. These tools produce anisotropic boundary conforming grids requested by the adaptation process.

More Details

False alarms and the IMS infrasound network: Understanding the factors influencing the creation of false events

Geophysical Journal International

Arrowsmith, Stephen J.

The International Monitoring System (IMS) infrasound network has been designed to acquire the necessary data to detect and locate explosions in the atmosphere with a yield equivalent to 1 kiloton of TNT anywhere on Earth. A major associated challenge is the task of automatically processing data from all IMS infrasound stations to identify possible nuclear tests for subsequent review by analysts. This paper is the first attempt to quantify the false alarm rate (FAR) of the IMS network, and in particular to assess how the FAR is affected by the numbers and distributions of detections at each infrasound station. To ensure that the results are sufficiently general, and not dependent entirely on one detection algorithm, the assessment is based on two detection algorithms that can be thought of as end members in their approach to the trade-off between missed detections and false alarms. The results show that the FAR for events formed at only two arrays is extremely high (ranging from tens to hundreds of false events per day across the IMS network, depending on the detector tuning). It is further shown that the FAR for events formed at three or more IMS arrays is driven by ocean-generated waves (microbaroms), despite efforts within both detection algorithms to avoid these signals, indicating that further research into this issue is merited. Overall, the results highlight the challenge of processing data from a globally sparse network of stations to detect and form events, and suggest that more work is required to reduce false alarms caused by the detection of microbarom signals.

More Details

Heterogeneity, pore pressure, and injectate chemistry: Control measures for geologic carbon storage

International Journal of Greenhouse Gas Control

Dewers, Thomas; Eichhubl, Peter; Ganis, Ben; Gomez, Steven P.; Heath, Jason E.; Jammoul, Mohamad; Kobos, Peter; Liu, Ruijie; Major, Jonathan; Matteo, Edward N.; Newell, Pania; Rinehart, Alex; Sobolik, Steven; Stormont, John; Reda Taha, Mahmoud; Wheeler, Mary; White, Deandra

Desirable outcomes for geologic carbon storage (GCS) include maximizing storage efficiency, preserving injectivity, and avoiding unwanted consequences such as caprock or wellbore leakage or induced seismicity during and post injection. To achieve these outcomes, three control measures are evident: pore pressure, injectate chemistry, and knowledge and prudent use of geologic heterogeneity. Field, experimental, and modeling examples are presented that demonstrate controllable GCS via these three measures. Observed changes in reservoir response accompanying CO2 injection at the Cranfield (Mississippi, USA) site, along with lab testing, show potential for use of injectate chemistry as a means to alter fracture permeability (with concomitant improvements for sweep and storage efficiency). Brine extraction from reservoirs affords further control of reservoir sweep, with benefits for pressure control, mitigation of reservoir and wellbore damage, and water use. State-of-the-art validated models predict the extent of damage and deformation associated with pore pressure hazards in reservoirs, the timing and location of networks of fractures, and the development of localized leakage pathways. Experimentally validated geomechanics models show where wellbore failure is likely to occur during injection, and the efficiency of repair methods. Use of heterogeneity as a control measure includes where best to inject, and where to avoid attempts at storage. An example is the use of waste zones or leaky seals to both reduce pore pressure hazards and enhance residual CO2 trapping.

More Details

Estimation of rotor loads due to wake steering

Wind Energy Symposium, 2018

White, Jonathan R.; Ennis, Brandon L.; Herges, T.

To reduce the levelized cost of wind energy, wind plant controllers are being developed to improve overall performance by increasing energy capture. Previous work has shown that increased energy capture is possible by steering the wake around downstream turbines; however, the impact this steering action has on the loading of the turbines continues to need further investigation with operational data to determine the overall benefit. In this work, rotor loading data from a wind turbine operating under a wake steering wind plant controller at the DOE/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) Facility is evaluated. Rotor loading was estimated from fiber optic strain sensors acquired with a state-of-the-art Micron Optics Hyperion interrogator mounted within the rotor and synchronized to the open-source SWiFT controller. A variety of ground and operational calibrations were performed to produce accurate measurements of rotor blade root strains. Time- and rotational-domain signal processing methods were used to estimate the bending moment at the root of the rotor blade. Results indicate a correlation of wake steering angle with: one-per-revolution thrust moment amplitude, two-per-revolution torque phase, and three-per-revolution torque amplitude and phase. Future work is needed to fully explain the correlations observed in this work and to study additional multi-variable relationships that may also exist.
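
Per-revolution harmonics like those cited above can be extracted with a least-squares fit of the root bending moment against rotor azimuth. The sketch below is a generic illustration of such rotational-domain processing, not the SWiFT pipeline; the signal names and values are hypothetical.

# Amplitude and phase of the n-per-revolution component of a rotor
# load signal, via least-squares harmonic regression (illustrative).
import numpy as np

def per_rev_harmonic(moment, azimuth_rad, n):
    """Fit moment ~ a*cos(n*psi) + b*sin(n*psi) + mean; return (amplitude, phase)."""
    A = np.column_stack([np.cos(n * azimuth_rad),
                         np.sin(n * azimuth_rad),
                         np.ones_like(azimuth_rad)])
    coef, *_ = np.linalg.lstsq(A, moment, rcond=None)
    a, b, _mean = coef
    return np.hypot(a, b), np.arctan2(b, a)

# Hypothetical usage: psi and m_root would come from the interrogator data.
psi = np.linspace(0.0, 40.0 * np.pi, 4000)                    # 20 revolutions
m_root = 5.0 + 1.2 * np.cos(psi - 0.3) + 0.4 * np.sin(3 * psi)
print(per_rev_harmonic(m_root, psi, 1))                       # ~ (1.2, 0.3)
print(per_rev_harmonic(m_root, psi, 3))                       # ~ (0.4, pi/2)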

More Details

Hypersonic fluid-structure interactions on a slender cone

AIAA Aerospace Sciences Meeting, 2018

Casper, Katya M.; Beresh, Steven J.; Henfling, John F.; Spillers, Russell; Hunter, Patrick; Spitzer, Seth M.

Fluid-structure interactions were studied on a 7° half-angle cone in the Sandia Hypersonic Wind Tunnel at Mach 5 and 8 and in the Purdue Boeing/AFOSR Mach 6 Quiet Tunnel. A thin composite panel was integrated into the cone and the response to boundary-layer disturbances was characterized by accelerometers on the backside of the panel. Under quiet-flow conditions at Mach 6, the cone boundary layer remained laminar. Artificially generated turbulent spots excited a directionally dependent panel response which would last much longer than the spot duration. When the spot generation frequency matched a structural natural frequency of the panel, resonance would occur and responses over 200 times greater than under a laminar boundary layer were obtained. At Mach 5 and 8 under noisy flow conditions, natural transition driven by the wind-tunnel acoustic noise dominated the panel response. An elevated vibrational response was observed during transition at frequencies corresponding to the distribution of turbulent spots in the transitional flow. Once turbulent flow developed, the structural response dropped because the intermittent forcing from the spots no longer drove panel vibration.

More Details

Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

IEEE Transactions on Nuclear Science

Wang, Peng; Sternberg, Andrew L.; Kozub, John A.; Zhang, En X.; Dodds, Nathaniel A.; Jordan, Scott L.; Fleetwood, Daniel M.; Reed, Robert A.; Schrimpf, Ronald D.

Two-photon absorption (TPA) pulsed-laser testing is used to analyze the TPA-induced single-event latchup sensitive-area of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions between 0 and 1 as the laser pulse energy increases. This variability is attributed to a combination of laser pulse-to-pulse variability and variations in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated. Calculation of latchup cross section at lower laser energies, relative to onset, is improved significantly by taking into account the full probability distribution. The transition from low probability of latchup to high probability is more abrupt near the source contacts than for surrounding areas.

More Details

A comparison of methods for assessing power output in non-uniform onshore wind farms

Wind Energy

Staid, Andrea; Verhulst, Claire; Guikema, Seth D.

Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. For given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions. Most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform as well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in two onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. We show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.
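
The Jensen (top-hat) wake model evaluated in this comparison has a simple closed form; a standard statement of it is:

u(x) = u_\infty \left[ 1 - \left( 1 - \sqrt{1 - C_T} \right) \left( \frac{r_0}{r_0 + k x} \right)^{2} \right]

where u_\infty is the freestream wind speed, C_T the turbine thrust coefficient, r_0 the rotor radius, k the empirical wake-decay constant, and x the downstream distance. Its inability to represent terrain-driven, non-uniform inflow is what motivates the statistical alternatives studied here.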

More Details

Silicon qubits

Encyclopedia of Modern Optics

Carroll, M.S.

Silicon is a promising material candidate for qubits due to the combination of worldwide infrastructure in silicon microelectronics fabrication and the capability to drastically reduce decohering noise channels via chemical purification and isotopic enhancement. However, a variety of challenges in fabrication, control, and measurement leaves unclear the best strategy for fully realizing this material’s future potential. In this article, we survey three basic qubit types: those based on substitutional donors, on metal-oxide-semiconductor (MOS) structures, and on Si/SiGe heterostructures. We also discuss the multiple schemes used to define and control Si qubits, which may exploit the manipulation and detection of a single electron charge, the state of a single electron spin, or the collective states of multiple spins. Far from being comprehensive, this article provides a brief orientation to the rapidly evolving field of silicon qubit technology and is intended as an approachable entry point for a researcher new to this field.

More Details

Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

Physical Chemistry Chemical Physics

Mason, H.E.; Uribe, Eva U.; Shusterman, J.A.

The development of multi-dimensional statistical methods has been demonstrated on variable contact time (VCT) 29Si{1H} cross-polarization magic angle spinning (CP/MAS) data sets collected using Carr-Purcell-Meiboom-Gill (CPMG) type acquisition. These methods transform the collected 2D VCT data set into a 3D data set and use tensor-rank decomposition to extract the spectral components that vary as a function of transverse relaxation time (T2) and CP contact time. The result is a data-dense spectral set that can be used to reconstruct CP/MAS spectra at any contact time with a high signal-to-noise ratio and with excellent agreement with 29Si{1H} CP/MAS spectra collected using conventional acquisition. These CPMG data can be collected in a fraction of the time that would be required to collect a conventional VCT data set. We demonstrate the method on samples of functionalized mesoporous silica materials and show that the method can provide valuable surface-specific information about their functional chemistry.
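
Tensor-rank (CP) decomposition of a restacked 3-way data set can be prototyped in a few lines. The alternating-least-squares toy below is a generic sketch of the technique, not the authors' processing code; in practice a dedicated tensor library would be used.

# Toy CP (tensor-rank) decomposition of a 3-way array by alternating
# least squares -- a generic sketch, not the authors' implementation.
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product of two factor matrices.
    return np.einsum('ir,jr->ijr', B, C).reshape(-1, B.shape[1])

def cp_als(X, rank, iters=200, seed=1):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                      # mode-0 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-2 unfolding
    for _ in range(iters):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C   # columns are the rank-1 components

# Example: a synthetic rank-2 tensor is recovered to high accuracy.
rng = np.random.default_rng(0)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (8, 6, 5))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))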

More Details

Hydrogenation properties of lithium and sodium hydride-closo-borate, [B10H10]2- and [B12H12]2-, composites

Physical Chemistry Chemical Physics

Jensen, Steffen R.H.; Paskevicius, Mark; Hansen, Bjarne R.S.; Jakobsen, Anders S.; Moller, Kasper T.; White, James L.; Allendorf, Mark; Stavila, Vitalie; Skibsted, Jorgen; Jensen, Torben R.

The hydrogen absorption properties of metal closo-borate/metal hydride composites, M2B10H10-8MH and M2B12H12-10MH, M = Li or Na, are studied under high hydrogen pressures to understand the formation mechanism of metal borohydrides. The hydrogen storage properties of the composites have been investigated by in situ synchrotron radiation powder X-ray diffraction at p(H2) = 400 bar and by ex situ hydrogen absorption measurements at p(H2) = 526 to 998 bar. The in situ experiments reveal the formation of crystalline intermediates before metal borohydrides (MBH4) are formed. In contrast, the M2B12H12-10MH (M = Li and Na) systems show no formation of the metal borohydride at T = 400 °C and p(H2) = 537 to 970 bar. 11B MAS NMR of the M2B10H10-8MH composites reveals that the molar ratios of LiBH4 or NaBH4 to the remaining B species are 1:0.63 and 1:0.21, respectively. Solution and solid-state 11B NMR spectra reveal new intermediates with a B:H ratio close to 1:1. Our results indicate that the M2B10H10 (M = Li, Na) salts display a higher reactivity towards hydrogen in the presence of metal hydrides compared to the corresponding [B12H12]2- composites, which represents an important step towards understanding the factors that determine the stability and reversibility of high hydrogen capacity metal borohydrides for hydrogen storage.

More Details

Ab initio studies of hydrogen ion insertion into β-, R-, and γ-MnO2 polymorphs and the implications for shallow-cycled rechargeable Zn/MnO2 batteries

Journal of the Electrochemical Society

Vasiliev, Igor; Magar, Birendra A.; Duay, Jonathon; Lambert, Timothy N.; Chalamala, Babu C.

At a low depth of discharge, the performance of rechargeable alkaline Zn/MnO2 batteries is determined by the concomitant processes of hydrogen ion insertion and electro-reduction in the solid phase of γ-MnO2. Ab initio computational methods based on density functional theory (DFT) were applied to study the mechanism of hydrogen ion insertion into the pyrolusite (β), ramsdellite (R), and nsutite (γ) MnO2 polymorphs. It was found that hydrogen ion insertion induced significant distortion in the crystal structures of MnO2 polymorphs. Calculations demonstrated that the hydrogen ions inserted into γ-MnO2 initially occupied the larger 2×1 ramsdellite tunnels. The protonated form of γ-MnO2 was found to be stable over the discharge range during which up to two hydrogen ions were inserted into each 2×1 tunnel. At the same time, the study showed that the insertion of hydrogen ions into the 1×1 pyrolusite tunnels of γ-MnO2 created instability leading to the structural breakdown of γ-MnO2. The results of this study explain the presence of groutite (α-MnOOH) and the absence of manganite (γ-MnOOH) among the reaction products of partially reduced γ-MnO2.

More Details

Oxygen reduction on stainless steel in concentrated chloride media

Journal of the Electrochemical Society

Alexander, Christopher L.; Liu, Chao; Kelly, Robert G.; Carpenter, Jacob; Bryan, Charles

In this work, a rotating disk electrode was used to measure the cathodic kinetics on stainless steel as a function of diffusion layer thickness (6 to 60 μm) and chloride concentration (0.6 to 5.3 M NaCl). It was found that, while the cathodic kinetics followed the Levich equation for large diffusion layer thicknesses, the Levich equation overpredicts the mass-transfer-limited current density for diffusion layer thicknesses less than 20 μm. Also, an unusual transitory response between the activation- and mass-transfer-controlled regions was observed for small diffusion layer thicknesses that was more apparent in lower concentration solutions. The presence and reduction of an oxide film and a transition in the oxygen reduction mechanism were identified as possible reasons for this response. The implications of these results for atmospheric corrosion kinetics under thin electrolyte layers are discussed.
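
The Levich relation used as the baseline here predicts the mass-transfer-limited current density at a rotating disk; in its standard form (with \omega the rotation rate in rad/s),

i_L = 0.620\, n F D^{2/3}\, \nu^{-1/6}\, \omega^{1/2}\, C^{*},

or equivalently i_L = n F D C^{*}/\delta with diffusion layer thickness \delta = 1.61\, D^{1/3} \nu^{1/6} \omega^{-1/2}, where n is the electron number, F the Faraday constant, D the O2 diffusivity, \nu the kinematic viscosity, and C^{*} the bulk O2 concentration. The reported over-prediction below \delta ≈ 20 μm is the central finding of the work.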

More Details

Predicting high-temperature decomposition of lithiated graphite: Part II. Passivation layer evolution and the role of surface area

Journal of the Electrochemical Society

Shurtz, Randy C.; Engerer, Jeffrey D.; Hewson, John C.

The surface area dependence of the decomposition reaction between lithiated graphites and electrolytes for temperatures above 100 °C up to ~200 °C is explored through comparison of model predictions to published calorimetry data. The initial rate of the reaction is found to scale super-linearly with the particle surface area. Initial reaction rates are suggested to scale with edge area, which has also been measured to scale super-linearly with particle area. As in previous modeling studies, this work assumes that electron tunneling through the solid electrolyte interphase (SEI) limits the rate of the reaction between lithium and electrolyte. Comparison of model predictions to calorimetry data indicates that the development of the tunneling barrier is not linear with BET surface area; rather, the tunneling barrier correlates best with the square root of specific surface area. This result suggests that tunneling through the SEI may be controlled by defects with linear characteristics. The effect of activation energy on the tunneling-limited reaction is also investigated. The modified area dependence results in a model that predicts with reasonable accuracy the range of observed heat-release rates in the important temperature range from 100 °C to 200 °C where the transition to thermal runaway typically occurs at the cell level.
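
Schematically (our paraphrase of the scaling argument, not the paper's model equations): a tunneling-limited rate decays exponentially with the barrier thickness \delta,

r \propto A_{\mathrm{edge}}\, e^{-\beta \delta}, \qquad \delta \sim \sqrt{a_{\mathrm{BET}}},

where A_{\mathrm{edge}} is the edge-weighted reactive area and \beta a tunneling attenuation constant; the finding here is that the growth of \delta correlates with the square root of specific surface area rather than with the area itself.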

More Details

Understanding the dynamics of primary Zn-MnO2 alkaline battery gassing with operando visualization and pressure cells

Journal of the Electrochemical Society

Faegh, Ehsan; Omasta, Travis; Hull, Matthew; Ferrin, Sean; Shrestha, Sujan; Lechman, Jeremy B.; Bolintineanu, Dan S.; Zuraw, Michael; Mustain, William E.

The leading cause of safety vent rupture in alkaline batteries is the intrinsic instability of Zn in the highly alkaline reacting environment. Zn and aqueous KOH react in a parasitic process to generate hydrogen gas, which can rupture the seal and vent the hydrogen along with small amounts of electrolyte, and thus damage consumer devices. Abusive conditions, particularly deep discharge, are known to accelerate this “gassing” phenomenon. In order to understand the fundamental drivers and mechanisms of this gassing behavior, results from multiphysics modeling, ex-situ microscopy, and operando measurements of cell potential, pressure, and visualization have been combined. Operando measurements were enabled by the development of a new research platform that provides a cross-sectional view of a cylindrical Zn-MnO2 primary alkaline battery throughout its discharge and recovery. A second version of this cell can actively measure the in-cell pressure during the discharge. It is shown that steep concentration gradients emerge during the cell discharge through a redox electrolyte mechanism, leading to the formation of high-surface-area Zn deposits that experience rapid corrosion when the cell recovers to its open-circuit voltage. Such corrosion is paired with the release of hydrogen and high cell pressure, eventually leading to cell rupture.

More Details

Study of low temperature chlorine atom initiated oxidation of methyl and ethyl butyrate using synchrotron photoionization TOF-mass spectrometry

Physical Chemistry Chemical Physics

Osborn, David L.; Czekner, Joseph; Taatjes, Craig A.; Meloni, Giovanni

The initial oxidation products of methyl butyrate (MB) and ethyl butyrate (EB) are studied using a time- and energy-resolved photoionization mass spectrometer. Reactions are initiated with Cl radicals in an excess of oxygen at a temperature of 550 K and a pressure of 6 Torr. Ethyl crotonate is the sole isomeric product that is observed from concerted HO2-elimination from initial alkylperoxy radicals formed in the oxidation of EB. Analysis of the potential energy surface of each possible alkylperoxy radical shows that the CH3CH(OO)CH2C(O)OCH2CH3 (RγO2) and CH3CH2CH(OO)C(O)OCH2CH3 (RβO2) radicals are the isomers that could undergo this concerted HO2-elimination. Two lower-mass products (formaldehyde and acetaldehyde) are observed in both methyl and ethyl butyrate reactions. Secondary reactions of alkylperoxy radicals with HO2 radicals can yield the aforementioned products and smaller radicals. These pathways are the likely explanation for the formation of formaldehyde and acetaldehyde.

More Details

On-sun testing of a high temperature bladed solar receiver and transient efficiency evaluation using AIR

ASME 2018 12th International Conference on Energy Sustainability, ES 2018, collocated with the ASME 2018 Power Conference and the ASME 2018 Nuclear Forum

Ortega, Jesus; Khivsara, Sagar D.; Christian, Josh; Dutta, Pradip; Ho, Clifford K.

Prior research at Sandia National Laboratories showed the potential advantages of using light-trapping features that are not currently used in direct tubular receivers. A horizontal bladed receiver arrangement showed the best potential for increasing the effective solar absorptance by increasing the ratio of effective surface area to the aperture footprint. Previous test results and models of the bladed receiver showed a receiver efficiency increase over a flat receiver panel of ~5-7% over a range of average irradiances, while showing that the receiver tubes can withstand temperatures > 800 °C with no issues. The bladed receiver is being tested at peak heat fluxes ranging from 75 to 150 kW/m2 under transient conditions, using air as the heat transfer fluid at an inlet pressure of ~250 kPa (~36 psi) with a regulating flow loop. The flow loop was designed and tested to maintain a steady mass flow rate for ~15 minutes using pressurized bottles as the gas supply. Due to the limited flow time available, a novel transient methodology to evaluate the thermal efficiencies is presented in this work. Computational fluid dynamics (CFD) models are used to predict the temperature distribution and the resulting transient receiver efficiencies. The CFD simulation results using air as the heat transfer fluid have been validated experimentally at the National Solar Thermal Test Facility at Sandia National Laboratories.
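
The transient methodology itself is specific to the paper, but the quantity being evaluated is the standard receiver thermal efficiency, i.e., the enthalpy gain of the working fluid over the incident solar power (a textbook definition, stated here for orientation):

\eta_{\mathrm{th}}(t) = \frac{\dot{m}(t)\, \bar{c}_p\, [\, T_{\mathrm{out}}(t) - T_{\mathrm{in}}(t)\,]}{Q_{\mathrm{inc}}}

with \dot{m} the air mass flow rate, \bar{c}_p the mean specific heat, and Q_{\mathrm{inc}} the incident power on the aperture; the challenge addressed here is evaluating this quantity meaningfully within the ~15-minute flow window before steady state is reached.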

More Details

Supercomputer in a Laptop: Distributed Application and Runtime Development via Architecture Simulation

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Knight, Samuel; Kenny, Joseph; Wilke, Jeremiah

Architecture simulation can aid in predicting and understanding application performance, particularly for proposed hardware or large system designs that do not exist. In network design studies for high-performance computing, most simulators focus on the dominant message passing (MPI) model. Currently, many simulators build and maintain their own simulator-specific implementations of MPI. This approach has several drawbacks. Rather than reusing an existing MPI library, simulator developers must implement all semantics, collectives, and protocols. Additionally, alternative runtimes like GASNet cannot be simulated without again building a simulator-specific version. It would be far more sustainable and flexible to maintain lower-level layers like uGNI or IB-verbs and reuse the production runtime code. Directly building and running production communication runtimes inside a simulator poses technical challenges, however. We discuss these challenges and show how they are overcome via the macroscale components for the Structural Simulation Toolkit (SST), leveraging a basic source-to-source tool to automatically adapt production code for simulation. SST is able to encapsulate and virtualize thousands of MPI ranks in a single simulator process, providing a “supercomputer in a laptop” environment. We demonstrate the approach for the production GASNet runtime over uGNI running inside SST. We then discuss the capabilities enabled, including investigating performance with tunable delays, deterministic debugging of race conditions, and distributed debugging with serial debuggers.

More Details

Development of a downhole piston motor power section for improved directional drilling: Part I - Design, Modeling & Analysis

Transactions - Geothermal Resources Council

Raymond, David W.

Directional drilling can be used to enable multi-lateral completions from a single well pad to improve well productivity and decrease environmental impact. Downhole rotation is typically developed with a motor in the Bottom Hole Assembly (BHA) that develops the drilling power necessary to rotate the bit apart from the rotation developed by the surface rig. Historically, wellbore deviation has been introduced by a “bent-sub” that introduces a small angular deviation to allow the bit to drill off-axis, with orientation of the BHA controlled via surface rotation. The geothermal drilling industry has not realized the benefit of Rotary Steerable Systems, and struggles with conventional downhole rotation systems that use bent-subs for directional control due to shortcomings with downhole motors. Commercially available Positive Displacement Motors are limited to approximately 350 °F (177 °C) and introduce lateral vibration to the bottom hole assembly, contributing to hardware failures and compromising directional drilling objectives. Mud turbines operate at higher temperatures but do not have the low-speed, high-torque performance envelope for use with conventional geothermal drill bits. Development of a fit-for-purpose downhole motor would enable geothermal directional drilling. Sandia National Laboratories is developing technology for a downhole piston motor to enable directional drilling in high-temperature, high-strength rock. Application of conventional hydraulic piston motor power cycles using drilling fluids is detailed. Work is described regarding conceiving downhole piston motor power sections; modeling and analysis of potential solutions; and development and laboratory testing of prototype hardware. These developments will lead to more reliable access to geothermal resources and allow preferential wellbore trajectories, resulting in improved resource recovery, decreased environmental impact, and enhanced well construction economics.

More Details

Uncertainty Assessment of Octane Index Framework for Stoichiometric Knock Limits of Co-Optima Gasoline Fuel Blends

SAE International Journal of Fuels and Lubricants

Vuilleumier, David; Huan, Xun H.; Casey, T.; Sjoberg, Carl M.

This study evaluates the applicability of the Octane Index (OI) framework under conventional spark ignition (SI) and “beyond Research Octane Number (RON)” conditions using nine fuels operated under stoichiometric, knock-limited conditions in a direct injection spark ignition (DISI) engine, supported by Monte Carlo-type simulations which interrogate the effects of measurement uncertainty. Of the nine tested fuels, three are “Tier III” fuel blends, meaning that they are blends of molecules which have passed two levels of screening and have been evaluated to be ready for tests in research engines. These molecules have been blended into a four-component gasoline surrogate at varying volume fractions in order to achieve a RON rating of 98. The molecules under consideration are isobutanol, 2-butanol, and diisobutylene (which is a mixture of two isomers of octene). The remaining six fuels were research-grade gasolines of varying formulations. The DISI research engine was used to measure knock limits at heated and unheated intake temperature conditions, as well as throttled and boosted intake pressures, all at an engine speed of 1400 rpm. The tested knock-limited operating conditions conceptually exist both between the Motor Octane Number (MON) and RON conditions, as well as at “beyond RON” conditions (conditions which are conceptually at lower temperatures, higher pressures, or longer residence times than the RON condition). In addition to directly assessing the performance of the Tier III blends relative to other gasolines, the OI framework was evaluated with consideration of experimental uncertainty in the knock-limited combustion phasing (KL-CA50) measurements, as well as RON and MON test uncertainties. The OI was found to hold to first order, explaining more than 80% of the knock-limited behavior, although the remaining variation in fuel performance from OI behavior was found to be beyond the likely experimental uncertainties. This indicates that specific fuel components have effects on knock that are not captured by RON and MON ratings, complicating the assessment of a given fuel by RON and MON ratings alone.
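
For orientation, the OI framework referenced here is conventionally written (after Kalghatgi) as

\mathrm{OI} = \mathrm{RON} - K\,S, \qquad S = \mathrm{RON} - \mathrm{MON},

where S is the fuel sensitivity and K is a weighting set by the engine operating condition: K = 0 recovers the RON condition, K = 1 the MON condition, and K < 0 corresponds to the “beyond RON” conditions probed in this study.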

More Details

Direct kinetics study of CH2OO + methyl vinyl ketone and CH2OO + methacrolein reactions and an upper limit determination for CH2OO + CO reaction

Physical Chemistry Chemical Physics

Eskola, Arkke J.; Dontgen, Malte; Rotavera, Brandon; Caravan, Rebecca L.; Welz, Oliver; Savee, John D.; Osborn, David L.; Shallcross, Dudley E.; Percival, Carl J.; Taatjes, Craig A.

Methyl vinyl ketone (MVK) and methacrolein (MACR) are important intermediate products in the atmospheric degradation of volatile organic compounds, especially of isoprene. This work investigates the reactions of the smallest Criegee intermediate, CH2OO, with its co-products from isoprene ozonolysis, MVK and MACR, using multiplexed photoionization mass spectrometry (MPIMS), with either tunable synchrotron radiation from the Advanced Light Source or Lyman-α (10.2 eV) radiation for photoionization. CH2OO was produced via pulsed laser photolysis of CH2I2 in the presence of excess O2. Time-resolved measurements of reactant disappearance and of product formation were performed to monitor reaction progress; first-order rate coefficients were obtained from exponential fits to the CH2OO decays. The bimolecular reaction rate coefficients at 300 K and 4 Torr are k(CH2OO + MVK) = (5.0 ± 0.4) × 10-13 cm3 s-1 and k(CH2OO + MACR) = (4.4 ± 1.0) × 10-13 cm3 s-1, where the stated ±2σ uncertainties are statistical uncertainties. Adduct formation is observed for both reactions and is attributed to the formation of secondary ozonides (1,2,4-trioxolanes), supported by master equation calculations of the kinetics and the agreement between measured and calculated adiabatic ionization energies. Kinetics measurements were also performed for a possible bimolecular CH2OO + CO reaction and for the reaction of CH2OO with CF3CHCH2 at 300 K and 4 Torr. For CH2OO + CO, no reaction is observed and an upper limit is determined: k(CH2OO + CO) < 2 × 10-16 cm3 s-1. For CH2OO + CF3CHCH2, an upper limit of k(CH2OO + CF3CHCH2) < 2 × 10-14 cm3 s-1 is obtained.
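
The rate-coefficient extraction described above (exponential fits under pseudo-first-order conditions, then k = k'/[co-reactant] with the co-reactant in excess) can be sketched as follows; the numbers are synthetic and only illustrate the arithmetic, not the paper's data.

# Pseudo-first-order kinetics analysis sketch (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, s0, k_prime, baseline):
    # Exponential decay of the CH2OO signal toward a constant baseline.
    return s0 * np.exp(-k_prime * t) + baseline

t = np.linspace(0.0, 5e-3, 200)                     # time, s
signal = decay(t, 1.0, 900.0, 0.02)                 # synthetic CH2OO trace
signal += np.random.default_rng(2).normal(0.0, 0.01, t.size)

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 500.0, 0.0))
n_mvk = 1.8e15                                      # molecule cm^-3 (hypothetical excess MVK)
print("k' =", popt[1], "s-1;  k =", popt[1] / n_mvk, "cm3 s-1")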

More Details

Machine learning models of plastic flow based on representation theory

CMES - Computer Modeling in Engineering and Sciences

Templeton, J.A.; Sanders, Clay; Ostien, Jakob T.

We use machine learning (ML) to infer stress and plastic flow rules using data from representative polycrystalline simulations. In particular, we use so-called deep (multilayer) neural networks (NN) to represent the two response functions. The ML process does not choose appropriate inputs or outputs; rather, it is trained on selected inputs and outputs. Likewise, its discrimination of features is crucially connected to the chosen input-output map. Hence, we draw upon classical constitutive modeling to select inputs and enforce well-accepted symmetries and other properties. In the context of the results of numerous simulations, we discuss the design, stability, and accuracy of constitutive NNs trained on typical experimental data. With these developments, we enable rapid model building in real-time with experiments, and guide data collection and feature discovery.
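
One common way to enforce the symmetries mentioned above is to feed the network isotropic invariants of the strain tensor rather than raw components, so frame-indifference holds by construction. The sketch below illustrates that idea only; the data, response, and architecture are hypothetical stand-ins, not the paper's models.

# Invariant-input neural network sketch (synthetic data, stand-in response).
import numpy as np
from sklearn.neural_network import MLPRegressor

def invariants(E):
    """Isotropic invariants of a symmetric 3x3 tensor: tr E, tr E^2, tr E^3."""
    return np.array([np.trace(E), np.trace(E @ E), np.trace(E @ E @ E)])

rng = np.random.default_rng(3)
X, y = [], []
for _ in range(2000):
    A = 1e-2 * rng.standard_normal((3, 3))
    E = 0.5 * (A + A.T)                  # symmetric "strain" sample
    X.append(invariants(E))
    y.append(np.linalg.norm(E))          # hypothetical scalar response
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)
# Any rotation of E leaves the prediction unchanged, since the invariants do.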

More Details

Inter-area oscillation damping in large-scale power systems using decentralized control

ASME 2018 Dynamic Systems and Control Conference, DSCC 2018

Biroon, Roghieh A.; Pisu, Pierluigi; Schoenwald, David A.

Inter-area oscillation is one of the main concerns in power system small-signal stability. Because it involves wide areas of the power system, identifying the causes of these oscillations and damping them is challenging. Undamped inter-area oscillations may cause severe problems in power systems, including large-scale blackouts. Designing a proper controller for a power system is also a challenging problem due to the complexity of the system. Moreover, for a large-scale system it is impractical to collect all system information in one location to design a centralized controller. A decentralized controller that uses only local information is therefore more desirable for minimizing inter-area oscillations in large-scale systems. In this paper, we consider a large-scale power system consisting of three areas. After decomposing the system into three subsystems, each subsystem is modeled with a lower-order system. Finally, a decentralized controller is designed for each subsystem to maintain the large-scale system frequency at the desired level even in the presence of disturbances.

More Details

3D acoustic and elastic modeling with Marmousi2

Society of Exploration Geophysicists - SEG International Exposition and 76th Annual Meeting 2006, SEG 2006

Symons, Neill P.; Aldridge, David F.; Haney, Matthew M.

Three-dimensional (3D) seismic wave propagation is simulated in the newly developed Marmousi2 elastic model, using both acoustic and elastic finite-difference (FD) algorithms. Although acoustic and elastic ocean-bottom particle velocity seismograms display distinct differences, only subtle variations are discernible in pressure seismograms recorded in the marine water layer.
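
As a point of reference for the FD algorithms mentioned (a generic second-order example, not necessarily the exact stencil used), the 1D acoustic update reads

p_i^{n+1} = 2 p_i^{n} - p_i^{n-1} + \frac{c^2 \Delta t^2}{\Delta x^2} \left( p_{i+1}^{n} - 2 p_i^{n} + p_{i-1}^{n} \right),

subject to the CFL stability constraint c\,\Delta t/\Delta x \le 1 in 1D (tighter in 3D); the 3D scheme adds the corresponding y and z second differences, and the elastic algorithm evolves the coupled velocity-stress system instead of a single pressure field.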

More Details

NNSA Public Affairs Program Plan

Deshler, Tim A.

The plan is based on various implementation plans and strategies that define ongoing activities. If significant updates, such as changes or additions to goals and messaging occur, Sandia will submit an updated plan to the Contracting Officer for approval.

More Details

Analysis and Experiment of Management Sciences Inc.'s Arc Fault Protection Connectors

Flicker, Jack D.; Armijo, Kenneth M.

The photovoltaics industry is in dire need of a cheap, robust, reliable arc fault detector that is sensitive enough to detect arc faults before they can develop into a fire, yet sufficiently immune to noise to limit unwanted tripping. Management Sciences has developed an arc fault detector that is housed in a standard PV connector and that will disconnect the PV array when it detects the surge current from an arc fault. Sandia National Laboratories, an industry leader in the detection, characterization, and mitigation of arc faults in PV arrays, will work with Management Sciences to characterize, demonstrate, and develop their arc fault detection/connector technology.

More Details

Environmental Restoration Operations Consolidated Quarterly Report (Jul-Sep 2017)

Cochran, John R.

This Sandia National Laboratories, New Mexico Environmental Restoration Operations (ER) Consolidated Quarterly Report (ER Quarterly Report) fulfills all quarterly reporting requirements set forth in the Compliance Order on Consent. The 12 sites in the corrective action process are listed in Table I-1. This ER Quarterly Report presents activities and data.

More Details

Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Schoeniger, Joseph S.

Bacterial pathogens have numerous processes by which their genomic DNA is acquired or rearranged as part of their normal physiology (e.g., exchange of plasmids through conjugation) or by bacteriophage that parasitize bacteria and often insert into the bacterial genome as prophages. These processes occur with relatively high probability/frequency, and may lead to sudden changes in virulence, as new genetic material is added to the chromosome, or structural changes in the chromosome affect gene expression. We set out to devise methods to measure the rates of these processes in bacteria using next generation DNA sequencing. Using very deep sequencing on genomes we had assembled, using library preparation methods and bioinformatics tools designed to help find mobile elements and signs of their insertion, we were able to find numerous examples of attempted novel genome arrangements, revealing data that can be used to calculate rates of different mechanisms of genome change.

More Details

Discharge Permit-1845 Quarterly Status Report (Jul-Sep 2017)

Li, Jun

Trichloroethene (TCE) and nitrate have been identified as constituents of concern in groundwater at the Sandia National Laboratories, New Mexico (SNL/NM) Technical Area (TA)-V Groundwater (TAVG) Area of Concern (AOC) based on detections above the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) in samples collected from monitoring wells. The EPA MCLs and the State of New Mexico drinking water standards for TCE and nitrate are 5 micrograms per liter and 10 milligrams per liter (as nitrogen), respectively. A phased Treatability Study/Interim Measure (TS/IM) of in-situ bioremediation (ISB) will be implemented to evaluate the effectiveness of ISB as a potential technology to treat the groundwater contamination at TAVG AOC (New Mexico Environment Department [NMED] April 2016). The NMED Hazardous Waste Bureau (HWB) approved the Revised Treatability Study Work Plan (TSWP) (SNL/NM March 2016) in May 2016 (NMED May 2016). The SNL/NM Environmental Restoration Operations (ER) personnel are responsible for implementing the TS/IM of ISB at TAVG AOC in accordance with the Revised TSWP.

More Details

Sandia National Laboratories

Dirk, Shawn M.

Sandia National Laboratories is a multimission laboratory managed by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration. Sandia has a long history of pioneering AM technology development. In the mid-1990s, Sandia developed laser-engineered net shaping (LENS), one of the first direct metal AM technologies, which was commercialized by Optomec. Robocast, an extrusion-based direct-write process for 3D ceramic parts, is another technology developed at Sandia. It was commercialized by Robocast Enterprises, LLC. As of January 2019, Sandia was conducting AM R&D projects valued at about $25 million, with an emphasis on: 1) analysis-driven design, 2) materials reliability, and 3) multi-material AM.

More Details

Hill AFB Test Plan February-March 2018

Reu, P.L.

This document outlines the test plans for the Hill AFB Mk 84 aging studies. The goal of the test series is to measure early case expansion velocities, sample the fragment field at various locations, and measure the overall shockwave and large fragment trajectories. This will be accomplished with three imaging systems as outlined in the sections below.

More Details

Public Views about Storage and Disposal Options for Spent Nuclear Fuel: Energy and Environment Survey, 2017

Author, No

The US currently has about 80,000 metric tons of uranium in 279,000 used fuel assemblies from commercial spent nuclear fuel (SNF), most of which is stored “on-site” at or near power plants where it was produced. On-site storage facilities were not designed to provide a permanent solution for the disposal of spent nuclear fuel, and building a permanent disposal facility will likely take decades. Therefore, it is important to consider how the US public views options for constructing one or more storage facilities for safely consolidating and storing SNF in the interim. The 2017 iteration of the Energy and Environment survey (EE17) by the Center for Energy, Security, & Society (CES&S) included a battery of questions that measure public views about SNF storage and disposal options. The questions gauge general support for continued on-site storage, interim storage, and permanent disposal. EE17 also measured support for several of the specific sites under consideration, including the two private initiatives for interim storage of SNF in New Mexico and Texas. In addition, EE17 respondents provide insight into the factors likely to affect broader public support for these initiatives as the siting process unfolds, including public views about the importance of support for a prospective facility by host communities and host state residents.

More Details

High Performance Reduction/Oxidation Metal Oxides for Thermochemical Energy Storage (PROMOTES) /CSP

Ambrosini, Andrea A.

Thermochemical energy storage (TCES) offers the potential for greatly increased storage density relative to sensible-only energy storage. Moreover, via TCES, heat may be stored indefinitely in the form of chemical bonds, accessed upon demand, and converted to heat at temperatures significantly higher than current solar thermal electricity production technology allows, making it well suited to more efficient high-temperature power cycles. However, this potential has yet to be realized, as no current TCES system satisfies all requirements. This project involves the design, development, and demonstration of a robust and innovative storage cycle based on redox-active metal oxides that are Mixed Ionic-Electronic Conductors (MIECs). We will develop, characterize, and demonstrate a first-of-its-kind 100 kWth particle-based TCES system for direct integration with combined-cycle Air Brayton, based on the endothermic reduction and exothermic reoxidation of MIECs. Air Brayton cycles require temperatures in the range of 1000-1230 °C for smaller axial flow turbines and are therefore inaccessible to all but the most robust storage solutions, such as metal oxides. The choice of MIECs, with exceptional tunability and stability over the specified operating conditions, allows us to optimally target this high-impact cycle and to introduce the innovation of directly driving the turbine with the reacting/heat-recovery fluid. The potential for high-temperature thermal storage has direct bearing on next-gen CSP and is an appropriate investment for SETO.

More Details

Interplume Velocity and Extinction Imaging Measurements To Understand Spray Collapse When Varying Injection Duration Or Number Of Injections

Atomization and Sprays

Sphicas, Panos; Pickett, Lyle M.; Skeen, Scott A.; Frank, Jonathan H.; Parrish, S.

The collapse or merging of individual plumes of direct-injection gasoline injectors is of fundamental importance to engine performance because of its impact on fuel-air mixing. However, the mechanisms of spray collapse are not fully understood. The purpose of this work is to study the effects of injection duration and multiple injections on the interaction and/or collapse of multi-plume GDI sprays. High-speed (100 kHz) Particle Image Velocimetry (PIV) is applied along a plane between plumes to observe the full temporal evolution of plume-interaction and potential collapse, resolved for individual injection events. Supporting information along a line of sight is obtained using Diffused Back Illumination (DBI). Experiments are performed under simulated engine conditions using a symmetric 8-hole injector in a high-temperature, high-pressure vessel at the "Spray G" operating conditions of the Engine Combustion Network (ECN). Longer injection duration is found to promote plume collapse, while staging fuel delivery with multiple, shorter injections is resistant to plume collapse.

More Details

Toward Uncertainty Quantification for Supervised Classification

Darling, Michael C.; Stracuzzi, David J.

Our goal is to develop a general theoretical basis for quantifying uncertainty in supervised machine learning models. Current machine learning accuracy-based validation metrics indicate how well a classifier performs on a given data set as a whole. However, these metrics do not tell us a model's efficacy in predicting particular samples. We quantify uncertainty by constructing probability distributions of the predictions made by an ensemble of classifiers. This report details our initial investigations into uncertainty quantification for supervised machine learning. We apply an uncertainty analysis to the problem of malicious website detection. Machine learning models can be trained to find suspicious characteristics in the text of a website's Uniform Resource Locator (URL). However, given the vast numbers of URLs and the ever-changing tactics of malicious actors, it will always be possible to find sets of websites that are outliers with respect to a model's hypothesis. Therefore, we seek to understand a model's per-sample reliability when classifying URL data. This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
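
The ensemble construction described above can be prototyped in a few lines: train many classifiers on bootstrap resamples, then treat the spread of their per-sample predicted probabilities as the uncertainty. Everything below is a generic sketch on synthetic data (real inputs would be URL-derived features), not the report's code.

# Per-sample uncertainty from an ensemble of bootstrap-trained classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

probs = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample
    clf = DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx])
    probs.append(clf.predict_proba(X)[:, 1])
probs = np.array(probs)                              # (n_models, n_samples)

# Wide spread across the ensemble flags samples the model is unreliable on.
uncertainty = probs.std(axis=0)
print("most uncertain sample index:", int(np.argmax(uncertainty)))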

More Details

Sandia Cable Tester: Reference Manual

Kostka, Timothy D.

The Sandia Cable Tester is an automated cable testing solution capable of quickly testing connectivity and isolation of arbitrary cables. This manual describes how to operate the unit including system setup and how to read out results. This manual also goes into detail on the design and theory behind the system.

More Details

Radiological Dispersal Devices

Bosey, Lynita J.

The purpose of this paper is to review what a “dirty bomb” is and what the challenges are to actually developing and utilizing one. Additionally, it reviews whether there are other radiological options for a non-state actor desiring to utilize them as a WMD. Integrated into this study and analysis is a determination of whether a non-state actor has the potential for better access to, and the ability to utilize, certain radiological WMDs.

More Details

Modeling Heat Transfer and Pressurization of Polymeric Methylene Diisocyanate (PMDI) Polyurethane Foam in a Sealed Container

Scott, Sarah N.

Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. It can be advantageous to surround objects of interest, such as electronics, with foams in a hermetically sealed container to protect the electronics from hostile environments, such as a crash that produces a fire. However, in fire environments, gas pressure from thermal decomposition of foams can cause mechanical failure of the sealed system. In this work, a detailed study of thermally decomposing polymeric methylene diisocyanate (PMDI)-polyether-polyol based polyurethane foam in a sealed container is presented. Both experimental and computational work is discussed. Three models of increasing physics fidelity are presented: No Flow, Porous Media, and Porous Media with VLE. Each model is described in detail and compared to experiment, and uncertainty quantification is performed. While the Porous Media with VLE model has the best agreement with experiment, it also requires the most computational resources.

More Details

Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures

Deveci, Mehmet; Trott, Christian R.; Rajamanickam, Sivasankaran

Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
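
To make the "two-phase" structure concrete: a symbolic pass sizes each output row so storage can be allocated once and reused, and a numeric pass then fills the values through an accumulator. The toy CSR implementation below illustrates the phases only; it is not kkSpGEMM, and it uses a dense accumulator for simplicity.

# Toy two-phase CSR SpGEMM (C = A @ B), illustrative only.
import numpy as np

def spgemm(A_ptr, A_idx, A_val, B_ptr, B_idx, B_val, n_cols):
    n_rows = len(A_ptr) - 1
    # --- symbolic phase: count nonzeros per row of C
    C_ptr = [0]
    for i in range(n_rows):
        mark = set()
        for a in range(A_ptr[i], A_ptr[i + 1]):
            k = A_idx[a]
            mark.update(B_idx[B_ptr[k]:B_ptr[k + 1]])
        C_ptr.append(C_ptr[-1] + len(mark))
    # --- numeric phase: accumulate values into preallocated storage
    C_idx = np.empty(C_ptr[-1], dtype=int)
    C_val = np.empty(C_ptr[-1])
    acc = np.zeros(n_cols)                  # dense accumulator, reused per row
    for i in range(n_rows):
        cols = set()
        for a in range(A_ptr[i], A_ptr[i + 1]):
            k, v = A_idx[a], A_val[a]
            for b in range(B_ptr[k], B_ptr[k + 1]):
                acc[B_idx[b]] += v * B_val[b]
                cols.add(B_idx[b])
        for out, j in zip(range(C_ptr[i], C_ptr[i + 1]), sorted(cols)):
            C_idx[out], C_val[out] = j, acc[j]
            acc[j] = 0.0                    # reset only the touched entries
    return C_ptr, C_idx, C_val

# Example: A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]] -> C = [[0, 3], [8, 0]].
print(spgemm([0, 1, 2], [0, 1], [1.0, 2.0],
             [0, 1, 2], [1, 0], [3.0, 4.0], n_cols=2))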

More Details
Results 31801–32000 of 99,299