Meshless Remap of Native Fields for Earth System Models via Generalized Moving Least Squares
Abstract not provided.
IEEE Transactions on Electron Devices
Vertical gallium nitride (GaN) p-n diodes have garnered significant interest for use in power electronics where high-voltage blocking and high-power efficiency are of concern. In this article, we detail the growth and fabrication methods used to develop a large-area (1 mm²) vertical GaN p-n diode capable of a 6.0-kV breakdown. We also demonstrate a large-area diode with a forward pulsed current of 3.5 A, an 8.3-mΩ·cm² differential specific ON-resistance, and a 5.3-kV reverse breakdown. In addition, we report on a smaller area diode (0.063 mm²) that is capable of 6.4-kV breakdown with a differential specific ON-resistance of 10.2 mΩ·cm², when accounting for current spreading through the drift region at a 45° angle. Finally, the demonstration of avalanche breakdown is shown for a 0.063-mm² diode with a room temperature breakdown of 5.6 kV. These results were achieved via epitaxial growth of a 50-μm drift region with a very low carrier concentration of < 1×10¹⁵ cm⁻³ and a carefully designed four-zone junction termination extension.
Abstract not provided.
Journal of Sound and Vibration
The concept of a nonlocal elastic metasurface has been recently proposed and experimentally demonstrated in Zhu et al. (2020). When implemented in the form of a total-internal-reflection (TIR) interface, the metasurface can act as an elastic wave barrier that is impenetrable to deep subwavelength waves over an exceptionally wide frequency band. The underlying physical mechanism capable of delivering this broadband subwavelength performance relies on an intentionally nonlocal design that leverages long-range connections between the units forming the fundamental supercell. This paper explores the design and application of a nonlocal TIR metasurface to achieve broadband passive vibration isolation in a structural assembly made of multiple dissimilar elastic waveguides. The specific structural system comprises shell, plate, and beam waveguides, and can be seen as a prototypical structure emulating mechanical assemblies of practical interest for many engineering applications. The study also reports the results of an experimental investigation that confirms the significant vibration isolation capabilities afforded by the embedded nonlocal TIR metasurface. These results are particularly remarkable because they show that the performance of the nonlocal metasurface is preserved when applied to a complex structural assembly and under non-ideal incidence conditions of the incoming wave, hence significantly extending the validity of the results presented in Zhu et al. (2020). Results also confirm that, under proper conditions, the original concept of a planar metasurface can be morphed into a curved interface while still preserving full wave control capabilities.
Journal of Sound and Vibration
We report a Bayesian framework for concurrent selection of physics-based models and (modeling) error models. We investigate the use of colored noise to capture the mismatch between the predictions of calibrated models and observational data that cannot be explained by measurement error alone within the context of Bayesian estimation for stochastic ordinary differential equations. Proposed models are characterized by the average data-fit, a measure of how well a model fits the measurements, and the model complexity measured using the Kullback–Leibler divergence. The use of a more complex error model increases the average data-fit but also increases the complexity of the combined model, possibly over-fitting the data. Bayesian model selection is used to find the optimal physical model as well as the optimal error model. The optimal model is defined using the evidence, in which the average data-fit is balanced against the complexity of the model. The effect of a colored noise process is illustrated using a nonlinear aeroelastic oscillator representing a rigid NACA0012 airfoil undergoing limit cycle oscillations due to complex fluid–structure interactions. Several quasi-steady and unsteady aerodynamic models are proposed with colored noise or white noise for the model error. The use of colored noise improves the predictive capabilities of simpler models.
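As a schematic statement of this balance (notation generic, not taken from the paper), the log-evidence of a model M decomposes into an average data-fit term minus a Kullback–Leibler complexity term:

\[
\ln p(\mathcal{D}\mid M)
= \underbrace{\int \ln p(\mathcal{D}\mid\theta, M)\, p(\theta\mid\mathcal{D}, M)\, \mathrm{d}\theta}_{\text{average data-fit}}
\;-\;
\underbrace{\mathrm{KL}\big[\, p(\theta\mid\mathcal{D}, M)\,\big\|\, p(\theta\mid M)\,\big]}_{\text{model complexity}} ,
\]

so adding a more flexible (e.g., colored-noise) error model can raise the data-fit term while also raising the complexity penalty; the selected model is the one with the largest evidence.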
Sierra/SolidMechanics (Sierra/SM) is a three-dimensional solid mechanics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. This document provides information about the functionality in Sierra/SM and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. Sierra/SM provides both explicit transient dynamics and implicit quasistatics and dynamics capabilities. Both the explicit and implicit modules are highly scalable in a parallel computing environment. In the past, the explicit and implicit capabilities were provided by two separate codes, known as Presto and Adagio, respectively. These capabilities have been consolidated into a single code. The executable is named Adagio, but it provides the full suite of solid mechanics capabilities, for both implicit and explicit analyses. The Presto executable has been disabled as a consequence of this consolidation.
SAE Technical Papers
To comply with increasingly stringent pollutant emissions regulations, diesel engine operation in a catalyst-heating mode is critical to achieve rapid light-off of exhaust aftertreatment catalysts during the first minutes of cold starting. Current approaches to catalyst-heating operation typically involve one or more late post injections to retard combustion phasing and increase exhaust temperatures. The ability to retard post injection timing(s) while maintaining acceptable pollutant emissions levels is pivotal for improved catalyst-heating calibrations. Higher fuel cetane number has been reported to enable later post injections with increased exhaust heat and decreased pollutant emissions, but the mechanism is not well understood. The purpose of this experimental and numerical simulation study is to provide further insight into the ways in which fuel cetane number affects combustion and pollutant formation in a medium-duty diesel engine. Three full boiling-range diesel fuels with cetane numbers of approximately 45, 50, and 55 are employed in this study with a well-controlled set of calibrations employing a five-injection strategy. The two post injections are block-shifted to increasingly retarded timings, and the effects on exhaust heat and pollutant emissions are quantified for each fuel. For a given injection strategy calibration, increasing cetane number enables increased exhaust temperature and decreased hydrocarbon and carbon monoxide emissions for a fixed load. The increase in exhaust temperature is attributed to an increased fueling requirement to compensate for additional wall heat losses caused by earlier, more robust pilot combustion with the more reactive fuels. Formaldehyde is predicted to form in the fuel-lean periphery of the first pilot injection spray and can persist until exhaust valve opening in the absence of direct interactions with subsequent injections. Unreacted fuel-air mixture in the fuel-rich interior of the first-pilot spray is likely too cool for any significant reactions, and can persist until exhaust valve opening in the absence of turbulence/chemistry interactions and/or direct heating through interactions with subsequent injections.
SAE Technical Papers
Injector performance in gasoline Direct-Injection Spark-Ignition (DISI) engines is a key focus in the automotive industry as the vehicle parc transitions from Port Fuel Injected (PFI) to DISI engine technology. DISI injector deposits, which may impact the fuel delivery process in the engine, sometimes accumulate over longer time periods and greater vehicle mileages than traditional combustion chamber deposits (CCD). These higher mileages and longer timeframes make the evaluation of these deposits in a laboratory setting more challenging due to the extended test durations necessary to achieve representative in-use levels of fouling. The need to generate injector tip deposits for research purposes raises the questions: can an artificial fouling agent be used to speed deposit accumulation, and does this result in deposits similar to those formed naturally by market fuels? In this study, a collection of DISI injectors with different types of conditioning, ranging from controlled engine-stand tests with market or profouling fuels, to vehicle tests run over drive cycles, to uncontrolled field use, were analyzed to understand the characteristics of their injector tip deposits and their functional impacts. The DISI injectors, both naturally fouled and profouled, were holistically evaluated for their spray performance, deposit composition, and deposit morphology relative to one another. The testing and accompanying analysis reveal both similarities and differences between naturally fouled injectors, fouled over long time periods with market fuel, and profouled injectors, fouled artificially through the use of a sulfur dopant. Profouled injectors were chemically distinct from naturally fouled injectors, and found to contain higher levels of sulfur dioxide. Also, profouled injectors exhibited greater volumes of deposits on the face of the injector tip. However, functionally, both naturally fouled and profouled injectors had similar impacts on their spray performance relative to clean injectors, with the fouled injector spray plumes remaining narrower, limiting plume-to-plume interactions, and altering the liquid-spray penetration dynamics; these insights can guide future research into injector tip deposits.
SAE Technical Papers
Spray-wall interactions in diesel engines have a strong influence on turbulent flow evolution and mixing, which influences the engine's thermal efficiency and pollutant-emissions behavior. Previous optical experiments and numerical investigations of a stepped-lip diesel piston bowl focused on how spray-wall interactions influence the formation of squish-region vortices and their sensitivity to injection timing. Such vortices are stronger and longer-lived at retarded injection timings and are correlated with faster late-cycle heat release and soot reductions, but are weaker and shorter-lived as injection timing is advanced. Computational fluid dynamics (CFD) simulations predict that piston bowls with more space in the squish region can enhance the strength of these vortices at near-TDC injection timings, which is hypothesized to further improve peak thermal efficiency and reduce emissions. The dimpled stepped-lip (DSL) piston is such a design. In this study, the in-cylinder flow is simulated with a DSL piston to investigate the effects of dimple geometry parameters on squish-region vortex formation via a design sensitivity study. The rotational energy and size of the squish-region vortices are quantified. The results suggest that the DSL piston is capable of enhancing vortex formation compared to the stepped-lip piston at near-TDC injection timings. The sensitivity study led to the design of an improved DSL bowl with shallower, narrower, and steeper-curved dimples that are further out into the squish region, which enhances predicted vortex formation with 27% larger and 44% more rotationally energetic vortices compared to the baseline DSL bowl. Engine experiments with the baseline DSL piston demonstrate that it can reduce combustion duration and improve thermal efficiency by as much as 1.4% with main injection timings near TDC, due to improved rotational energy, but with 69% increased soot emissions and no penalty in NOx emissions.
SAE Technical Papers
A one-dimensional, non-equilibrium, compressible law of the wall model is proposed to increase the accuracy of heat transfer predictions from computational fluid dynamics (CFD) simulations of internal combustion engine flows on engineering grids. Our 1D model solves the transient turbulent Navier-Stokes equations for mass, momentum, energy and turbulence under the thin-layer assumption, using a finite-difference spatial scheme and a high-order implicit time integration method. A new algebraic eddy-viscosity closure, derived from the Han-Reitz equilibrium law of the wall, with enhanced Prandtl number sensitivity and compressibility effects, was developed for optimal performance. Several eddy viscosity sub-models were tested for turbulence closure, including the two-equation k-epsilon and k-omega, which gave insufficient performance. Validation against pulsating channel flow experiments highlighted the superior capability of the 1D model to capture transient near-wall velocity and temperature profiles, and the need to appropriately model the eddy viscosity using a low-Reynolds method, which could not be achieved with the standard two-equation models. The results indicate that the non-equilibrium model can capture the near-wall velocity profile dynamics (including velocity profile inversion) while equilibrium models cannot, and simultaneously reduce heat flux prediction errors by up to one order of magnitude. The proposed optimal configuration reduced heat flux error for the pulsating channel flow case from 18.4% (Launder-Spalding law of the wall) down to 1.67%.
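For reference, a generic sketch of the equilibrium wall functions that such a non-equilibrium 1D model replaces (standard log-law forms; the constants and the specific Han-Reitz compressibility corrections are not reproduced here):

\[
u^{+} \equiv \frac{u}{u_\tau} = \frac{1}{\kappa}\ln y^{+} + B,
\qquad
T^{+} \equiv \frac{(T_w - T)\,\rho\, c_p\, u_\tau}{q_w} = \frac{\mathrm{Pr}_t}{\kappa}\ln y^{+} + B_T(\mathrm{Pr}),
\qquad
y^{+} = \frac{y\, u_\tau}{\nu},
\]

whereas the 1D model instead integrates the thin-layer transport equations in time across the near-wall cell, allowing the velocity and temperature profiles (and hence the wall shear stress and heat flux) to depart from these equilibrium shapes during rapid pressure and temperature transients.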
SAE Technical Papers
This work is a comprehensive technical review of existing literature and a synthesis of the current understanding of the governing physics behind the interaction of multiple fuel injections and the resulting ignition and combustion behavior in diesel engines. Multiple-injection is a widely adopted operating strategy applied in modern compression-ignition engines, which involves various combinations of small pre-injections and post-injections of fuel before and after the main injection and splitting the main injection into multiple smaller injections. This strategy has been conclusively shown to improve fuel economy in diesel engines while achieving simultaneous NOx, soot, and combustion noise reduction - in addition to a reduction in the emissions of unburned hydrocarbons (UHC) and CO by preventing fuel wetting and flame quenching at the piston wall. Despite the widespread adoption and an extensive literature documenting the effects of multiple-injection strategies in engines, little is known about the complex interplay between the underlying flow physics and combustion chemistry involved in such flows, which ultimately governs the ignition and subsequent combustion processes, thereby dictating the effectiveness of this strategy. In this work, we provide a comprehensive overview of the literature on the interaction between the jets in a multiple-injection event, the resulting mixture, and finally the ignition and combustion dynamics as a function of engine operational parameters including injection duration and dwell. The understanding of the underlying processes is facilitated by a new conceptual model of multiple-injection physics. We conclude by identifying the major remaining research questions that need to be addressed to refine and help achieve a design-level understanding to optimize advanced multiple-injection strategies that can lead to higher engine efficiency and lower emissions.
Journal of Applied Physics
Uniaxial strain, reverse-ballistic impact experiments were performed on wrought 17-4 PH H1025 stainless steel, and the resulting Hugoniot was determined to a peak stress of 25 GPa through impedance matching to known standard materials. The measured Hugoniot showed evidence of a solid-solid phase transition, consistent with other martensitic Fe-alloys. The phase transition stress in the wrought 17-4 PH H1025 stainless steel was measured in a uniaxial strain, forward-ballistic impact experiment to be 11.4 GPa. Linear fits to the Hugoniot for both the low and high pressure phase are presented with corresponding uncertainty. The low pressure martensitic phase exhibits a shock velocity that is weakly dependent on the particle velocity, consistent with other martensitic Fe-alloys.
ACM International Conference Proceeding Series
Neuromorphic computing (NMC) is an exciting paradigm seeking to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also the consideration of how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics from traditional approaches against those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing number of application domains. Accordingly, we propose following the example of high performance computing, which has employed context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps in a neural circuit tool called Fugu as a means of gaining insight into NMC.
ACM International Conference Proceeding Series
Abstract not provided.
Journal of Materials Research
Understanding of structural and morphological evolution in nanomaterials is critical in tailoring their functionality for applications such as energy conversion and storage. Here, we examine irradiation effects on the morphology and structure of amorphous TiO2 nanotubes in comparison with their crystalline counterpart, anatase TiO2 nanotubes, using high-resolution transmission electron microscopy (TEM), in situ ion irradiation TEM, and molecular dynamics (MD) simulations. Anatase TiO2 nanotubes exhibit morphological and structural stability under irradiation due to their high concentration of grain boundaries and surfaces as defect sinks. On the other hand, amorphous TiO2 nanotubes undergo irradiation-induced crystallization, with some tubes remaining only partially crystallized. The partially crystalline tubes bend due to internal stresses associated with densification during crystallization as suggested by MD calculations. These results present a novel irradiation-based pathway for potentially tuning structure and morphology of energy storage materials.
ACM International Conference Proceeding Series
It has been demonstrated that grid cells encode physical locations using hexagonally spaced, periodic phase-space representations. Theories of how the brain decodes this phase-space representation have been developed based on neuroscience data. However, theories of how sensory information is encoded into this phase space are less certain. Here we show how a navigation-relevant input space, such as elevation trajectories, may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm enables localization to a position in space by integrating measurements from a trajectory over a map. In this extended abstract, we walk through our approach with simulations using a digital elevation model.
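A minimal sketch of the phase-space idea follows; the module periods and the brute-force decoder below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Sketch: represent an accumulated scalar (e.g., integrated elevation change
# along a trajectory) as a set of phases in modules with different periods,
# analogous to grid-cell phase codes, then recover it by matching phases.

periods = np.array([3.1, 4.3, 5.7])          # hypothetical module periods

def encode(value, periods):
    """Map a scalar to one phase in [0, 1) per module."""
    return np.mod(value / periods, 1.0)

def decode(phases, periods, search_max=60.0, step=0.01):
    """Brute-force decode over a bounded range by minimizing circular phase error."""
    candidates = np.arange(0.0, search_max, step)
    pred = np.mod(candidates[:, None] / periods[None, :], 1.0)
    d = np.abs(pred - phases[None, :])
    d = np.minimum(d, 1.0 - d)               # circular distance between phases
    return candidates[np.argmin((d ** 2).sum(axis=1))]

true_value = 27.35                            # assumed accumulated measurement
phases = encode(true_value, periods)
print("decoded:", decode(phases, periods))    # close to 27.35 within the step size
```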
Fuel Communications
Fusel alcohol mixtures containing ethanol, isobutanol, isopentanol, and 2-phenylethanol have been shown to be a promising means to maximize renewable fuel yield from various biomass feedstocks and waste streams. We hypothesized that use of these fusel alcohol mixtures as a blending agent with gasoline can significantly lower the greenhouse gas emissions from the light-duty fleet. Since the composition of fusel alcohol mixtures derived from fermentation is dependent on a variety of factors such as biocatalyst selection and feedstock composition, multi-objective optimization was performed to identify optimal fusel alcohol blends in gasoline that simultaneously maximize thermodynamic efficiency gain and energy density. Pareto front analysis combined with fuel property predictions and a Merit Score-based metric led to prediction of optimal fusel alcohol-gasoline blends over a range of blending volumes. The optimal fusel blends were analyzed based on a Net Fuel Economy Improvement Potential metric for volumetric blending in a gasoline base fuel. The results demonstrate that various fusel alcohol blends provide the ability to maximize efficiency improvement while minimizing increases to blending vapor pressure and decreases to energy density compared to an ethanol-only bioblendstock. Fusel blends exhibit predicted Net Fuel Economy Improvement Potential comparable to neat ethanol when blended with gasoline in all scenarios, with increased improvement over ethanol at moderate to high bio-blendstock blending levels. The optimal fusel blend that was identified was a mixture of 90% v/v isobutanol and 10% v/v 2-phenylethanol, blended at 45% v/v with gasoline, yielding a predicted 4.67% increase in Net Fuel Economy Improvement Potential. These findings suggest that incorporation of fusel alcohols as a gasoline bioblendstock can improve both fuel performance and the net fuel yield of the bioethanol industry.
Physical Review Letters
We introduce novel higher-order topological phases of matter in chiral-symmetric systems (class AIII of the tenfold classification), most of which would be misidentified as trivial by current theories. These phases are protected by "multipole chiral numbers," bulk integer topological invariants that in 2D and 3D are built from sublattice multipole moment operators, as defined herein. The integer value of a multipole chiral number indicates how many degenerate zero-energy states localize at each corner of a system. These higher-order topological phases of matter are generally boundary-obstructed and robust in the presence of chiral-symmetry-preserving disorder.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
This report provides an assessment of the value of the LDRD program to Sandia National Laboratories during fiscal year 2021.
Microscopy Today
Low-Z nanocrystalline diamond (NCD) grids have been developed to reduce spurious fluorescence and avoid X-ray peak overlaps or interferences between the specimen and conventional metal grids. Here, the low-Z NCD grids are non-toxic and safe to handle, conductive, can be subjected to high-temperature heating experiments, and may be used for analytical work in lieu of metal grids. Both a half-grid geometry, which can be used for any lift-out method, and a full-grid geometry, which can be used for ex situ lift-out or thin-film analyses, can be fabricated and used for experiments.
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite and the results of the test checked versus the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the Sierra/SD code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
The experiment investigates free expansion of a supercritical fluid into a two-phase liquid-vapor coexistence region. A large-scale molecular dynamics simulation (6 billion Lennard-Jones atoms) was run on 5760 GPUs (33% of LLNL Sierra) using the LAMMPS/Kokkos software. This work also improved the visualization workflow and began preliminary simulations of aluminum using a SNAP machine learning potential.
Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.
Presented in this document are tests that exist in the Sierra/SolidMechanics example problem suite, which is a subset of the Sierra/SM regression and performance test suite. These examples showcase common and advanced code capabilities. A wide variety of other regression and verification tests exist in the Sierra/SM test suite that are not included in this manual.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the User's Manual. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer_notes manual, the user's notes and, of course, the material in the open literature.
ACS Applied Materials and Interfaces
This work investigates the role of water and oxygen on the shear-induced structural modifications of molybdenum disulfide (MoS2) coatings for space applications and the impact on friction due to oxidation from aging. We observed from transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS) that sliding in either an inert environment (i.e., dry N2) or humid lab air forms basally oriented (002) running films of varying thickness and structure. Tribological testing of the basally oriented surfaces created in dry N2 and air showed lower initial friction than a coating with an amorphous or nanocrystalline microstructure. Aging of coatings with basally oriented surfaces was performed by heating samples at 250 °C for 24 h. Post-aging tribological testing of the as-deposited coating showed increased initial friction and a longer transition from higher friction to lower friction (i.e., run-in) due to oxidation of the surface. Tribological testing of raster patches formed in dry N2 and air both showed an improved resistance to oxidation and reduced initial friction after aging. The results from this study have implications for the use of MoS2-coated mechanisms in aerospace and space applications and highlight the importance of preflight testing. Preflight cycling of components in inert or air environments provides an oriented surface microstructure with fewer interaction sites for oxidation and a lower shear strength, reducing the initial friction coefficient and oxidation due to aging or exposure to reactive species (i.e., atomic oxygen).
Geophysical Journal International
More realistic models for infrasound signal propagation across a region can be used to improve the precision and accuracy of spatial and temporal source localization estimates. Here, motivated by incomplete infrasound event bulletins in the Western US, the location capabilities of a regional infrasonic network of stations located between 84 and 458 km from the Utah Test and Training Range, Utah, USA, are assessed using a series of near-surface explosive events with complementary ground truth (GT) information. Signal arrival times and backazimuth estimates are determined with an automatic F-statistic based signal detector and manually refined by an analyst. This study represents the first application of three distinct celerity-range and backazimuth models to an extensive suite of realistic signal detections for event location purposes. A single celerity and backazimuth deviation model was previously constructed using ray tracing analysis based on an extensive archive of historical atmospheric specifications and is applied within this study to test location capabilities. Similarly, a set of multivariate, season- and location-specific models for celerity and backazimuth are compared to an empirical model that depends on the observations across the infrasound network and the GT events, which accounts for atmospheric propagation variations from source to receiver. Discrepancies between observed and predicted signal celerities result in locations with poor accuracy. Application of the empirical model improves both spatial localization precision and accuracy; all but one of the location estimates retain the true GT location within the 90 per cent confidence bounds. Average mislocation of the events is 15.49 km and average 90 per cent error ellipse areas are 4141 km². The empirical model additionally reduces origin time residuals; origin time residuals from the other location models are in excess of 160 s while residuals produced with the empirical model are within 30 s of the true origin time. Finally, we demonstrate that event location accuracy is driven by a combination of the signal propagation model and the azimuthal gap of detecting stations. A direct relationship between mislocation, error ellipse area, and increased station azimuthal gap indicates that for sparse networks, detection backazimuths may drive location biases over traveltime estimates.
This user's guide documents capabilities in Sierra/SolidMechanics which remain "in-development" and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.6 User's Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Journal of Open Source Software
The ability to perform accurate techno-economic analysis of solar photovoltaic (PV) systems is essential for bankability and investment purposes. Most energy yield models assume an almost flawless operation (i.e., no failures); however, realistically, components fail and get repaired stochastically. This package, PyPVRPM, is a Python translation and improvement of the Language Kit (LK) based PhotoVoltaic Reliability Performance Model (PVRPM), which was first developed at Sandia National Laboratories in Goldsim software (Granata et al., 2011; Miller et al., 2012). PyPVRPM allows the user to define a PV system at a specific location and incorporate failure, repair, and detection rates and distributions to calculate energy yield and other financial metrics such as the levelized cost of energy and net present value (Klise, Lavrova, et al., 2017). Our package is a simulation tool that uses NREL’s Python interface for the System Advisor Model (SAM) (National Renewable Energy Laboratory, 2020a, 2020b) to evaluate the performance of a PV plant throughout its lifetime by considering component reliability metrics. Besides the numerous benefits from migrating to Python (e.g., speed, libraries, batch analyses), PyPVRPM also expands on the failure and repair processes from the LK version by including the ability to vary monitoring strategies. These failures, repairs, and monitoring processes are based on user-defined distributions and values, enabling a more accurate and realistic representation of cost and availability throughout a PV system’s lifetime.
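As a conceptual illustration of the kind of stochastic failure/repair bookkeeping described above, the sketch below samples failure and repair times for a single component over a plant lifetime and estimates its availability; it is a generic sketch with assumed exponential distributions, not the PyPVRPM or SAM API.

```python
import numpy as np

# Generic Monte Carlo availability estimate for one component (illustrative
# parameters only): operate until a random failure, stay offline for a random
# repair time, repeat until the end of the plant lifetime.

rng = np.random.default_rng(42)
lifetime_days = 25 * 365
mean_time_to_failure = 1200.0   # days, exponential failures (assumed)
mean_time_to_repair = 14.0      # days, exponential repairs (assumed)

def simulate_availability(n_runs=5000):
    downtime = np.zeros(n_runs)
    for i in range(n_runs):
        t = 0.0
        while t < lifetime_days:
            t += rng.exponential(mean_time_to_failure)     # operate until failure
            if t >= lifetime_days:
                break
            repair = rng.exponential(mean_time_to_repair)  # offline during repair
            downtime[i] += min(repair, lifetime_days - t)
            t += repair
    return 1.0 - downtime / lifetime_days

avail = simulate_availability()
print(f"mean availability: {avail.mean():.4f}")
```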
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAMÉ advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility across many capabilities, features, and responses, however, comes at the cost of complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAMÉ library in application, this effort seeks to document and verify the various models in the LAMÉ library. Specifically, the broader strategy, organization, and interface of the library itself are first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model to not only have confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, in looking ahead to the future, approaches to add material models to this library and further expand the capabilities are presented.
Journal of Applied Physics
Capturing the dynamic response of a material under high strain-rate deformation often demands challenging and time-consuming experimental effort. While shock hydrodynamic simulation methods can aid in this area, a priori characterizations of the material strength under shock loading and spall failure are needed in order to parameterize the constitutive models used by these computational tools. Moreover, parameterizations of strain-rate-dependent strength models are needed to capture the full suite of Richtmyer-Meshkov instability (RMI) behavior of shock-compressed metals, placing an unrealistic demand on experiments if they were the sole source of these training data. Herein, we sweep a large range of geometric, crystallographic, and shock conditions within molecular dynamics (MD) simulations and demonstrate the breadth of RMI in Cu that can be captured from the atomic scale. Yield strength measurements from jetted and arrested material from a sinusoidal surface perturbation were quantified as Y_RMI = 0.787 ± 0.374 GPa, higher than the strain-rate-independent models used in experimentally matched hydrodynamic simulations. Defect-free, single-crystal Cu samples used in MD will overestimate Y_RMI, but the drastic scale difference between experiment and MD is highlighted by high-confidence neighborhood clustering predictions of RMI characterizations, yielding incorrect classifications.
This is an addendum to the Sierra/SolidMechanics 5.6 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra SolidMechanics 5.6 User's Guide should be referenced for most general descriptions of code capability and use.
Physical Review Letters
Frontiers in Climate
With the urgent need to mitigate climate change and rising global temperatures, technological solutions that reduce atmospheric CO2 are an increasingly important part of the global solution. As a result, the nascent carbon capture, utilization, and storage (CCUS) industry is rapidly growing with a plethora of new technologies in many different sectors. There is a need to holistically evaluate these new technologies in a standardized and consistent manner to determine which technologies will be the most successful and competitive in the global marketplace to achieve decarbonization targets. Life cycle assessment (LCA) and techno-economic assessment (TEA) have been employed as rigorous methodologies for quantitatively measuring a technology's environmental impacts and techno-economic performance, respectively. However, these metrics evaluate a technology's performance in only three dimensions and do not directly incorporate stakeholder needs and values. In addition, technology developers frequently encounter trade-offs during design that increase one metric at the expense of the other. The technology performance level (TPL) combined indicator provides a comprehensive and holistic assessment of an emerging technology's potential, which is described by its techno-economic performance, environmental impacts, social impacts, safety considerations, market/deployability opportunities, use integration impacts, and general risks. TPL incorporates TEA and LCA outputs and quantifies the trade-offs between them directly using stakeholder feedback and requirements. In this article, the TPL methodology is being adapted from the marine energy domain to the CCUS domain. Adapted metrics and definitions, a stakeholder analysis, and a detailed foundation-based application of the systems engineering approach to CCUS are presented. The TPL assessment framework is couched within the internationally standardized LCA framework to improve technical rigor and acceptance. It is demonstrated how stakeholder needs and values can be directly incorporated, how LCA and TEA metrics can be balanced, and how other dimensions (listed earlier) can be integrated into a single metric that measures a technology's potential.
Energy and Fuels
Using chemical kinetic modeling and statistical analysis, we investigate the possibility of correlating key chemical "markers" (typically small molecules) formed during very lean (φ ∼0.001) oxidation experiments with near-stoichiometric (φ ∼1) fuel ignition properties. One goal of this work is to evaluate the feasibility of designing a fuel-screening platform, based on small laboratory reactors that operate at low temperatures and use minimal fuel volume. Buras et al. [Combust. Flame 2020, 216, 472-484] have shown that convolutional neural net (CNN) fitting can be used to correlate first-stage ignition delay times (IDTs) with OH/HO2 measurements during very lean oxidation in low-T flow reactors with better than factor-of-2 accuracy. In this work, we test the limits of applying this correlation-based approach to predict the low-temperature heat release (LTHR) and total IDT, including the sensitivity of total IDT to the equivalence ratio, φ. We demonstrate that first-stage IDT can be reliably correlated with very lean oxidation measurements using compressed sensing (CS), which is simpler to implement than CNN fitting. LTHR can also be predicted via CS analysis, although the correlation quality is somewhat lower than for first-stage IDT. In contrast, the accuracy of total IDT prediction at φ = 1 is significantly lower (within a factor of 4 or worse). These results can be rationalized by the fact that the first-stage IDT and LTHR are primarily determined by low-temperature chemistry, whereas total IDT depends on low-, intermediate-, and high-temperature chemistry. Oxidation reactions are most important at low temperatures, and therefore, measurements of universal molecular markers of oxidation do not capture the full chemical complexity required to accurately predict the total IDT even at a single equivalence ratio. As a result, we find that φ-sensitivity of ignition delay cannot be predicted at all using solely correlation with lean low-T chemical speciation measurements.
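To make the compressed-sensing idea concrete, the sketch below fits a sparse linear map from synthetic "marker" measurements to a surrogate ignition metric using an L1-penalized regression; the data and parameters are invented for illustration and are not the paper's dataset or exact CS formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic example: only a few of many speciation "markers" carry information
# about the ignition metric, and the L1 penalty selects them.
rng = np.random.default_rng(0)
n_fuels, n_markers = 60, 200
X = rng.normal(size=(n_fuels, n_markers))            # marker measurements per fuel
true_coef = np.zeros(n_markers)
true_coef[[3, 17, 42]] = [1.5, -2.0, 0.8]            # only a few markers matter
y = X @ true_coef + 0.05 * rng.normal(size=n_fuels)  # surrogate for log(first-stage IDT)

model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected markers:", selected)                 # should roughly recover {3, 17, 42}
print("R^2 on training data:", model.score(X, y))
```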
In this report, we assess the data recorded by a Distributed Acoustic Sensing (DAS) cable deployed during the Source Physics Experiment, Phase II (DAG) in comparison with the data recorded by nearby 4.5-Hz geophones. DAS is a novel recording method with unprecedented spatial resolution, but there are significant concerns around the data fidelity as the technology is ramped up to more common usage. Here we run a series of tests to quantify the similarity between DAS data and more conventional data and investigate cases where the higher spatial resolution of the DAS can provide new insights into the wavefield. These tests include 1D modeling with seismic refraction and bootstrap uncertainties, assessing the amplitude spectra with distance from the source, measuring the frequency dependent inter-station coherency, estimating time-dependent phase velocity with beamforming and semblance, and measuring the cross-correlation between the geophone and the particle velocity inferred from the DAS. In most cases, we find high similarity between the two datasets, but the higher spatial resolution of the DAS provides increased details and methods of estimating uncertainty.
Fraud in the Environmental Benefit Credit (EBC) markets is pervasive. To make matters worse, the cost of creating EBCs is often higher than the market price. Consequently, a method to create, validate, and verify EBCs and their relevance is needed to mitigate fraud. The EBC market has focused on geologic (fossil fuel) CO2 sequestration projects that are often over budget and behind schedule and has failed to capture the "lowest hanging fruit" EBCs - terrestrial sequestration via the agricultural industry. This project reviews a methodology to attain possibly the least costly EBCs by tracking the reduction of inputs required to grow crops. The use of bio-stimulant products, such as humate, allows a farmer to use less nitrogen without adversely affecting crop yield. Using less nitrogen qualifies for EBCs by reducing nitrous oxide emissions and nitrate runoff from a farmer's field. A blockchain that tracks the bio-stimulant material from source to application provides a link between a tangible asset (the bio-stimulant commodity) and the associated intangible assets (the EBCs). Covert insertion of taggants in the bio-stimulant products creates a unique barcode that allows a product to be digitally tracked from beginning to end. This process (blockchain technology) is so robust, logical, and transparent that it will enhance the value of the associated EBCs by mitigating fraud. It provides a real-time method for monetizing the benefits of the material. Substantial amounts of energy are required to produce, transport, and distribute agricultural inputs including fertilizer and water. Intelligent optimization of the use of agricultural inputs can drive meaningful cost savings. Tagging and verification of product application provide a valuable understanding of the dynamics in the water/food/energy nexus, a major food security and sustainability issue. As technology in agriculture evolves, so too must methods to verify the Enterprise Resource Planning (ERP) potential of innovative solutions. The technology reviewed provides the ability to combine blockchain and taggants ("taggant blockchains") as the engine by which to (1) mitigate fraudulent carbon credits; (2) improve food chain security; and (3) monitor and manage sustainability. The verification of product quality and application is a requirement to validate benefits. Recent upgrades to the humic and fulvic quality protocols known as ISO CD 19822 TC134 offer an analytical procedure. This work has been assisted by the Humic Products Trade Association and the International Humic Substance Society. In addition, proof of application of these products and verification of the correct application of prescriptive humic and bio-stimulant products are required. Individual sources of humate have unique and verifiable characteristics. Additionally, methods for prescription of site-specific agricultural inputs in agricultural fields are available. (See US Patents 734867B2, US 90658633B2.) Finally, a method to assure application rate is required through the use of taggants. Sensors using organic solid-to-liquid phase-change nanoparticles of various types and melting temperatures added to the naturally occurring materials provide a barcode. Over 100 types of nanoparticles exist, ensuring numerous possible barcodes to reduce industry fraud. Taggant materials can be collected from soil samples of plant material to validate a blockchain of humic, fulvic, and other soil amendment products.
Other non-organic materials are also available as taggants; however, the organic tags are biodegradable and safe in the environment, allowing for use during differing application timelines.
Deep neural networks have emerged as a leading set of algorithms to infer information from a variety of data sources such as images and time series data. In their most basic form, neural networks lack the ability to adapt to new classes of information. Continual learning (CL) is a field of study attempting to give previously trained deep learning models the ability to adapt to a changing environment. Previous work developed a CL method called Neurogenesis for Deep Learning (NDL). Here, we combine NDL with a specific neural network architecture (the Ladder Network) to produce a system capable of automatically adapting a classification neural network to new classes of data. The NDL Ladder Network was evaluated against other leading CL methods. While the NDL and Ladder Network system did not match the cutting-edge performance achieved by other CL methods, in most cases it performed comparably and is the only system evaluated that can learn new classes of information with no human intervention.
Vacuum
We report that the formation of Al3Sc in 100 nm Al0.8Sc0.2 films is driven by exposure to high temperature, either through a higher deposition temperature or through annealing. High film resistivity was observed in films deposited at lower temperature that exhibited a lack of crystallinity, which is anticipated to cause more electron scattering. An increase in deposition temperature allows for the nucleation and growth of crystalline Al3Sc regions that were verified by electron diffraction. The increase in crystallinity reduces electron scattering, which results in lower film resistivity. Annealing Al0.8Sc0.2 films at 600 °C in an Ar vacuum environment also allows for the formation and recrystallization of Al3Sc and Al and yields saturated resistivity values between 9.58 and 10.5 μΩ-cm regardless of sputter conditions. Al3Sc was found to nucleate and grow in a random orientation when deposited on SiO2, and highly {111} textured when deposited on 100 nm Ti and AlN films that were used as template layers. The rocking-curve width of the Al3Sc 111 reflection for the films as-deposited on Ti and AlN at 450 °C was 1.79° and 1.68°, respectively. Annealing the film deposited on the AlN template reduced the rocking-curve width substantially to 1.01° due to recrystallization of Al3Sc and Al within the film.
Abstract not provided.
Frontiers in Materials
Digital twins are emerging as powerful tools for supporting innovation as well as optimizing the in-service performance of a broad range of complex physical machines, devices, and components. A digital twin is generally designed to provide accurate in-silico representation of the form (i.e., appearance) and the functional response of a specified (unique) physical twin. This paper offers a new perspective on how the emerging concept of digital twins could be applied to accelerate materials innovation efforts. Specifically, it is argued that the material itself can be considered as a highly complex multiscale physical system whose form (i.e., details of the material structure over a hierarchy of material length scales) and function (i.e., response to external stimuli typically characterized through suitably defined material properties) can be captured suitably in a digital twin. Accordingly, the digital twin can represent the evolution of structure, process, and performance of the material over time, with regard to both process history and in-service environment. This paper establishes the foundational concepts and frameworks needed to formulate and continuously update both the form and function of the digital twin of a selected material physical twin. The form of the proposed material digital twin can be captured effectively using the broadly applicable framework of n-point spatial correlations, while its function at the different length scales can be captured using homogenization and localization process-structure-property surrogate models calibrated to collections of available experimental and physics-based simulation data.
ACS Photonics
Since the discovery of the laser, optical nonlinearities have been at the core of efficient light conversion sources. Typically, thick transparent crystals or quasi-phase-matched waveguides are utilized in conjunction with phase-matching techniques to select a single parametric process. In recent years, due to the rapid developments in artificially structured materials, optical frequency mixing has been achieved at the nanoscale in subwavelength resonators arrayed as metasurfaces. Phase matching becomes relaxed for these wavelength-scale structures, and all allowed nonlinear processes can, in principle, occur on an equal footing. This could promote harmonic generation via a cascaded process (consisting of several frequency-mixing steps). However, so far, all reported work on dielectric metasurfaces has assumed frequency mixing from a direct (single-step) nonlinear process. In this work, we prove the existence of cascaded second-order optical nonlinearities by analyzing the second- and third-wave mixing from a highly nonlinear metasurface in conjunction with polarization selection rules and crystal symmetries. We find that the third-wave mixing signal from a cascaded process can be of comparable strength to that from conventional third-harmonic generation and that surface nonlinearities are the dominant mechanism that contributes to cascaded second-order nonlinearities in our metasurface.
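Schematically, the distinction being tested is between direct third-harmonic generation and a cascade of two second-order steps:

\[
\underbrace{\omega + \omega + \omega \rightarrow 3\omega}_{\text{direct } \chi^{(3)} \text{ THG}}
\qquad \text{versus} \qquad
\underbrace{\omega + \omega \rightarrow 2\omega}_{\chi^{(2)} \text{ SHG}}
\;\;\text{followed by}\;\;
\underbrace{2\omega + \omega \rightarrow 3\omega}_{\chi^{(2)} \text{ SFG}} ,
\]

where the polarization selection rules and crystal symmetries of the two pathways differ, which is what allows them to be distinguished experimentally.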
This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data is necessary in our research, thus we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or genome DNA reads. The edit detection results we obtained with our algorithms, tested against samples held out during training of our methods, are significantly better than random guessing, achieving high F1, recall, and precision scores overall.
In this report we describe the testing of a novel scheme for state preparation of trapped ions in a quantum computing setup. This technique would ideally allow for similar precision and speed of state preparation while allowing for individual addressability of single ions in a chain using technology already available in a trapped ion experiment. As quantum computing experiments become more complicated, mid-experiment measurements will become necessary to achieve algorithms such as quantum error correction. Any mid-experiment measurement then requires the measured qubit to be re-prepared to a known quantum state. Currently this requires the protected qubits to be moved a sizeable distance away from the qubit being re-prepared, which can be costly in terms of experiment length and can introduce errors. Theoretical calculations predict that a three-photon process would allow for state preparation without qubit movement, with efficiencies similar to current state preparation methods.
Journal of Computational Physics
We develop numerical methods for computing statistics of stochastic processes on surfaces of general shape with drift-diffusion dynamics $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$. We formulate descriptions of Brownian motion and general drift-diffusion processes on surfaces. We consider statistics of the form $u(x) = \mathbb{E}^{x}\!\left[\int_0^{\tau} g(X_t)\,dt\right] + \mathbb{E}^{x}\!\left[f(X_\tau)\right]$ for a domain $\Omega$ and the exit stopping time $\tau = \inf_t \{t > 0 \mid X_t \notin \Omega\}$, where $f, g$ are general smooth functions. For computing these statistics, we develop high-order Generalized Moving Least Squares (GMLS) solvers for the associated surface PDE boundary-value problems based on Backward-Kolmogorov equations. We focus particularly on the mean First Passage Times (FPTs) given by the case $f = 0$, $g = 1$, where $u(x) = \mathbb{E}^{x}[\tau]$. We perform studies for a variety of shapes showing our methods converge with high-order accuracy both in capturing the geometry and the surface PDE solutions. We then perform studies showing how statistics are influenced by the surface geometry, drift dynamics, and spatially dependent diffusivities.
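As a simple cross-check of the $f = 0$, $g = 1$ statistic described above, the sketch below estimates a mean first passage time for Brownian motion on the unit sphere by direct Monte Carlo sampling; the parameters are hypothetical and the paper's GMLS approach instead solves the Backward-Kolmogorov surface PDE.

```python
import numpy as np

# Mean FPT E^x[tau] for Brownian motion (diffusivity D) on the unit sphere,
# started at the south pole, to reach an absorbing polar cap {z > cos(theta_cap)}.
# Euler-Maruyama with a tangential step followed by reprojection onto the sphere.

rng = np.random.default_rng(0)
D, dt = 1.0, 5e-4                  # diffusivity and time step (assumed values)
theta_cap = 0.3                    # angular radius of the absorbing cap
z_absorb = np.cos(theta_cap)
n_walkers = 2000

x = np.tile([0.0, 0.0, -1.0], (n_walkers, 1))   # all walkers start at the south pole
tau = np.zeros(n_walkers)
alive = np.ones(n_walkers, dtype=bool)

while alive.any():
    step = np.sqrt(2.0 * D * dt) * rng.standard_normal((alive.sum(), 3))
    xa = x[alive]
    step -= (step * xa).sum(axis=1, keepdims=True) * xa   # project onto tangent plane
    xa = xa + step
    xa /= np.linalg.norm(xa, axis=1, keepdims=True)       # map back onto the sphere
    x[alive] = xa
    tau[alive] += dt
    alive[alive] = xa[:, 2] < z_absorb                    # stop walkers that hit the cap

print(f"mean FPT estimate: {tau.mean():.3f} "
      f"(std. err. {tau.std(ddof=1)/np.sqrt(n_walkers):.3f})")
```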
Mechanical Systems and Signal Processing
In this work, we study how a contact/impact nonlinearity interacts with a geometric cubic nonlinearity in an oscillator system. Specific focus is given to the effects on bifurcation behavior and secondary resonances (i.e., super- and sub-harmonic resonances). The effects of the individual nonlinearities are first explored for comparison, and then the influences of the combined nonlinearities, varying one parameter at a time, are analyzed and discussed. Nonlinear characterization is then performed on an arbitrary system configuration to study super- and sub-harmonic resonances and grazing contacts or bifurcations. Both the cubic and contact nonlinearities cause a drop in amplitude and shift up in frequency for the primary resonance, and they activate high-amplitude subharmonic resonance regions. The nonlinearities seem to never destructively interfere. The contact nonlinearity generally affects the system's superharmonic resonance behavior more, particularly with regard to the occurrence of grazing contacts and the activation of many bifurcations in the system's response. The subharmonic resonance behavior is more strongly affected by the cubic nonlinearity and is prone to multistable behavior. Perturbation theory proved useful for determining when the cubic nonlinearity would be dominant compared to the contact nonlinearity. The limiting behaviors of the contact stiffness and freeplay gap size indicate the cubic nonlinearity is dominant overall. It is demonstrated that the presence of contact may result in the activation of several bifurcations. In addition, it is proved that the system's subharmonic resonance region is prone to multistable dynamical responses having distinct magnitudes.
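A generic single-degree-of-freedom form of the kind of system studied here, combining a cubic stiffness with a symmetric freeplay/contact force (symbols are illustrative, not the paper's exact model):

\[
\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x + \alpha x^3 + F_c(x) = F\cos(\Omega t),
\qquad
F_c(x) =
\begin{cases}
k_c\,(x - \delta), & x > \delta,\\
0, & |x| \le \delta,\\
k_c\,(x + \delta), & x < -\delta,
\end{cases}
\]

where $\delta$ is the freeplay gap and $k_c$ the contact stiffness; letting $k_c \to 0$ or $\delta \to \infty$ recovers the purely cubic oscillator, while $\alpha \to 0$ isolates the contact nonlinearity.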
Plasma etching of semiconductors is an essential process in the production of the microchips that enable nearly every aspect of modern life. Two frequencies of applied voltage are often used to provide control of both the ion flux and the ion energy distribution.
Physical Chemistry Chemical Physics
Depleted uranium hexafluoride (UF6), a stockpiled byproduct of the nuclear fuel cycle, reacts readily with atmospheric humidity, but the mechanism is poorly understood. We compare several potential initiation steps at a consistent level of theory, generating underlying structures and vibrational modes using hybrid density functional theory (DFT) and computing relative energies of stationary points with double-hybrid (DH) DFT. A benchmark comparison is performed to assess the quality of DH-DFT data using reference energy differences obtained using a complete-basis-limit coupled-cluster (CC) composite method. The associated large-basis CC computations were enabled by a new general-purpose pseudopotential capability implemented as part of this work. Dispersion-corrected parameter-free DH-DFT methods, namely PBE0-DH-D3(BJ) and PBE-QIDH-D3(BJ), provided mean unsigned errors within chemical accuracy (1 kcal mol−1) for a set of barrier heights corresponding to the most energetically favorable initiation steps. The hydrolysis mechanism is found to proceed via intermolecular hydrogen transfer within van der Waals complexes involving UF6, UF5OH, and UOF4, in agreement with previous studies, followed by the formation of a previously unappreciated dihydroxide intermediate, UF4(OH)2. The dihydroxide is predicted to form under both kinetic and thermodynamic control, and, unlike the alternate pathway leading to the UO2F2 monomer, its reaction energy is exothermic, in agreement with observation. Finally, harmonic and anharmonic vibrational simulations are performed to reinterpret literature infrared spectroscopy in light of this newly identified species.
Journal of Physics A: Mathematical and Theoretical
In a quantum network, a key challenge is to minimize the direct reflection of flying qubits as they couple to stationary, resonator-based memory qubits, as the reflected amplitude represents state transfer infidelity that cannot be directly recovered. Optimizing the transfer fidelity can be accomplished by dynamically varying the resonator's coupling rate to the flying qubit field. Here, we analytically derive the optimal coupling rate profile in the presence of intrinsic loss of the quantum memory using an open quantum systems method that accounts for intrinsic resonator losses. We show that, since the resonator field must be initially empty, an initial amplitude in the resonator must be generated in order to cancel reflections via destructive interference; moreover, we show that this initial amplitude can be made sufficiently small that the net fidelity of the complete transfer process remains close to unity. We then derive the time-varying resonator coupling that maximizes the state transfer fidelity as a function of the initial population and intrinsic loss rate, providing a complete protocol for optimal quantum state transfer between the flying qubit and resonator qubit. We present analytical expressions and numerical examples of the fidelities for the complete protocol using exponential and Gaussian profiles. We show that a state transfer fidelity of around 99.9% can be reached momentarily before the quantum information is lost due to the intrinsic loss in practical resonators used as quantum memories.
Optics Letters
Coherent anti-Stokes Raman scattering (CARS) is commonly used for thermometry and concentration measurement of major species. The quadratic scaling of CARS signal with number density has limited the use of CARS for detection of minor species, where more sensitive approaches may be more attractive. However, significant advancements in ultrafast CARS approaches have been made over the past two decades, including the development of hybrid CARS demonstrated to yield greatly increased excitation efficiencies. Yet, detailed detection limits of hybrid CARS have not been well established. In this Letter, detection limits for N2, H2, CO, and C2H4 by point-wise hybrid femtosecond (fs)/picosecond (ps) CARS are determined to be of the order of 10^15 molecules/cm3. Here, the possible benefit of fs/nanosecond (ns) hybrid CARS is also discussed.
ACS Applied Polymer Materials
In alkaline zinc–manganese dioxide batteries, there is a need for selective polymeric separators that have good hydroxide ion conductivity but that prevent the transport of zincate (Zn(OH)4)2-. Here we investigate the nanoscale structure and hydroxide transport in two cationic polysulfones that are promising for these separators. We present the synthesis and characterization of a tetraethylammonium-functionalized polysulfone (TEA-PSU) and compare it to our previous work on an N-butylimidazolium-functionalized polysulfone (NBI-PSU). We perform atomistic molecular dynamics (MD) simulations of both polymers at experimentally relevant water contents. The MD simulations show that both polymers develop well phase-separated nanoscale water domains that percolate through the polymer. Calculation of the total scattering intensity from the MD simulations reveals weak or nonexistent ionomer peaks at low wave vectors; the lack of an ionomer peak is due to a loss of contrast in the scattering. The small water domains in both polymers, with median diameters on the order of 0.5–0.7 nm, lead to hydroxide and water diffusion constants that are 1–2 orders of magnitude smaller than their values in bulk water. This confinement lowers the conductivity but also may explain the strong exclusion of zincate from the PSU membranes seen experimentally.
This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to: (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications); and (2) provide an automated structure for specifying, running, and generating reports on algorithm performance. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue up jobs to run algorithm tests against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
Journal of the Electrochemical Society
Concerns about the safety of lithium-ion batteries have motivated numerous studies on the response of fresh cells to abusive, off-nominal conditions, but studies on aged cells are relatively rare. This perspective considers all open literature on the thermal, electrical, and mechanical abuse response of aged lithium-ion cells and modules to identify critical changes in their behavior relative to fresh cells. We outline data gaps in aged cell safety, including electrical and mechanical testing, and module-level experiments. Understanding how the abuse response of aged cells differs from fresh cells will enable the design of more effective energy storage failure mitigation systems.
Innovations in Systems and Software Engineering
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. State chart models are typically used to design complex control systems that respond to environmental triggers with a sequential process; the model is usually constructed at a concrete level and verified and validated using animation techniques that rely on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal logic model checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
Physical Chemistry Chemical Physics
This paper describes a detailed understanding of how nanofillers function as radiation barriers within the polymer matrix, and how their effectiveness is impacted by factors such as composition, size, loading, surface chemistry, and dispersion. We designed a comprehensive investigation of heavy ion irradiation resistance in epoxy matrix composites loaded with surface-modified ceria nanofillers, utilizing tandem computational and experimental methods to elucidate radiolytic damage processes and relate them to chemical and structural changes observed through thermal analysis, vibrational spectroscopy, and electron microscopy. A detailed mechanistic examination supported by FTIR spectroscopy data identified the bisphenol A moiety as a primary target for degradation reactions. Results of computational modeling by the Stopping Range of Ions in Matter (SRIM) Monte Carlo simulation were in good agreement with damage analysis from surface and cross-sectional SEM imaging. All metrics indicated that ceria nanofillers reduce the damage area in polymer nanocomposites, and that nanofiller loading and homogeneity of dispersion are key to effective damage prevention. The results of this study represent a significant pathway for engineered irradiation tolerance in a diverse array of polymer nanocomposite materials. Numerous areas of materials science can benefit from utilizing this facile and effective method to extend the reliability of polymer materials.
European Physical Journal. Special Topics
We report an investigation into simultaneously controlling a base-excited dynamical system and enhancing the effectiveness of a piezoelectric energy harvesting absorber. Amplitude stoppers are included to improve the energy harvested by the absorber, with the possibility of activating broadband resonant regions to increase the operable range of the absorber. This study optimizes the stoppers’ ability to help the energy harvesting absorber generate energy by investigating asymmetric gap and stiffness configurations. Medium stiffnesses of 5 × 10^4 N/m and 1 × 10^5 N/m show a significant impact on the primary system’s dynamics and an improvement in the level of power harvested by the absorber. A single-stopper configuration with a gap distance of 0.02 m improves peak power by 29% and average power by 9% over the symmetric case. Additionally, an asymmetric stiffness configuration in which one stopper stiffness is 1 × 10^5 N/m, the second stopper stiffness is 5 × 10^3 N/m, and the gap size is 0.02 m yields improvements of 25% and 8% in peak and average harvested power, respectively. Hard stopper configurations show improvements for both asymmetric cases, but not enough to outperform the system without amplitude stoppers.
ACS Applied Materials and Interfaces
A rapid and facile design strategy to create a highly complex optical tag with programmable, multimodal photoluminescent properties is described. This was achieved via intrinsic and DNA-fluorophore hidden signatures. As a first covert feature of the tag, an intricate novel heterometallic near-infrared (NIR)-emitting mesoporous metal-organic framework (MOF) was designed and synthesized. The material is constructed from two chemically distinct, homometallic hexanuclear clusters based on Nd and Yb. Uniquely, the Nd-based cluster is observed here for the first time in a MOF and consists of two staggered Nd μ3-oxo trimers. To generate controlled, multimodal, and tailorable emission with difficult to counterfeit features, the NIR-emissive MOF was post-synthetically modified via a fluorescent DNA oligo labeling design strategy. The surface attachment of several distinct fluorophores, including the simultaneous attachment of up to three distinct fluorescently labeled oligos was achieved, with excitation and emission properties across the visible spectrum (480-800 nm). The DNA inclusion as a secondary covert element in the tag was demonstrated via the detection of SYBR Gold dye association. Importantly, the approach implemented here serves as a rapid and tailorable way to encrypt distinct information in a facile and modular fashion and provides an innovative technology in the quest toward complex optical tags.
Abstract not provided.
Modelling and Simulation in Materials Science and Engineering
This review discusses atomistic modeling techniques used to simulate radiation damage in crystalline materials. Radiation damage due to energetic particles results in the formation of defects, and the subsequent evolution of these defects over multiple length and time scales requires numerous simulation techniques to model the full gamut of behaviors. This work focuses on current and emerging methodologies at the atomistic scale regarding the mechanisms of defect formation at the primary damage state.
Digital Threats: Research and Practice
Advances in differentiating between malicious intent and natural "organizational evolution" to explain observed anomalies in operational workplace patterns suggest that evaluating the collective behaviors observed in facilities can improve insider threat detection and mitigation (ITDM). Advances in artificial neural networks (ANNs) provide more robust pathways for capturing, analyzing, and collating disparate data signals into quantitative descriptions of operational workplace patterns. In response, a joint study by Sandia National Laboratories and the University of Texas at Austin explored the effectiveness of commercial ANN software to improve ITDM. This research demonstrates the benefit, for ITDM, of learning patterns of organizational behaviors, detecting off-normal (or anomalous) deviations from these patterns, and alerting when certain types, frequencies, or quantities of deviations emerge. Evaluating nearly 33,000 access control data points and over 1,600 intrusion sensor data points collected over a nearly twelve-month period, the study's results demonstrated that the ANN could recognize operational patterns at the Nuclear Engineering Teaching Laboratory (NETL) and detect off-normal behaviors, suggesting that ANNs can be used to support a data-analytic approach to ITDM. Several representative experiments were conducted to further evaluate these conclusions, with the resultant insights supporting collective behavior-based analytical approaches to quantitatively describe insider threat detection and mitigation.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Nuclear Science
We investigate the sensitivity of silicon-oxide-nitride-silicon-oxide (SONOS) charge trapping memory technology to heavy-ion induced single-event effects. Threshold voltage (V_T) statistics were collected across multiple test chips that contained in total 18 Mb of 40-nm SONOS memory arrays. The arrays were irradiated with Kr and Ar ion beams, and the changes in their V_T distributions were analyzed as a function of linear energy transfer (LET), beam fluence, and operating temperature. We observe that heavy ion irradiation induces a tail of disturbed devices in the 'program' state distribution, which has also been seen in the response of floating-gate (FG) flash cells. However, the V_T distribution of SONOS cells lacks a distinct secondary peak, which is generally attributed to direct ion strikes to the gate-stack of FG cells. This property, combined with the observed change in the V_T distribution with LET, suggests that SONOS cells are not particularly sensitive to direct ion strikes but cells in the proximity of an ion's absorption can still experience a V_T shift. These results shed new light on the physical mechanisms underlying the V_T shift induced by a single heavy ion in scaled charge trap memory.
Materials Today
With the proliferation of additive manufacturing and 3D printing technologies, a broader palette of material properties can be elicited from cellular solids, also known as metamaterials, architected foams, programmable materials, or lattice structures. Metamaterials are designed and optimized under the assumption of perfect geometry and a homogeneous underlying base material. Yet in practice real lattices contain thousands or even millions of complex features, each with imperfections in shape and material constituency. While the role of these defects on the mean properties of metamaterials has been well studied, little attention has been paid to the stochastic properties of metamaterials, a crucial next step for high reliability aerospace or biomedical applications. In this work we show that it is precisely the large quantity of features that serves to homogenize the heterogeneities of the individual features, thereby reducing the variability of the collective structure and achieving effective properties that can be even more consistent than the monolithic base material. In this first statistical study of additive lattice variability, a total of 239 strut-based lattices were mechanically tested for two pedagogical lattice topologies (body centered cubic and face centered cubic) at three different relative densities. The variability in yield strength and modulus was observed to exponentially decrease with feature count (to the power −0.5), a scaling trend that we show can be predicted using an analytic model or a finite element beam model. The latter provides an efficient pathway to extend the current concepts to arbitrary/complex geometries and loading scenarios. These results not only illustrate the homogenizing benefit of lattices, but also provide governing design principles that can be used to mitigate manufacturing inconsistencies via topological design.
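The reported scaling of property variability with feature count (to the power −0.5) can be reproduced with a toy Monte Carlo calculation in which each lattice sample averages many independently scattered strut properties. The sketch below is only an illustration under that simplifying assumption; the 10% per-strut scatter and the averaging rule are hypothetical, not taken from the tested lattices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-strut property scatter: coefficient of variation (CoV) of 10%.
cov_strut = 0.10
for n_struts in (8, 64, 512, 4096):
    # Each lattice sample's effective property is taken as the mean of its strut
    # properties -- a simplification of the many parallel load paths in a lattice.
    samples = rng.normal(1.0, cov_strut, size=(2000, n_struts)).mean(axis=1)
    print(f"N = {n_struts:5d}  CoV of effective property = {samples.std():.4f}  "
          f"(N**-0.5 prediction: {cov_strut / np.sqrt(n_struts):.4f})")
```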
Corrosion
The effects of applied stress, ranging from tensile to compressive, on the atmospheric pitting corrosion behavior of 304L stainless steel (SS304L) were analyzed through accelerated atmospheric laboratory exposures and microelectrochemical cell analysis. After exposing the lateral surface of a SS304L four-point bend specimen to artificial seawater at 50°C and 35% relative humidity for 50 d, pitting characteristics were determined using optical profilometry and scanning electron microscopy. The SS304L microstructure was analyzed using electron backscatter diffraction. Additionally, localized electrochemical measurements were performed on a similar, unexposed, SS304L four-point bend bar to determine the effects of applied stress on corrosion susceptibility. Under the applied loads and the environment tested, the observed pitting characteristics showed no correlation with the applied stress (from 250 MPa to -250 MPa). Pitting depth, surface area, roundness, and distribution were found to be independent of location on the sample or applied stress. The lack of correlation between pitting statistics and applied stress was more likely due to the aggressive exposure environment, with a sea salt loading of 4 g/m2 chloride. The pitting characteristics observed were instead governed by the available cathode current and salt distribution, which are a function of sea salt loading, as well as pre-existing underlying microstructure. In microelectrochemical cell experiments performed in Cl- environments comparable to the atmospheric exposure and in environments containing orders of magnitude lower Cl- concentrations, effects of the applied stress on corrosion susceptibility were only apparent in open-circuit potential in low Cl- concentration solutions. Cl- concentration governed the current density and transpassive dissolution potential.
IEEE Transactions on Nuclear Science
In this article, we provide an analytical model for the total ionizing dose (TID) effects on the bit error statistics of commercial flash memory chips. We have validated the model with experimental data collected by irradiating several commercial NAND flash memory chips from different technology nodes. We find that our analytical model can project bit errors at higher TID values [20 krad (Si)] from measured data at lower TID values [<1 krad (Si)]. Based on our model and the measured data, we have formulated basic design rules for using a commercial flash memory chip as a dosimeter. We discuss the impact of NAND chip-to-chip variability, noise margin, and the intrinsic errors on the dosimeter design using detailed experimentation.
IEEE Transactions on Reliability
Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting, in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. Principles of this methodology are rigorously defended and induce a model for determining the reliability for the various products in these networks. Our model does not limit us from having cycles in the network, as long as the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers. As a systems manager considers systems modification, such as the replacement of owned and maintained hardware systems with cloud computing resources, the need for comparative analysis of system reliability is paramount. The model is extended to handle conditional knowledge about the network, allowing one to make predictions of weaknesses in the system. Finally, to illustrate the model's flexibility over different forms, it is demonstrated on a system of components and subcomponents.
IEEE Transactions on Nuclear Science
We utilize electrically detected magnetic resonance (EDMR) measurements to compare high-field-stressed and gamma-irradiated Si/SiO2 metal-oxide-silicon (MOS) structures. We utilize spin-dependent recombination (SDR) EDMR, detected using the Fitzgerald and Grove dc I-V approach, to compare the effects of high-field electrical stressing and gamma irradiation on defect formation at and near the Si/SiO2 interface. As anticipated, both greatly increase the density of Pb centers (silicon dangling bonds at the interface). The irradiation also generated a significant increase in the dc I-V EDMR response of E′ centers (oxygen vacancies in the SiO2 films), whereas the generation of an E′ EDMR response under high-field stressing is much weaker than in the gamma irradiation case. These results suggest a difference in the physical distribution of the defects resulting from radiation damage and from high-electric-field stressing.
International Journal of Rock Mechanics and Mining Sciences
Accurate predictions of room closure are important for hazardous waste repositories in rock salt formations, such as the Waste Isolation Pilot Plant (WIPP). When Munson and co-workers simulated several room closure experiments conducted at the WIPP during the 1980s and 1990s, their simulated closure curves closely agreed with the closure measurements. A careful review of their work, however, raised concerns and prompted the reinvestigation in this paper. To begin the reinvestigation, Munson's legacy Room D closure simulation was reasonably recreated in a current-day finite element code. Next, special care was taken to obtain numerically converged results, re-introduce the anhydrite strata intermittently ignored by Munson, and calibrate the Munson–Dawson (M–D) constitutive model for salt as much as possible from laboratory test measurements. When this new model was used to simulate Room D's closure, it under-predicted the horizontal and vertical closure rates by 2.34× and 3.10×, respectively, at 5.7 years after room excavation. As a result, the M–D model was extended to capture the newly established creep behavior at low equivalent stresses (< 8 MPa) and replace the Tresca with the Hosford equivalent stress. Simulations using the new M–D model over-predicted the horizontal closure rate by 1.15× and under-predicted the vertical closure rate by 1.08× at 5.7 years, averaged over three room closure experiments. Although further improvements could be made, the new model has a stronger scientific foundation than Munson's legacy model and appears ready for careful engineering use.
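For reference, the Hosford equivalent stress that replaces the Tresca measure in the extended M–D model has the standard form sketched below; the exponent value used here is a hypothetical placeholder, since the calibrated value is not quoted in the abstract.

```python
def hosford_equivalent_stress(s1, s2, s3, a=8.0):
    """Hosford equivalent stress from principal stresses.
    a = 2 recovers von Mises; a = 1 or a -> infinity recovers Tresca."""
    return (0.5 * (abs(s1 - s2)**a + abs(s2 - s3)**a + abs(s3 - s1)**a))**(1.0 / a)

# Example: a hypothetical triaxial compressive stress state (values in MPa).
print(f"{hosford_equivalent_stress(-5.0, -10.0, -20.0, a=8.0):.2f} MPa")
```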
IEEE Transactions on Nuclear Science
This article evaluates the data retention characteristics of irradiated multilevel-cell (MLC) 3-D NAND flash memories. We irradiated the memory chips by a Co-60 gamma-ray source for up to 50 krad(Si) and then wrote a random data pattern on the irradiated chips to find their retention characteristics. The experimental results show that the data retention property of the irradiated chips is significantly degraded when compared to the un-irradiated ones. We evaluated two independent strategies to improve the data retention characteristics of the irradiated chips. The first method involves high-temperature annealing of the irradiated chips, while the second method suggests preprogramming the memory modules before deploying them into radiation-prone environments.
Journal of Vacuum Science and Technology A: Vacuum, Surfaces and Films
Structural disorder causes materials' surface electronic properties, e.g., work function (φ), to vary spatially, yet it is challenging to prove exact causal relationships to underlying ensemble disorder, e.g., roughness or granularity. For polycrystalline Pt, nanoscale resolution photoemission threshold mapping reveals a spatially varying φ = 5.70 ± 0.03 eV over a distribution of (111) vicinal grain surfaces prepared by sputter deposition and annealing. With regard to field emission and related phenomena, e.g., vacuum arc initiation, a salient feature of the φ distribution is that it is skewed with a long tail to values down to 5.4 eV, i.e., far below the mean, which is exponentially impactful to field emission via the Fowler-Nordheim relation. We show that the φ spatial variation and distribution can be explained by ensemble variations of granular tilts and surface slopes via a Smoluchowski smoothing model wherein local φ variations result from spatially varying densities of electric dipole moments, intrinsic to atomic steps, that locally modify φ. Atomic step-terrace structure is confirmed with scanning tunneling microscopy (STM) at several locations on our surfaces, and prior works showed STM evidence for atomic step dipoles at various metal surfaces. From our model, we find an atomic step edge dipole μ = 0.12 D/edge atom, which is comparable to values reported in studies that utilized other methods and materials. Our results elucidate a connection between macroscopic φ and the nanostructure that may contribute to the spread of reported φ for Pt and other surfaces and may be useful toward more complete descriptions of polycrystalline metals in the models of field emission and other related vacuum electronics phenomena, e.g., arc initiation.
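The exponential impact of the low-φ tail on field emission can be illustrated with the elementary Fowler-Nordheim expression (image-charge corrections neglected). The local field value below is a hypothetical placeholder; only the measured work functions of 5.70 eV (mean) and 5.4 eV (tail) come from the text.

```python
import numpy as np

def fn_current_density(E, phi):
    """Elementary Fowler-Nordheim current density (A/m^2) for a local field E in V/m
    and work function phi in eV; image-charge corrections are neglected."""
    A, B = 1.54e-6, 6.83e9
    return A * E**2 / phi * np.exp(-B * phi**1.5 / E)

E = 5e9  # 5 GV/m local field (hypothetical)
j_mean = fn_current_density(E, 5.70)
j_tail = fn_current_density(E, 5.40)
print(f"Emission enhancement of a 5.4 eV patch over the 5.7 eV mean: {j_tail / j_mean:.1f}x")
```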
Journal of Materials Science
For transformers and inductors to meet the world’s growing demand for electrical power, more efficient soft magnetic materials with high saturation magnetic polarization and high electrical resistivity are needed. This work aimed at the development of a soft magnetic composite synthesized via spark plasma sintering with both high saturation magnetic polarization and high electrical resistivity for efficient soft magnetic cores. CoFe powder particles coated with an insulating layer of Al2O3 were used as feedstock material to improve the electrical resistivity while retaining high saturation magnetic polarization. By maintaining a continuous non-magnetic Al2O3 phase throughout the material, both a high saturation magnetic polarization, above 1.5 T, and high electrical resistivity, above 100 μΩ·m, were achieved. Through microstructural characterization of samples consolidated at various temperatures, the role of microstructural evolution on the magnetic and electronic properties of the composite was elucidated. Upon consolidation at relatively high temperature, the CoFe was found to plastically deform and flow into the Al2O3 phase at the particle boundaries, and the low resistivity of the composite was attributed to this phenomenon. In contrast, at lower consolidation temperatures, perforation of the Al2O3 phase was not observed and a high electrical resistivity was achieved, while maintaining a high magnetic polarization, ideal for more efficient soft magnetic materials for transformers and inductors.
Physics of Plasmas
Auto-magnetizing (AutoMag) liners are cylindrical tubes composed of discrete metallic helices encapsulated in insulating material; when driven with a ∼2 MA, ∼100-ns prepulse on the 20 MA, 100-ns rise time Z accelerator, AutoMag targets produced >150 T internal axial magnetic fields [Shipley et al., Phys. Plasmas 26, 052705 (2019)]. Once the current rise rate of the pulsed power driver reaches sufficient magnitude, the induced electric fields in the liner cause dielectric breakdown of the insulator material and, with sufficient current, the cylindrical target radially implodes. The dielectric breakdown process of the insulating material in AutoMag liners has been studied in experiments on the 500-900 kA, ∼100-ns rise time Mykonos accelerator. Multi-frame gated imaging enabled the first time-resolved observations of photoemission from dynamically evolving plasma distributions during the breakdown process in AutoMag targets. Using magnetohydrodynamic simulations, we calculate the induced electric field distribution and provide a detailed comparison to the experimental data. We find that breakdown in AutoMag targets does not primarily depend on the induced electric field in the gaps between conductive helices as previously thought. Finally, to better control the dielectric breakdown time, a 12-32 mJ, 170 ps ultraviolet (λ = 266 nm) laser was implemented to irradiate the outer surface of AutoMag targets to promote breakdown in a controlled manner at a lower internal axial field. The laser had an observable effect on the time of breakdown and subsequent plasma evolution, indicating that pulsed UV lasers can be used to control breakdown timing in AutoMag.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
The RISC-V instruction set architecture open licensing policy has spawned a hive of development activity, making a range of implementations publicly available. The environments in which RISC-V operates have expanded correspondingly, driving the need for a generalized approach to evaluating the reliability of RISC-V implementations under adverse operating conditions or after normal wear-out periods. Fault injection (FI) refers to the process of changing the state of registers or wires, either permanently or momentarily, and then observing execution behavior. The analysis provides insight into the development of countermeasures that protect against the leakage or corruption of sensitive information, which might occur because of unexpected execution behavior. In this article, we develop a hardware-software co-design architecture that enables fast, configurable fault emulation and utilize it for information leakage and data corruption analysis. Modern system-on-chip FPGAs enable building an evaluation platform, where control elements run on the processing system (PS) simultaneously with the target design running in the programmable logic (PL). Software components of the FI system introduce faults and report execution behavior. A pair of RISC-V FI-instrumented implementations are created and configured to execute the Advanced Encryption Standard and Twister algorithms. Key and plaintext information leakage and degraded pseudorandom sequences are both observed in the output for a subset of the emulated faults.
Solar
The timely detection of photovoltaic (PV) system failures is important for maintaining optimal performance and lifetime reliability. A main challenge remains the lack of a unified health-state architecture for the uninterrupted monitoring and predictive performance of PV systems. To this end, existing failure detection models are strongly dependent on the availability and quality of site-specific historic data. The scope of this work is to address these fundamental challenges by presenting a health-state architecture for advanced PV system monitoring. The proposed architecture comprises a machine learning model for PV performance modeling and accurate failure diagnosis. The predictive model is optimally trained on low amounts of on-site data using minimal features and coupled to functional routines for data quality verification, whereas the classifier is trained under an enhanced supervised learning regime. The results demonstrated high accuracies for the implemented predictive model, exhibiting normalized root mean square errors lower than 3.40% even when trained with low data shares. The classification results provided evidence that fault conditions can be detected with a sensitivity of 83.91% for synthetic power-loss events (power reduction of 5%) and of 97.99% for field-emulated failures in the test-bench PV system. Finally, this work provides insights on how to construct an accurate health-state architecture with predictive and classification models for the timely detection of faults and uninterrupted monitoring of PV systems, regardless of historic data availability and quality. Such guidelines and insights on the development of accurate health-state architectures for PV plants can have positive implications for operation and maintenance and monitoring strategies, thus improving system performance.
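As a reference for the reported error metric, a minimal normalized root-mean-square error calculation is sketched below; the normalization by the measured range and the sample power values are assumptions for illustration only, not the convention or data used in the study.

```python
import numpy as np

def nrmse(measured, predicted):
    """Root-mean-square error normalized by the measured range, in percent.
    (The normalization used in the study is an assumption here.)"""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    rmse = np.sqrt(np.mean((measured - predicted)**2))
    return 100.0 * rmse / (measured.max() - measured.min())

# Hypothetical PV power readings (kW) versus model predictions.
measured  = [10.2, 12.5, 14.1, 13.0, 9.8]
predicted = [10.0, 12.9, 13.8, 13.3, 9.5]
print(f"NRMSE = {nrmse(measured, predicted):.2f}%")
```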
Fuel
Current state-of-the-art gasoline direct-injection (GDI) engines use multiple injections as one of the key technologies to improve exhaust emissions and fuel efficiency. For this technology to be successful, securing adequate control of the fuel quantity for each injection is mandatory. However, nonlinearity and variations in the injection quantity can deteriorate the accuracy of fuel control, especially with small fuel injections. Therefore, it is necessary to understand the complex injection behavior and to develop a predictive model to be utilized in the development process. This study presents a methodology for rate of injection (ROI) and solenoid voltage modeling using artificial neural networks (ANNs) constructed from a set of Zeuch-style hydraulic experimental measurements conducted over a wide range of conditions. A quantitative comparison between the ANN model and the experimental data shows that the model is capable of predicting not only general features of the ROI trend, but also transient and non-linear behaviors at particular conditions. In addition, the end of injection (EOI) could be detected precisely with a virtually generated solenoid voltage signal and the signal processing method, which is applicable to an actual engine control unit. A correlation between the detected EOI timings calculated from the modeled signal and the measurement results showed a high coefficient of determination.
Computer Methods in Applied Mechanics and Engineering
Efficient and accurate calculation of spatial integrals is of major interest in the numerical implementation of peridynamics (PD). The standard way to perform this calculation is a particle-based approach that discretizes the strong form of the PD governing equation. This approach has rapidly been adopted by the PD community since it offers some advantages. It is computationally cheaper than other available schemes, can conveniently handle material separation, and effectively deals with nonlinear PD models. Nevertheless, PD models are still computationally very expensive compared with those based on the classical continuum mechanics theory, particularly for large-scale problems in three dimensions. This results from the nonlocal nature of the PD theory which leads to interactions of each node of a discretized body with multiple surrounding nodes. Here, we propose a new approach to significantly boost the numerical efficiency of PD models. We propose a discretization scheme that employs a simple collocation procedure and is truly meshfree; i.e., it does not depend on any background integration cells. In contrast to the standard scheme, the proposed scheme requires a much smaller set of neighboring nodes (keeping the same physical length scale) to achieve a specific accuracy and is thus computationally more efficient. Our new scheme is applicable to the case of linear PD models and within neighborhoods where the solution can be approximated by smooth basis functions. Therefore, to fully exploit the advantages of both the standard and the proposed schemes, a hybrid discretization is presented that combines both approaches within an adaptive framework. The high performance of the developed framework is illustrated by several numerical examples, including brittle fracture and corrosion problems in two and three dimensions.
Radio Science
We present an experiment to detect one-ton TNT-equivalent chemical explosions using pulsed Doppler radar observations of isodensity layers in the ionospheric E region during two campaigns. The first campaign, conducted on 15 October 2019, produced potential detections of all three shots. The detections closely resemble the temporal and spectral properties predicted using the InfraGA ray tracing and weakly nonlinear waveform propagation model. Here the model predicts that within 6.5–7.25 min of each shot a waveform peaking between 0.9 and 0.4 Hz will impact the ionosphere at 100 km. As the waves pass through this region, they will imprint their signal on an isodensity layer, which is detectable using a Doppler radar operating at the plasma frequency of the isodensity. Within the time windows of each of the three shots in the first campaign, we detect enhanced wave activity peaking near 0.5 Hz. These waves were imprinted on the Doppler signal probing an isodensity layer at 2.785 MHz near 100 km altitude. Despite these detections, the method appears to be unreliable, as none of the six shots from the second campaign, conducted on 10 July 2020, were detected. The observations from this campaign were characterized by an increased acoustic noise environment in the microbarom band and persistent scintillation on the radar returns. These effects obscured any detectable signal from these shots, and the baseline noise was well above the detection levels of the first campaign.
Astrophysical Journal
White dwarfs (WDs) are useful across a wide range of astrophysical contexts. The appropriate interpretation of their spectra relies on the accuracy of WD atmosphere models. One essential ingredient of atmosphere models is the theory used for the broadening of spectral lines. To date, the models have relied on Vidal et al., known as the unified theory of line broadening (VCS). There have since been advancements in the theory; however, the calculations used in model atmosphere codes have only received minor updates. Meanwhile, advances in instrumentation and data have uncovered indications of inaccuracies: spectroscopic temperatures are roughly 10% higher and spectroscopic masses are roughly 0.1 M☉ higher than their photometric counterparts. The evidence suggests that VCS-based treatments of line profiles may be at least partly responsible. Gomez et al. developed the simulation-based line-profile code Xenomorph using an improved theoretical treatment that can be used to inform questions around the discrepancy. However, the code required revisions to sufficiently decrease noise for use in model spectra and to make it computationally tractable and physically realistic. In particular, we investigate three additional physical effects that are not captured in the VCS calculations: ion dynamics, higher-order multipole expansion, and an expanded basis set. We also implement a simulation-based approach to occupation probability. The present study limits the scope to the first three hydrogen Balmer transitions (Hα, Hβ, and Hγ). We find that screening effects and occupation probability have the largest effects on the line shapes and will likely have important consequences in stellar synthetic spectra.
Vadose Zone Journal
Two-phase fluid flow properties underlie quantitative prediction of water and gas movement, but constraining these properties typically requires multiple time-consuming laboratory methods. The estimation of two-phase flow properties (van Genuchten parameters, porosity, and intrinsic permeability) is illustrated in cores of vitric nonwelded volcanic tuff using Bayesian parameter estimation that fits numerical models to observations from spontaneous imbibition experiments. The uniqueness and correlation of the estimated parameters is explored using different modeling assumptions and subsets of the observed data. The resulting estimation process is sensitive to both moisture retention and relative permeability functions, thereby offering a comprehensive method for constraining both functions. The data collected during this relatively simple laboratory experiment, used in conjunction with a numerical model and a global optimizer, result in a viable approach for augmenting more traditional capillary pressure data obtained from hanging water column, membrane plate extractor, or mercury intrusion methods. This method may be useful when imbibition rather than drainage parameters are sought, when larger samples (e.g., including heterogeneity or fractures) need to be tested that cannot be accommodated in more traditional methods, or when in educational laboratory settings.
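For readers unfamiliar with the estimated parameters, the van Genuchten retention function and the associated Mualem relative permeability can be evaluated as in the sketch below; the α and n values are hypothetical placeholders rather than the estimates obtained for the tuff cores.

```python
import numpy as np

def van_genuchten_saturation(h, alpha, n):
    """Effective saturation Se(h) from the van Genuchten retention model,
    with m = 1 - 1/n and h the capillary pressure head (positive under suction)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * np.abs(h))**n)**(-m)

def mualem_relative_permeability(Se, n):
    """Mualem-van Genuchten relative permeability as a function of Se."""
    m = 1.0 - 1.0 / n
    return np.sqrt(Se) * (1.0 - (1.0 - Se**(1.0 / m))**m)**2

# Hypothetical parameters for a tuff core: alpha = 0.005 1/cm, n = 1.8.
h = np.logspace(0, 4, 5)   # capillary head, cm
Se = van_genuchten_saturation(h, alpha=0.005, n=1.8)
kr = mualem_relative_permeability(Se, n=1.8)
for hi, si, ki in zip(h, Se, kr):
    print(f"h = {hi:8.1f} cm  Se = {si:.3f}  kr = {ki:.3e}")
```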
International Journal of Multiphase Flow
In this work, we present a detailed implementation and validation of the droplet modeling framework proposed by Dahms and Oefelein (2016) in the commercial engine CFD software CONVERGE using the User Defined Function (UDF) interface. The model accounts for the nonlinear deformation and oscillation experienced by liquid spray droplets injected into high-pressure, high-temperature environments. Lagrangian spray simulations of the Engine Combustion Network (ECN) Spray A are performed. Model validation against standard experimental measurements of liquid velocity and vapor mixture fraction is conducted. To perform more rigorous model validation, new experimental measurements based on Diffused Back Illumination (DBI) are introduced. The new measurements are processed for Projected Liquid Volume (PLV), which offers as close to one-to-one model validation for liquid penetration as possible while offering new insights into the spray physics. Comparisons with a one-dimensional model based on the adiabatic mixing theory of Siebers (1999) and Desantes et al. (2007) are also conducted. Through these model validation exercises, it is shown that the new framework improves liquid-phase penetration predictions, following a tendency for enhanced evaporation, compared to the standard approach for both Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES). At the liquid length, maximum mixture fraction values predicted by the new approach are in good agreement with those of an adiabatic mixing model. Qualitative analysis of the spray behavior during the early stage of the injection process reveals that the proposed framework predicts a significant increase in droplet evaporation rate with lower droplet drag compared to the current standard approach.
AIP Advances
Impact ionization coefficients play a critical role in semiconductors. In addition to silicon, silicon carbide and gallium nitride are important semiconductors that are being seen more as mainstream semiconductor technologies. As a reflection of the maturity of these semiconductors, predictive modeling has become essential to device and circuit designers, and impact ionization coefficients play a key role here. Recently, several studies have measured impact ionization coefficients. We dedicated the first part of our study to comparing three experimental methods to estimate impact ionization coefficients in GaN, which are all based on photomultiplication but feature characteristic differences. The first method inserts an InGaN hole-injection layer, the accuracy of which is challenged by the dominance of ionization in InGaN, leading to possible overestimation of the coefficients. The second method utilizes the Franz-Keldysh effect for hole injection but not for electrons, where the mixed injection of induced carriers would require a margin of error. The third method uses complementary p-n and n-p structures that have been at the basis of this estimation in Si and SiC and leans on the assumption of a constant electric field, and any deviation would require a margin of error. In the second part of our study, we evaluated the models using recent experimental data from diodes demonstrating avalanche breakdown.
Journal of the Mechanics and Physics of Solids
Garnet-type solid electrolytes, such as Li7La3Zr2O12 (LLZO), are a promising alternative to liquid electrolytes for lithium-metal batteries. However, such solid-electrolyte materials frequently exhibit undesirable lithium (Li) metal plating and fracture along grain boundaries. In this study, we employ atomistic simulations to investigate the mechanisms and key fracture properties associated with intergranular fracture along one such boundary. Our results show that a Σ5(310) grain boundary exhibits brittle fracture behavior, i.e. the absence of dislocation activity ahead of the propagating crack tip, accompanied by a decrease in work of separation, peak stress, and maximum stress intensity factor as the temperature increases from 300 K to 1500 K. As the crack propagates, we predict two temperature-dependent Li clustering regimes. For temperatures at or below 900 K, Li tends to cluster in the bulk region away from the crack plane, driven by a void-coalescence mechanism concomitant with a cubic-to-tetragonal phase transition. The tetragonalization of LLZO in this temperature regime acts as an emerging toughening mechanism. At higher temperatures, this phase transition is suppressed, leading to a more uniform distribution of Li throughout the grain-boundary system and lower fracture properties compared to those at lower temperatures.
IEEE Transactions on Nuclear Science
This article analyzes the total ionizing dose (TID) effects on noise characteristics of commercial multi-level-cell (MLC) 3-D NAND memory technology during the read operation. The chips were exposed to a Co-60 gamma-ray source for up to 100 krad(Si) of TID. We find that the number of noisy cells in the irradiated chip increases with TID. Bit-flip noise was more dominant for cells in an erased state during irradiation compared to programmed cells.
IEEE Transactions on Nuclear Science
Integrated silicon microwave pin diodes are exposed to 10-keV X-rays up to a dose of 2 Mrad(SiO2) and 14-MeV fast neutrons up to a fluence of 2.2 × 10^13 cm^-2. Changes in both dc leakage current and small-signal circuit components are examined. Degradation in performance due to total-ionizing dose (TID) is shown to be suppressed by non-quasi-static (NQS) effects during radio frequency (RF) operation. Tolerance to displacement damage from fast neutrons is also observed, which is explained using technology computer-aided design (TCAD) simulations. Overall, the characterized pin diodes are tolerant to cumulative radiation at levels consistent with space applications such as geosynchronous weather satellites.
IEEE Transactions on Nuclear Science
In this article, we present a unique method of measuring single-event transient (SET) sensitivity in 12-nm FinFET technology. A test structure is presented that approximately measures the length of SETs using flip-flop shift registers with clock inputs driven by an inverter chain. The test structure was irradiated with ions at linear energy transfers (LETs) of 4.0, 5.6, 10.4, and 17.9 MeV-cm2/mg, and the cross sections of SET pulses measured down to 12.7 ps are presented. The experimental results are interpreted using a modeling methodology that combines TCAD and radiation effect simulations to capture the SET physics, and SPICE simulations to model the SETs in a circuit. The modeling shows that only ion strikes on the fin structure of the transistor would result in enough charge collected to produce SETs, while strikes in the subfin and substrate do not result in enough charge collected to produce measurable transients. Comparisons of the cumulative cross sections obtained from the experiment and from the simulations validate the modeling methodology presented.
Abstract not provided.
Combustion and Flame
Transforming polymorphs, melting, and boiling are physical processes that can accelerate decomposition rates during cookoff of PETN and make measurements difficult. For example, splashing liquids from large bubbles filled with decomposition products clog pressure tubing in sealed experiments. Boil over can also extinguish thermal excursions in vented experiments, making ignition difficult. For better measurements, we have modified the Sandia Instrumented Thermal Ignition (SITI) experiment to obtain better sealed and vented cookoff data for PETN by reducing the sample size and including additional gas space to prevent clogged tubing and boil over. Ignition times were not affected by (1) increasing the gas space by a factor of 3 in sealed SITI experiments or (2) venting the decomposition gases. That is, thermal ignition of PETN is not pressure dependent, and the rate-limiting step during PETN decomposition likely occurs in the condensed phase. A simple decomposition model was calibrated using these observations and includes rate acceleration caused by melting and boiling. The model is used to predict internal temperatures, pressurization, and thermal ignition in a wide variety of experiments. The model is also used with SITI data to estimate the previously unreported latent enthalpy (5 J/g) associated with the α (PETN-I) to β (PETN-II) polymorphic phase transformation of PETN.
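A single-step Arrhenius rate with a simple acceleration factor in the melt conveys the flavor of such a decomposition model, though it is not the calibrated Sandia model; all parameter values below are hypothetical placeholders apart from the approximate PETN melting point.

```python
import numpy as np

# Illustrative single-step Arrhenius decomposition rate with a crude acceleration
# factor once the material is molten -- NOT the calibrated model from the paper.
R = 8.314            # J/mol/K, gas constant
A = 1.0e15           # 1/s, pre-exponential factor (hypothetical)
Ea = 190e3           # J/mol, activation energy (hypothetical)
T_melt = 414.0       # K, approximate PETN melting point (~141 C)
melt_accel = 10.0    # rate multiplier once molten (hypothetical)

def rate(T):
    k = A * np.exp(-Ea / (R * T))
    return k * (melt_accel if T >= T_melt else 1.0)

for T in (390.0, 410.0, 430.0, 450.0):
    print(f"T = {T:5.1f} K  k = {rate(T):.3e} 1/s")
```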
Journal of Instrumentation
We present the analysis and results of the first dataset collected with the MARS neutron detector deployed at the Oak Ridge National Laboratory Spallation Neutron Source (SNS) for the purpose of monitoring and characterizing the beam-related neutron (BRN) background for the COHERENT collaboration. MARS was positioned next to the COH-CsI coherent elastic neutrino-nucleus scattering detector in the SNS basement corridor. This is the basement location of closest proximity to the SNS target and thus, of highest neutrino flux, but it is also well shielded from the BRN flux by infill concrete and gravel. These data show the detector registered roughly one BRN per day. Using MARS' measured detection efficiency, the incoming BRN flux is estimated to be 1.20 ± 0.56 neutrons/m2/MWh for neutron energies above ∼3.5 MeV and up to a few tens of MeV. We compare our results with previous BRN measurements in the SNS basement corridor reported by other neutron detectors.
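The flux quoted above follows from a simple normalization of the detected count rate by detection efficiency, detector area, and integrated beam power. The sketch below shows the arithmetic with placeholder efficiency and area values; only the roughly one-count-per-day rate is taken from the text, so the printed number will not reproduce the published flux.

```python
# Back-of-the-envelope normalization of the form used for the quoted flux:
#   flux = counts / (detection efficiency * detector area * integrated beam power)
counts_per_day = 1.0          # beam-related neutrons registered per day (from the text)
efficiency = 0.1              # hypothetical detection efficiency
area_m2 = 2.0                 # hypothetical effective detector area, m^2
mwh_per_day = 1.4 * 24.0      # SNS operating near 1.4 MW for 24 h

flux = counts_per_day / (efficiency * area_m2 * mwh_per_day)
print(f"Estimated BRN flux: {flux:.2f} neutrons / m^2 / MWh")
```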
Materialia
In the pursuit of improving additively manufactured (AM) component quality and reliability, fine-tuning critical process parameters such as laser power and scan speed is a great first step toward limiting defect formation and optimizing the microstructure. However, the synergistic effects between these process parameters, layer thickness, and feedstock attributes (e.g. powder size distribution) on part characteristics such as microstructure, density, hardness, and surface roughness are not as well-studied. In this work, we investigate 316L stainless steel density cubes built via laser powder bed fusion (L-PBF), emphasizing the significant microstructural changes that occur due to altering the volumetric energy density (VED) via laser power, scan speed, and layer thickness changes, coupled with different starting powder size distributions. This study demonstrates that there is not one ideal process set and powder size distribution for each machine. Instead, there are several combinations or feedstock/process parameter ‘recipes’ to achieve similar goals. This study also establishes that for equivalent VEDs, changing powder size can significantly alter part density, GND density, and hardness. Through proper parameter and feedstock control, part attributes such as density, grain size, texture, dislocation density, hardness, and surface roughness can be customized, thereby creating multiple high-performance regions in the AM process space.
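The volumetric energy density varied in this study is commonly defined as VED = P/(v·h·t), with laser power P, scan speed v, hatch spacing h, and layer thickness t. The sketch below evaluates this definition with hypothetical parameter values; the hatch spacing in particular is assumed, since the abstract varies power, speed, and layer thickness.

```python
# Volumetric energy density (VED) as commonly defined for laser powder bed fusion.
P = 200.0       # W, laser power (hypothetical)
v = 0.8         # m/s, scan speed (hypothetical)
h = 100e-6      # m, hatch spacing (assumed)
t = 40e-6       # m, layer thickness (hypothetical)

ved = P / (v * h * t)                 # J/m^3
print(f"VED = {ved / 1e9:.1f} J/mm^3")
```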
Fuel
Decarbonizing the transportation sector is likely to require both electrification and increased incorporation of biofuels and/or bioblendstocks. While the social and environmental benefits of bioblendstocks are well understood, their real value to fuel producers has not been established. This work considers prenol as a bioblendstock case study to identify sources of intrinsic value to fuel blenders by studying the properties of binary mixtures with gasoline components. The considered refinery blendstocks were samples of full-range naphthas from the distillation, fluidized catalytic cracking, isomerization, alkylation, and reforming units. Octane numbers, Reid vapor pressure (RVP), distillation curves, and sulfur content were evaluated. Our results indicate the need for adjusting the formulation of the base fuel, depending on the interplay between the properties of the bioblendstock and those of the base fuel. Prenol increased the research octane number (RON) and octane sensitivity (OS) of the base fuel by up to 25 and 10 octane numbers, respectively. Additionally, 10 vol% prenol reduced RVP by up to 2.2 psi for the more volatile blendstock. Thus, considering prenol as a low-volatility, RON/OS-boosting bioblendstock, the composition of the preferred base fuel was proposed to contain reduced olefins and aromatics and increased light fractions. The potential impact of this new gasoline formulation on refining processes and products gives rise to direct sources of value to refiners, such as exporting products to the chemicals market, increasing the value of intermediate refinery streams, decreasing the operating severity of certain refinery units, and broadening of the product suite.
Optimization and Engineering
We present three core principles for engineering-oriented integrated modeling and optimization tool sets—intuitive modeling contexts, systematic computer-aided reformulations, and flexible solution strategies—and describe how new developments in Pyomo.GDP for Generalized Disjunctive Programming (GDP) advance this vision. We describe a new logical expression system implementation for Pyomo.GDP allowing for a more intuitive description of logical propositions. The logical expression system supports automated reformulation of these logical constraints to linear constraints. We also describe two new logic-based global optimization solver implementations built on Pyomo.GDP that exploit logical structure to avoid “zero-flow” numerical difficulties that arise in nonlinear network design problems when nodes or streams disappear. These new solvers also demonstrate the capability to link to external libraries for expanded functionality within an integrated implementation. We present these new solvers in the context of a flexible array of solution paths available to GDP models. Finally, we present results on a new library of GDP models demonstrating the value of multiple solution approaches.
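A minimal sketch of the style of model these Pyomo.GDP developments target is given below: a two-term disjunction plus a logical proposition written directly on the Boolean indicator variables, reformulated first to linear logic constraints and then to a Big-M MILP. The problem data are hypothetical, and a recent Pyomo release (6.x) is assumed for the Boolean indicator_var API.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint, minimize,
                           NonNegativeReals, BooleanVar, LogicalConstraint,
                           TransformationFactory)
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.x = Var(bounds=(0, 10))
m.cost = Var(within=NonNegativeReals, bounds=(0, 100))

# Disjunction: either the unit is built (minimum flow plus a fixed cost) or it is absent.
m.built = Disjunct()
m.built.min_flow = Constraint(expr=m.x >= 2)
m.built.fixed_cost = Constraint(expr=m.cost >= 5)
m.absent = Disjunct()
m.absent.no_flow = Constraint(expr=m.x == 0)
m.choice = Disjunction(expr=[m.built, m.absent])

# Logical proposition written directly on the Boolean indicator variables:
# building this unit implies that a (hypothetical) downstream option Y is selected.
m.Y = BooleanVar()
m.link = LogicalConstraint(expr=m.built.indicator_var.implies(m.Y))

m.obj = Objective(expr=m.cost + m.x, sense=minimize)

# Reformulate: logical constraints -> linear constraints, then disjunctions -> Big-M MILP.
TransformationFactory('core.logical_to_linear').apply_to(m)
TransformationFactory('gdp.bigm').apply_to(m)
# The reformulated model can now be handed to any MILP solver, e.g.:
# SolverFactory('glpk').solve(m)
```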
AVS Quantum Science
Stochastic incorporation kinetics can be a limiting factor in the scalability of semiconductor fabrication technologies using atomic-precision techniques. While these technologies have recently been extended from donors to acceptors, the extent to which kinetics will impact single-acceptor incorporation has yet to be assessed. To identify the precursor molecule and dosing conditions that are promising for deterministic incorporation, we develop and apply an atomistic model for the single-acceptor incorporation rates of several recently demonstrated molecules: diborane (B2H6), boron trichloride (BCl3), and aluminum trichloride in both monomer (AlCl3) and dimer forms (Al2Cl6). While all three precursors can realize single-acceptor incorporation, we predict that diborane is unlikely to realize deterministic incorporation, boron trichloride can realize deterministic incorporation with modest heating (50 °C), and aluminum trichloride can realize deterministic incorporation at room temperature. We conclude that both boron and aluminum trichloride are promising precursors for atomic-precision single-acceptor applications, with the potential to enable the reliable production of large arrays of single-atom quantum devices.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report details a method to estimate the energy content of various types of seismic body waves. The method is based on the strain energy of an elastic wavefield and Hooke’s Law. We present a detailed derivation of a set of equations that explicitly partition the seismic strain energy into two parts: one for compressional (P) waves and one for shear (S) waves. We posit that the ratio of these two quantities can be used to determine the relative contribution of seismic P and S waves, possibly as a method to discriminate between earthquakes and buried explosions. We demonstrate the efficacy of our method by using it to compute the strain energy of synthetic seismograms with differing source characteristics. Specifically, we find that explosion-generated seismograms contain a preponderance of P wave strain energy when compared to earthquake-generated synthetic seismograms. Conversely, earthquake-generated synthetic seismograms contain a much greater degree of S wave strain energy when compared to explosion-generated seismograms.
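One common way to split elastic strain energy density into a compressional (dilatational) part and a shear (deviatoric) part is sketched below. It is offered only as an illustration of the kind of partition discussed; the report derives its own explicit P/S partition, which need not coincide with this textbook volumetric/deviatoric split, and the strain state and Lamé parameters are hypothetical.

```python
import numpy as np

def strain_energy_split(strain, lam, mu):
    """Split elastic strain energy density into a dilatational part and a
    deviatoric (shear) part using the textbook volumetric/deviatoric decomposition."""
    theta = np.trace(strain)                          # dilatation
    dev = strain - (theta / 3.0) * np.eye(3)          # deviatoric strain
    kappa = lam + 2.0 * mu / 3.0                      # bulk modulus
    u_vol = 0.5 * kappa * theta**2
    u_dev = mu * np.tensordot(dev, dev)               # mu * e_ij e_ij
    return u_vol, u_dev

# Hypothetical strain state and Lame parameters (Pa).
eps = np.array([[1e-6, 2e-7, 0.0],
                [2e-7, -4e-7, 0.0],
                [0.0,  0.0,  1e-7]])
u_vol, u_dev = strain_energy_split(eps, lam=30e9, mu=25e9)
print(f"dilatational energy density: {u_vol:.3e} J/m^3")
print(f"deviatoric energy density:   {u_dev:.3e} J/m^3")
print(f"ratio (dilatational/deviatoric): {u_vol / u_dev:.2f}")
```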
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physics of Plasmas
In an x-ray driven cavity experiment, an intense flux of soft x rays on the emitting surface produces significant emission of photoelectrons having several kiloelectronvolts of kinetic energy. At the same time, rapid heating of the emitting surface occurs, resulting in the release of adsorbed surface impurities and subsequent formation of an impurity plasma. This numerical study explores a simple model for the photoelectric currents and the impurity plasma. In this work, attention is given to the effect of varying the composition of the impurity plasma. The presence of protons or hydrogen molecular ions leads to a substantially enhanced cavity current, while heavier plasma ions are seen to have a limited effect on the cavity current due to their lower mobility. Additionally, it is demonstrated that an additional peak in the current waveform can appear due to the impurity plasma. A correlation between the impurity plasma composition and the timing of this peak is elucidated.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Dependable and Secure Computing
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.