The research team developed models of Attentional Control (AC) that are distinct from existing modeling approaches in the literature. The goal was to enable the team to (1) make predictions about AC and human performance in real-world scenarios and (2) make predictions about individual characteristics based on human data. First, the team developed a proof-of-concept approach for representing an experimental design and human-subjects data in a Bayesian model, then demonstrated an ability to draw inferences about conditions of interest relevant to real-world scenarios. Ultimately, this effort was successful: the team was able to make reasonable inferences (i.e., supported by behavioral data) about conditions of interest and to develop a risk model for AC, where risk is defined as a mismatch between AC and attentional demand. The team additionally defined a path forward for a human-constrained machine learning (HCML) approach to predict an individual's state from performance data. The effort represents a successful first step in both modeling directions, serves as a basis for future work, and identifies numerous opportunities for follow-on activities.
This is the Sandia report from a joint NSRD project between Sandia National Laboratories and Savannah River National Laboratory. The project involved development of simulation tools and data intended to be useful for tritium operations safety assessment. Tritium is a radioactive isotope of hydrogen with a limited lifetime, and it is found at many tritium facilities in the form of elemental gas (T2). The most serious credible risk in an accident scenario arises when released tritium reacts with oxygen to form tritiated water, which is subsequently absorbed into the human body. This tritium oxide is much more readily absorbed by the body than the elemental form and therefore represents a limiting factor for safety analysis. An abnormal condition such as a fire may convert the safer T2 inventory to the more hazardous oxidized form, and it is this risk that tends to govern safety protocols. Tritium fire datasets do not exist, so prescriptive safety guidance is largely conservative and relies on means other than testing to formulate guidelines. This can result in expensive and/or unnecessary mitigation design, handling protocols, and operational activities. The issue can be addressed through added studies of tritium behavior under representative conditions. Because of the hazards associated with such tests, the problem is approached mainly through modeling and simulation together with surrogate testing. This study largely establishes the capability to generate simulation predictions with sufficiently credible characteristics to be accepted, as a surrogate for actual data, in formulating safety guidelines.
The purpose of this report is to document updates to the simulation of commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may have potential impacts on the fuel, cladding, and other components in the system. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates additional, well-designed investigations of drying process efficacy and water retention. Scaled tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes. This report documents testing updates for the Dashpot Drying Apparatus (DDA), an apparatus constructed at a reduced scale with multiple Pressurized Water Reactor (PWR) fuel rod surrogates and a single guide tube dashpot. This apparatus is fashioned from a truncated 5×5 section of a prototypic 17×17 PWR fuel skeleton and includes the lowest segment of a single guide tube, often referred to as the dashpot region. The guide tube in this assembly is open and allows for insertion of a poison rod (neutron absorber) surrogate.
High-speed, optical imaging diagnostics are presented for three-dimensional (3D) quantification of explosively driven metal fragmentation. At early times after detonation, Digital Image Correlation (DIC) provides non-contact measures of 3D case velocities, strains, and strain rates, while a proposed stereo imaging configuration quantifies in-flight fragment masses and velocities at later times. Experiments are performed using commercially obtained RP-80 detonators from Teledyne RISI, which are shown to create a reproducible fragment field at the benchtop scale. DIC measurements are compared with 3D simulations, which have been ‘leveled’ to match the spatial resolution of DIC. Results demonstrate improved ability to identify predicted quantities-of-interest that fall outside of measurement uncertainty and shot-to-shot variability. Similarly, video measures of fragment trajectories and masses allow rapid experimental repetition and provide correlated fragment size-velocity measurements. Measured and simulated fragment mass distributions are shown to agree within confidence bounds, while some statistically meaningful differences are observed between the measured and predicted conditionally averaged fragment velocities. Together these techniques demonstrate new opportunities to improve future model validation.
Corynebacterium glutamicum has been successfully employed for the industrial production of amino acids and other bioproducts, partially due to its native ability to utilize a wide range of carbon substrates. We demonstrated C. glutamicum as an efficient microbial host for utilizing diverse carbon substrates present in biomass hydrolysates, such as glucose, arabinose, and xylose, in addition to its natural ability to assimilate lignin-derived aromatics. As a case study to demonstrate its bioproduction capabilities, L-lactate was chosen as the primary fermentation end product along with acetate and succinate. C. glutamicum was found to grow well on different aromatics (benzoic acid, cinnamic acid, vanillic acid, and p-coumaric acid) up to a concentration of 40 mM. In addition, 13C-fingerprinting confirmed that carbon from aromatics enters primary metabolism via the TCA cycle, confirming the presence of the β-ketoadipate pathway in C. glutamicum. 13C-fingerprinting in the presence of both glucose and aromatics also revealed coumarate to be the aromatic most preferred by C. glutamicum, contributing 74% and 59% of the carbon used for the synthesis of glutamate and aspartate, respectively. 13C-fingerprinting also confirmed the activity of the ortho-cleavage, anaplerotic, and cataplerotic pathways. Finally, the engineered C. glutamicum strain grew well in biomass hydrolysate containing pentose and hexose sugars and produced L-lactate at a concentration of 47.9 g/L and a yield of 0.639 g/g from sugars with simultaneous utilization of aromatics. Succinate and acetate co-products were produced at concentrations of 8.9 g/L and 3.2 g/L, respectively. Our findings open the door to valorizing all the major carbon components of biomass hydrolysate by using C. glutamicum as a microbial host for biomanufacturing.
A straight fiber with nonlocal forces that are independent of bond strain is considered. These internal loads can either stabilize or destabilize the straight configuration. Transverse waves with long wavelength have unstable dispersion properties for certain combinations of nonlocal kernels and internal loads. When these unstable waves occur, deformation of the straight fiber into a circular arc can lower its potential energy in equilibrium. The equilibrium value of the radius of curvature is computed explicitly.
The SPECTACULAR model is a development extension of the Simplified Potential Energy Clock (SPEC) model. Both models are nonlinear viscoelastic constitutive models used to predict a wide range of time-dependent behaviors in epoxies and other glass-forming materials. This report documents the procedures used to generate SPECTACULAR calibrations for two particulate-filled epoxy systems, 828/CTBN/DEA/GMB and 828/DEA/GMB. No previous SPECTACULAR or SPEC calibration exists for 828/CTBN/DEA/GMB, while a legacy SPEC calibration exists for 828/DEA/GMB. To generate the SPECTACULAR calibrations, a step-by-step procedure was executed to determine parameters in groups with minimal coupling between parameter groups. This procedure has often been deployed to calibrate SPEC; therefore, the resulting SPECTACULAR calibration is backwards compatible with SPEC (i.e., none of the extensions specific to SPECTACULAR are used). The calibration procedure used legacy Sandia experimental data stored on the Polymer Properties Database website. The experiments used for calibration included shear master curves, isofrequency temperature sweeps under oscillatory shear, the bulk modulus at room temperature, the thermal strain during a temperature sweep, and compression through yield at multiple temperatures below the glass transition temperature. Overall, the calibrated models fit the experimental data remarkably well. However, the glassy shear modulus varies depending on the experiment used to calibrate it. For instance, the shear master curve, the isofrequency temperature sweep under oscillatory shear, and the Young's modulus in glassy compression yield values for the glassy shear modulus at the reference temperature that vary by as much as 15%.
Also, for 828/CTBN/DEA/GMB, the temperature dependence of the glassy shear modulus when fit to the Young's modulus at different temperatures is approximately four times larger than when it is determined from the isofrequency temperature sweep under oscillatory shear. For 828/DEA/GMB, the temperature dependence of the shear modulus determined from the isofrequency temperature sweep under oscillatory shear accurately predicts the Young's modulus at different temperatures. When choosing values for the shear modulus, fitting the glassy compression data was prioritized. The new and legacy calibrations for 828/DEA/GMB are similar and appear to have been calibrated from the same data; however, the new calibration improves the fit to the thermal strain data. In addition to the standard calibrations, development calibrations were produced that take advantage of development features of SPECTACULAR, including an updated equilibrium Helmholtz free energy that eliminates undesirable behavior found in previous work. Beyond the previously mentioned experimental data, the development calibrations require heat capacity data from a stress-free temperature sweep to calibrate the thermal terms.
The objective of this project is the demonstration and validation of hydrogen fuel cells in the marine environment. The prototype generator can be used to guide commercial development of a fuel cell generator product. Work includes assessment and validation of the commercial value proposition of both the application and the hydrogen supply infrastructure through third-party hosted deployment, as the next step toward widespread use of hydrogen fuel cells in the maritime environment.
The concept of a nonlocal elastic metasurface has been recently proposed and experimentally demonstrated in Zhu et al. (2020). When implemented in the form of a total-internal-reflection (TIR) interface, the metasurface can act as an elastic wave barrier that is impenetrable to deep subwavelength waves over an exceptionally wide frequency band. The underlying physical mechanism capable of delivering this broadband subwavelength performance relies on an intentionally nonlocal design that leverages long-range connections between the units forming the fundamental supercell. This paper explores the design and application of a nonlocal TIR metasurface to achieve broadband passive vibration isolation in a structural assembly made of multiple dissimilar elastic waveguides. The specific structural system comprises shell, plate, and beam waveguides, and can be seen as a prototypical structure emulating mechanical assemblies of practical interest for many engineering applications. The study also reports the results of an experimental investigation that confirms the significant vibration isolation capabilities afforded by the embedded nonlocal TIR metasurface. These results are particularly remarkable because they show that the performance of the nonlocal metasurface is preserved when applied to a complex structural assembly and under non-ideal incidence conditions of the incoming wave, hence significantly extending the validity of the results presented in Zhu et al. (2020). Results also confirm that, under proper conditions, the original concept of a planar metasurface can be morphed into a curved interface while still preserving full wave control capabilities.
We report a Bayesian framework for concurrent selection of physics-based models and (modeling) error models. We investigate the use of colored noise to capture the mismatch between the predictions of calibrated models and observational data that cannot be explained by measurement error alone, within the context of Bayesian estimation for stochastic ordinary differential equations. Proposed models are characterized by the average data-fit, a measure of how well a model fits the measurements, and by the model complexity, measured using the Kullback–Leibler divergence. The use of a more complex error model increases the average data-fit but also increases the complexity of the combined model, possibly over-fitting the data. Bayesian model selection is used to find the optimal physical model as well as the optimal error model. The optimal model is defined using the evidence, in which the average data-fit is balanced against the complexity of the model. The effect of a colored noise process is illustrated using a nonlinear aeroelastic oscillator representing a rigid NACA0012 airfoil undergoing limit cycle oscillations due to complex fluid–structure interactions. Several quasi-steady and unsteady aerodynamic models are proposed with colored noise or white noise for the model error. The use of colored noise improves the predictive capabilities of simpler models.
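To make the evidence-based tradeoff concrete, the toy sketch below scores Bayesian linear models of increasing polynomial degree by their log marginal likelihood (evidence), which is available in closed form under a Gaussian weight prior and Gaussian noise. The data, noise level, and prior variance are illustrative inventions, not the aeroelastic models of the work described above.

```python
import numpy as np

def log_evidence(Phi, y, noise_var=0.25, prior_var=1.0):
    # Log marginal likelihood of a Bayesian linear model with a zero-mean
    # Gaussian weight prior: y ~ N(0, C), C = noise_var*I + prior_var*Phi@Phi.T
    N = len(y)
    C = noise_var * np.eye(N) + prior_var * Phi @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 - 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 0.5, x.size)  # true model: quadratic

# Evidence for polynomial models of degree 0..5: data-fit improves with degree,
# but the Occam penalty built into the evidence eventually outweighs it.
evidence = {d: log_evidence(np.vander(x, d + 1, increasing=True), y) for d in range(6)}
```

With these settings the evidence typically peaks near the true degree: under-parameterized models pay in data-fit, over-parameterized ones in complexity, mirroring the data-fit/complexity balance used above to select among error models.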
Sierra/SolidMechanics (Sierra/SM) is a three-dimensional solid mechanics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework. SIERRA provides a data management framework in a parallel computing environment that allows the addition of capabilities in a modular fashion. Contact capabilities are parallel and scalable. This document provides information about the functionality in Sierra/SM and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. Sierra/SM provides both explicit transient dynamics and implicit quasistatics and dynamics capabilities. Both the explicit and implicit modules are highly scalable in a parallel computing environment. In the past, the explicit and implicit capabilities were provided by two separate codes, known as Presto and Adagio, respectively. These capabilities have been consolidated into a single code. The executable is named Adagio, but it provides the full suite of solid mechanics capabilities, both implicit and explicit. The Presto executable has been disabled as a consequence of this consolidation.
To comply with increasingly stringent pollutant emissions regulations, diesel engine operation in a catalyst-heating mode is critical to achieve rapid light-off of exhaust aftertreatment catalysts during the first minutes of cold starting. Current approaches to catalyst-heating operation typically involve one or more late post injections to retard combustion phasing and increase exhaust temperatures. The ability to retard post injection timing(s) while maintaining acceptable pollutant emissions levels is pivotal for improved catalyst-heating calibrations. Higher fuel cetane number has been reported to enable later post injections with increased exhaust heat and decreased pollutant emissions, but the mechanism is not well understood. The purpose of this experimental and numerical simulation study is to provide further insight into the ways in which fuel cetane number affects combustion and pollutant formation in a medium-duty diesel engine. Three full boiling-range diesel fuels with cetane numbers of approximately 45, 50, and 55 are employed in this study with a well-controlled set of calibrations employing a five-injection strategy. The two post injections are block-shifted to increasingly retarded timings, and the effects on exhaust heat and pollutant emissions are quantified for each fuel. For a given injection strategy calibration, increasing cetane number enables increased exhaust temperature and decreased hydrocarbon and carbon monoxide emissions for a fixed load. The increase in exhaust temperature is attributed to an increased fueling requirement to compensate for additional wall heat losses caused by earlier, more robust pilot combustion with the more reactive fuels. Formaldehyde is predicted to form in the fuel-lean periphery of the first pilot injection spray and can persist until exhaust valve opening in the absence of direct interactions with subsequent injections. 
Unreacted fuel-air mixture in the fuel-rich interior of the first-pilot spray is likely too cool for any significant reactions, and can persist until exhaust valve opening in the absence of turbulence/chemistry interactions and/or direct heating through interactions with subsequent injections.
Injector performance in gasoline Direct-Injection Spark-Ignition (DISI) engines is a key focus in the automotive industry as the vehicle parc transitions from Port Fuel Injected (PFI) to DISI engine technology. DISI injector deposits, which may impact the fuel delivery process in the engine, sometimes accumulate over longer time periods and greater vehicle mileages than traditional combustion chamber deposits (CCD). These higher mileages and longer timeframes make the evaluation of these deposits in a laboratory setting more challenging due to the extended test durations necessary to achieve representative in-use levels of fouling. The need to generate injector tip deposits for research purposes raises two questions: can an artificial fouling agent be used to speed deposit accumulation, and does this produce deposits similar to those formed naturally by market fuels? In this study, a collection of DISI injectors with different types of conditioning, ranging from controlled engine-stand tests with market or profoul fuels, to vehicle tests run over drive cycles, to uncontrolled field use, were analyzed to understand the characteristics of their injector tip deposits and the functional impacts of those deposits. The DISI injectors, both naturally fouled and profouled, were holistically evaluated for their spray performance, deposit composition, and deposit morphology relative to one another. The testing and accompanying analysis reveal both similarities and differences between injectors fouled naturally over long time periods with market fuel and injectors profouled artificially through the use of a sulfur dopant. Profouled injectors were chemically distinct from naturally fouled injectors and were found to contain higher levels of sulfur dioxide. Also, profouled injectors exhibited greater volumes of deposits on the face of the injector tip.
However, functionally, both naturally fouled and profouled injectors had similar impacts on spray performance relative to clean injectors: the fouled injector spray plumes remained narrower, limiting plume-to-plume interactions and altering the liquid-spray penetration dynamics. These insights can guide future research into injector tip deposits.
Spray-wall interactions in diesel engines have a strong influence on turbulent flow evolution and mixing, which influences the engine's thermal efficiency and pollutant-emissions behavior. Previous optical experiments and numerical investigations of a stepped-lip diesel piston bowl focused on how spray-wall interactions influence the formation of squish-region vortices and their sensitivity to injection timing. Such vortices are stronger and longer-lived at retarded injection timings and are correlated with faster late-cycle heat release and soot reductions, but are weaker and shorter-lived as injection timing is advanced. Computational fluid dynamics (CFD) simulations predict that piston bowls with more space in the squish region can enhance the strength of these vortices at near-TDC injection timings, which is hypothesized to further improve peak thermal efficiency and reduce emissions. The dimpled stepped-lip (DSL) piston is such a design. In this study, the in-cylinder flow is simulated with a DSL piston to investigate the effects of dimple geometry parameters on squish-region vortex formation via a design sensitivity study. The rotational energy and size of the squish-region vortices are quantified. The results suggest that the DSL piston is capable of enhancing vortex formation compared to the stepped-lip piston at near-TDC injection timings. The sensitivity study led to the design of an improved DSL bowl with shallower, narrower, and steeper-curved dimples located farther out in the squish region, which enhances predicted vortex formation with 27% larger and 44% more rotationally energetic vortices compared to the baseline DSL bowl. Engine experiments with the baseline DSL piston demonstrate that it can reduce combustion duration and improve thermal efficiency by as much as 1.4% with main injection timings near TDC, due to improved rotational energy, but with 69% increased soot emissions and no penalty in NOx emissions.
A one-dimensional, non-equilibrium, compressible law of the wall model is proposed to increase the accuracy of heat transfer predictions from computational fluid dynamics (CFD) simulations of internal combustion engine flows on engineering grids. Our 1D model solves the transient turbulent Navier-Stokes equations for mass, momentum, energy and turbulence under the thin-layer assumption, using a finite-difference spatial scheme and a high-order implicit time integration method. A new algebraic eddy-viscosity closure, derived from the Han-Reitz equilibrium law of the wall, with enhanced Prandtl number sensitivity and compressibility effects, was developed for optimal performance. Several eddy viscosity sub-models were tested for turbulence closure, including the two-equation k-epsilon and k-omega, which gave insufficient performance. Validation against pulsating channel flow experiments highlighted the superior capability of the 1D model to capture transient near-wall velocity and temperature profiles, and the need to appropriately model the eddy viscosity using a low-Reynolds method, which could not be achieved with the standard two-equation models. The results indicate that the non-equilibrium model can capture the near-wall velocity profile dynamics (including velocity profile inversion) while equilibrium models cannot, and simultaneously reduce heat flux prediction errors by up to one order of magnitude. The proposed optimal configuration reduced heat flux error for the pulsating channel flow case from 18.4% (Launder-Spalding law of the wall) down to 1.67%.
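For contrast with the non-equilibrium 1D model, the equilibrium log-law baseline it improves upon can be sketched in a few lines: given a single near-wall velocity sample, the friction velocity follows from a fixed-point solve of the law of the wall. The constants and flow values below are generic illustrations, not the calibration of this work or the Launder-Spalding implementation.

```python
import math

def u_tau_loglaw(U, y, nu, kappa=0.41, E=9.8, iters=50):
    # Solve U/u_tau = (1/kappa) * ln(E * y * u_tau / nu) for the friction
    # velocity u_tau by fixed-point iteration (valid in the log layer).
    ut = math.sqrt(nu * U / y)  # laminar-like initial guess
    for _ in range(iters):
        ut = kappa * U / math.log(E * y * ut / nu)
    return ut

def loglaw_residual(U, y, nu, ut, kappa=0.41, E=9.8):
    # Distance of (U, ut) from satisfying the log law; ~0 at convergence.
    return U / ut - math.log(E * y * ut / nu) / kappa

# Example: air-like kinematic viscosity, a 10 m/s sample 1 mm from the wall
u_tau = u_tau_loglaw(10.0, 0.001, 1.5e-5)
```

Wall shear stress then follows as rho*u_tau**2, and an analogous thermal law of the wall supplies the heat flux; the 1D model described above replaces this algebraic equilibrium closure with a transient thin-layer solve, which is what allows it to capture velocity profile inversion.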
This work is a comprehensive technical review of existing literature and a synthesis of current understanding of the governing physics behind the interaction of multiple fuel injections, ignition, and combustion behavior of multiple-injections in diesel engines. Multiple-injection is a widely adopted operating strategy in modern compression-ignition engines, which involves various combinations of small pre-injections and post-injections of fuel before and after the main injection and splitting the main injection into multiple smaller injections. This strategy has been conclusively shown to improve fuel economy in diesel engines while achieving simultaneous NOx, soot, and combustion noise reduction, in addition to a reduction in the emissions of unburned hydrocarbons (UHC) and CO by preventing fuel wetting and flame quenching at the piston wall. Despite the widespread adoption and an extensive literature documenting the effects of multiple-injection strategies in engines, little is known about the complex interplay between the underlying flow physics and combustion chemistry involved in such flows, which ultimately governs the ignition and subsequent combustion processes, thereby dictating the effectiveness of this strategy. In this work, we provide a comprehensive overview of the literature on the interaction between the jets in a multiple-injection event, the resulting mixture, and finally the ignition and combustion dynamics as a function of engine operational parameters including injection duration and dwell. The understanding of the underlying processes is facilitated by a new conceptual model of multiple-injection physics. We conclude by identifying the major remaining research questions that need to be addressed to refine and help achieve a design-level understanding to optimize advanced multiple-injection strategies that can lead to higher engine efficiency and lower emissions.
Understanding of structural and morphological evolution in nanomaterials is critical in tailoring their functionality for applications such as energy conversion and storage. Here, we examine irradiation effects on the morphology and structure of amorphous TiO2 nanotubes in comparison with their crystalline counterpart, anatase TiO2 nanotubes, using high-resolution transmission electron microscopy (TEM), in situ ion irradiation TEM, and molecular dynamics (MD) simulations. Anatase TiO2 nanotubes exhibit morphological and structural stability under irradiation due to their high concentration of grain boundaries and surfaces as defect sinks. On the other hand, amorphous TiO2 nanotubes undergo irradiation-induced crystallization, with some tubes remaining only partially crystallized. The partially crystalline tubes bend due to internal stresses associated with densification during crystallization as suggested by MD calculations. These results present a novel irradiation-based pathway for potentially tuning structure and morphology of energy storage materials.
Uniaxial strain, reverse-ballistic impact experiments were performed on wrought 17-4 PH H1025 stainless steel, and the resulting Hugoniot was determined to a peak stress of 25 GPa through impedance matching to known standard materials. The measured Hugoniot showed evidence of a solid-solid phase transition, consistent with other martensitic Fe-alloys. The phase transition stress in the wrought 17-4 PH H1025 stainless steel was measured in a uniaxial strain, forward-ballistic impact experiment to be 11.4 GPa. Linear fits to the Hugoniot for both the low and high pressure phase are presented with corresponding uncertainty. The low pressure martensitic phase exhibits a shock velocity that is weakly dependent on the particle velocity, consistent with other martensitic Fe-alloys.
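The impedance-matching reduction used in such experiments can be illustrated with a short sketch: requiring stress continuity at the impact interface, with the flyer releasing along its known Hugoniot and the sample obeying the Rankine-Hugoniot momentum jump at its measured shock velocity, yields a quadratic for the interface particle velocity. The material constants below are illustrative placeholders, not the measured 17-4 PH values.

```python
def impedance_match(rho0_f, c0_f, s_f, u_impact, rho0_s, us_meas):
    # Stress continuity at the interface: the flyer decelerates along its known
    # Hugoniot Us = c0 + s*up, while the sample state follows the momentum jump
    # condition sigma = rho0 * Us * up with its measured shock velocity:
    #   rho0_f*(c0_f + s_f*(u_impact - up))*(u_impact - up) = rho0_s*us_meas*up
    a = rho0_f * s_f
    b = -(rho0_f * c0_f + 2.0 * rho0_f * s_f * u_impact + rho0_s * us_meas)
    c = rho0_f * u_impact * (c0_f + s_f * u_impact)
    up = (-b - (b * b - 4.0 * a * c) ** 0.5) / (2.0 * a)  # root with 0 < up < u_impact
    stress = rho0_s * us_meas * up
    return up, stress

# Symmetric-impact sanity check with illustrative steel-like constants
# (rho0 in g/cc, velocities in km/s -> stress in GPa)
up, sigma = impedance_match(7.8, 4.5, 1.5, 1.0, 7.8, 4.5 + 1.5 * 0.5)
```

For a symmetric impact the particle velocity is exactly half the impact velocity, which makes a convenient check of the algebra.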
It has been demonstrated that grid cells encode physical locations using hexagonally spaced, periodic phase-space representations. Theories of how the brain decodes this phase-space representation have been developed based on neuroscience data; however, theories of how sensory information is encoded into this phase space are less certain. Here we show how a navigation-relevant input space, such as elevation trajectories, may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm localizes a position in space by integrating measurements from a trajectory over a map. In this extended abstract, we walk through our approach with simulations using a digital elevation model.
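A one-dimensional toy version of such a phase code can illustrate the encode/decode loop: position is stored as its phase within several incommensurate grid periods, and decoding searches for the candidate position whose phases best match. The periods and ranges below are arbitrary choices for illustration, not parameters from the work described above.

```python
import numpy as np

PERIODS = np.array([3.0, 4.1, 5.3])  # incommensurate grid-module periods (arbitrary units)

def encode(x):
    # Phase of position x within each grid module, in radians
    return (x % PERIODS) / PERIODS * 2.0 * np.pi

def decode(phases, candidates):
    # Pick the candidate whose phases best match (summed cosine similarity);
    # the match is unique while the search range stays short relative to the
    # common period of the modules.
    scores = [np.sum(np.cos(encode(x) - phases)) for x in candidates]
    return candidates[int(np.argmax(scores))]

x_grid = np.linspace(0.0, 20.0, 20001)  # candidate positions, 0.001 spacing
decoded = decode(encode(7.37), x_grid)
```

The combinatorics of several short periods give a long unambiguous range, which is the property that phase-space decoding theories exploit.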
Neuromorphic computing (NMC) is an exciting paradigm that seeks to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also the question of how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics of traditional approaches against those of NMC: the novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing range of application domains. Accordingly, we propose following the example of high-performance computing, which has employed context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps implemented in a neural circuit tool called Fugu as a means of gaining NMC insight.
Fusel alcohol mixtures containing ethanol, isobutanol, isopentanol, and 2-phenylethanol have been shown to be a promising means to maximize renewable fuel yield from various biomass feedstocks and waste streams. We hypothesized that use of these fusel alcohol mixtures as a blending agent with gasoline can significantly lower the greenhouse gas emissions from the light-duty fleet. Since the composition of fusel alcohol mixtures derived from fermentation is dependent on a variety of factors such as biocatalyst selection and feedstock composition, multi-objective optimization was performed to identify optimal fusel alcohol blends in gasoline that simultaneously maximize thermodynamic efficiency gain and energy density. Pareto front analysis combined with fuel property predictions and a Merit Score-based metric led to prediction of optimal fusel alcohol-gasoline blends over a range of blending volumes. The optimal fusel blends were analyzed based on a Net Fuel Economy Improvement Potential metric for volumetric blending in a gasoline base fuel. The results demonstrate that various fusel alcohol blends provide the ability to maximize efficiency improvement while minimizing increases to blending vapor pressure and decreases to energy density compared to an ethanol-only bioblendstock. Fusel blends exhibit predicted Net Fuel Economy Improvement Potential comparable to neat ethanol when blended with gasoline in all scenarios, with increased improvement over ethanol at moderate to high bio-blendstock blending levels. The optimal fusel blend that was identified was a mixture of 90% v/v isobutanol and 10% v/v 2-phenylethanol, blended at 45% v/v with gasoline, yielding a predicted 4.67% increase in Net Fuel Economy Improvement Potential. These findings suggest that incorporation of fusel alcohols as a gasoline bioblendstock can improve both fuel performance and the net fuel yield of the bioethanol industry.
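The Pareto-front step of such a multi-objective blend optimization reduces to a nondominated filter; the sketch below shows a minimal version for two objectives that are both maximized (e.g., efficiency gain and energy density). The data points are illustrative, not the blend results of the study above.

```python
import numpy as np

def pareto_front(points):
    # Return the nondominated points when every objective is to be maximized:
    # a point is dominated if some other point is >= in all objectives
    # and strictly > in at least one.
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Illustrative (efficiency gain, energy density) pairs
blends = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (0, 6)]
front = pareto_front(blends)
```

Only (2, 2) is dominated here (by (3, 3)); the remaining five points form the front, which a downstream scalar metric such as a merit score can then rank.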
Low-Z nanocrystalline diamond (NCD) grids have been developed to reduce spurious fluorescence and avoid X-ray peak overlaps or interferences between the specimen and conventional metal grids. The low-Z NCD grids are non-toxic and safe to handle, conductive, can be subjected to high-temperature heating experiments, and may be used for analytical work in lieu of metal grids. Either a half-grid geometry, which can be used for any lift-out method, or a full-grid geometry, which can be used for ex situ lift-out or thin-film analyses, can be fabricated and used for experiments.
We introduce novel higher-order topological phases of matter in chiral-symmetric systems (class AIII of the tenfold classification), most of which would be misidentified as trivial by current theories. These phases are protected by "multipole chiral numbers," bulk integer topological invariants that in 2D and 3D are built from sublattice multipole moment operators, as defined herein. The integer value of a multipole chiral number indicates how many degenerate zero-energy states localize at each corner of a system. These higher-order topological phases of matter are generally boundary-obstructed and robust in the presence of chiral-symmetry-preserving disorder.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
The experiment investigates free expansion of a supercritical fluid into a two-phase liquid-vapor coexistence region. A large-scale molecular dynamics simulation (6 billion Lennard-Jones atoms) was run on 5760 GPUs (33% of LLNL Sierra) using the LAMMPS/Kokkos software. This effort also improved the visualization workflow and initiated preliminary simulations of aluminum using a SNAP machine learning potential.
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite, and the results are checked against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the Sierra/SD code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.
More realistic models for infrasound signal propagation across a region can be used to improve the precision and accuracy of spatial and temporal source localization estimates. Here, motivated by incomplete infrasound event bulletins in the Western US, the location capabilities of a regional infrasonic network of stations located between 84–458 km from the Utah Test and Training Range, Utah, USA, are assessed using a series of near-surface explosive events with complementary ground truth (GT) information. Signal arrival times and backazimuth estimates are determined with an automatic F-statistic based signal detector and manually refined by an analyst. This study represents the first application of three distinct celerity-range and backazimuth models to an extensive suite of realistic signal detections for event location purposes. A single celerity and backazimuth deviation model was previously constructed using ray tracing analysis based on an extensive archive of historical atmospheric specifications and is applied within this study to test location capabilities. Similarly, a set of multivariate, season- and location-specific models for celerity and backazimuth are compared to an empirical model that depends on the observations across the infrasound network and the GT events, which accounts for atmospheric propagation variations from source to receiver. Discrepancies between observed and predicted signal celerities result in locations with poor accuracy. Application of the empirical model improves both spatial localization precision and accuracy; all but one location estimate retains the true GT location within the 90 per cent confidence bounds. Average mislocation of the events is 15.49 km and the average 90 per cent error ellipse area is 4141 km2.
The empirical model additionally reduces origin time residuals; origin time residuals from the other location models are in excess of 160 s, while residuals produced with the empirical model are within 30 s of the true origin time. Finally, we demonstrate that event location accuracy is driven by a combination of the signal propagation model and the azimuthal gap of the detecting stations. A direct relationship between mislocation, error ellipse area, and increased station azimuthal gap indicates that for sparse networks, detection backazimuths may drive location biases more than traveltime estimates.
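The celerity-based part of the origin-time logic can be illustrated with a toy back-projection (all numbers below are hypothetical and are not the study's models): a station distance and an assumed celerity turn a signal arrival time into an origin-time estimate, and the residual against ground truth mirrors the origin-time residuals discussed above.

```python
# Toy illustration of celerity-based origin-time estimation.
# All numbers are hypothetical; the study uses far richer
# season/location-specific and empirical propagation models.

def origin_time_from_arrival(arrival_s, distance_km, celerity_km_s):
    """Back-project an arrival time to an origin time using a
    single celerity (effective horizontal propagation speed)."""
    return arrival_s - distance_km / celerity_km_s

# Ground-truth origin at t = 0 s; a station 300 km away records
# an arrival at 1000 s (so the true average celerity is 0.30 km/s).
arrival = 1000.0
distance = 300.0

# A generic celerity of 0.29 km/s mis-estimates the origin time;
# a calibrated (empirical) celerity of 0.30 km/s recovers it.
t_generic = origin_time_from_arrival(arrival, distance, 0.29)
t_empirical = origin_time_from_arrival(arrival, distance, 0.30)

residual_generic = abs(t_generic - 0.0)
residual_empirical = abs(t_empirical - 0.0)
print(round(residual_generic, 1), round(residual_empirical, 1))
```

Even a 0.01 km/s celerity error at 300 km produces a ~35 s origin-time residual, which is why calibrated empirical models shrink the residuals so dramatically.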
This user's guide documents capabilities in Sierra/SolidMechanics which remain "in-development" and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.6 User's Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.
Presented in this document are tests that exist in the Sierra/SolidMechanics example problem suite, which is a subset of the Sierra/SM regression and performance test suite. These examples showcase common and advanced code capabilities. A wide variety of other regression and verification tests exist in the Sierra/SM test suite that are not included in this manual.
This work investigates the role of water and oxygen on the shear-induced structural modifications of molybdenum disulfide (MoS2) coatings for space applications and the impact on friction due to oxidation from aging. We observed from transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS) that sliding in either an inert environment (i.e., dry N2) or humid lab air forms basally oriented (002) running films of varying thickness and structure. Tribological testing of the basally oriented surfaces created in dry N2 and air showed lower initial friction than a coating with an amorphous or nanocrystalline microstructure. Aging of coatings with basally oriented surfaces was performed by heating samples at 250 °C for 24 h. Post-aging tribological testing of the as-deposited coating showed increased initial friction and a longer transition from higher friction to lower friction (i.e., run-in) due to oxidation of the surface. Tribological testing of raster patches formed in dry N2 and air both showed an improved resistance to oxidation and reduced initial friction after aging. The results from this study have implications for the use of MoS2-coated mechanisms in aerospace and space applications and highlight the importance of preflight testing. Preflight cycling of components in inert or air environments provides an oriented surface microstructure with fewer interaction sites for oxidation and a lower shear strength, reducing the initial friction coefficient and oxidation due to aging or exposure to reactive species (i.e., atomic oxygen).
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the User's Manual. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation; we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer_notes manual, the user's notes, and of course the material in the open literature.
The ability to perform accurate techno-economic analysis of solar photovoltaic (PV) systems is essential for bankability and investment purposes. Most energy yield models assume an almost flawless operation (i.e., no failures); however, realistically, components fail and get repaired stochastically. This package, PyPVRPM, is a Python translation and improvement of the Language Kit (LK) based PhotoVoltaic Reliability Performance Model (PVRPM), which was first developed at Sandia National Laboratories in Goldsim software (Granata et al., 2011) (Miller et al., 2012). PyPVRPM allows the user to define a PV system at a specific location and incorporate failure, repair, and detection rates and distributions to calculate energy yield and other financial metrics such as the levelized cost of energy and net present value (Klise, Lavrova, et al., 2017). Our package is a simulation tool that uses NREL’s Python interface for System Advisor Model (SAM) (National Renewable Energy Laboratory, 2020b) (National Renewable Energy Laboratory, 2020a) to evaluate the performance of a PV plant throughout its lifetime by considering component reliability metrics. Besides the numerous benefits from migrating to Python (e.g., speed, libraries, batch analyses), it also expands on the failure and repair processes from the LK version by including the ability to vary monitoring strategies. These failures, repairs, and monitoring processes are based on user-defined distributions and values, enabling a more accurate and realistic representation of cost and availability throughout a PV system’s lifetime.
This is an addendum to the Sierra/SolidMechanics 5.6 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra SolidMechanics 5.6 User's Guide should be referenced for most general descriptions of code capability and use.
Capturing the dynamic response of a material under high strain-rate deformation often demands challenging and time-consuming experimental effort. While shock hydrodynamic simulation methods can aid in this area, a priori characterizations of the material strength under shock loading and spall failure are needed in order to parameterize the constitutive models these computational tools require. Moreover, parameterizations of strain-rate-dependent strength models are needed to capture the full suite of Richtmyer-Meshkov instability (RMI) behavior of shock-compressed metals, creating an unrealistic demand were these training data to come solely from experiments. Herein, we sweep a large range of geometric, crystallographic, and shock conditions within molecular dynamics (MD) simulations and demonstrate the breadth of RMI behavior in Cu that can be captured from the atomic scale. Yield strength measurements from jetted and arrested material from a sinusoidal surface perturbation were quantified as Y_RMI = 0.787 ± 0.374 GPa, higher than the strain-rate-independent models used in experimentally matched hydrodynamic simulations. Defect-free, single-crystal Cu samples used in MD are expected to overestimate Y_RMI, and the drastic scale difference between experiment and MD is highlighted by high-confidence neighborhood clustering predictions of RMI characterizations that nonetheless yield incorrect classifications.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAMÉ advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, comes at the cost of complexity: the many capabilities, features, and responses complicate the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAMÉ library in application, this effort seeks to document and verify the various models in the LAMÉ library. Specifically, the broader strategy, organization, and interface of the library itself is first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
In turbulent flows, kinetic energy is transferred from the largest scales to progressively smaller scales, until it is ultimately converted into heat. The Navier-Stokes equations are almost universally used to study this process. Here, by comparing with molecular-gas-dynamics simulations, we show that the Navier-Stokes equations do not describe turbulent gas flows in the dissipation range because they neglect thermal fluctuations. We investigate decaying turbulence produced by the Taylor-Green vortex and find that in the dissipation range the molecular-gas-dynamics spectra grow quadratically with wave number due to thermal fluctuations, in agreement with previous predictions, while the Navier-Stokes spectra decay exponentially. Furthermore, the transition to quadratic growth occurs at a length scale much larger than the gas molecular mean free path, namely in a regime that the Navier-Stokes equations are widely believed to describe. In fact, our results suggest that the Navier-Stokes equations are not guaranteed to describe the smallest scales of gas turbulence for any positive Knudsen number.
With the urgent need to mitigate climate change and rising global temperatures, technological solutions that reduce atmospheric CO2 are an increasingly important part of the global solution. As a result, the nascent carbon capture, utilization, and storage (CCUS) industry is rapidly growing with a plethora of new technologies in many different sectors. There is a need to holistically evaluate these new technologies in a standardized and consistent manner to determine which technologies will be the most successful and competitive in the global marketplace to achieve decarbonization targets. Life cycle assessment (LCA) and techno-economic assessment (TEA) have been employed as rigorous methodologies for quantitatively measuring a technology's environmental impacts and techno-economic performance, respectively. However, these metrics evaluate a technology's performance in only three dimensions and do not directly incorporate stakeholder needs and values. In addition, technology developers frequently encounter trade-offs during design that increase one metric at the expense of the other. The technology performance level (TPL) combined indicator provides a comprehensive and holistic assessment of an emerging technology's potential, which is described by its techno-economic performance, environmental impacts, social impacts, safety considerations, market/deployability opportunities, use integration impacts, and general risks. TPL incorporates TEA and LCA outputs and quantifies the trade-offs between them directly using stakeholder feedback and requirements. In this article, the TPL methodology is adapted from the marine energy domain to the CCUS domain. Adapted metrics and definitions, a stakeholder analysis, and a detailed foundation-based application of the systems engineering approach to CCUS are presented. The TPL assessment framework is couched within the internationally standardized LCA framework to improve technical rigor and acceptance.
It is demonstrated how stakeholder needs and values can be directly incorporated, how LCA and TEA metrics can be balanced, and how other dimensions (listed earlier) can be integrated into a single metric that measures a technology's potential.
In this report, we assess the data recorded by a Distributed Acoustic Sensing (DAS) cable deployed during the Source Physics Experiment, Phase II (DAG) in comparison with the data recorded by nearby 4.5-Hz geophones. DAS is a novel recording method with unprecedented spatial resolution, but there are significant concerns around the data fidelity as the technology is ramped up to more common usage. Here we run a series of tests to quantify the similarity between DAS data and more conventional data and investigate cases where the higher spatial resolution of the DAS can provide new insights into the wavefield. These tests include 1D modeling with seismic refraction and bootstrap uncertainties, assessing the amplitude spectra with distance from the source, measuring the frequency dependent inter-station coherency, estimating time-dependent phase velocity with beamforming and semblance, and measuring the cross-correlation between the geophone and the particle velocity inferred from the DAS. In most cases, we find high similarity between the two datasets, but the higher spatial resolution of the DAS provides increased details and methods of estimating uncertainty.
Using chemical kinetic modeling and statistical analysis, we investigate the possibility of correlating key chemical "markers"-typically small molecules-formed during very lean (φ ∼0.001) oxidation experiments with near-stoichiometric (φ ∼1) fuel ignition properties. One goal of this work is to evaluate the feasibility of designing a fuel-screening platform, based on small laboratory reactors that operate at low temperatures and use minimal fuel volume. Buras et al. [Combust. Flame 2020, 216, 472-484] have shown that convolutional neural net (CNN) fitting can be used to correlate first-stage ignition delay times (IDTs) with OH/HO2 measurements during very lean oxidation in low-T flow reactors with better than factor-of-2 accuracy. In this work, we test the limits of applying this correlation-based approach to predict the low-temperature heat release (LTHR) and total IDT, including the sensitivity of total IDT to the equivalence ratio, φ. We demonstrate that first-stage IDT can be reliably correlated with very lean oxidation measurements using compressed sensing (CS), which is simpler to implement than CNN fitting. LTHR can also be predicted via CS analysis, although the correlation quality is somewhat lower than for first-stage IDT. In contrast, the accuracy of total IDT prediction at φ = 1 is significantly lower (within a factor of 4 or worse). These results can be rationalized by the fact that the first-stage IDT and LTHR are primarily determined by low-temperature chemistry, whereas total IDT depends on low-, intermediate-, and high-temperature chemistry. Oxidation reactions are most important at low temperatures, and therefore, measurements of universal molecular markers of oxidation do not capture the full chemical complexity required to accurately predict the total IDT even at a single equivalence ratio. As a result, we find that the φ-sensitivity of ignition delay cannot be predicted using correlation with lean low-T chemical speciation measurements alone.
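The core idea of marker-based screening can be sketched in miniature (synthetic data; the study itself uses compressed sensing and CNN fitting over many species): choose the lean-oxidation marker most correlated with first-stage IDT and fit a one-variable least-squares line to it.

```python
# Minimal sketch of marker-based IDT correlation: pick the marker
# most correlated with first-stage IDT across fuels, then fit a
# one-variable least-squares line. Marker names, concentrations,
# and IDT values below are made up for illustration only.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical marker concentrations (one value per fuel) and the
# corresponding first-stage IDTs (arbitrary units).
markers = {
    "CH2O": [1.0, 2.0, 3.0, 4.0],
    "CO":   [0.9, 1.1, 1.0, 1.2],
}
idt = [10.0, 8.1, 6.0, 3.9]

# Select the marker with the strongest |correlation| to IDT.
best = max(markers, key=lambda m: abs(pearson(markers[m], idt)))

# One-variable least-squares fit: idt ~ a * marker + b.
xs = markers[best]
mx, my = statistics.fmean(xs), statistics.fmean(idt)
a = (sum((x - mx) * (y - my) for x, y in zip(xs, idt))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx
print(best, round(a, 2), round(b, 2))
```

Compressed sensing generalizes this single-marker fit to a sparse linear combination over many candidate markers, which is why it remains simple relative to CNN fitting.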
Digital twins are emerging as powerful tools for supporting innovation as well as optimizing the in-service performance of a broad range of complex physical machines, devices, and components. A digital twin is generally designed to provide accurate in-silico representation of the form (i.e., appearance) and the functional response of a specified (unique) physical twin. This paper offers a new perspective on how the emerging concept of digital twins could be applied to accelerate materials innovation efforts. Specifically, it is argued that the material itself can be considered as a highly complex multiscale physical system whose form (i.e., details of the material structure over a hierarchy of material length) and function (i.e., response to external stimuli typically characterized through suitably defined material properties) can be captured suitably in a digital twin. Accordingly, the digital twin can represent the evolution of structure, process, and performance of the material over time, with regard to both process history and in-service environment. This paper establishes the foundational concepts and frameworks needed to formulate and continuously update both the form and function of the digital twin of a selected material physical twin. The form of the proposed material digital twin can be captured effectively using the broadly applicable framework of n-point spatial correlations, while its function at the different length scales can be captured using homogenization and localization process-structure-property surrogate models calibrated to collections of available experimental and physics-based simulation data.
We report that the formation of Al3Sc in 100 nm Al0.8Sc0.2 films is driven by exposure to high temperature, through either higher deposition temperature or annealing. High film resistivity was observed in films with lower deposition temperature that exhibited a lack of crystallinity, which is anticipated to cause more electron scattering. An increase in deposition temperature allows for the nucleation and growth of crystalline Al3Sc regions, which were verified by electron diffraction. The increase in crystallinity reduces electron scattering, which results in lower film resistivity. Annealing Al0.8Sc0.2 films at 600 °C in an Ar vacuum environment also allows for the formation and recrystallization of Al3Sc and Al and yields saturated resistivity values between 9.58 and 10.5 μΩ-cm regardless of sputter conditions. Al3Sc was found to nucleate and grow in a random orientation when deposited on SiO2, and highly {111} textured when deposited on 100 nm Ti and AlN films that were used as template layers. The rocking curve widths of the Al3Sc 111 reflection for the as-deposited films on Ti and AlN at 450 °C were 1.79° and 1.68°, respectively. Annealing the film deposited on the AlN template reduced the rocking curve width substantially to 1.01° due to recrystallization of Al3Sc and Al within the film.
Since the discovery of the laser, optical nonlinearities have been at the core of efficient light conversion sources. Typically, thick transparent crystals or quasi-phase matched waveguides are utilized in conjunction with phase-matching techniques to select a single parametric process. In recent years, due to the rapid developments in artificially structured materials, optical frequency mixing has been achieved at the nanoscale in subwavelength resonators arrayed as metasurfaces. Phase matching becomes relaxed for these wavelength-scale structures, and all allowed nonlinear processes can, in principle, occur on an equal footing. This could promote harmonic generation via a cascaded (consisting of several frequency mixing steps) process. However, so far, all reported work on dielectric metasurfaces has assumed frequency mixing from a direct (single-step) nonlinear process. In this work, we prove the existence of cascaded second-order optical nonlinearities by analyzing the second- and third-wave mixing from a highly nonlinear metasurface in conjunction with polarization selection rules and crystal symmetries. We find that the third-wave mixing signal from a cascaded process can be of comparable strength to that from conventional third-harmonic generation and that surface nonlinearities are the dominant mechanism that contributes to cascaded second-order nonlinearities in our metasurface.
Fraud in the Environmental Benefit Credit (EBC) markets is pervasive. To make matters worse, the cost of creating EBCs is often higher than the market price. Consequently, a method to create, validate, and verify EBCs and their relevance is needed to mitigate fraud. The EBC market has focused on geologic (fossil fuel) CO2 sequestration projects that are often over budget and behind schedule and has failed to capture the "lowest hanging fruit" EBCs - terrestrial sequestration via the agricultural industry. This project reviews a methodology to attain possibly the least costly EBCs by tracking the reduction of inputs required to grow crops. The use of bio-stimulant products, such as humate, allows a farmer to use less nitrogen without adversely affecting crop yield. Using less nitrogen qualifies for EBCs by reducing nitrous oxide emissions and nitrate runoff from a farmer's field. A blockchain that tracks the bio-stimulant material from source to application provides a link between the tangible asset (the bio-stimulant commodity) and the associated intangible assets (the EBCs). Covert insertion of taggants in the bio-stimulant products creates a unique barcode that allows a product to be digitally tracked from beginning to end. This process (blockchain technology) is so robust, logical, and transparent that it will enhance the value of the associated EBCs by mitigating fraud. It provides a real-time method for monetizing the benefits of the material. Substantial amounts of energy are required to produce, transport, and distribute agricultural inputs including fertilizer and water. Intelligent optimization of the use of agricultural inputs can drive meaningful cost savings. Tagging and verification of product application provides a valuable understanding of the dynamics in the water/food/energy nexus, a major food security and sustainability issue.
As technology in agriculture evolves, so too must methods to verify the Enterprise Resource Planning (ERP) potential of innovative solutions. The technology reviewed provides the ability to combine blockchain and taggants ("taggant blockchains") as the engine by which to (1) mitigate fraudulent carbon credits; (2) improve food chain security; and (3) monitor and manage sustainability. The verification of product quality and application is a requirement to validate benefits. Recent upgrades to humic and fulvic quality protocols, known as ISO CD 19822 TC134, offer an analytical procedure. This work has been assisted by the Humic Products Trade Association and the International Humic Substances Society. In addition, proof of application of these products and verification of the correct application of prescriptive humic and bio-stimulant products are required. Individual sources of humate have unique and verifiable characteristics. Additionally, methods for prescription of site-specific agricultural inputs in agricultural fields are available. (See US Patents 734867B2, US 90658633B2.) Finally, a method to assure application rate is required through the use of taggants. Sensors using organic solid-to-liquid phase-change nanoparticles of various types and melting temperatures added to the naturally occurring materials provide a barcode. Over 100 types of nanoparticles exist, ensuring numerous possible barcodes to reduce industry fraud. Taggant materials can be collected from soil samples or plant material to validate a blockchain of humic, fulvic, and other soil amendment products. Other non-organic materials are also available as taggants; however, the organic tags are biodegradable and safe in the environment, allowing for use during differing application timelines.
This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data is necessary for our research, so we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or whole genome DNA reads. The edit detection results obtained with our algorithms, tested against samples held out during training, are significantly better than random guessing, achieving high F1, recall, and precision scores overall.
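The detection metrics quoted above combine in a standard way; a minimal sketch for a binary edited/not-edited classifier (labels and predictions below are made up for illustration):

```python
# Minimal sketch of the quoted detection metrics (precision,
# recall, F1) for a binary edited/not-edited classifier.
# The labels and predictions are made up for illustration.

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = genome was edited
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]   # classifier output
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(p, r, round(f1, 3))
```

F1, as the harmonic mean of precision and recall, penalizes a detector that trades one for the other, which is why the report tracks all three.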
Deep neural networks have emerged as a leading set of algorithms to infer information from a variety of data sources such as images and time series data. In their most basic form, neural networks lack the ability to adapt to new classes of information. Continual learning (CL) is a field of study attempting to give previously trained deep learning models the ability to adapt to a changing environment. Previous work developed a CL method called Neurogenesis for Deep Learning (NDL). Here, we combine NDL with a specific neural network architecture (the Ladder Network) to produce a system capable of automatically adapting a classification neural network to new classes of data. The NDL Ladder Network was evaluated against other leading CL methods. While the NDL and Ladder Network system did not match the cutting-edge performance achieved by other CL methods, in most cases it performed comparably, and it is the only system evaluated that can learn new classes of information with no human intervention.
We develop numerical methods for computing statistics of stochastic processes on surfaces of general shape with drift-diffusion dynamics dX_t = a(X_t) dt + b(X_t) dW_t. We formulate descriptions of Brownian motion and general drift-diffusion processes on surfaces. We consider statistics of the form u(x) = E_x[∫_0^τ g(X_t) dt] + E_x[f(X_τ)] for a domain Ω and the exit stopping time τ = inf{t > 0 | X_t ∉ Ω}, where f, g are general smooth functions. For computing these statistics, we develop high-order Generalized Moving Least Squares (GMLS) solvers for the associated surface PDE boundary-value problems based on Backward-Kolmogorov equations. We focus particularly on the mean First Passage Times (FPTs), given by the case f = 0, g = 1, where u(x) = E_x[τ]. We perform studies for a variety of shapes showing our methods converge with high-order accuracy both in capturing the geometry and the surface PDE solutions. We then perform studies showing how statistics are influenced by the surface geometry, drift dynamics, and spatially dependent diffusivities.
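For intuition about the statistic u(x) = E_x[τ] (though not about the GMLS surface-PDE solvers described here), the mean FPT can be checked in a flat 1D setting by direct Euler-Maruyama sampling, where standard Brownian motion on (0, L) has the exact answer E_x[τ] = x(L − x):

```python
# Monte Carlo check of a mean first passage time, E_x[tau], via
# Euler-Maruyama for dX_t = dW_t (a = 0, b = 1) on the interval
# (0, 1).  Exact answer: E_x[tau] = x * (1 - x).  This flat 1D
# sketch is for intuition only; the work above uses high-order
# GMLS surface-PDE solvers on curved geometries instead.
import random

def mean_exit_time(x0, L=1.0, dt=1e-3, n_paths=2000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while 0.0 < x < L:
            # Euler-Maruyama step: dX = sqrt(dt) * N(0, 1)
            x += rng.gauss(0.0, 1.0) * dt ** 0.5
            t += dt
        total += t
    return total / n_paths

est = mean_exit_time(0.5)
print(round(est, 3))  # exact value is 0.25, plus O(sqrt(dt)) bias
```

The sampled estimate carries both Monte Carlo noise and a systematic O(sqrt(dt)) discretization bias from detecting exits only at step boundaries, which is one motivation for solving the Backward-Kolmogorov boundary-value problem directly instead.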
In this work, we study how a contact/impact nonlinearity interacts with a geometric cubic nonlinearity in an oscillator system. Specific focus is given to the effects on bifurcation behavior and secondary resonances (i.e., super- and sub-harmonic resonances). The effects of the individual nonlinearities are first explored for comparison, and then the influences of the combined nonlinearities, varying one parameter at a time, are analyzed and discussed. Nonlinear characterization is then performed on an arbitrary system configuration to study super- and sub-harmonic resonances and grazing contacts or bifurcations. Both the cubic and contact nonlinearities cause a drop in amplitude and an upward shift in frequency for the primary resonance, and they activate high-amplitude subharmonic resonance regions; the nonlinearities appear never to interfere destructively. The contact nonlinearity more strongly affects the system's superharmonic resonance behavior, particularly with regard to the occurrence of grazing contacts and the activation of many bifurcations in the system's response. The subharmonic resonance behavior is more strongly affected by the cubic nonlinearity and is prone to multistable behavior, with dynamical responses of distinct magnitudes. Perturbation theory proved useful for determining when the cubic nonlinearity would dominate the contact nonlinearity, and the limiting behaviors of the contact stiffness and freeplay gap size indicate the cubic nonlinearity is dominant overall.
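A single-DOF model of this kind can be written as x'' + c x' + k x + k3 x^3 + Fc(x) = F cos(w t), where the contact force Fc is zero inside a freeplay gap and stiff beyond it; a minimal fixed-step RK4 sketch (all parameter values are illustrative, not from the study):

```python
# Minimal sketch of a single-DOF oscillator combining a cubic
# (Duffing) stiffness with a freeplay contact nonlinearity:
#   x'' + c x' + k x + k3 x**3 + Fc(x) = F cos(w t)
# where Fc engages only beyond a gap of +/- GAP.
# All parameter values are illustrative, not from the study.
import math

C, K, K3 = 0.05, 1.0, 0.5   # damping, linear, cubic stiffness
KC, GAP = 20.0, 0.8         # contact stiffness and freeplay gap
F, W = 0.3, 1.2             # forcing amplitude and frequency

def contact_force(x):
    """Piecewise-linear freeplay contact: zero inside the gap."""
    if x > GAP:
        return KC * (x - GAP)
    if x < -GAP:
        return KC * (x + GAP)
    return 0.0

def deriv(t, x, v):
    a = (F * math.cos(W * t) - C * v - K * x
         - K3 * x**3 - contact_force(x))
    return v, a

def rk4_step(t, x, v, h):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h/2, x + h/2 * k1x, v + h/2 * k1v)
    k3x, k3v = deriv(t + h/2, x + h/2 * k2x, v + h/2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return x, v

t, x, v, h = 0.0, 0.0, 0.0, 0.01
peak = 0.0
for _ in range(120_000):      # integrate well past the transient
    x, v = rk4_step(t, x, v, h)
    t += h
    if t > 800.0:             # record only the steady state
        peak = max(peak, abs(x))
print(round(peak, 2))         # steady-state peak displacement
```

Sweeping W (or GAP and KC) with this kind of model is how grazing contacts appear numerically: as the steady-state peak crosses the gap, the contact force switches on and the response branch can bifurcate.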
Plasma etching of semiconductors is an essential process in the production of microchips which enable nearly every aspect of modern life. Two frequencies of applied voltage are often used to provide control of both the ion fluxes and energy distribution.
In this report we describe the testing of a novel scheme for state preparation of trapped ions in a quantum computing setup. Ideally, this technique would allow similar precision and speed of state preparation while enabling individual addressability of single ions in a chain using technology already available in a trapped-ion experiment. As quantum computing experiments become more complicated, mid-experiment measurements will become necessary for algorithms such as quantum error correction. Any mid-experiment measurement then requires the measured qubit to be re-prepared to a known quantum state. Currently this requires the protected qubits to be moved a sizeable distance away from the qubit being re-prepared, which is costly in terms of experiment length and introduces errors. Theoretical calculations predict that a three-photon process would allow for state preparation without qubit movement, with efficiencies similar to current state preparation methods.
Depleted uranium hexafluoride (UF6), a stockpiled byproduct of the nuclear fuel cycle, reacts readily with atmospheric humidity, but the mechanism is poorly understood. We compare several potential initiation steps at a consistent level of theory, generating underlying structures and vibrational modes using hybrid density functional theory (DFT) and computing relative energies of stationary points with double-hybrid (DH) DFT. A benchmark comparison is performed to assess the quality of DH-DFT data using reference energy differences obtained using a complete-basis-limit coupled-cluster (CC) composite method. The associated large-basis CC computations were enabled by a new general-purpose pseudopotential capability implemented as part of this work. Dispersion-corrected parameter-free DH-DFT methods, namely PBE0-DH-D3(BJ) and PBE-QIDH-D3(BJ), provided mean unsigned errors within chemical accuracy (1 kcal mol⁻¹) for a set of barrier heights corresponding to the most energetically favorable initiation steps. The hydrolysis mechanism is found to proceed via intermolecular hydrogen transfer within van der Waals complexes involving UF6, UF5OH, and UOF4, in agreement with previous studies, followed by the formation of a previously unappreciated dihydroxide intermediate, UF4(OH)2. The dihydroxide is predicted to form under both kinetic and thermodynamic control, and, unlike the alternate pathway leading to the UO2F2 monomer, its reaction energy is exothermic, in agreement with observation. Finally, harmonic and anharmonic vibrational simulations are performed to reinterpret literature infrared spectroscopy in light of this newly identified species.
Coherent anti-Stokes Raman scattering (CARS) is commonly used for thermometry and concentration measurement of major species. The quadratic scaling of the CARS signal with number density has limited the use of CARS for detection of minor species, where more sensitive approaches may be more attractive. However, significant advancements in ultrafast CARS approaches have been made over the past two decades, including the development of hybrid CARS, demonstrated to yield greatly increased excitation efficiencies. Yet, detailed detection limits of hybrid CARS have not been well established. In this Letter, detection limits for N₂, H₂, CO, and C₂H₄ by point-wise hybrid femtosecond (fs)/picosecond (ps) CARS are determined to be of the order of 10¹⁵ molecules/cm³. The possible benefit of fs/nanosecond (ns) hybrid CARS is also discussed.
In a quantum network, a key challenge is to minimize the direct reflection of flying qubits as they couple to stationary, resonator-based memory qubits, as the reflected amplitude represents state transfer infidelity that cannot be directly recovered. Optimizing the transfer fidelity can be accomplished by dynamically varying the resonator's coupling rate to the flying qubit field. Here, we analytically derive the optimal coupling rate profile in the presence of intrinsic loss of the quantum memory using an open quantum systems method that accounts for intrinsic resonator losses. We show that, since the resonator field must be initially empty, an initial amplitude in the resonator must be generated in order to cancel reflections via destructive interference; moreover, we show that this initial amplitude can be made sufficiently small as to allow the net fidelity of the complete transfer process to be close to unity. We then derive the time-varying resonator coupling that maximizes the state transfer fidelity as a function of the initial population and intrinsic loss rate, providing a complete protocol for optimal quantum state transfer between the flying qubit and resonator qubit. We present analytical expressions and numerical examples of the fidelities for the complete protocol using exponential and Gaussian profiles. We show that a state transfer fidelity of around 99.9% can be reached momentarily before the quantum information is lost due to the intrinsic loss in practical resonators used as quantum memories.
In alkaline zinc–manganese dioxide batteries, there is a need for selective polymeric separators that have good hydroxide ion conductivity but prevent the transport of zincate, Zn(OH)₄²⁻. Here we investigate the nanoscale structure and hydroxide transport in two cationic polysulfones that are promising for these separators. We present the synthesis and characterization of a tetraethylammonium-functionalized polysulfone (TEA-PSU) and compare it to our previous work on an N-butylimidazolium-functionalized polysulfone (NBI-PSU). We perform atomistic molecular dynamics (MD) simulations of both polymers at experimentally relevant water contents. The MD simulations show that both polymers develop well phase-separated nanoscale water domains that percolate through the polymer. Calculations of the total scattering intensity from the MD simulations reveal weak or nonexistent ionomer peaks at low wave vectors. The lack of an ionomer peak is due to a loss of contrast in the scattering. The small water domains in both polymers, with median diameters on the order of 0.5–0.7 nm, lead to hydroxide and water diffusion constants that are 1–2 orders of magnitude smaller than their values in bulk water. This confinement lowers the conductivity but may also explain the strong exclusion of zincate from the PSU membranes seen experimentally.
This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to: (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications); and (2) provide an automated structure for specifying, running, and generating reports on algorithm performance. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue up jobs that run algorithm tests against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
Concerns about the safety of lithium-ion batteries have motivated numerous studies on the response of fresh cells to abusive, off-nominal conditions, but studies on aged cells are relatively rare. This perspective considers all open literature on the thermal, electrical, and mechanical abuse response of aged lithium-ion cells and modules to identify critical changes in their behavior relative to fresh cells. We outline data gaps in aged cell safety, including electrical and mechanical testing, and module-level experiments. Understanding how the abuse response of aged cells differs from fresh cells will enable the design of more effective energy storage failure mitigation systems.
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. State chart models are typically constructed at a concrete level and verified and validated using animation techniques that rely on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal-logic model-checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
An investigation is carried out for the purpose of simultaneously controlling a base-excited dynamical system and enhancing the effectiveness of a piezoelectric energy-harvesting absorber. Amplitude stoppers are included to improve the energy harvested by the absorber, with the possibility of activating broadband resonant regions to increase the operable range of the absorber. This study optimizes the stoppers' ability to let the energy-harvesting absorber generate energy by investigating asymmetric gap and stiffness configurations. Medium stiffnesses of 5 × 10⁴ N/m and 1 × 10⁵ N/m show a significant impact on the primary system's dynamics and an improvement in the level of power harvested by the absorber. A solo stopper configuration with a gap distance of 0.02 m improves peak power by 29% and average power by 9% over the symmetrical case. Additionally, an asymmetric stiffness configuration with stopper stiffnesses of 1 × 10⁵ N/m and 5 × 10³ N/m and a gap size of 0.02 m yields improvements of 25% and 8% in peak and average harvested power, respectively. Hard stopper configurations show improvements in both asymmetric cases, but not enough to outperform the system without amplitude stoppers.
This paper develops a detailed understanding of how nanofillers function as radiation barriers within a polymer matrix, and how their effectiveness is impacted by factors such as composition, size, loading, surface chemistry, and dispersion. We designed a comprehensive investigation of heavy ion irradiation resistance in epoxy matrix composites loaded with surface-modified ceria nanofillers, utilizing tandem computational and experimental methods to elucidate radiolytic damage processes and relate them to chemical and structural changes observed through thermal analysis, vibrational spectroscopy, and electron microscopy. A detailed mechanistic examination supported by FTIR spectroscopy data identified the bisphenol A moiety as a primary target for degradation reactions. Results of computational modeling by the Stopping Range of Ions in Matter (SRIM) Monte Carlo simulation were in good agreement with damage analysis from surface and cross-sectional SEM imaging. All metrics indicated that ceria nanofillers reduce the damage area in polymer nanocomposites, and that nanofiller loading and homogeneity of dispersion are key to effective damage prevention. The results of this study represent a significant pathway for engineered irradiation tolerance in a diverse array of polymer nanocomposite materials. Numerous areas of materials science can benefit from utilizing this facile and effective method to extend the reliability of polymer materials.
A rapid and facile design strategy to create a highly complex optical tag with programmable, multimodal photoluminescent properties is described. This was achieved via intrinsic and DNA-fluorophore hidden signatures. As a first covert feature of the tag, an intricate novel heterometallic near-infrared (NIR)-emitting mesoporous metal-organic framework (MOF) was designed and synthesized. The material is constructed from two chemically distinct, homometallic hexanuclear clusters based on Nd and Yb. Uniquely, the Nd-based cluster is observed here for the first time in a MOF and consists of two staggered Nd μ3-oxo trimers. To generate controlled, multimodal, and tailorable emission with difficult to counterfeit features, the NIR-emissive MOF was post-synthetically modified via a fluorescent DNA oligo labeling design strategy. The surface attachment of several distinct fluorophores, including the simultaneous attachment of up to three distinct fluorescently labeled oligos was achieved, with excitation and emission properties across the visible spectrum (480-800 nm). The DNA inclusion as a secondary covert element in the tag was demonstrated via the detection of SYBR Gold dye association. Importantly, the approach implemented here serves as a rapid and tailorable way to encrypt distinct information in a facile and modular fashion and provides an innovative technology in the quest toward complex optical tags.
Garnet-type solid electrolytes, such as Li7La3Zr2O12 (LLZO), are a promising alternative to liquid electrolytes for lithium-metal batteries. However, such solid-electrolyte materials frequently exhibit undesirable lithium (Li) metal plating and fracture along grain boundaries. In this study, we employ atomistic simulations to investigate the mechanisms and key fracture properties associated with intergranular fracture along one such boundary. Our results show that a Σ5(310) grain boundary exhibits brittle fracture behavior, i.e., the absence of dislocation activity ahead of the propagating crack tip, accompanied by a decrease in work of separation, peak stress, and maximum stress intensity factor as the temperature increases from 300 K to 1500 K. As the crack propagates, we predict two temperature-dependent Li clustering regimes. For temperatures at or below 900 K, Li tends to cluster in the bulk region away from the crack plane, driven by a void-coalescence mechanism concomitant with a cubic-to-tetragonal phase transition. The tetragonalization of LLZO in this temperature regime acts as an emerging toughening mechanism. At higher temperatures, this phase transition is suppressed, leading to a more uniform distribution of Li throughout the grain-boundary system and lower fracture properties compared to lower temperatures.
Two-phase fluid flow properties underlie quantitative prediction of water and gas movement, but constraining these properties typically requires multiple time-consuming laboratory methods. The estimation of two-phase flow properties (van Genuchten parameters, porosity, and intrinsic permeability) is illustrated in cores of vitric nonwelded volcanic tuff using Bayesian parameter estimation that fits numerical models to observations from spontaneous imbibition experiments. The uniqueness and correlation of the estimated parameters are explored using different modeling assumptions and subsets of the observed data. The resulting estimation process is sensitive to both moisture retention and relative permeability functions, thereby offering a comprehensive method for constraining both functions. The data collected during this relatively simple laboratory experiment, used in conjunction with a numerical model and a global optimizer, result in a viable approach for augmenting more traditional capillary pressure data obtained from hanging water column, membrane plate extractor, or mercury intrusion methods. This method may be useful when imbibition rather than drainage parameters are sought, when samples larger than those accommodated by more traditional methods (e.g., including heterogeneity or fractures) need to be tested, or in educational laboratory settings.
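The moisture retention and relative permeability functions whose parameters are being estimated are commonly taken as the van Genuchten–Mualem forms. A brief sketch of those two functions (with illustrative α and n values, not the fitted tuff parameters):

```python
import numpy as np

def van_genuchten_Se(h, alpha, n):
    """Effective saturation vs. suction head h >= 0 (van Genuchten form)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h)**n)**(-m)

def mualem_kr(Se, n):
    """Mualem relative permeability as a function of effective saturation."""
    m = 1.0 - 1.0 / n
    return np.sqrt(Se) * (1.0 - (1.0 - Se**(1.0 / m))**m)**2

# Illustrative parameter values only
h = np.linspace(0.0, 10.0, 101)              # suction head [m]
Se = van_genuchten_Se(h, alpha=1.5, n=2.0)   # retention curve
kr = mualem_kr(Se, n=2.0)                    # relative permeability curve
```

In a Bayesian workflow of the kind described, `alpha`, `n`, porosity, and intrinsic permeability would be the unknowns, and the forward model fed by these functions would be fit to the imbibition observations.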
This article analyzes the total ionizing dose (TID) effects on noise characteristics of commercial multi-level-cell (MLC) 3-D NAND memory technology during the read operation. The chips were exposed to a Co-60 gamma-ray source for up to 100 krad(Si) of TID. We find that the number of noisy cells in the irradiated chip increases with TID. Bit-flip noise was more dominant for cells in an erased state during irradiation compared to programmed cells.
Advances in differentiating between malicious intent and natural "organizational evolution" to explain observed anomalies in operational workplace patterns suggest that evaluating collective behaviors observed in facilities can improve insider threat detection and mitigation (ITDM). Advances in artificial neural networks (ANNs) provide more robust pathways for capturing, analyzing, and collating disparate data signals into quantitative descriptions of operational workplace patterns. In response, a joint study by Sandia National Laboratories and the University of Texas at Austin explored the effectiveness of commercial ANN software for improving ITDM. This research demonstrates the benefit for ITDM of learning patterns of organizational behaviors, detecting off-normal (or anomalous) deviations from these patterns, and alerting when certain types, frequencies, or quantities of deviations emerge. Evaluating nearly 33,000 access control data points and over 1,600 intrusion sensor data points collected over a nearly twelve-month period, the study demonstrated that the ANN could recognize operational patterns at the Nuclear Engineering Teaching Laboratory (NETL) and detect off-normal behaviors, suggesting that ANNs can support a data-analytic approach to ITDM. Several representative experiments were conducted to further evaluate these conclusions, with the resulting insights supporting collective-behavior-based analytical approaches to quantitatively describe insider threat detection and mitigation.
In the pursuit of improving additively manufactured (AM) component quality and reliability, fine-tuning critical process parameters such as laser power and scan speed is a great first step toward limiting defect formation and optimizing the microstructure. However, the synergistic effects of these process parameters, layer thickness, and feedstock attributes (e.g., powder size distribution) on part characteristics such as microstructure, density, hardness, and surface roughness are not as well studied. In this work, we investigate 316L stainless steel density cubes built via laser powder bed fusion (L-PBF), emphasizing the significant microstructural changes that occur due to altering the volumetric energy density (VED) via laser power, scan speed, and layer thickness changes, coupled with different starting powder size distributions. This study demonstrates that there is not one ideal process parameter set and powder size distribution for each machine. Instead, there are several feedstock/process parameter ‘recipes’ that achieve similar goals. This study also establishes that, for equivalent VEDs, changing powder size can significantly alter part density, geometrically necessary dislocation (GND) density, and hardness. Through proper parameter and feedstock control, part attributes such as density, grain size, texture, dislocation density, hardness, and surface roughness can be customized, thereby creating multiple high-performance regions in the AM process space.
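Volumetric energy density is commonly computed as laser power divided by the product of scan speed, hatch spacing, and layer thickness; a small helper illustrates the relation (the hatch-spacing term and all numeric values are our assumptions for illustration, not the study's parameter sets):

```python
def volumetric_energy_density(power_W, scan_speed_mm_s, hatch_mm, layer_mm):
    """VED in J/mm^3: common L-PBF definition
    VED = P / (v * h * t), with hatch spacing h and layer thickness t."""
    return power_W / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative (hypothetical) L-PBF 316L values
ved = volumetric_energy_density(power_W=200.0, scan_speed_mm_s=800.0,
                                hatch_mm=0.10, layer_mm=0.04)
# ved is approximately 62.5 J/mm^3
```

The point made in the abstract is that equal VED values reached by different (P, v, t) combinations, or with different powder size distributions, need not produce equivalent parts.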
This report documents details of the microstructure and mechanical properties of β-tin (Sn), which is used in the Tri-lab (Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL)) collaboration project on Multi-phase Tin Strength. We report microstructural features detailing the crystallographic texture and grain morphology of as-received β-tin from electron backscatter diffraction (EBSD). Temperature- and strain-rate-dependent mechanical behavior was investigated by multiple compression tests at temperatures of 200 K to 400 K and strain rates of 10⁻⁴ s⁻¹ to 100 s⁻¹. Tri-lab tin showed significant temperature- and strain-rate-dependent strength with no significant plastic anisotropy. Sample-to-sample material variation was observed from duplicate compression tests and texture measurements. The compression data were used to calibrate parameters for three temperature- and rate-dependent strength models: Johnson-Cook (JC), Zerilli-Armstrong (ZA), and Preston-Tonks-Wallace (PTW).
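The simplest of the three calibrated models, Johnson-Cook, can be sketched directly. The parameter values below are hypothetical placeholders for illustration, not the calibrated Tri-lab values; only the functional form and the melting point of tin (505 K) are standard:

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot, T, A, B, n, C, m,
                        eps_dot0=1.0, T_ref=298.0, T_melt=505.0):
    """Johnson-Cook flow stress [MPa]:
    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_ref) / (T_melt - T_ref)."""
    Tstar = (T - T_ref) / (T_melt - T_ref)
    return ((A + B * eps_p**n)
            * (1.0 + C * np.log(eps_dot / eps_dot0))
            * (1.0 - Tstar**m))

# Hypothetical parameters, evaluated at the reference strain rate and temperature,
# where the rate and thermal factors both reduce to exactly 1
sigma = johnson_cook_stress(eps_p=0.1, eps_dot=1.0, T=298.0,
                            A=40.0, B=80.0, n=0.3, C=0.02, m=1.0)
```

Calibration in the report amounts to fitting A, B, n, C, and m to the compression data across the tested temperature and strain-rate ranges.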
This review discusses atomistic modeling techniques used to simulate radiation damage in crystalline materials. Radiation damage due to energetic particles results in the formation of defects. The subsequent evolution of these defects occurs over multiple length and time scales, requiring numerous simulation techniques to model the gamut of behaviors. This work focuses on current and new methodologies at the atomistic scale regarding the mechanisms of defect formation in the primary damage state.
In an x-ray driven cavity experiment, an intense flux of soft x rays on the emitting surface produces significant emission of photoelectrons having several kiloelectronvolts of kinetic energy. At the same time, rapid heating of the emitting surface occurs, resulting in the release of adsorbed surface impurities and subsequent formation of an impurity plasma. This numerical study explores a simple model for the photoelectric currents and the impurity plasma. Attention is given to the effect of varying the composition of the impurity plasma. The presence of protons or hydrogen molecular ions leads to a substantially enhanced cavity current, while heavier plasma ions are seen to have a limited effect on the cavity current due to their lower mobility. Additionally, it is demonstrated that an additional peak in the current waveform can appear due to the impurity plasma. A correlation between the impurity plasma composition and the timing of this peak is elucidated.
We present the analysis and results of the first dataset collected with the MARS neutron detector deployed at the Oak Ridge National Laboratory Spallation Neutron Source (SNS) for the purpose of monitoring and characterizing the beam-related neutron (BRN) background for the COHERENT collaboration. MARS was positioned next to the COH-CsI coherent elastic neutrino-nucleus scattering detector in the SNS basement corridor. This is the basement location closest to the SNS target, and thus of highest neutrino flux, but it is also well shielded from the BRN flux by infill concrete and gravel. These data show the detector registered roughly one BRN per day. Using MARS' measured detection efficiency, the incoming BRN flux is estimated to be 1.20 ± 0.56 neutrons/m²/MWh for neutron energies above ∼3.5 MeV and up to a few tens of MeV. We compare our results with previous BRN measurements in the SNS basement corridor reported by other neutron detectors.
With the proliferation of additive manufacturing and 3D printing technologies, a broader palette of material properties can be elicited from cellular solids, also known as metamaterials, architected foams, programmable materials, or lattice structures. Metamaterials are designed and optimized under the assumption of perfect geometry and a homogeneous underlying base material. Yet in practice, real lattices contain thousands or even millions of complex features, each with imperfections in shape and material constituency. While the role of these defects on the mean properties of metamaterials has been well studied, little attention has been paid to the stochastic properties of metamaterials, a crucial next step for high-reliability aerospace or biomedical applications. In this work we show that it is precisely the large quantity of features that serves to homogenize the heterogeneities of the individual features, thereby reducing the variability of the collective structure and achieving effective properties that can be even more consistent than the monolithic base material. In this first statistical study of additive lattice variability, a total of 239 strut-based lattices were mechanically tested for two pedagogical lattice topologies (body-centered cubic and face-centered cubic) at three different relative densities. The variability in yield strength and modulus was observed to decrease with feature count as a power law (exponent −0.5), a scaling trend that we show can be predicted using an analytic model or a finite element beam model. The latter provides an efficient pathway to extend the current concepts to arbitrary/complex geometries and loading scenarios. These results not only illustrate the homogenizing benefit of lattices, but also provide governing design principles that can be used to mitigate manufacturing inconsistencies via topological design.
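The n^(−0.5) scaling of variability with feature count follows from averaging over many independent features. A toy numerical sketch (our simplifying assumption: total strength is the sum of i.i.d. strut strengths, which is far cruder than the paper's analytic and beam models but reproduces the scaling):

```python
import numpy as np

rng = np.random.default_rng(1)

def lattice_cov(n_struts, strut_cov=0.1, n_samples=4000):
    """Coefficient of variation of the total strength of a lattice made of
    n_struts i.i.d. struts (mean strength 1, CoV strut_cov), estimated by
    Monte Carlo sampling of n_samples lattices."""
    struts = rng.normal(1.0, strut_cov, size=(n_samples, n_struts))
    total = struts.sum(axis=1)
    return total.std() / total.mean()

cov_10 = lattice_cov(10)      # expect about strut_cov / sqrt(10)
cov_1000 = lattice_cov(1000)  # expect about strut_cov / sqrt(1000)
# cov_10 / cov_1000 should be close to sqrt(100) = 10
```

Under this assumption the lattice CoV is strut_cov/√n, i.e., a 100× increase in feature count cuts variability by 10×, matching the exponent −0.5 trend reported in the abstract.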
Current state-of-the-art gasoline direct-injection (GDI) engines use multiple injections as one of the key technologies to improve exhaust emissions and fuel efficiency. For this technology to succeed, precise control of the fuel quantity for each injection is mandatory. However, nonlinearity and variations in the injection quantity can deteriorate the accuracy of fuel control, especially for small fuel injections. It is therefore necessary to understand the complex injection behavior and to develop a predictive model to be utilized in the development process. This study presents a methodology for rate of injection (ROI) and solenoid voltage modeling using artificial neural networks (ANNs) constructed from a set of Zeuch-style hydraulic experimental measurements conducted over a wide range of conditions. A quantitative comparison between the ANN model and the experimental data shows that the model is capable of predicting not only general features of the ROI trend, but also transient and nonlinear behaviors at particular conditions. In addition, the end of injection (EOI) could be detected precisely with a virtually generated solenoid voltage signal and a signal processing method that is applicable to an actual engine control unit. A correlation between the detected EOI timings calculated from the modeled signal and the measurement results showed a high coefficient of determination.
The effects of applied stress, ranging from tensile to compressive, on the atmospheric pitting corrosion behavior of 304L stainless steel (SS304L) were analyzed through accelerated atmospheric laboratory exposures and microelectrochemical cell analysis. After exposing the lateral surface of a SS304L four-point bend specimen to artificial seawater at 50°C and 35% relative humidity for 50 d, pitting characteristics were determined using optical profilometry and scanning electron microscopy. The SS304L microstructure was analyzed using electron backscatter diffraction. Additionally, localized electrochemical measurements were performed on a similar, unexposed, SS304L four-point bend bar to determine the effects of applied stress on corrosion susceptibility. Under the applied loads and the environment tested, the observed pitting characteristics showed no correlation with the applied stress (from 250 MPa to −250 MPa). Pitting depth, surface area, roundness, and distribution were found to be independent of location on the sample or applied stress. The lack of correlation between pitting statistics and applied stress was more likely due to the aggressive exposure environment, with a sea salt loading of 4 g/m² chloride. The pitting characteristics observed were instead governed by the available cathode current and salt distribution, which are a function of sea salt loading, as well as pre-existing underlying microstructure. In microelectrochemical cell experiments performed in Cl⁻ environments comparable to the atmospheric exposure and in environments containing orders of magnitude lower Cl⁻ concentrations, effects of the applied stress on corrosion susceptibility were only apparent in open-circuit potential in low Cl⁻ concentration solutions. Cl⁻ concentration governed the current density and transpassive dissolution potential.
Structural disorder causes materials' surface electronic properties, e.g., work function (φ), to vary spatially, yet it is challenging to prove exact causal relationships to underlying ensemble disorder, e.g., roughness or granularity. For polycrystalline Pt, nanoscale resolution photoemission threshold mapping reveals a spatially varying φ = 5.70 ± 0.03 eV over a distribution of (111) vicinal grain surfaces prepared by sputter deposition and annealing. With regard to field emission and related phenomena, e.g., vacuum arc initiation, a salient feature of the φ distribution is that it is skewed with a long tail to values down to 5.4 eV, i.e., far below the mean, which is exponentially impactful to field emission via the Fowler-Nordheim relation. We show that the φ spatial variation and distribution can be explained by ensemble variations of granular tilts and surface slopes via a Smoluchowski smoothing model wherein local φ variations result from spatially varying densities of electric dipole moments, intrinsic to atomic steps, that locally modify φ. Atomic step-terrace structure is confirmed with scanning tunneling microscopy (STM) at several locations on our surfaces, and prior works showed STM evidence for atomic step dipoles at various metal surfaces. From our model, we find an atomic step edge dipole μ = 0.12 D/edge atom, which is comparable to values reported in studies that utilized other methods and materials. Our results elucidate a connection between macroscopic φ and the nanostructure that may contribute to the spread of reported φ for Pt and other surfaces and may be useful toward more complete descriptions of polycrystalline metals in the models of field emission and other related vacuum electronics phenomena, e.g., arc initiation.
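The exponential impact of the low-φ tail on field emission can be illustrated with a simplified Fowler-Nordheim expression (no image-charge corrections; the applied field value is our assumption for illustration):

```python
import numpy as np

# Simplified Fowler-Nordheim current density (image-charge corrections omitted):
#   J proportional to F^2 / phi * exp(-B * phi^(3/2) / F)
B = 6.83  # standard FN exponent constant, in V eV^-1.5 nm^-1

def fn_current(phi_eV, F_V_per_nm):
    """Relative FN current density for work function phi at applied field F."""
    return F_V_per_nm**2 / phi_eV * np.exp(-B * phi_eV**1.5 / F_V_per_nm)

F = 4.0  # V/nm, illustrative applied field
# Current enhancement of a tail site (phi = 5.4 eV) over the mean (5.70 eV)
enhancement = fn_current(5.4, F) / fn_current(5.70, F)
```

Even a 0.3 eV dip below the mean work function multiplies the local emitted current by roughly an order of magnitude at fields of a few V/nm, which is why the skewed tail of the φ distribution, rather than its mean, dominates field emission and arc initiation.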
This article evaluates the data retention characteristics of irradiated multilevel-cell (MLC) 3-D NAND flash memories. We irradiated the memory chips by a Co-60 gamma-ray source for up to 50 krad(Si) and then wrote a random data pattern on the irradiated chips to find their retention characteristics. The experimental results show that the data retention property of the irradiated chips is significantly degraded when compared to the un-irradiated ones. We evaluated two independent strategies to improve the data retention characteristics of the irradiated chips. The first method involves high-temperature annealing of the irradiated chips, while the second method suggests preprogramming the memory modules before deploying them into radiation-prone environments.
White dwarfs (WDs) are useful across a wide range of astrophysical contexts. The appropriate interpretation of their spectra relies on the accuracy of WD atmosphere models. One essential ingredient of atmosphere models is the theory used for the broadening of spectral lines. To date, the models have relied on Vidal et al., known as the unified theory of line broadening (VCS). There have since been advancements in the theory; however, the calculations used in model atmosphere codes have received only minor updates. Meanwhile, advances in instrumentation and data have uncovered indications of inaccuracies: spectroscopic temperatures are roughly 10% higher and spectroscopic masses are roughly 0.1 M⊙ higher than their photometric counterparts. The evidence suggests that VCS-based treatments of line profiles may be at least partly responsible. Gomez et al. developed a simulation-based line-profile code, Xenomorph, using an improved theoretical treatment that can be used to inform questions around the discrepancy. However, the code required revisions to sufficiently decrease noise for use in model spectra and to make it computationally tractable and physically realistic. In particular, we investigate three additional physical effects that are not captured in the VCS calculations: ion dynamics, higher-order multipole expansion, and an expanded basis set. We also implement a simulation-based approach to occupation probability. The present study limits its scope to the first three hydrogen Balmer transitions (Hα, Hβ, and Hγ). We find that screening effects and occupation probability have the largest effects on the line shapes and will likely have important consequences for stellar synthetic spectra.
Computer Methods in Applied Mechanics and Engineering
Shojaei, Arman; Hermann, Alexander; Cyron, Christian J.; Seleson, Pablo; Silling, Stewart A.
Efficient and accurate calculation of spatial integrals is of major interest in the numerical implementation of peridynamics (PD). The standard way to perform this calculation is a particle-based approach that discretizes the strong form of the PD governing equation. This approach has rapidly been adopted by the PD community since it offers several advantages: it is computationally cheaper than other available schemes, can conveniently handle material separation, and effectively deals with nonlinear PD models. Nevertheless, PD models are still computationally very expensive compared with those based on the classical continuum mechanics theory, particularly for large-scale problems in three dimensions. This results from the nonlocal nature of the PD theory, which leads to interactions of each node of a discretized body with multiple surrounding nodes. Here, we propose a new approach to significantly boost the numerical efficiency of PD models. We propose a discretization scheme that employs a simple collocation procedure and is truly meshfree; i.e., it does not depend on any background integration cells. In contrast to the standard scheme, the proposed scheme requires a much smaller set of neighboring nodes (keeping the same physical length scale) to achieve a specific accuracy and is thus computationally more efficient. Our new scheme is applicable to linear PD models and within neighborhoods where the solution can be approximated by smooth basis functions. Therefore, to fully exploit the advantages of both the standard and the proposed schemes, a hybrid discretization is presented that combines both approaches within an adaptive framework. The high performance of the developed framework is illustrated by several numerical examples, including brittle fracture and corrosion problems in two and three dimensions.
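To illustrate the standard particle-based (meshfree quadrature) discretization that the abstract contrasts against, the following is a minimal sketch of a 1-D bond-based PD internal-force computation. The 1-D setting, the micromodulus `c`, and the grid parameters are illustrative assumptions, not the authors' implementation; the point is how each node sums pairwise bond forces over all neighbors within its horizon, which is the nonlocal cost driver the abstract describes.

```python
import numpy as np

def pd_internal_force(u, x, delta, c):
    """Strong-form particle discretization of 1-D bond-based PD:
    the force density at node i is a quadrature sum of pairwise bond
    forces over all neighbors within the horizon delta."""
    n = len(u)
    dx = x[1] - x[0]                # uniform grid spacing
    m = int(round(delta / dx))      # neighbors per side within the horizon
    f = np.zeros(n)
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if i == j:
                continue
            xi = x[j] - x[i]        # bond in the reference configuration
            eta = u[j] - u[i]       # relative displacement of the bond
            s = eta / xi            # 1-D bond stretch (small-strain)
            # pairwise force along the bond direction, weighted by cell size
            f[i] += c * s * np.sign(xi) * dx
    return f

# Example: a homogeneous (linear) displacement field stretches every bond
# equally, so interior nodes are in equilibrium (zero net force density).
x = np.linspace(0.0, 1.0, 101)
u = 1e-3 * x
f = pd_internal_force(u, x, delta=0.03, c=1.0)
```

With a horizon of three grid spacings, each interior node carries six bond interactions; the proposed collocation scheme in the paper reduces this neighbor count for the same physical length scale, which is where its efficiency gain comes from.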
This report details a method to estimate the energy content of various types of seismic body waves. The method is based on the strain energy of an elastic wavefield and Hooke’s Law. We present a detailed derivation of a set of equations that explicitly partition the seismic strain energy into two parts: one for compressional (P) waves and one for shear (S) waves. We posit that the ratio of these two quantities can be used to determine the relative contribution of seismic P and S waves, possibly as a method to discriminate between earthquakes and buried explosions. We demonstrate the efficacy of our method by using it to compute the strain energy of synthetic seismograms with differing source characteristics. Specifically, we find that explosion-generated seismograms contain a preponderance of P wave strain energy when compared to earthquake-generated synthetic seismograms. Conversely, earthquake-generated synthetic seismograms contain a much greater degree of S wave strain energy when compared to explosion-generated seismograms.
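A commonly used form of the P/S strain-energy split described above takes the compressional part proportional to the squared dilatation (div u) and the shear part proportional to the squared rotation (curl u). The sketch below is not the report's actual code; the 2-D plane-strain setting, the Lamé parameters `lam` and `mu`, and the finite-difference grid are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a P/S strain-energy partition for an isotropic medium:
#   E_P = 0.5 * (lam + 2*mu) * (div u)^2   (compressional part)
#   E_S = 0.5 * mu * |curl u|^2            (shear part)
# The ratio E_P / E_S is the kind of discriminant the abstract describes.

def ps_energy_partition(ux, uz, dx, lam=1.0, mu=1.0):
    """Partition the strain energy of a 2-D displacement field (ux, uz)
    sampled on a uniform grid with spacing dx."""
    dux_dx = np.gradient(ux, dx, axis=0)
    dux_dz = np.gradient(ux, dx, axis=1)
    duz_dx = np.gradient(uz, dx, axis=0)
    duz_dz = np.gradient(uz, dx, axis=1)
    div_u = dux_dx + duz_dz          # dilatation (P-wave content)
    curl_u = duz_dx - dux_dz         # out-of-plane rotation (S-wave content)
    e_p = 0.5 * (lam + 2.0 * mu) * div_u**2
    e_s = 0.5 * mu * curl_u**2
    return e_p.sum(), e_s.sum()

# Plane P wave: displacement parallel to propagation (x), hence curl-free,
# so essentially all strain energy lands in the compressional part.
n, dx = 200, 0.05
coords = np.arange(n) * dx
X, Z = np.meshgrid(coords, coords, indexing="ij")
k = 2.0 * np.pi
ux_p = np.sin(k * X)
uz_p = np.zeros_like(ux_p)
ep, es = ps_energy_partition(ux_p, uz_p, dx)
```

Swapping in a transverse field (`uz = sin(k*x)`, `ux = 0`) reverses the partition, which mirrors the explosion-versus-earthquake contrast reported in the synthetic-seismogram tests.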
We investigate the sensitivity of silicon-oxide-nitride-oxide-silicon (SONOS) charge-trapping memory technology to heavy-ion-induced single-event effects. Threshold voltage ( V_T ) statistics were collected across multiple test chips that contained in total 18 Mb of 40-nm SONOS memory arrays. The arrays were irradiated with Kr and Ar ion beams, and the changes in their V_T distributions were analyzed as a function of linear energy transfer (LET), beam fluence, and operating temperature. We observe that heavy-ion irradiation induces a tail of disturbed devices in the 'program' state distribution, which has also been seen in the response of floating-gate (FG) flash cells. However, the V_T distribution of SONOS cells lacks a distinct secondary peak, which is generally attributed to direct ion strikes to the gate stack of FG cells. This property, combined with the observed change in the V_T distribution with LET, suggests that SONOS cells are not particularly sensitive to direct ion strikes, but cells in the proximity of an ion strike can still experience a V_T shift. These results shed new light on the physical mechanisms underlying the V_T shift induced by a single heavy ion in scaled charge-trap memory.