The concept of a nonlocal elastic metasurface has been recently proposed and experimentally demonstrated in Zhu et al. (2020). When implemented in the form of a total-internal-reflection (TIR) interface, the metasurface can act as an elastic wave barrier that is impenetrable to deep subwavelength waves over an exceptionally wide frequency band. The underlying physical mechanism capable of delivering this broadband subwavelength performance relies on an intentionally nonlocal design that leverages long-range connections between the units forming the fundamental supercell. This paper explores the design and application of a nonlocal TIR metasurface to achieve broadband passive vibration isolation in a structural assembly made of multiple dissimilar elastic waveguides. The specific structural system comprises shell, plate, and beam waveguides, and can be seen as a prototypical structure emulating mechanical assemblies of practical interest for many engineering applications. The study also reports the results of an experimental investigation that confirms the significant vibration isolation capabilities afforded by the embedded nonlocal TIR metasurface. These results are particularly remarkable because they show that the performance of the nonlocal metasurface is preserved when applied to a complex structural assembly and under non-ideal incidence conditions of the incoming wave, hence significantly extending the validity of the results presented in Zhu et al. (2020). Results also confirm that, under proper conditions, the original concept of a planar metasurface can be morphed into a curved interface while still preserving full wave control capabilities.
We report a Bayesian framework for concurrent selection of physics-based models and (modeling) error models. We investigate the use of colored noise to capture the mismatch between the predictions of calibrated models and observational data that cannot be explained by measurement error alone, within the context of Bayesian estimation for stochastic ordinary differential equations. Proposed models are characterized by the average data-fit, a measure of how well a model fits the measurements, and by the model complexity, measured using the Kullback–Leibler divergence. The use of a more complex error model increases the average data-fit but also increases the complexity of the combined model, possibly over-fitting the data. Bayesian model selection is used to find the optimal physical model as well as the optimal error model. The optimal model is defined using the evidence, in which the average data-fit is balanced by the complexity of the model. The effect of the colored noise process is illustrated using a nonlinear aeroelastic oscillator representing a rigid NACA0012 airfoil undergoing limit cycle oscillations due to complex fluid–structure interactions. Several quasi-steady and unsteady aerodynamic models are proposed with colored noise or white noise for the model error. The use of colored noise improves the predictive capabilities of simpler models.
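For reference, the balance described above can be made precise: taking the posterior expectation of Bayes' theorem yields the exact decomposition of the log-evidence (generic notation, with data D, parameters θ, and model M),

    ln p(D | M) = E_{θ ~ p(θ|D,M)}[ ln p(D | θ, M) ] − KL( p(θ | D, M) || p(θ | M) ),

i.e., log-evidence equals average data-fit minus model complexity, so a more complex error model is preferred only when its gain in average data-fit exceeds the complexity it adds.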
Sierra/SolidMechanics (Sierra/SM) is a three-dimensional solid mechanics code with a versatile element library, nonlinear material models, large deformation capabilities, and contact. It is built on the SIERRA Framework, which provides data management in a parallel computing environment and allows capabilities to be added in a modular fashion. Contact capabilities are parallel and scalable. This document provides information about the functionality in Sierra/SM and the command structure required to access this functionality in a user input file. This document is divided into chapters based primarily on functionality. For example, the command structure related to the use of various element types is grouped in one chapter; descriptions of material models are grouped in another chapter. Sierra/SM provides both explicit transient dynamics and implicit quasistatics and dynamics capabilities. Both the explicit and implicit modules are highly scalable in a parallel computing environment. In the past, the explicit and implicit capabilities were provided by two separate codes, known as Presto and Adagio, respectively. These capabilities have been consolidated into a single code. The executable is named Adagio, but it provides the full suite of solid mechanics capabilities for both implicit and explicit analyses. The Presto executable has been disabled as a consequence of this consolidation.
This work is a comprehensive technical review of the existing literature and a synthesis of current understanding of the governing physics behind the interaction of multiple fuel injections and the resulting ignition and combustion behavior in diesel engines. Multiple injection is a widely adopted operating strategy in modern compression-ignition engines, involving various combinations of small pre-injections and post-injections of fuel before and after the main injection, as well as splitting the main injection into multiple smaller injections. This strategy has been conclusively shown to improve fuel economy in diesel engines while achieving simultaneous NOx, soot, and combustion noise reduction, in addition to reducing emissions of unburned hydrocarbons (UHC) and CO by preventing fuel wetting and flame quenching at the piston wall. Despite the widespread adoption and an extensive literature documenting the effects of multiple-injection strategies in engines, little is known about the complex interplay between the underlying flow physics and combustion chemistry in such flows, which ultimately governs the ignition and subsequent combustion processes and thereby dictates the effectiveness of this strategy. In this work, we provide a comprehensive overview of the literature on the interaction between the jets in a multiple-injection event, the resulting mixture, and finally the ignition and combustion dynamics as a function of engine operational parameters, including injection duration and dwell. The understanding of the underlying processes is facilitated by a new conceptual model of multiple-injection physics. We conclude by identifying the major remaining research questions that need to be addressed to refine and help achieve a design-level understanding to optimize advanced multiple-injection strategies that can lead to higher engine efficiency and lower emissions.
Injector performance in gasoline Direct-Injection Spark-Ignition (DISI) engines is a key focus in the automotive industry as the vehicle parc transitions from Port Fuel Injected (PFI) to DISI engine technology. DISI injector deposits, which may impact the fuel delivery process in the engine, sometimes accumulate over longer time periods and greater vehicle mileages than traditional combustion chamber deposits (CCD). These higher mileages and longer timeframes make the evaluation of these deposits in a laboratory setting more challenging due to the extended test durations necessary to achieve representative in-use levels of fouling. The need to generate injector tip deposits for research purposes begs the questions: can an artificial fouling agent be used to speed deposit accumulation, and does this result in deposits similar to those formed naturally by market fuels? In this study, a collection of DISI injectors with different types of conditioning, ranging from controlled engine-stand tests with market or profouling fuels, to vehicle tests run over drive cycles, to uncontrolled field use, were analyzed to understand the characteristics of their injector tip deposits and their functional impacts. The DISI injectors, both naturally fouled and profouled, were holistically evaluated for their spray performance, deposit composition, and deposit morphology relative to one another. The testing and accompanying analysis reveal both similarities and differences between injectors fouled naturally over long time periods with market fuel and injectors profouled artificially through the use of a sulfur dopant. Profouled injectors were chemically distinct from naturally fouled injectors and were found to contain higher levels of sulfur dioxide. Profouled injectors also exhibited greater volumes of deposits on the face of the injector tip. Functionally, however, both naturally fouled and profouled injectors had similar impacts on spray performance relative to clean injectors, with the fouled injector spray plumes remaining narrower, limiting plume-to-plume interactions, and altering the liquid-spray penetration dynamics. These insights can guide future research into injector tip deposits.
A one-dimensional, non-equilibrium, compressible law of the wall model is proposed to increase the accuracy of heat transfer predictions from computational fluid dynamics (CFD) simulations of internal combustion engine flows on engineering grids. Our 1D model solves the transient turbulent Navier-Stokes equations for mass, momentum, energy and turbulence under the thin-layer assumption, using a finite-difference spatial scheme and a high-order implicit time integration method. A new algebraic eddy-viscosity closure, derived from the Han-Reitz equilibrium law of the wall, with enhanced Prandtl number sensitivity and compressibility effects, was developed for optimal performance. Several eddy viscosity sub-models were tested for turbulence closure, including the two-equation k-epsilon and k-omega, which gave insufficient performance. Validation against pulsating channel flow experiments highlighted the superior capability of the 1D model to capture transient near-wall velocity and temperature profiles, and the need to appropriately model the eddy viscosity using a low-Reynolds method, which could not be achieved with the standard two-equation models. The results indicate that the non-equilibrium model can capture the near-wall velocity profile dynamics (including velocity profile inversion) while equilibrium models cannot, and simultaneously reduce heat flux prediction errors by up to one order of magnitude. The proposed optimal configuration reduced the heat flux error for the pulsating channel flow case from 18.4% (Launder-Spalding law of the wall) down to 1.67%.
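As a rough sketch of what such a wall model solves, a generic thin-layer form of the 1D non-equilibrium equations is (the paper's exact compressible formulation, closures, and source terms may differ):

    ∂(ρu)/∂t = ∂/∂y[ (μ + μ_t) ∂u/∂y ] + F_u,
    ∂(ρ c_p T)/∂t = ∂/∂y[ (μ/Pr + μ_t/Pr_t) c_p ∂T/∂y ] + F_T,

where y is the wall-normal coordinate, μ_t is the eddy-viscosity closure, and F_u, F_T carry the forcing imposed by the outer CFD solution; the wall heat flux is then recovered from the temperature gradient at y = 0.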
Spray-wall interactions in diesel engines have a strong influence on turbulent flow evolution and mixing, which influences the engine's thermal efficiency and pollutant-emissions behavior. Previous optical experiments and numerical investigations of a stepped-lip diesel piston bowl focused on how spray-wall interactions influence the formation of squish-region vortices and their sensitivity to injection timing. Such vortices are stronger and longer-lived at retarded injection timings and are correlated with faster late-cycle heat release and soot reductions, but are weaker and shorter-lived as injection timing is advanced. Computational fluid dynamics (CFD) simulations predict that piston bowls with more space in the squish region can enhance the strength of these vortices at near-TDC injection timings, which is hypothesized to further improve peak thermal efficiency and reduce emissions. The dimpled stepped-lip (DSL) piston is such a design. In this study, the in-cylinder flow is simulated with a DSL piston to investigate the effects of dimple geometry parameters on squish-region vortex formation via a design sensitivity study. The rotational energy and size of the squish-region vortices are quantified. The results suggest that the DSL piston is capable of enhancing vortex formation compared to the stepped-lip piston at near-TDC injection timings. The sensitivity study led to the design of an improved DSL bowl with shallower, narrower, and steeper-curved dimples that are further out into the squish region, which enhances predicted vortex formation with 27% larger and 44% more rotationally energetic vortices compared to the baseline DSL bowl. Engine experiments with the baseline DSL piston demonstrate that it can reduce combustion duration and improve thermal efficiency by as much as 1.4% with main injection timings near TDC, due to improved rotational energy, but with 69% increased soot emissions and no penalty in NOx emissions.
To comply with increasingly stringent pollutant emissions regulations, diesel engine operation in a catalyst-heating mode is critical to achieve rapid light-off of exhaust aftertreatment catalysts during the first minutes of cold starting. Current approaches to catalyst-heating operation typically involve one or more late post injections to retard combustion phasing and increase exhaust temperatures. The ability to retard post injection timing(s) while maintaining acceptable pollutant emissions levels is pivotal for improved catalyst-heating calibrations. Higher fuel cetane number has been reported to enable later post injections with increased exhaust heat and decreased pollutant emissions, but the mechanism is not well understood. The purpose of this experimental and numerical simulation study is to provide further insight into the ways in which fuel cetane number affects combustion and pollutant formation in a medium-duty diesel engine. Three full boiling-range diesel fuels with cetane numbers of approximately 45, 50, and 55 are employed in this study with a well-controlled set of calibrations employing a five-injection strategy. The two post injections are block-shifted to increasingly retarded timings, and the effects on exhaust heat and pollutant emissions are quantified for each fuel. For a given injection strategy calibration, increasing cetane number enables increased exhaust temperature and decreased hydrocarbon and carbon monoxide emissions for a fixed load. The increase in exhaust temperature is attributed to an increased fueling requirement to compensate for additional wall heat losses caused by earlier, more robust pilot combustion with the more reactive fuels. Formaldehyde is predicted to form in the fuel-lean periphery of the first pilot injection spray and can persist until exhaust valve opening in the absence of direct interactions with subsequent injections. Unreacted fuel-air mixture in the fuel-rich interior of the first-pilot spray is likely too cool for any significant reactions, and can persist until exhaust valve opening in the absence of turbulence/chemistry interactions and/or direct heating through interactions with subsequent injections.
Understanding of structural and morphological evolution in nanomaterials is critical in tailoring their functionality for applications such as energy conversion and storage. Here, we examine irradiation effects on the morphology and structure of amorphous TiO2 nanotubes in comparison with their crystalline counterpart, anatase TiO2 nanotubes, using high-resolution transmission electron microscopy (TEM), in situ ion irradiation TEM, and molecular dynamics (MD) simulations. Anatase TiO2 nanotubes exhibit morphological and structural stability under irradiation due to their high concentration of grain boundaries and surfaces as defect sinks. On the other hand, amorphous TiO2 nanotubes undergo irradiation-induced crystallization, with some tubes remaining only partially crystallized. The partially crystalline tubes bend due to internal stresses associated with densification during crystallization as suggested by MD calculations. These results present a novel irradiation-based pathway for potentially tuning structure and morphology of energy storage materials.
Neuromorphic computing (NMC) is an exciting paradigm seeking to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also the consideration of how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics from traditional approaches versus those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing number of application domains. Accordingly, we propose following the example of high performance computing, which has employed context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps in a neural circuit tool called Fugu as a means of NMC insight.
Grid cells have been shown to encode physical location using hexagonally spaced, periodic phase-space representations. Theories of how the brain decodes this phase-space representation have been developed based on neuroscience data; however, theories of how sensory information is encoded into this phase space are less certain. Here we show how a navigation-relevant input space, such as elevation trajectories, may be mapped into a phase-space coordinate system that can be decoded using previously developed theories. Just as animals can tell where they are in a local region based on where they have been, our encoding algorithm enables localization to a position in space by integrating measurements from a trajectory over a map. In this extended abstract, we walk through our approach with simulations using a digital elevation model.
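As a minimal illustration of the hexagonal phase-space code referenced above (an idealized sketch only; the function and parameter names are hypothetical, and the paper's trajectory-integrating encoder is more elaborate), a 2D position can be mapped to phases along three grid axes separated by 60 degrees:

    import numpy as np

    def grid_phases(x, spacing=1.0):
        """Map a 2D position to phases on three hexagonal grid axes."""
        angles = np.deg2rad([0.0, 60.0, 120.0])        # three grid-axis directions
        k = (2 * np.pi / spacing) * np.stack(
            [np.cos(angles), np.sin(angles)], axis=1)  # wave vectors, shape (3, 2)
        return np.mod(k @ x, 2 * np.pi)                # phases in [0, 2*pi)

    print(grid_phases(np.array([0.3, 0.7])))

Each position within a grid period yields a distinct phase triplet, which is the sense in which such codes localize an animal within a local region.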
Uniaxial strain, reverse-ballistic impact experiments were performed on wrought 17-4 PH H1025 stainless steel, and the resulting Hugoniot was determined to a peak stress of 25 GPa through impedance matching to known standard materials. The measured Hugoniot showed evidence of a solid-solid phase transition, consistent with other martensitic Fe-alloys. The phase transition stress in the wrought 17-4 PH H1025 stainless steel was measured in a uniaxial strain, forward-ballistic impact experiment to be 11.4 GPa. Linear fits to the Hugoniot for both the low- and high-pressure phases are presented with corresponding uncertainties. The low-pressure martensitic phase exhibits a shock velocity that is weakly dependent on the particle velocity, consistent with other martensitic Fe-alloys.
Fusel alcohol mixtures containing ethanol, isobutanol, isopentanol, and 2-phenylethanol have been shown to be a promising means to maximize renewable fuel yield from various biomass feedstocks and waste streams. We hypothesized that use of these fusel alcohol mixtures as a blending agent with gasoline can significantly lower the greenhouse gas emissions from the light-duty fleet. Since the composition of fusel alcohol mixtures derived from fermentation is dependent on a variety of factors such as biocatalyst selection and feedstock composition, multi-objective optimization was performed to identify optimal fusel alcohol blends in gasoline that simultaneously maximize thermodynamic efficiency gain and energy density. Pareto front analysis combined with fuel property predictions and a Merit Score-based metric led to prediction of optimal fusel alcohol-gasoline blends over a range of blending volumes. The optimal fusel blends were analyzed based on a Net Fuel Economy Improvement Potential metric for volumetric blending in a gasoline base fuel. The results demonstrate that various fusel alcohol blends provide the ability to maximize efficiency improvement while minimizing increases to blending vapor pressure and decreases to energy density compared to an ethanol-only bioblendstock. Fusel blends exhibit predicted Net Fuel Economy Improvement Potential comparable to neat ethanol when blended with gasoline in all scenarios, with increased improvement over ethanol at moderate to high bio-blendstock blending levels. The optimal fusel blend that was identified was a mixture of 90% v/v isobutanol and 10% v/v 2-phenylethanol, blended at 45% v/v with gasoline, yielding a predicted 4.67% increase in Net Fuel Economy Improvement Potential. These findings suggest that incorporation of fusel alcohols as a gasoline bioblendstock can improve both fuel performance and the net fuel yield of the bioethanol industry.
Low-Z nanocrystalline diamond (NCD) grids have been developed to reduce spurious fluorescence and avoid X-ray peak overlaps or interferences between the specimen and conventional metal grids. Here, the low-Z NCD grids are non-toxic and safe to handle, conductive, can be subjected to high-temperature heating experiments, and may be used for analytical work in lieu of metal grids. Both a half-grid geometry, which can be used for any lift-out method, and a full-grid geometry, which can be used for ex situ lift-out or thin-film analyses, can be fabricated and used for experiments.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
We introduce novel higher-order topological phases of matter in chiral-symmetric systems (class AIII of the tenfold classification), most of which would be misidentified as trivial by current theories. These phases are protected by "multipole chiral numbers," bulk integer topological invariants that in 2D and 3D are built from sublattice multipole moment operators, as defined herein. The integer value of a multipole chiral number indicates how many degenerate zero-energy states localize at each corner of a system. These higher-order topological phases of matter are generally boundary-obstructed and robust in the presence of chiral-symmetry-preserving disorder.
This document presents tests from the Sierra Structural Mechanics verification test suite. Each of these tests is run nightly with the Sierra/SD code suite, and the results are checked against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the Sierra/SD code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
This work investigates the free expansion of a supercritical fluid into the two-phase liquid-vapor coexistence region. A large molecular dynamics simulation (6 billion Lennard-Jones atoms) was run on 5760 GPUs (33% of LLNL Sierra) using the LAMMPS/Kokkos software. The effort also improved the visualization workflow and began preliminary simulations of aluminum using a SNAP machine-learning potential.
This work investigates the role of water and oxygen on the shear-induced structural modifications of molybdenum disulfide (MoS2) coatings for space applications and the impact on friction due to oxidation from aging. We observed from transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS) that sliding in either an inert environment (i.e., dry N2) or humid lab air forms basally oriented (002) running films of varying thickness and structure. Tribological testing of the basally oriented surfaces created in dry N2 and air showed lower initial friction than a coating with an amorphous or nanocrystalline microstructure. Aging of coatings with basally oriented surfaces was performed by heating samples at 250 °C for 24 h. Post-aging tribological testing of the as-deposited coating showed increased initial friction and a longer transition from higher friction to lower friction (i.e., run-in) due to oxidation of the surface. Tribological testing of raster patches formed in dry N2 and air both showed an improved resistance to oxidation and reduced initial friction after aging. The results from this study have implications for the use of MoS2-coated mechanisms in aerospace and space applications and highlight the importance of preflight testing. Preflight cycling of components in inert or air environments provides an oriented surface microstructure with fewer interaction sites for oxidation and a lower shear strength, reducing the initial friction coefficient and oxidation due to aging or exposure to reactive species (i.e., atomic oxygen).
This user's guide documents capabilities in Sierra/SolidMechanics which remain "in-development" and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.6 User's Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
Presented in this document are tests that exist in the Sierra/SolidMechanics example problem suite, which is a subset of the Sierra/SM regression and performance test suite. These examples showcase common and advanced code capabilities. A wide variety of other regression and verification tests exist in the Sierra/SM test suite that are not included in this manual.
Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the User's Manual. Many of the constructs in Sierra/SD are pulled directly from published material; where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation, and we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer_notes manual, the user's notes, and, of course, the material in the open literature.
More realistic models for infrasound signal propagation across a region can be used to improve the precision and accuracy of spatial and temporal source localization estimates. Here, motivated by incomplete infrasound event bulletins in the Western US, the location capabilities of a regional infrasonic network of stations located 84–458 km from the Utah Test and Training Range, Utah, USA, are assessed using a series of near-surface explosive events with complementary ground truth (GT) information. Signal arrival times and backazimuth estimates are determined with an automatic F-statistic based signal detector and manually refined by an analyst. This study represents the first application of three distinct celerity-range and backazimuth models to an extensive suite of realistic signal detections for event location purposes. A singular celerity and backazimuth deviation model was previously constructed using ray tracing analysis based on an extensive archive of historical atmospheric specifications and is applied within this study to test location capabilities. Similarly, a set of multivariate, season- and location-specific models for celerity and backazimuth are compared to an empirical model that depends on the observations across the infrasound network and the GT events, which accounts for atmospheric propagation variations from source to receiver. Discrepancies between observed and predicted signal celerities result in locations with poor accuracy. Application of the empirical model improves both spatial localization precision and accuracy; all but one of the location estimates retain the true GT location within the 90 per cent confidence bounds. The average mislocation of the events is 15.49 km, and the average 90 per cent error ellipse area is 4141 km2. The empirical model additionally reduces origin time residuals; origin time residuals from the other location models are in excess of 160 s, while residuals produced with the empirical model are within 30 s of the true origin time. Finally, we demonstrate that event location accuracy is driven by a combination of the signal propagation model and the azimuthal gap of detecting stations. A direct relationship between mislocation, error ellipse area and increased station azimuthal gaps indicates that for sparse networks, detection backazimuths may drive location biases over traveltime estimates.
The ability to perform accurate techno-economic analysis of solar photovoltaic (PV) systems is essential for bankability and investment purposes. Most energy yield models assume an almost flawless operation (i.e., no failures); however, realistically, components fail and get repaired stochastically. This package, PyPVRPM, is a Python translation and improvement of the Language Kit (LK) based PhotoVoltaic Reliability Performance Model (PVRPM), which was first developed at Sandia National Laboratories in Goldsim software (Granata et al., 2011; Miller et al., 2012). PyPVRPM allows the user to define a PV system at a specific location and incorporate failure, repair, and detection rates and distributions to calculate energy yield and other financial metrics such as the levelized cost of energy and net present value (Klise, Lavrova, et al., 2017). Our package is a simulation tool that uses NREL's Python interface for the System Advisor Model (SAM) (National Renewable Energy Laboratory, 2020a, 2020b) to evaluate the performance of a PV plant throughout its lifetime by considering component reliability metrics. Besides the numerous benefits from migrating to Python (e.g., speed, libraries, batch analyses), it also expands on the failure and repair processes from the LK version by including the ability to vary monitoring strategies. These failures, repairs, and monitoring processes are based on user-defined distributions and values, enabling a more accurate and realistic representation of cost and availability throughout a PV system's lifetime.
This is an addendum to the Sierra/SolidMechanics 5.6 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. This document is an addendum only; the standard Sierra/SolidMechanics 5.6 User's Guide should be referenced for most general descriptions of code capability and use.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAMÉ advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of capabilities, features, and responses, however, comes at the cost of considerable complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAMÉ library in application, this effort seeks to document and verify the various models in the LAMÉ library. Specifically, the broader strategy, organization, and interface of the library itself are first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight some important response characteristics and features that may be of interest to end-users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
Capturing the dynamic response of a material under high strain-rate deformation often demands challenging and time-consuming experimental effort. While shock hydrodynamic simulation methods can aid in this area, a priori characterizations of the material strength under shock loading and spall failure are needed in order to parameterize the constitutive models these computational tools require. Moreover, parameterizations of strain-rate-dependent strength models are needed to capture the full suite of Richtmyer-Meshkov instability (RMI) behavior of shock-compressed metals, creating an unrealistic demand if such training data were to come solely from experiments. Herein, we sweep a large range of geometric, crystallographic, and shock conditions within molecular dynamics (MD) simulations and demonstrate the breadth of RMI in Cu that can be captured from the atomic scale. Yield strength measurements from jetted and arrested material from a sinusoidal surface perturbation were quantified as Y_RMI = 0.787 ± 0.374 GPa, higher than the strain-rate-independent models used in experimentally matched hydrodynamic simulations. Defect-free, single-crystal Cu samples used in MD will overestimate Y_RMI, and the drastic scale difference between experiment and MD is highlighted by high-confidence neighborhood clustering predictions of RMI characterizations that nevertheless yield incorrect classifications.
In turbulent flows, kinetic energy is transferred from the largest scales to progressively smaller scales, until it is ultimately converted into heat. The Navier-Stokes equations are almost universally used to study this process. Here, by comparing with molecular-gas-dynamics simulations, we show that the Navier-Stokes equations do not describe turbulent gas flows in the dissipation range because they neglect thermal fluctuations. We investigate decaying turbulence produced by the Taylor-Green vortex and find that in the dissipation range the molecular-gas-dynamics spectra grow quadratically with wave number due to thermal fluctuations, in agreement with previous predictions, while the Navier-Stokes spectra decay exponentially. Furthermore, the transition to quadratic growth occurs at a length scale much larger than the gas molecular mean free path, namely in a regime that the Navier-Stokes equations are widely believed to describe. In fact, our results suggest that the Navier-Stokes equations are not guaranteed to describe the smallest scales of gas turbulence for any positive Knudsen number.
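A back-of-the-envelope way to see the quadratic growth: in thermal equilibrium, equipartition assigns each Fourier mode of the velocity field an energy of order k_B T, and the number of modes in a spherical shell of radius k grows as k^2, so the thermal-fluctuation contribution to the energy spectrum scales as

    E_th(k) ∝ (k_B T / ρ) k^2,

which must eventually overtake the exponentially decaying Navier-Stokes spectrum at sufficiently large wave numbers.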
With the urgent need to mitigate climate change and rising global temperatures, technological solutions that reduce atmospheric CO2 are an increasingly important part of the global solution. As a result, the nascent carbon capture, utilization, and storage (CCUS) industry is rapidly growing with a plethora of new technologies in many different sectors. There is a need to holistically evaluate these new technologies in a standardized and consistent manner to determine which technologies will be the most successful and competitive in the global marketplace to achieve decarbonization targets. Life cycle assessment (LCA) and techno-economic assessment (TEA) have been employed as rigorous methodologies for quantitatively measuring a technology's environmental impacts and techno-economic performance, respectively. However, these metrics evaluate a technology's performance in only three dimensions and do not directly incorporate stakeholder needs and values. In addition, technology developers frequently encounter trade-offs during design that increase one metric at the expense of the other. The technology performance level (TPL) combined indicator provides a comprehensive and holistic assessment of an emerging technology's potential, which is described by its techno-economic performance, environmental impacts, social impacts, safety considerations, market/deployability opportunities, use integration impacts, and general risks. TPL incorporates TEA and LCA outputs and quantifies the trade-offs between them directly using stakeholder feedback and requirements. In this article, the TPL methodology is adapted from the marine energy domain to the CCUS domain. Adapted metrics and definitions, a stakeholder analysis, and a detailed foundation-based application of the systems engineering approach to CCUS are presented. The TPL assessment framework is couched within the internationally standardized LCA framework to improve technical rigor and acceptance. It is demonstrated how stakeholder needs and values can be directly incorporated, how LCA and TEA metrics can be balanced, and how other dimensions (listed earlier) can be integrated into a single metric that measures a technology's potential.
In this report, we assess the data recorded by a Distributed Acoustic Sensing (DAS) cable deployed during the Source Physics Experiment, Phase II (DAG) in comparison with the data recorded by nearby 4.5-Hz geophones. DAS is a novel recording method with unprecedented spatial resolution, but there are significant concerns around the data fidelity as the technology is ramped up to more common usage. Here we run a series of tests to quantify the similarity between DAS data and more conventional data and investigate cases where the higher spatial resolution of the DAS can provide new insights into the wavefield. These tests include 1D modeling with seismic refraction and bootstrap uncertainties, assessing the amplitude spectra with distance from the source, measuring the frequency dependent inter-station coherency, estimating time-dependent phase velocity with beamforming and semblance, and measuring the cross-correlation between the geophone and the particle velocity inferred from the DAS. In most cases, we find high similarity between the two datasets, but the higher spatial resolution of the DAS provides increased details and methods of estimating uncertainty.
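For context, one standard way to infer particle velocity from DAS strain measurements (the report's exact processing may differ) uses the plane-wave relation between axial strain and particle velocity,

    ε(t) = −v(t) / c,

where c is the apparent phase velocity along the fiber, so that scaling the DAS strain record by the apparent velocity yields a particle-velocity estimate that can be cross-correlated with collocated geophone records.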
Using chemical kinetic modeling and statistical analysis, we investigate the possibility of correlating key chemical "markers" (typically small molecules) formed during very lean (φ ∼0.001) oxidation experiments with near-stoichiometric (φ ∼1) fuel ignition properties. One goal of this work is to evaluate the feasibility of designing a fuel-screening platform, based on small laboratory reactors that operate at low temperatures and use minimal fuel volume. Buras et al. [Combust. Flame 2020, 216, 472-484] have shown that convolutional neural net (CNN) fitting can be used to correlate first-stage ignition delay times (IDTs) with OH/HO2 measurements during very lean oxidation in low-T flow reactors with better than factor-of-2 accuracy. In this work, we test the limits of applying this correlation-based approach to predict the low-temperature heat release (LTHR) and total IDT, including the sensitivity of total IDT to the equivalence ratio, φ. We demonstrate that first-stage IDT can be reliably correlated with very lean oxidation measurements using compressed sensing (CS), which is simpler to implement than CNN fitting. LTHR can also be predicted via CS analysis, although the correlation quality is somewhat lower than for first-stage IDT. In contrast, the accuracy of total IDT prediction at φ = 1 is significantly lower (within a factor of 4 or worse). These results can be rationalized by the fact that the first-stage IDT and LTHR are primarily determined by low-temperature chemistry, whereas total IDT depends on low-, intermediate-, and high-temperature chemistry. Oxidation reactions are most important at low temperatures, and therefore, measurements of universal molecular markers of oxidation do not capture the full chemical complexity required to accurately predict the total IDT even at a single equivalence ratio. As a result, we find that φ-sensitivity of ignition delay cannot be predicted at all using solely correlation with lean low-T chemical speciation measurements.
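For context, the compressed-sensing step amounts to a standard sparse linear regression (a generic statement of the technique; the paper's exact formulation may differ): given a feature matrix A built from the lean-oxidation speciation measurements and a target vector y of first-stage IDTs, one solves

    min_w || y − A w ||_2^2 + λ || w ||_1,

where the l1 penalty drives most coefficients to zero, selecting a small subset of chemical markers that best predict the ignition property.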
Since the discovery of the laser, optical nonlinearities have been at the core of efficient light conversion sources. Typically, thick transparent crystals or quasi-phase matched waveguides are utilized in conjunction with phase-matching techniques to select a single parametric process. In recent years, due to the rapid developments in artificially structured materials, optical frequency mixing has been achieved at the nanoscale in subwavelength resonators arrayed as metasurfaces. Phase matching becomes relaxed for these wavelength-scale structures, and all allowed nonlinear processes can, in principle, occur on an equal footing. This could promote harmonic generation via a cascaded (consisting of several frequency mixing steps) process. However, so far, all reported work on dielectric metasurfaces has assumed frequency mixing from a direct (single step) nonlinear process. In this work, we prove the existence of cascaded second-order optical nonlinearities by analyzing the second- and third-wave mixing from a highly nonlinear metasurface in conjunction with polarization selection rules and crystal symmetries. We find that the third-wave mixing signal from a cascaded process can be of comparable strength to that from conventional third-harmonic generation and that surface nonlinearities are the dominant mechanism that contributes to cascaded second-order nonlinearities in our metasurface.
This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data is necessary in our research; thus, we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or genome DNA reads. The edit detection results obtained with our algorithms, tested against samples held out during training, are significantly better than random guessing, achieving high F1 and recall scores as well as high overall precision.
Digital twins are emerging as powerful tools for supporting innovation as well as optimizing the in-service performance of a broad range of complex physical machines, devices, and components. A digital twin is generally designed to provide an accurate in-silico representation of the form (i.e., appearance) and the functional response of a specified (unique) physical twin. This paper offers a new perspective on how the emerging concept of digital twins could be applied to accelerate materials innovation efforts. Specifically, it is argued that the material itself can be considered as a highly complex multiscale physical system whose form (i.e., details of the material structure over a hierarchy of material length scales) and function (i.e., response to external stimuli, typically characterized through suitably defined material properties) can be captured suitably in a digital twin. Accordingly, the digital twin can represent the evolution of structure, process, and performance of the material over time, with regard to both process history and in-service environment. This paper establishes the foundational concepts and frameworks needed to formulate and continuously update both the form and function of the digital twin of a selected material physical twin. The form of the proposed material digital twin can be captured effectively using the broadly applicable framework of n-point spatial correlations, while its function at the different length scales can be captured using homogenization and localization process-structure-property surrogate models calibrated to collections of available experimental and physics-based simulation data.
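As a small illustration of the n-point correlation framework underlying the proposed digital twin's "form" (a minimal sketch with synthetic data and hypothetical names, not the paper's implementation), periodic 2-point statistics of a two-phase microstructure can be computed with FFTs:

    import numpy as np

    # Synthetic two-phase microstructure: binary indicator array for one phase.
    micro = (np.random.rand(64, 64) < 0.3).astype(float)

    # Periodic 2-point autocorrelation via FFT: f2[r] estimates the probability
    # that two points separated by vector r both fall in the marked phase.
    F = np.fft.fftn(micro)
    f2 = np.fft.ifftn(F * np.conj(F)).real / micro.size

    print(f2[0, 0])  # r = 0 recovers the phase volume fraction (~0.3)

Higher-order (n-point) statistics generalize this idea to n-tuples of points and form the feature space in which the structure surrogate models can be calibrated.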
We report that the formation of Al3Sc in 100 nm Al0.8Sc0.2 films is driven by exposure to high temperature, either through a higher deposition temperature or through annealing. High film resistivity was observed in films with lower deposition temperature that exhibited a lack of crystallinity, which is anticipated to cause more electron scattering. An increase in deposition temperature allows for the nucleation and growth of crystalline Al3Sc regions, which were verified by electron diffraction. The increase in crystallinity reduces electron scattering, which results in lower film resistivity. Annealing Al0.8Sc0.2 films at 600 °C in an Ar vacuum environment also allows for the formation and recrystallization of Al3Sc and Al and yields saturated resistivity values between 9.58 and 10.5 μΩ-cm regardless of sputter conditions. Al3Sc was found to nucleate and grow in a random orientation when deposited on SiO2, and highly {111} textured when deposited on 100 nm Ti and AlN films that were used as template layers. The rocking curve width of the Al3Sc 111 reflection for the films as-deposited on Ti and AlN at 450 °C was 1.79° and 1.68°, respectively. Annealing the film deposited on the AlN template reduced the rocking curve width substantially to 1.01° due to recrystallization of Al3Sc and Al within the film.
Fraud in the Environmental Benefit Credit (EBC) markets is pervasive. To make matters worse, the cost of creating EBCs is often higher than the market price. Consequently, a method to create, validate, and verify EBCs and their relevance is needed to mitigate fraud. The EBC market has focused on geologic (fossil fuel) CO2 sequestration projects that are often over budget and behind schedule, and has failed to capture the "lowest hanging fruit" EBCs: terrestrial sequestration via the agricultural industry. This project reviews a methodology to attain possibly the least costly EBCs by tracking the reduction of inputs required to grow crops. The use of bio-stimulant products, such as humate, allows a farmer to use less nitrogen without adversely affecting crop yield. Using less nitrogen qualifies for EBCs by reducing nitrous oxide emissions and nitrate runoff from a farmer's field. A blockchain that tracks the bio-stimulant material from source to application provides a link between the tangible (bio-stimulant commodity) and the associated intangible (EBC) assets. Covert insertion of taggants in the bio-stimulant products creates a unique barcode that allows a product to be digitally tracked from beginning to end. This process (blockchain technology) is so robust, logical, and transparent that it will enhance the value of the associated EBCs by mitigating fraud. It provides a real-time method for monetizing the benefits of the material. Substantial amounts of energy are required to produce, transport, and distribute agricultural inputs, including fertilizer and water. Intelligent optimization of the use of agricultural inputs can drive meaningful cost savings. Tagging and verification of product application provides a valuable understanding of the dynamics in the water/food/energy nexus, a major food security and sustainability issue. As technology in agriculture evolves, so too must methods to verify the Enterprise Resource Planning (ERP) potential of innovative solutions. The technology reviewed provides the ability to combine blockchain and taggants ("taggant blockchains") as the engine by which to (1) mitigate fraudulent carbon credits; (2) improve food chain security; and (3) monitor and manage sustainability. The verification of product quality and application is a requirement to validate benefits. Recent upgrades to the humic and fulvic quality protocols known as ISO CD 19822 TC134 offer an analytical procedure. This work has been assisted by the Humic Products Trade Association and the International Humic Substance Society. In addition, proof of application of these products and verification of the correct application of prescriptive humic and bio-stimulant products is required. Individual sources of humate have unique and verifiable characteristics. Additionally, methods for prescription of site-specific agricultural inputs in agricultural fields are available. (See US Patents 734867B2, US 90658633B2.) Finally, a method to assure application rate is required through the use of taggants. Sensors using organic solid-to-liquid phase change nanoparticles of various types and melting temperatures added to the naturally occurring materials provide a barcode. Over 100 types of nanoparticles exist, ensuring numerous possible barcodes to reduce industry fraud. Taggant materials can be collected from soil samples or plant material to validate a blockchain of humic, fulvic and other soil amendment products. Other non-organic materials are also available as taggants; however, the organic tags are biodegradable and safe in the environment, allowing for use across differing application timelines.
Deep neural networks have emerged as a leading set of algorithms to infer information from a variety of data sources such as images and time series data. In their most basic form, neural networks lack the ability to adapt to new classes of information. Continual learning (CL) is a field of study attempting to give previously trained deep learning models the ability to adapt to a changing environment. Previous work developed a CL method called Neurogenesis for Deep Learning (NDL). Here, we combine NDL with a specific neural network architecture (the Ladder Network) to produce a system capable of automatically adapting a classification neural network to new classes of data. The NDL Ladder Network was evaluated against other leading CL methods. While the NDL and Ladder Network system did not match the cutting-edge performance achieved by other CL methods, in most cases it performed comparably, and it is the only system evaluated that can learn new classes of information with no human intervention.
We develop numerical methods for computing statistics of stochastic processes on surfaces of general shape with drift-diffusion dynamics dX_t = a(X_t) dt + b(X_t) dW_t. We formulate descriptions of Brownian motion and general drift-diffusion processes on surfaces. We consider statistics of the form u(x) = E^x[∫_0^τ g(X_t) dt] + E^x[f(X_τ)] for a domain Ω and the exit stopping time τ = inf{t > 0 | X_t ∉ Ω}, where f, g are general smooth functions. For computing these statistics, we develop high-order Generalized Moving Least Squares (GMLS) solvers for the associated surface PDE boundary-value problems based on Backward-Kolmogorov equations. We focus particularly on the mean First Passage Times (FPTs) given by the case f = 0, g = 1, where u(x) = E^x[τ]. We perform studies for a variety of shapes showing our methods converge with high-order accuracy both in capturing the geometry and the surface PDE solutions. We then perform studies showing how statistics are influenced by the surface geometry, drift dynamics, and spatially dependent diffusivities.
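For reference, statistics of this form satisfy a standard Backward-Kolmogorov boundary-value problem (a known result, stated here in generic form; on surfaces the derivatives are the corresponding surface-intrinsic operators):

    L u = −g in Ω,   u = f on ∂Ω,   where   L u = a · ∇u + (1/2) (b b^T) : ∇∇u,

so that the mean FPT case f = 0, g = 1 reduces to solving L u = −1 with u = 0 on the boundary.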
In this report we describe the testing of a novel scheme for state preparation of trapped ions in a quantum computing setup. Optimally, this technique would allow for similar precision and speed of state preparation while allowing individual addressability of single ions in a chain using technology already available in a trapped ion experiment. As quantum computing experiments become more complicated, mid-experiment measurements will become necessary to achieve algorithms such as quantum error correction. Any mid-experiment measurement then requires the measured qubit to be re-prepared to a known quantum state. Currently, this requires the protected qubits to be moved a sizeable distance away from the qubit being re-prepared, which can be costly in terms of experiment length and can introduce errors. Theoretical calculations predict that a three-photon process would allow for state preparation without qubit movement, with efficiencies similar to current state preparation methods.
Plasma etching of semiconductors is an essential process in the production of microchips, which enable nearly every aspect of modern life. Two frequencies of applied voltage are often used to provide control of both the ion flux and the ion energy distribution.
In this work, we study how a contact/impact nonlinearity interacts with a geometric cubic nonlinearity in an oscillator system. Specific focus is given to the effects on bifurcation behavior and secondary resonances (i.e., super- and sub-harmonic resonances). The effects of the individual nonlinearities are first explored for comparison, and then the influences of the combined nonlinearities, varying one parameter at a time, are analyzed and discussed. Nonlinear characterization is then performed on an arbitrary system configuration to study super- and sub-harmonic resonances and grazing contacts or bifurcations. Both the cubic and contact nonlinearities cause a drop in amplitude and an upward shift in frequency for the primary resonance, and they activate high-amplitude subharmonic resonance regions. The nonlinearities never appear to interfere destructively. The contact nonlinearity generally has a stronger effect on the system's superharmonic resonance behavior, particularly with regard to the occurrence of grazing contacts and the activation of many bifurcations in the system's response. The subharmonic resonance behavior is more strongly affected by the cubic nonlinearity and is prone to multistable behavior. Perturbation theory proved useful for determining when the cubic nonlinearity would be dominant compared to the contact nonlinearity. The limiting behaviors of the contact stiffness and freeplay gap size indicate the cubic nonlinearity is dominant overall. It is demonstrated that the presence of contact may result in the activation of several bifurcations. In addition, it is shown that the system's subharmonic resonance region is prone to multistable dynamical responses having distinct magnitudes.
In a quantum network, a key challenge is to minimize the direct reflection of flying qubits as they couple to stationary, resonator-based memory qubits, as the reflected amplitude represents state transfer infidelity that cannot be directly recovered. Optimizing the transfer fidelity can be accomplished by dynamically varying the resonator's coupling rate to the flying qubit field. Here, we analytically derive the optimal coupling rate profile in the presence of intrinsic loss of the quantum memory using an open quantum systems method that can account for intrinsic resonator losses. We show that, since the resonator field must be initially empty, an initial amplitude in the resonator must be generated in order to cancel reflections via destructive interference; moreover, we show that this initial amplitude can be made sufficiently small as to allow the net fidelity of the complete transfer process to be close to unity. We then derive the time-varying resonator coupling that maximizes the state transfer fidelity as a function of the initial population and intrinsic loss rate, providing a complete protocol for optimal quantum state transfer between the flying qubit and resonator qubit. We present analytical expressions and numerical examples of the fidelities for the complete protocol using exponential and Gaussian profiles. We show that a state transfer fidelity of around 99.9% can be reached momentarily before the quantum information is lost due to the intrinsic loss in practical resonators used as quantum memories.
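For context, derivations of this kind start from standard input-output theory for a single resonator mode a(t) with time-varying external coupling κ(t) and intrinsic loss κ_i (generic form, up to sign conventions; the paper's notation may differ):

    da/dt = −[κ(t) + κ_i] a / 2 + √κ(t) a_in(t),
    a_out(t) = a_in(t) − √κ(t) a(t),

and the optimal κ(t) is the profile that nulls the reflected output a_out(t) via destructive interference, as described above.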
Coherent anti-Stokes Raman scattering (CARS) is commonly used for thermometry and concentration measurement of major species. The quadratic scaling of CARS signal with number density has limited the use of CARS for detection of minor species, where more sensitive approaches may be more attractive. However, significant advancements in ultrafast CARS approaches have been made over the past two decades, including the development of hybrid CARS, demonstrated to yield greatly increased excitation efficiencies. Yet, detailed detection limits of hybrid CARS have not been well established. In this Letter, detection limits for N2, H2, CO, and C2H4 by point-wise hybrid femtosecond (fs)/picosecond (ps) CARS are determined to be of the order of 10^15 molecules/cm3. The possible benefit of fs/nanosecond (ns) hybrid CARS is also discussed.
Depleted uranium hexafluoride (UF6), a stockpiled byproduct of the nuclear fuel cycle, reacts readily with atmospheric humidity, but the mechanism is poorly understood. We compare several potential initiation steps at a consistent level of theory, generating underlying structures and vibrational modes using hybrid density functional theory (DFT) and computing relative energies of stationary points with double-hybrid (DH) DFT. A benchmark comparison is performed to assess the quality of DH-DFT data using reference energy differences obtained using a complete-basis-limit coupled-cluster (CC) composite method. The associated large-basis CC computations were enabled by a new general-purpose pseudopotential capability implemented as part of this work. Dispersion-corrected parameter-free DH-DFT methods, namely PBE0-DH-D3(BJ) and PBE-QIDH-D3(BJ), provided mean unsigned errors within chemical accuracy (1 kcal mol^-1) for a set of barrier heights corresponding to the most energetically favorable initiation steps. The hydrolysis mechanism is found to proceed via intermolecular hydrogen transfer within van der Waals complexes involving UF6, UF5OH, and UOF4, in agreement with previous studies, followed by the formation of a previously unappreciated dihydroxide intermediate, UF4(OH)2. The dihydroxide is predicted to form under both kinetic and thermodynamic control, and, unlike the alternate pathway leading to the UO2F2 monomer, its reaction energy is exothermic, in agreement with observation. Finally, harmonic and anharmonic vibrational simulations are performed to reinterpret literature infrared spectroscopy in light of this newly identified species.
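For reference, the benchmark statistic quoted above is the mean unsigned error of the DH-DFT energy differences against the coupled-cluster reference values over the set of N barrier heights, with chemical accuracy taken as 1 kcal mol^-1:

```latex
% Mean unsigned error over N benchmark barrier heights:
\mathrm{MUE} = \frac{1}{N}\sum_{i=1}^{N}
  \bigl|\Delta E_i^{\mathrm{DH\text{-}DFT}} - \Delta E_i^{\mathrm{CC}}\bigr|,
\qquad
\text{chemical accuracy: } \mathrm{MUE} \le 1\ \mathrm{kcal\,mol^{-1}}.
```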
In alkaline zinc–manganese dioxide batteries, there is a need for selective polymeric separators that have good hydroxide-ion conductivity but prevent the transport of zincate, Zn(OH)4^2-. Here we investigate the nanoscale structure and hydroxide transport in two cationic polysulfones that are promising for these separators. We present the synthesis and characterization of a tetraethylammonium-functionalized polysulfone (TEA-PSU) and compare it to our previous work on an N-butylimidazolium-functionalized polysulfone (NBI-PSU). We perform atomistic molecular dynamics (MD) simulations of both polymers at experimentally relevant water contents. The MD simulations show that both polymers develop well phase-separated nanoscale water domains that percolate through the polymer. Calculations of the total scattering intensity from the MD simulations reveal weak or nonexistent ionomer peaks at low wave vectors; the lack of an ionomer peak is due to a loss of contrast in the scattering. The small water domains in both polymers, with median diameters on the order of 0.5–0.7 nm, lead to hydroxide and water diffusion constants that are 1–2 orders of magnitude smaller than their values in bulk water. This confinement lowers the conductivity but may also explain the strong exclusion of zincate from the PSU membranes seen experimentally.
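As a minimal sketch of how such diffusion constants are typically extracted from MD trajectories (using synthetic random-walk data in place of the actual hydroxide trajectories), the Einstein relation D = MSD/(6t) can be applied to the late-time mean-squared displacement:

```python
# Illustrative sketch (not the paper's analysis code): estimating a
# diffusion constant from a trajectory via the Einstein relation.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms, dt = 2000, 64, 1.0e-3  # hypothetical trajectory sizes

# Synthetic random-walk positions standing in for hydroxide trajectories.
positions = np.cumsum(rng.normal(scale=0.05, size=(n_frames, n_atoms, 3)), axis=0)

# Mean-squared displacement, averaged over atoms.
msd = ((positions - positions[0]) ** 2).sum(axis=2).mean(axis=1)
t = np.arange(n_frames) * dt

# Fit the late-time (diffusive) regime; in 3-D, D = slope / 6.
slope = np.polyfit(t[n_frames // 2:], msd[n_frames // 2:], 1)[0]
print(f"D ~ {slope / 6.0:.4g} (trajectory units^2 per time unit)")
```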
This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are: (1) to provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications); and (2) to provide an automated structure for specifying, running, and generating reports on algorithm performance. Seascape uses the open-source packages GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue jobs that run algorithm tests against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
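The following is a purely hypothetical sketch of the queue-run-report flow described above; the function and field names are assumptions for illustration and do not reflect Seascape's actual interfaces.

```python
# Hypothetical illustration of a queue-run-report test-harness flow
# (names and fields are assumptions, not Seascape's actual API).
import json
import time

def run_algorithm(algorithm, dataset):
    """Stand-in for a detection/classification test run; returns dummy metrics."""
    return {"precision": 0.9, "recall": 0.8}  # placeholder results

# Queued jobs specify which algorithm to run against which stored data set.
queue = [{"algorithm": "detector_v1", "dataset": "curated_set_A"}]

reports = []
for job in queue:
    metrics = run_algorithm(job["algorithm"], job["dataset"])
    reports.append({**job, "metrics": metrics, "timestamp": time.time()})

# Reports would then be pushed to an artifact server (Nexus, per the text).
print(json.dumps(reports, indent=2))
```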
Concerns about the safety of lithium-ion batteries have motivated numerous studies on the response of fresh cells to abusive, off-nominal conditions, but studies on aged cells are comparatively rare. This perspective considers all of the open literature on the thermal, electrical, and mechanical abuse response of aged lithium-ion cells and modules to identify critical changes in their behavior relative to fresh cells. We outline data gaps in aged-cell safety, including electrical and mechanical testing and module-level experiments. Understanding how the abuse response of aged cells differs from that of fresh cells will enable the design of more effective energy-storage failure mitigation systems.
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but these notations lack formal refinement and rigorous verification methods. State chart models are typically used to design complex control systems that respond to environmental triggers through a sequential process; the model is usually constructed at a concrete level and verified and validated using animation techniques that rely on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario-checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal-logic model-checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
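A minimal sketch of ‘run to completion’ semantics, for readers unfamiliar with the notation: an external trigger fires one transition, after which enabled internal transitions chain until the machine is quiescent. The state names and transition tables below are hypothetical.

```python
# Minimal sketch of 'run to completion' semantics (hypothetical example,
# not the paper's Event-B translation).

def run_to_completion(state, trigger, external, internal):
    """Fire the external transition for `trigger`, then chain internal
    transitions until none is enabled (the run has 'completed')."""
    state = external[(state, trigger)]  # react to the environment event
    while state in internal:            # keep firing enabled internal transitions
        state = internal[state]
    return state                        # quiescent state

# Hypothetical controller: 'idle' reacts to 'start', then passes through
# two transient states before settling in 'running'.
external = {("idle", "start"): "init"}
internal = {"init": "configure", "configure": "running"}
print(run_to_completion("idle", "start", external, internal))  # -> running
```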
We report an investigation into simultaneously controlling a base-excited dynamical system and enhancing the effectiveness of a piezoelectric energy-harvesting absorber. Amplitude stoppers are included to improve the energy harvested by the absorber, with the possibility of activating broadband resonant regions to increase the absorber's operable range. This study optimizes the stoppers' ability to help the absorber generate energy by investigating asymmetric gap and stiffness configurations. Medium stiffnesses of 5 x 10^4 N/m and 1 x 10^5 N/m have a significant impact on the primary system's dynamics and improve the level of power harvested by the absorber. A single-stopper configuration with a gap distance of 0.02 m improves peak power by 29% and average power by 9% over the symmetric case. Additionally, an asymmetric stiffness configuration with one stopper stiffness of 1 x 10^5 N/m, the other of 5 x 10^3 N/m, and a gap size of 0.02 m yields improvements of 25% and 8% in peak and average harvested power, respectively. Hard-stopper configurations show improvements in both asymmetric cases, but not enough to outperform the system without amplitude stoppers.
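A sketch of the asymmetric stopper law suggested by the configurations above, assuming gaps g_1, g_2 and stiffnesses k_1, k_2 on either side of the absorber's travel (an illustrative piecewise-linear form, not necessarily the study's exact contact model):

```latex
% Piecewise-linear asymmetric stopper force (illustrative):
F_{\mathrm{stop}}(x) =
\begin{cases}
k_1\,(x - g_1), & x > g_1,\\
0, & -g_2 \le x \le g_1,\\
k_2\,(x + g_2), & x < -g_2.
\end{cases}
```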
This paper develops a detailed understanding of how nanofillers function as radiation barriers within a polymer matrix and how their effectiveness is affected by factors such as composition, size, loading, surface chemistry, and dispersion. We designed a comprehensive investigation of heavy-ion irradiation resistance in epoxy-matrix composites loaded with surface-modified ceria nanofillers, utilizing tandem computational and experimental methods to elucidate radiolytic damage processes and relate them to the chemical and structural changes observed through thermal analysis, vibrational spectroscopy, and electron microscopy. A detailed mechanistic examination supported by FTIR spectroscopy data identified the bisphenol A moiety as a primary target for degradation reactions. Results of computational modeling with the Stopping and Range of Ions in Matter (SRIM) Monte Carlo simulation were in good agreement with damage analysis from surface and cross-sectional SEM imaging. All metrics indicated that ceria nanofillers reduce the damage area in polymer nanocomposites and that nanofiller loading and homogeneity of dispersion are key to effective damage prevention. The results of this study represent a significant pathway toward engineered irradiation tolerance in a diverse array of polymer nanocomposite materials; numerous areas of materials science can benefit from this facile and effective method to extend the reliability of polymer materials.
A rapid and facile design strategy to create a highly complex optical tag with programmable, multimodal photoluminescent properties is described, achieved via intrinsic and hidden DNA-fluorophore signatures. As the first covert feature of the tag, a novel heterometallic near-infrared (NIR)-emitting mesoporous metal-organic framework (MOF) was designed and synthesized. The material is constructed from two chemically distinct, homometallic hexanuclear clusters based on Nd and Yb. Uniquely, the Nd-based cluster is observed here for the first time in a MOF and consists of two staggered Nd μ3-oxo trimers. To generate controlled, multimodal, and tailorable emission with difficult-to-counterfeit features, the NIR-emissive MOF was post-synthetically modified via a fluorescent DNA oligo labeling strategy. The surface attachment of several distinct fluorophores, including the simultaneous attachment of up to three distinct fluorescently labeled oligos, was achieved, with excitation and emission properties spanning the visible spectrum (480–800 nm). The inclusion of DNA as a secondary covert element in the tag was demonstrated via the detection of SYBR Gold dye association. Importantly, the approach implemented here serves as a rapid and tailorable way to encrypt distinct information in a facile and modular fashion and provides an innovative technology in the quest toward complex optical tags.
In an x-ray driven cavity experiment, an intense flux of soft x rays on the emitting surface produces significant emission of photoelectrons having several kiloelectronvolts of kinetic energy. At the same time, rapid heating of the emitting surface occurs, resulting in the release of adsorbed surface impurities and subsequent formation of an impurity plasma. This numerical study explores a simple model for the photoelectric currents and the impurity plasma. Attention is given to the effect of varying the composition of the impurity plasma. The presence of protons or hydrogen molecular ions leads to a substantially enhanced cavity current, while heavier plasma ions are seen to have a limited effect on the cavity current due to their lower mobility. Additionally, it is demonstrated that an additional peak in the current waveform can appear due to the impurity plasma. A correlation between the impurity plasma composition and the timing of this peak is elucidated.
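A back-of-envelope estimate consistent with the mobility argument above: an impurity ion of mass m and charge q crossing a gap d under an accelerating potential V has a transit time that grows with the square root of m/q, so protons respond fastest. This is a schematic scaling, not the paper's model.

```latex
% Collisionless transit-time scaling for an ion crossing a gap d:
\tau \approx \frac{d}{v} = d\sqrt{\frac{m}{2qV}} \;\propto\; \sqrt{m/q}.
```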
Efficient and accurate calculation of spatial integrals is of major interest in the numerical implementation of peridynamics (PD). The standard way to perform this calculation is a particle-based approach that discretizes the strong form of the PD governing equation. This approach has been rapidly adopted by the PD community since it offers several advantages: it is computationally cheaper than other available schemes, can conveniently handle material separation, and effectively deals with nonlinear PD models. Nevertheless, PD models are still computationally very expensive compared with those based on classical continuum mechanics theory, particularly for large-scale problems in three dimensions. This results from the nonlocal nature of the PD theory, which leads to interactions of each node of a discretized body with multiple surrounding nodes. Here, we propose a new approach to significantly boost the numerical efficiency of PD models. We propose a discretization scheme that employs a simple collocation procedure and is truly meshfree; i.e., it does not depend on any background integration cells. In contrast to the standard scheme, the proposed scheme requires a much smaller set of neighboring nodes (keeping the same physical length scale) to achieve a given accuracy and is thus computationally more efficient. Our new scheme is applicable to linear PD models and within neighborhoods where the solution can be approximated by smooth basis functions. Therefore, to fully exploit the advantages of both the standard and the proposed schemes, a hybrid discretization is presented that combines both approaches within an adaptive framework. The high performance of the developed framework is illustrated by several numerical examples, including brittle fracture and corrosion problems in two and three dimensions.
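For orientation, the strong-form PD equation of motion and the standard particle discretization referred to above (one node per cell, with neighbor volumes V_j serving as quadrature weights) are:

```latex
% PD equation of motion and its standard meshfree discretization:
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{\mathcal{H}_{\mathbf{x}}}
    \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t) - \mathbf{u}(\mathbf{x},t),\,
                    \mathbf{x}' - \mathbf{x}\bigr)\,dV_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t),
\qquad
\rho_i\,\ddot{\mathbf{u}}_i
  \approx \sum_{j \in \mathcal{H}_i}
    \mathbf{f}\bigl(\mathbf{u}_j - \mathbf{u}_i,\,\mathbf{x}_j - \mathbf{x}_i\bigr)\,V_j
  + \mathbf{b}_i.
% The collocation scheme proposed above replaces this neighbor-volume
% quadrature with a smaller set of nodes and smooth basis functions.
```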
This article evaluates the data retention characteristics of irradiated multilevel-cell (MLC) 3-D NAND flash memories. We irradiated the memory chips with a Co-60 gamma-ray source to doses of up to 50 krad(Si) and then wrote a random data pattern to the irradiated chips to determine their retention characteristics. The experimental results show that the data retention of the irradiated chips is significantly degraded compared to that of un-irradiated ones. We evaluated two independent strategies to improve the data retention characteristics of the irradiated chips: the first involves high-temperature annealing of the irradiated chips, while the second involves preprogramming the memory modules before deploying them into radiation-prone environments.
This review discusses atomistic modeling techniques used to simulate radiation damage in crystalline materials. Radiation damage from energetic particles results in the formation of defects, and the subsequent evolution of these defects over multiple length and time scales requires numerous simulation techniques to model the gamut of behaviors. This work focuses on current and emerging atomistic-scale methodologies for the mechanisms of defect formation at the primary damage state.