In this report, we assess the data recorded by a Distributed Acoustic Sensing (DAS) cable deployed during the Source Physics Experiment, Phase II (DAG) in comparison with data recorded by nearby 4.5-Hz geophones. DAS is a novel recording method with unprecedented spatial resolution, but significant concerns remain about data fidelity as the technology moves into wider use. Here we run a series of tests to quantify the similarity between DAS data and more conventional data and investigate cases where the higher spatial resolution of the DAS can provide new insights into the wavefield. These tests include 1D modeling with seismic refraction and bootstrap uncertainties, assessing the amplitude spectra with distance from the source, measuring the frequency-dependent inter-station coherency, estimating time-dependent phase velocity with beamforming and semblance, and measuring the cross-correlation between the geophone recordings and the particle velocity inferred from the DAS. In most cases, we find high similarity between the two datasets, and the higher spatial resolution of the DAS provides additional detail and new ways of estimating uncertainty.
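The last of these tests, waveform agreement between co-located sensors, reduces to a lagged cross-correlation of two traces. The sketch below is a minimal, generic illustration rather than the report's actual processing chain; the synthetic trace, sample count, and function name are invented for the example.

```python
import numpy as np

def normalized_xcorr(a, b):
    """Zero-normalized cross-correlation of two equal-length traces.
    Returns the peak correlation coefficient and the lag at which it occurs."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")   # correlation at every lag
    best_lag = cc.argmax() - (len(b) - 1)  # shift index so 0 = aligned
    return cc.max(), best_lag

# Sanity check on a synthetic damped sinusoid: a trace correlated with
# itself must peak at coefficient 1.0 and zero lag.
t = np.linspace(0.0, 1.0, 500)
trace = np.sin(2 * np.pi * 10 * t) * np.exp(-3 * t)
peak, lag = normalized_xcorr(trace, trace)
```

A peak near 1.0 indicates near-identical waveforms; a nonzero best lag would reveal a timing offset between the DAS-derived and geophone traces.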
Using chemical kinetic modeling and statistical analysis, we investigate the possibility of correlating key chemical "markers" (typically small molecules) formed during very lean (φ ∼ 0.001) oxidation experiments with near-stoichiometric (φ ∼ 1) fuel ignition properties. One goal of this work is to evaluate the feasibility of designing a fuel-screening platform based on small laboratory reactors that operate at low temperatures and use minimal fuel volume. Buras et al. [Combust. Flame 2020, 216, 472-484] have shown that convolutional neural network (CNN) fitting can be used to correlate first-stage ignition delay times (IDTs) with OH/HO2 measurements during very lean oxidation in low-T flow reactors with better than factor-of-2 accuracy. In this work, we test the limits of applying this correlation-based approach to predict the low-temperature heat release (LTHR) and total IDT, including the sensitivity of total IDT to the equivalence ratio, φ. We demonstrate that first-stage IDT can be reliably correlated with very lean oxidation measurements using compressed sensing (CS), which is simpler to implement than CNN fitting. LTHR can also be predicted via CS analysis, although the correlation quality is somewhat lower than for first-stage IDT. In contrast, the accuracy of total IDT prediction at φ = 1 is significantly lower (within a factor of 4 or worse). These results can be rationalized by the fact that first-stage IDT and LTHR are primarily determined by low-temperature chemistry, whereas total IDT depends on low-, intermediate-, and high-temperature chemistry. Oxidation reactions are most important at low temperatures, and therefore measurements of universal molecular markers of oxidation do not capture the full chemical complexity required to accurately predict the total IDT, even at a single equivalence ratio. As a result, we find that the φ-sensitivity of ignition delay cannot be predicted at all using solely correlations with lean low-T chemical speciation measurements.
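Compressed sensing in this context amounts to sparse linear regression: from many candidate marker features, select the few that explain an ignition property. As a hedged illustration of the idea, not the paper's actual pipeline, the sketch below recovers a 3-sparse coefficient vector with orthogonal matching pursuit on synthetic data; the matrix sizes, sparsity level, and coefficient values are all invented.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: find a coefficient vector x with at
    most n_nonzero entries such that A @ x approximates y."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        j = np.argmax(np.abs(A.T @ residual))       # most correlated column
        support.append(j)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef                   # deflate and repeat
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Synthetic demo: 40 "marker" features, the target depends on only 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
true = np.zeros(40)
true[[3, 17, 29]] = [2.0, -1.5, 0.7]
y = A @ true
x = omp(A, y, n_nonzero=3)
```

With noiseless data and a well-conditioned random feature matrix, the sparse support is recovered exactly; the paper's real measurements would of course add noise and model error on top of this idealized picture.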
We report that the formation of Al3Sc in 100 nm Al0.8Sc0.2 films is driven by exposure to high temperature, either through a higher deposition temperature or through annealing. High film resistivity was observed in films deposited at lower temperature, which lacked crystallinity; this is anticipated to cause more electron scattering. An increase in deposition temperature allows for the nucleation and growth of crystalline Al3Sc regions, which were verified by electron diffraction. The increase in crystallinity reduces electron scattering, resulting in lower film resistivity. Annealing Al0.8Sc0.2 films at 600 °C in an Ar vacuum environment also allows for the formation and recrystallization of Al3Sc and Al and yields saturated resistivity values between 9.58 and 10.5 μΩ-cm regardless of sputter conditions. Al3Sc was found to nucleate and grow in random orientation when deposited on SiO2, and highly {111} textured when deposited on 100 nm Ti and AlN films used as template layers. The rocking curve width of the Al3Sc 111 reflection for films as-deposited on Ti and AlN at 450 °C was 1.79° and 1.68°, respectively. Annealing the film deposited on the AlN template reduced the rocking curve width substantially to 1.01° due to recrystallization of Al3Sc and Al within the film.
Since the discovery of the laser, optical nonlinearities have been at the core of efficient light conversion sources. Typically, thick transparent crystals or quasi-phase-matched waveguides are utilized in conjunction with phase-matching techniques to select a single parametric process. In recent years, due to the rapid developments in artificially structured materials, optical frequency mixing has been achieved at the nanoscale in subwavelength resonators arrayed as metasurfaces. Phase matching becomes relaxed for these wavelength-scale structures, and all allowed nonlinear processes can, in principle, occur on an equal footing. This could promote harmonic generation via a cascaded process consisting of several frequency-mixing steps. However, so far, all reported work on dielectric metasurfaces has assumed frequency mixing from a direct (single-step) nonlinear process. In this work, we prove the existence of cascaded second-order optical nonlinearities by analyzing the second- and third-wave mixing from a highly nonlinear metasurface in conjunction with polarization selection rules and crystal symmetries. We find that the third-wave mixing signal from a cascaded process can be of comparable strength to that from conventional third-harmonic generation and that surface nonlinearities are the dominant mechanism contributing to cascaded second-order nonlinearities in our metasurface.
Digital twins are emerging as powerful tools for supporting innovation as well as optimizing the in-service performance of a broad range of complex physical machines, devices, and components. A digital twin is generally designed to provide an accurate in silico representation of the form (i.e., appearance) and the functional response of a specified (unique) physical twin. This paper offers a new perspective on how the emerging concept of digital twins could be applied to accelerate materials innovation efforts. Specifically, it is argued that the material itself can be considered as a highly complex multiscale physical system whose form (i.e., details of the material structure over a hierarchy of material length scales) and function (i.e., response to external stimuli, typically characterized through suitably defined material properties) can be captured suitably in a digital twin. Accordingly, the digital twin can represent the evolution of the structure, process, and performance of the material over time, with regard to both process history and in-service environment. This paper establishes the foundational concepts and frameworks needed to formulate and continuously update both the form and function of the digital twin of a selected material physical twin. The form of the proposed material digital twin can be captured effectively using the broadly applicable framework of n-point spatial correlations, while its function at the different length scales can be captured using homogenization and localization process-structure-property surrogate models calibrated to collections of available experimental and physics-based simulation data.
This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data is necessary for our research; thus, we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or whole genome DNA reads. The edit detection results obtained with our algorithms, tested against samples held out during training, are significantly better than random guessing, achieving high F1, recall, and precision scores overall.
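The reported metrics follow directly from a confusion matrix on the held-out samples. A minimal sketch of the standard definitions follows; the toy labels are invented for illustration and are not the report's data.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Binary classification metrics; positive class = 1 (e.g. 'edited')."""
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy held-out set: 8 samples, one miss in each direction.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
p, r, f = precision_recall_f1(y_true, y_pred)  # p = 0.75, r = 0.75, f = 0.75
```

F1 is the harmonic mean of precision and recall, so reporting all three (as the study does) guards against a classifier that trades one for the other.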
Fraud in the Environmental Benefit Credit (EBC) markets is pervasive. To make matters worse, the cost of creating EBCs is often higher than the market price. Consequently, a method to create, validate, and verify EBCs and their relevance is needed to mitigate fraud. The EBC market has focused on geologic (fossil fuel) CO2 sequestration projects that are often over budget and behind schedule and has failed to capture the "lowest hanging fruit" of EBCs: terrestrial sequestration via the agricultural industry. This project reviews a methodology to attain possibly the least costly EBCs by tracking the reduction of inputs required to grow crops. The use of bio-stimulant products, such as humate, allows a farmer to use less nitrogen without adversely affecting crop yield. Using less nitrogen qualifies for EBCs by reducing nitrous oxide emissions and nitrate runoff from a farmer's field. A blockchain that tracks the bio-stimulant material from source to application provides a link between the tangible asset (the bio-stimulant commodity) and the associated intangible asset (the EBCs). Covert insertion of taggants in the bio-stimulant products creates a unique barcode that allows a product to be digitally tracked from beginning to end. This blockchain-based process is robust, logical, and transparent enough to enhance the value of the associated EBCs by mitigating fraud, and it provides a real-time method for monetizing the benefits of the material. Substantial amounts of energy are required to produce, transport, and distribute agricultural inputs, including fertilizer and water. Intelligent optimization of the use of agricultural inputs can drive meaningful cost savings. Tagging and verification of product application provide a valuable understanding of the dynamics of the water/food/energy nexus, a major food security and sustainability issue.
As technology in agriculture evolves, so too must methods to verify the Enterprise Resource Planning (ERP) potential of innovative solutions. The technology reviewed provides the ability to combine blockchain and taggants ("taggant blockchains") as the engine by which to (1) mitigate fraudulent carbon credits, (2) improve food chain security, and (3) monitor and manage sustainability. The verification of product quality and application is a requirement to validate benefits. Recent upgrades to the humic and fulvic quality protocols known as ISO CD 19822 (TC 134) offer an analytical procedure. This work has been assisted by the Humic Products Trade Association and the International Humic Substances Society. In addition, proof of application of these products and verification of the correct application of prescriptive humic and bio-stimulant products are required. Individual sources of humate have unique and verifiable characteristics. Additionally, methods for prescription of site-specific agricultural inputs in agricultural fields are available. (See US Patents 734867B2, US 90658633B2.) Finally, a method to assure application rate is required through the use of taggants. Sensors using organic solid-to-liquid phase change nanoparticles of various types and melting temperatures, added to the naturally occurring materials, provide a barcode. Over 100 types of nanoparticles exist, ensuring numerous possible barcodes to reduce industry fraud. Taggant materials can be collected from soil samples or plant material to validate a blockchain of humic, fulvic, and other soil amendment products. Other non-organic materials are also available as taggants; however, the organic tags are biodegradable and safe in the environment, allowing for use during differing application timings.
Deep neural networks have emerged as a leading set of algorithms for inferring information from a variety of data sources such as images and time series. In their most basic form, neural networks lack the ability to adapt to new classes of information. Continual learning (CL) is a field of study attempting to give previously trained deep learning models the ability to adapt to a changing environment. Previous work developed a CL method called Neurogenesis for Deep Learning (NDL). Here, we combine NDL with a specific neural network architecture (the Ladder Network) to produce a system capable of automatically adapting a classification neural network to new classes of data. The NDL Ladder Network was evaluated against other leading CL methods. While the NDL and Ladder Network system did not match the cutting-edge performance achieved by other CL methods, in most cases it performed comparably, and it is the only system evaluated that can learn new classes of information with no human intervention.
We develop numerical methods for computing statistics of stochastic processes on surfaces of general shape with drift-diffusion dynamics $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$. We formulate descriptions of Brownian motion and general drift-diffusion processes on surfaces. We consider statistics of the form $u(x) = \mathbb{E}^x\!\left[\int_0^\tau g(X_t)\,dt\right] + \mathbb{E}^x\!\left[f(X_\tau)\right]$ for a domain $\Omega$ and the exit stopping time $\tau = \inf_t\{t > 0 \mid X_t \notin \Omega\}$, where $f, g$ are general smooth functions. For computing these statistics, we develop high-order Generalized Moving Least Squares (GMLS) solvers for the associated surface PDE boundary-value problems based on Backward-Kolmogorov equations. We focus particularly on the mean First Passage Times (FPTs) given by the case $f = 0$, $g = 1$, where $u(x) = \mathbb{E}^x[\tau]$. We perform studies for a variety of shapes showing our methods converge with high-order accuracy both in capturing the geometry and the surface PDE solutions. We then perform studies showing how statistics are influenced by the surface geometry, drift dynamics, and spatially dependent diffusivities.
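The f = 0, g = 1 case can be sanity-checked against direct path simulation. The sketch below is a one-dimensional stand-in (a flat interval rather than a curved surface): Euler–Maruyama paths of standard Brownian motion are run until exit, and the empirical mean exit time is compared with the closed-form Backward-Kolmogorov solution u(x) = x(L − x). The step size and path count are illustrative choices, not the paper's method.

```python
import numpy as np

def mean_fpt_mc(x0, L=1.0, dt=1e-4, n_paths=2000, seed=1):
    """Euler-Maruyama estimate of the mean exit time E_x[tau] for
    standard Brownian motion dX_t = dW_t on the interval (0, L)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    t = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        t[alive] += dt                     # advance only surviving paths
        alive &= (x > 0) & (x < L)         # a path dies once it exits
    return t.mean()

# Backward-Kolmogorov solution for f = 0, g = 1: u(x) = x (L - x),
# so u(0.5) = 0.25 on the unit interval.
est = mean_fpt_mc(0.5)
```

The Monte Carlo estimate carries both sampling error and a small positive discretization bias (paths are only monitored at multiples of dt), which is exactly the kind of error the high-order GMLS surface solvers avoid.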
Plasma etching of semiconductors is an essential process in the production of microchips, which enable nearly every aspect of modern life. Two frequencies of applied voltage are often used to provide control of both the ion flux and the ion energy distribution.
In this report we describe the testing of a novel scheme for state preparation of trapped ions in a quantum computing setup. Ideally, this technique would allow for similar precision and speed of state preparation while permitting individual addressability of single ions in a chain using technology already available in a trapped-ion experiment. As quantum computing experiments become more complicated, mid-experiment measurements will become necessary for algorithms such as quantum error correction. Any mid-experiment measurement then requires the measured qubit to be re-prepared in a known quantum state. Currently this requires the protected qubits to be moved a sizeable distance away from the qubit being re-prepared, which can be costly in terms of experiment length and can introduce errors. Theoretical calculations predict that a three-photon process would allow for state preparation without qubit movement, with efficiencies similar to current state preparation methods.
In this work, we study how a contact/impact nonlinearity interacts with a geometric cubic nonlinearity in an oscillator system. Particular attention is given to the effects on bifurcation behavior and secondary resonances (i.e., super- and sub-harmonic resonances). The effects of the individual nonlinearities are first explored for comparison, and then the influences of the combined nonlinearities, varying one parameter at a time, are analyzed and discussed. Nonlinear characterization is then performed on an arbitrary system configuration to study super- and sub-harmonic resonances and grazing contacts or bifurcations. Both the cubic and contact nonlinearities cause a drop in amplitude and an upward shift in frequency for the primary resonance, and they activate high-amplitude subharmonic resonance regions. The nonlinearities never appear to interfere destructively. The contact nonlinearity generally has a stronger effect on the system's superharmonic resonance behavior, particularly with regard to the occurrence of grazing contacts and the activation of many bifurcations in the system's response. The subharmonic resonance behavior is more strongly affected by the cubic nonlinearity and is prone to multistable behavior. Perturbation theory proved useful for determining when the cubic nonlinearity would be dominant compared to the contact nonlinearity. The limiting behaviors of the contact stiffness and freeplay gap size indicate the cubic nonlinearity is dominant overall. It is demonstrated that the presence of contact may result in the activation of several bifurcations. In addition, it is shown that the system's subharmonic resonance region is prone to multistable dynamical responses having distinct magnitudes.
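The combined nonlinearity studied here can be summarized by a restoring force with a smooth cubic term plus a piecewise-linear freeplay contact that engages only beyond a gap. The sketch below is a generic illustration; the stiffness values and gap size are invented, not the paper's configuration.

```python
import numpy as np

def restoring_force(x, k_lin=1.0, k_cubic=0.5, k_contact=20.0, gap=0.1):
    """Total restoring force: linear + cubic (geometric) stiffness plus a
    freeplay contact that engages only when |x| exceeds the gap."""
    contact = np.where(np.abs(x) > gap,
                       k_contact * (np.abs(x) - gap) * np.sign(x),
                       0.0)
    return k_lin * x + k_cubic * x**3 + contact

# Inside the gap (|x| <= 0.1) only the smooth nonlinearity acts; at the
# gap the contact force starts from zero, so the total force is
# continuous but its slope jumps -- the source of grazing bifurcations.
f_inside = restoring_force(0.05)
f_outside = restoring_force(0.2)
```

The continuous force with a discontinuous stiffness is what distinguishes grazing contacts from the smooth cubic term: trajectories that just touch |x| = gap feel an abrupt change in slope rather than in force.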
In a quantum network, a key challenge is to minimize the direct reflection of flying qubits as they couple to stationary, resonator-based memory qubits, as the reflected amplitude represents state transfer infidelity that cannot be directly recovered. Optimizing the transfer fidelity can be accomplished by dynamically varying the resonator's coupling rate to the flying qubit field. Here, we analytically derive the optimal coupling rate profile in the presence of intrinsic loss of the quantum memory using an open quantum systems method that can account for intrinsic resonator losses. We show that, since the resonator field must be initially empty, an initial amplitude in the resonator must be generated in order to cancel reflections via destructive interference; moreover, we show that this initial amplitude can be made sufficiently small that the net fidelity of the complete transfer process remains close to unity. We then derive the time-varying resonator coupling that maximizes the state transfer fidelity as a function of the initial population and intrinsic loss rate, providing a complete protocol for optimal quantum state transfer between the flying qubit and the resonator qubit. We present analytical expressions and numerical examples of the fidelities for the complete protocol using exponential and Gaussian profiles. We show that a state transfer fidelity of around 99.9% can be reached momentarily before the quantum information is lost to the intrinsic loss of practical resonators used as quantum memories.
Depleted uranium hexafluoride (UF6), a stockpiled byproduct of the nuclear fuel cycle, reacts readily with atmospheric humidity, but the mechanism is poorly understood. We compare several potential initiation steps at a consistent level of theory, generating underlying structures and vibrational modes using hybrid density functional theory (DFT) and computing relative energies of stationary points with double-hybrid (DH) DFT. A benchmark comparison is performed to assess the quality of the DH-DFT data using reference energy differences obtained with a complete-basis-limit coupled-cluster (CC) composite method. The associated large-basis CC computations were enabled by a new general-purpose pseudopotential capability implemented as part of this work. Dispersion-corrected, parameter-free DH-DFT methods, namely PBE0-DH-D3(BJ) and PBE-QIDH-D3(BJ), provided mean unsigned errors within chemical accuracy (1 kcal mol−1) for a set of barrier heights corresponding to the most energetically favorable initiation steps. The hydrolysis mechanism is found to proceed via intermolecular hydrogen transfer within van der Waals complexes involving UF6, UF5OH, and UOF4, in agreement with previous studies, followed by the formation of a previously unappreciated dihydroxide intermediate, UF4(OH)2. The dihydroxide is predicted to form under both kinetic and thermodynamic control, and, unlike the alternate pathway leading to the UO2F2 monomer, its formation is exothermic, in agreement with observation. Finally, harmonic and anharmonic vibrational simulations are performed to reinterpret literature infrared spectroscopy in light of this newly identified species.
Coherent anti-Stokes Raman scattering (CARS) is commonly used for thermometry and concentration measurement of major species. The quadratic scaling of CARS signal with number density has limited the use of CARS for detection of minor species, where more sensitive approaches may be more attractive. However, significant advancements in ultrafast CARS approaches have been made over the past two decades, including the development of hybrid CARS demonstrated to yield greatly increased excitation efficiencies. Yet, detailed detection limits of hybrid CARS have not been well established. In this Letter, detection limits for N2, H2, CO, and C2H4 by point-wise hybrid femtosecond (fs)/picosecond (ps) CARS are determined to be of the order of 1015 molecules/cm3. Here, the possible benefit of fs/nanosecond (ns) hybrid CARS is also discussed.
In alkaline zinc–manganese dioxide batteries, there is a need for selective polymeric separators that have good hydroxide ion conductivity but prevent the transport of zincate, Zn(OH)4^2−. Here we investigate the nanoscale structure and hydroxide transport in two cationic polysulfones that are promising for these separators. We present the synthesis and characterization of a tetraethylammonium-functionalized polysulfone (TEA-PSU) and compare it to our previous work on an N-butylimidazolium-functionalized polysulfone (NBI-PSU). We perform atomistic molecular dynamics (MD) simulations of both polymers at experimentally relevant water contents. The MD simulations show that both polymers develop well phase-separated nanoscale water domains that percolate through the polymer. Calculation of the total scattering intensity from the MD simulations reveals weak or nonexistent ionomer peaks at low wave vectors; the lack of an ionomer peak is due to a loss of contrast in the scattering. The small water domains in both polymers, with median diameters on the order of 0.5–0.7 nm, lead to hydroxide and water diffusion constants that are 1–2 orders of magnitude smaller than their values in bulk water. This confinement lowers the conductivity but may also explain the strong exclusion of zincate from the PSU membranes seen experimentally.
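Diffusion constants such as those quoted above are typically extracted from MD trajectories via the Einstein relation, fitting the mean-squared displacement (MSD) against lag time. A generic sketch follows, validated on an ideal random walk rather than the paper's trajectories; all array shapes and parameters are invented for illustration.

```python
import numpy as np

def diffusion_constant(positions, dt, dim=3):
    """Einstein-relation estimate: MSD(t) = 2 * dim * D * t, for a
    trajectory array of shape (n_frames, n_particles, dim)."""
    n = positions.shape[0]
    lags = np.arange(1, n // 10)             # restrict to short lags
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag])**2,
                                   axis=-1)) for lag in lags])
    t = lags * dt
    slope = np.sum(t * msd) / np.sum(t * t)  # least-squares fit through origin
    return slope / (2 * dim)

# Validation on an ideal 3D random walk with known D = 0.5: each step has
# per-component variance 2 * D * dt.
rng = np.random.default_rng(2)
D_true, dt = 0.5, 1e-3
steps = rng.standard_normal((2000, 30, 3)) * np.sqrt(2 * D_true * dt)
D_est = diffusion_constant(np.cumsum(steps, axis=0), dt)
```

Restricting the fit to short lag times is a common practical choice: long lags have few independent windows and dominate the noise, which matters when comparing confined and bulk diffusion over 1–2 orders of magnitude.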
This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are (1) to provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications), and (2) to provide an automated structure for specifying, running, and generating reports on algorithm performance. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue up jobs that run algorithm tests against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
Concerns about the safety of lithium-ion batteries have motivated numerous studies on the response of fresh cells to abusive, off-nominal conditions, but studies on aged cells are relatively rare. This perspective considers all open literature on the thermal, electrical, and mechanical abuse response of aged lithium-ion cells and modules to identify critical changes in their behavior relative to fresh cells. We outline data gaps in aged cell safety, including electrical and mechanical testing, and module-level experiments. Understanding how the abuse response of aged cells differs from fresh cells will enable the design of more effective energy storage failure mitigation systems.
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. State chart models are typically used to design complex control systems that respond to environmental triggers with a sequential process. The model is usually constructed at a concrete level and verified and validated using animation techniques relying on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal logic, model checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
This paper develops a detailed understanding of how nanofillers function as radiation barriers within a polymer matrix, and how their effectiveness is impacted by factors such as composition, size, loading, surface chemistry, and dispersion. We designed a comprehensive investigation of heavy-ion irradiation resistance in epoxy matrix composites loaded with surface-modified ceria nanofillers, utilizing tandem computational and experimental methods to elucidate radiolytic damage processes and relate them to chemical and structural changes observed through thermal analysis, vibrational spectroscopy, and electron microscopy. A detailed mechanistic examination supported by FTIR spectroscopy data identified the bisphenol A moiety as a primary target for degradation reactions. Results of computational modeling by the Stopping and Range of Ions in Matter (SRIM) Monte Carlo simulation were in good agreement with damage analysis from surface and cross-sectional SEM imaging. All metrics indicated that ceria nanofillers reduce the damage area in polymer nanocomposites, and that nanofiller loading and homogeneity of dispersion are key to effective damage prevention. The results of this study represent a significant pathway toward engineered irradiation tolerance in a diverse array of polymer nanocomposite materials. Numerous areas of materials science can benefit from this facile and effective method to extend the reliability of polymer materials.
A rapid and facile design strategy to create a highly complex optical tag with programmable, multimodal photoluminescent properties is described. This was achieved via intrinsic and DNA-fluorophore hidden signatures. As a first covert feature of the tag, an intricate, novel heterometallic near-infrared (NIR)-emitting mesoporous metal-organic framework (MOF) was designed and synthesized. The material is constructed from two chemically distinct, homometallic hexanuclear clusters based on Nd and Yb. Uniquely, the Nd-based cluster is observed here for the first time in a MOF and consists of two staggered Nd μ3-oxo trimers. To generate controlled, multimodal, and tailorable emission with difficult-to-counterfeit features, the NIR-emissive MOF was post-synthetically modified via a fluorescent DNA oligo labeling design strategy. The surface attachment of several distinct fluorophores, including the simultaneous attachment of up to three distinct fluorescently labeled oligos, was achieved, with excitation and emission properties across the visible spectrum (480-800 nm). The DNA inclusion as a secondary covert element in the tag was demonstrated via the detection of SYBR Gold dye association. Importantly, the approach implemented here serves as a rapid and tailorable way to encrypt distinct information in a facile and modular fashion and provides an innovative technology in the quest toward complex optical tags.
We investigate the simultaneous control of a base-excited dynamical system and enhancement of the effectiveness of a piezoelectric energy harvesting absorber. Amplitude stoppers are included to improve the energy harvested by the absorber, with the possibility of activating broadband resonant regions to increase the operable range of the absorber. This study optimizes the stoppers' ability to help the energy harvesting absorber generate energy by investigating asymmetric gap and stiffness configurations. Medium stiffnesses of 5 × 10^4 N/m and 1 × 10^5 N/m show significant impact on the primary system's dynamics and improvement in the level of harvested power for the absorber. A single-stopper configuration with a gap distance of 0.02 m improves peak power by 29% and average power by 9% over the symmetric case. Additionally, an asymmetric stiffness configuration with one stopper stiffness of 1 × 10^5 N/m, the other of 5 × 10^3 N/m, and a gap size of 0.02 m yields improvements of 25% and 8% in peak and average harvested power, respectively. Hard stopper configurations show improvements in both asymmetric cases, but not enough to outperform the system without amplitude stoppers.
Computer Methods in Applied Mechanics and Engineering
Shojaei, Arman; Hermann, Alexander; Cyron, Christian J.; Seleson, Pablo; Silling, Stewart
Efficient and accurate calculation of spatial integrals is of major interest in the numerical implementation of peridynamics (PD). The standard way to perform this calculation is a particle-based approach that discretizes the strong form of the PD governing equation. This approach has rapidly been adopted by the PD community since it offers several advantages: it is computationally cheaper than other available schemes, can conveniently handle material separation, and effectively deals with nonlinear PD models. Nevertheless, PD models are still computationally very expensive compared with those based on the classical continuum mechanics theory, particularly for large-scale problems in three dimensions. This results from the nonlocal nature of the PD theory, which leads to interactions of each node of a discretized body with multiple surrounding nodes. Here, we propose a new approach to significantly boost the numerical efficiency of PD models. We propose a discretization scheme that employs a simple collocation procedure and is truly meshfree; i.e., it does not depend on any background integration cells. In contrast to the standard scheme, the proposed scheme requires a much smaller set of neighboring nodes (keeping the same physical length scale) to achieve a given accuracy and is thus computationally more efficient. Our new scheme is applicable to linear PD models and within neighborhoods where the solution can be approximated by smooth basis functions. Therefore, to fully exploit the advantages of both the standard and the proposed schemes, a hybrid discretization is presented that combines both approaches within an adaptive framework. The high performance of the developed framework is illustrated by several numerical examples, including brittle fracture and corrosion problems in two and three dimensions.