Recent work has shown that artificial opsonins stimulate the targeted destruction of bacteria by phagocytic immune cells. Artificial opsonization has the potential to direct the innate immune system to target novel antigens, potentially even viral pathogens. Furthermore, engaging innate immunity presents a potential solution to pandemic spread in scenarios where a vaccine is unavailable or ineffective. Funded by the LDRD late-start bioscience pandemic response program, we tested whether artificial opsonins can be developed to target viral pathogens, using phage MS2 and a SARS-CoV-2 surrogate. To direct opsonization against these viruses, we purified antibody-derived viral-targeting motifs and attempted the same chemical conjugation strategies that produced bacteria-targeting artificial opsonins. However, the viral-targeting motifs proved challenging to conjugate with these methods, frequently resulting in precipitation and loss of product. Future studies may succeed with this approach if a smaller, more soluble viral-targeting peptide can be used.
Seven generation III+ and generation IV nuclear reactor types, drawn from a survey of twelve reactor concepts, are examined using functional decomposition to extract relevant operational technology (OT) architecture information. This information is compared to the OT architectures of existing nuclear power plants (NPPs) to highlight novel and emergent cyber risks associated with next-generation NPPs. These insights can help inform OT architecture requirements that will be unique to a given reactor type. Next-generation NPPs have streamlined OT architectures relative to the current generation II commercial NPP fleet. Overall, without compensatory measures that provide sufficient and efficient cybersecurity controls, next-generation NPPs will have increased cyber risk. Verification and validation of cyber-physical testbeds and cyber risk assessment methodologies may be an important next step to reduce cyber risk in the OT architecture design and testing phase. Coordination with safety requirements can make OT architecture design an iterative process.
The Material Protection, Accounting, and Control Technologies program utilizes modeling and simulation to assess Material Control and Accountability (MC&A) concerns for a variety of nuclear facilities. Single-analyst tools allow for rapid design and evaluation of advanced approaches for new and existing nuclear facilities. A low enriched uranium (LEU) fuel conversion and fabrication facility simulator is developed to assist with MC&A for existing facilities. Measurements are added to the model, consistent with current best practices, along with material balance calculations and statistical tests. In addition, scoping work is performed toward developing a single-stage aqueous reprocessing model. Preliminary results are presented and discussed, and next steps are outlined.
This report provides a summary of notes for building and running the Sandia Computational Engine for Particle Transport for Radiation Effects (SCEPTRE) code. SCEPTRE is a general-purpose C++ code for solving the linear Boltzmann transport equation in serial or parallel using unstructured spatial finite elements, multigroup energy treatment, and a variety of angular treatments including discrete ordinates (Sn) and spherical harmonics (Pn). Either the first-order form of the Boltzmann equation or one of the second-order forms may be solved. SCEPTRE requires a small number of open-source third-party libraries (TPLs) to be available, and example scripts for building these TPLs are provided. The TPLs needed by SCEPTRE are Trilinos, Boost, and NetCDF. SCEPTRE uses an autotools build system, and a sample configure script is provided. Running the SCEPTRE code requires that the user provide a spatial finite-element mesh in Exodus format and a cross section library in a format that will be described. SCEPTRE uses an XML-based input, and several examples will be provided.
The aim of this project was to advance single-cell RNA-Seq methods toward the establishment of a platform that may be used to simultaneously interrogate the gene expression profiles of mammalian host cells and bacterial pathogens. Existing genetic sequencing methods that measure bulk groups of cells do not account for the heterogeneity of cell-microbe interactions that occur within a complex environment, have limited efficiency, and cannot simultaneously interrogate bacterial sequences. To overcome these challenges, separate biochemistry workflows were developed based on a Not-So-Random hexamer priming strategy or libraries of targeted molecular probes. Computational tools were developed to facilitate these methods, and feasibility was demonstrated for single-cell RNA-Seq of both bacterial and mammalian transcriptomes. This work supports cross-agency national priorities on addressing the threat of biological pathogens and understanding the role of the microbiome in modulating immunity and susceptibility to infection.
Using an optical microscopy setup adapted to in-situ studies of ice formation at ambient pressure, we examined a specific multicomponent mineral, microcline, with the ultimate aim of gaining a more realistic understanding of ice nucleation in Earth’s atmosphere. We focused on a perthitic feldspar, microcline, to test the hypothesis that the coexistence of K-rich and Na-rich phases in some feldspars contributes to enhanced ice nucleation. On a sample deliberately chosen to contain lamellae, a typical perthitic microstructure, adjacent to flat surface regions, we performed a series of ice formation experiments. We found microcline to promote ice formation, causing a large number of ice nucleation events at around -27 °C. The number of ice nuclei decreased from experimental run to experimental run, indicating surface aging upon repeated exposure to humidity. An analysis of 10 experimental runs under identical conditions did not reveal an obvious enhancement of ice formation at the lamellar microstructure. Instead, we find efficient nucleation at various surface sites that produce orientationally aligned ice crystallites with asymmetric shape. Based on this observation, we propose that surface steps running along select directions produce microfacets of an orientation that is favorable to enhanced ice nucleation, similar to what has previously been reported for K-rich feldspars.
Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia’s Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. These data are combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia’s ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool, which produced 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia’s HPC users.
An exceptional set of newly discovered advanced superalloys known as refractory high-entropy alloys (RHEAs) can provide near-term solutions for the wear, erosion, corrosion, high-temperature strength, creep, and radiation issues associated with supercritical carbon dioxide (sCO2) Brayton Cycles and advanced nuclear reactors. In particular, these superalloys can significantly extend the durability, reliability, and thermal efficiency of such systems, making them more cost-competitive, safer, and more reliable. In this project, we endeavored to manufacture and test selected RHEAs to solve technical issues impacting the Brayton Cycle and advanced nuclear reactors, leveraging Sandia’s patents, technical advances, and previous experience working with RHEAs. Three RHEA manufacturing methods were applied: laser engineered net shaping, spark plasma sintering, and spray coating. Two promising RHEAs were selected, HfNbTaZr and MoNbTaVW. To demonstrate their performance, erosion, structural, radiation, and high-temperature experiments were conducted on the RHEAs, stainless steel (SS) 316L, SS 1020, and Inconel 718 test coupons, as well as bench-top components. The experimental data are presented and analyzed, and they confirm the superior performance of the HfNbTaZr and MoNbTaVW RHEAs versus SS 316L, SS 1020, and Inconel 718. In addition, to gain more insight for larger-scale RHEA applications, the erosion and structural capabilities of the two RHEAs were simulated and compared with the experimental data. Most importantly, the erosion and coating material data show that erosion in sCO2 Brayton Cycles can be eliminated completely if RHEAs are used. The experimental suite and validations confirm that HfNbTaZr is suitable for harsh environments that do not include nuclear radiation, while MoNbTaVW is suitable for harsh environments that include radiation.
A new copper equation of state (EOS) is developed utilizing the available experimental data in addition to recent theoretical calculations. Semi-empirical models are fit to the data, and the results are tabulated in the SNL SESAME format. Comparisons to other copper EOS tables are given, along with recommendations on which tables provide the best accuracy.
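The abstract does not identify the specific semi-empirical forms used; purely as illustrative context, a Mie-Grüneisen-type relation is one form commonly fit and tabulated in SESAME-style EOS work:

$$ P(\rho, E) = P_{\mathrm{ref}}(\rho) + \rho\,\Gamma(\rho)\,\bigl[E - E_{\mathrm{ref}}(\rho)\bigr], $$

where $P_{\mathrm{ref}}$ and $E_{\mathrm{ref}}$ are the pressure and specific internal energy along a chosen reference curve (for example, a cold curve or the principal Hugoniot) and $\Gamma(\rho)$ is the Grüneisen parameter calibrated to data.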
Deep neural networks (NNs) typically outperform traditional machine learning (ML) approaches for complicated, non-linear tasks. It is expected that deep learning (DL) should offer superior performance for the important non-proliferation task of predicting explosive device configuration based upon observed optical signature, a task with which human experts struggle. However, supervised machine learning is difficult to apply in this mission space because most recorded signatures are not associated with the corresponding device description, or “truth labels.” This is challenging for NNs, which traditionally require many samples for strong performance. Semi-supervised learning (SSL), low-shot learning (LSL), and uncertainty quantification (UQ) for NNs are emerging approaches that could bridge the mission gaps of few labels and rare samples of importance. NN explainability techniques are important in gaining insight into the inferential feature importance of such a complex model. In this work, SSL, LSL, and UQ are merged into a single framework, a significant technical hurdle not previously demonstrated. Exponential Average Adversarial Training (EAAT) and Pairwise Neural Networks (PNNs) are chosen as the SSL and LSL methods, respectively. Permutation feature importance (PFI) for functional data is used to provide explainability via the Variable importance Explainable Elastic Shape Analysis (VEESA) pipeline. A variety of uncertainty quantification approaches are explored: Bayesian Neural Networks (BNNs), ensemble methods, concrete dropout, and evidential deep learning. Two final approaches, one utilizing ensemble methods and one utilizing evidential learning, are constructed and compared using a well-quantified synthetic 2D dataset along with the DIRSIG Megascene.
Cybersecurity for industrial control systems is an important consideration that advanced reactor designers will need to address. How cyber risk is managed is the subject of ongoing research and debate in the nuclear industry. This report seeks to identify potential cyber risks for advanced reactors. Identified risks are divided into absorbed risk and licensee-managed risk to clearly show how cyber risks for advanced reactors can potentially be transferred. Absorbed risks are risks that originate external to the licensee but may unknowingly propagate into the plant. Insights include (1) the need for unification of safety, physical security, and cybersecurity risk assessment frameworks to ensure optimal coordination of risk, (2) a quantitative risk assessment methodology used in conjunction with qualitative assessments may be useful in efficiently and sufficiently managing cyber risks, and (3) cyber risk management techniques should align with a risk-informed regulatory framework for advanced reactors.
We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and to each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
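The abstract does not spell out the selection loop in detail. The toy Python sketch below, in which polynomial fits stand in for parameterized processor models and a residual misfit stands in for the wildcard-error metric, is only meant to illustrate the shape of the nested-model scan described above; it is not the actual characterization code.

```python
import numpy as np

# Toy stand-ins: in a real characterization these would be maximum-likelihood fits of
# parameterized quantum processor models and the wildcard-error metric.
def fit_model(n_params, data):
    # Fit an n_params-degree polynomial as a stand-in for a nested model family.
    return np.polyfit(data[:, 0], data[:, 1], n_params)

def unmodeled_error(params, data):
    # Residual misfit left after the best fit (playing the role of wildcard error).
    return float(np.sqrt(np.mean((np.polyval(params, data[:, 0]) - data[:, 1]) ** 2)))

def characterize(data, max_complexity=4):
    report = []
    for k in range(1, max_complexity + 1):       # nested: each model contains the last
        best = fit_model(k, data)
        report.append((k, best, unmodeled_error(best, data)))
    return report                                # (model, best fit, unmodeled error) per step

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
data = np.column_stack([x, np.sin(3 * x) + 0.02 * rng.standard_normal(50)])
for k, _, err in characterize(data):
    print(f"model complexity {k}: unmodeled error ~ {err:.4f}")
```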
Downscaling of silicon metal-oxide-semiconductor field-effect transistor technology is expected to reach a fundamental limit soon, and a paradigm shift in computing is occurring. Spin field-effect transistors are considered a candidate architecture for next-generation microelectronics. By leveraging the existing infrastructure for silicon, a spin field-effect transistor technology based on group IV heterostructures would have unparalleled technical and economic advantages. For the same material-platform reason, germanium hole quantum dots are also considered a competitive architecture for semiconductor-based quantum technology. In this project, we investigated several approaches to creating hole devices in germanium-based materials as well as injecting hole spins into such structures. We also explored the role of hole injection in wet chemical etching of germanium. Our main results include the demonstration of germanium metal-oxide-semiconductor field-effect transistors operated at cryogenic temperatures, ohmic current-voltage characteristics in germanium/silicon-germanium heterostructures with ferromagnetic contacts at deep cryogenic temperatures and high magnetic fields, evaluation of the effects of surface preparation on carrier mobility in germanium/silicon-germanium heterostructures, and hole spin polarization through integrated permanent magnets. These results serve as essential components for fabricating next-generation germanium-based devices for microelectronics and quantum systems.
The ability to model ductile rupture in metal parts is critical in highly stressed applications. The initiation of ductile fracture is a function of the plastic strain, the stress state, and the stress history. This paper develops a ductile rupture failure surface for PH13-8Mo H950 steel using the Xue-Wierzbicki failure model. The model is developed using data from five tensile specimen tests conducted at -40 °C and 20 °C. The specimens are designed to cover a Lode parameter range from 0 to 1 and a stress triaxiality range from zero in pure shear to approximately 1.0 in tension. The failure surface can be implemented directly in a finite element code or used as a post-processing check.
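As context for the model named above, one form of the Xue-Wierzbicki fracture locus frequently quoted in the literature expresses the equivalent strain to fracture as a function of the stress triaxiality $\eta$ and the normalized Lode parameter $\xi$; the calibrated form used in this work may differ:

$$ \bar{\varepsilon}_f(\eta,\xi) = C_1 e^{-C_2 \eta} - \left( C_1 e^{-C_2 \eta} - C_3 e^{-C_4 \eta} \right) \left( 1 - \xi^{1/n} \right)^{n}, $$

with the constants $C_1$ through $C_4$ and $n$ determined from the tensile specimen tests.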
Arithmetic Coding (AC) using Prediction by Partial Matching (PPM) is a compression algorithm that can also be used as a machine learning algorithm. This paper describes a new algorithm, NGram PPM. NGram PPM has all the predictive power of AC/PPM at a fraction of the computational cost. Unlike compression-based analytics, it is also amenable to a vector space interpretation, which enables integration with other traditional machine learning algorithms. AC/PPM is reviewed, including its application to machine learning. NGram PPM is then described, and test results comparing it to AC/PPM are presented.
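The abstract does not detail the NGram PPM algorithm itself. As a hedged illustration of the underlying idea (scoring a sample by the code length an n-gram predictor assigns to it and picking the class with the shortest code), here is a self-contained toy sketch; the Laplace smoothing below is a crude stand-in for PPM's escape/backoff mechanism, not the NGram PPM method.

```python
import math
from collections import defaultdict

class NGramModel:
    """Toy per-class n-gram model scored by code length (-log2 probability)."""
    def __init__(self, order=3):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        padded = "\x00" * self.order + text
        for i in range(self.order, len(padded)):
            ctx, sym = padded[i - self.order:i], padded[i]
            self.counts[ctx][sym] += 1

    def code_length(self, text):
        padded = "\x00" * self.order + text
        bits = 0.0
        for i in range(self.order, len(padded)):
            ctx, sym = padded[i - self.order:i], padded[i]
            seen = self.counts.get(ctx, {})
            total = sum(seen.values())
            # Laplace smoothing over a 256-symbol alphabet: a crude substitute
            # for PPM's escape/backoff to lower-order contexts.
            p = (seen.get(sym, 0) + 1) / (total + 256)
            bits += -math.log2(p)
        return bits

def classify(sample, models):
    # Shortest code length (best "compression") picks the class.
    return min(models, key=lambda label: models[label].code_length(sample))

models = {"english": NGramModel(), "digits": NGramModel()}
models["english"].train("the quick brown fox jumps over the lazy dog " * 20)
models["digits"].train("3141592653589793238462643383279502884197 " * 20)
print(classify("jumps over the dog", models))   # -> english
```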
This project focused on providing a fundamental physico-chemical understanding of the coupling mechanisms of corrosion- and radiation-induced degradation at material-salt interfaces in Ni-based alloys operating in emulated Molten Salt Reactor (MSR) environments, through a unique suite of aging experiments, in-situ nanoscale characterization experiments on these materials, and multi-physics computational models. The technical basis and capabilities described in this report bring us a step closer to accelerating the deployment of MSRs by closing knowledge gaps related to materials degradation in harsh environments.
While computer systems, software applications, and operational technology (OT)/Industrial Control System (ICS) devices are regularly updated through automated and manual processes, there are several unique challenges associated with distributed energy resource (DER) patching. Millions of DER devices from dozens of vendors have been deployed in home, corporate, and utility network environments that may or may not be internet-connected. These devices make up a growing portion of the electric power critical infrastructure system and are expected to operate for decades. During that operational period, it is anticipated that critical and noncritical firmware patches will be regularly created to improve DER functional capabilities or repair security deficiencies in the equipment. The SunSpec/Sandia DER Cybersecurity Workgroup created a Patching Subgroup to investigate appropriate recommendations for DER patching, holding fortnightly meetings for more than nine months. The group focused on DER equipment, but the observations and recommendations contained in this report also apply to DER management system (DERMS) tools and other OT equipment used in the end-to-end DER communication environment. The group found that many standards and guides discuss firmware lifecycles, patch and asset management, and code-signing implementations, but that none singularly covers the needs of the DER industry. This report collates best practices from these standards organizations and establishes a set of best practices that may be used as a basis for future national or international patching guides or standards.
This report summarizes work completed under the Laboratory Directed Research and Development (LDRD) project "Uncertainty Quantification of Geophysical Inversion Using Stochastic Differential Equations." Geophysical inversions often require computationally expensive algorithms to find even one solution, let alone propagate uncertainties through to the solution domain. The primary purpose of this project was to find more computationally efficient means of approximating solution uncertainty in geophysical inversions. We found multiple computationally efficient methods of propagating Earth model uncertainty into uncertainties in solutions of full waveform seismic moment tensor inversions. However, the optimum method of approximating the uncertainty in these seismic source solutions was to use the Karhunen-Loève theorem with data misfit residuals. This method was orders of magnitude more computationally efficient than traditional Monte Carlo methods and yielded estimates of uncertainty that closely approximated those of Monte Carlo. We summarize the various methods we evaluated for estimating uncertainty in seismic source inversions, as well as work toward this goal in the realm of 3-D seismic tomographic inversion uncertainty.
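For reference, the Karhunen-Loève theorem represents a second-order random field $u(\mathbf{x},\omega)$ through the eigenpairs $(\lambda_k,\phi_k)$ of its covariance operator,

$$ u(\mathbf{x},\omega) = \bar{u}(\mathbf{x}) + \sum_{k=1}^{\infty} \sqrt{\lambda_k}\,\xi_k(\omega)\,\phi_k(\mathbf{x}), $$

where the $\xi_k$ are uncorrelated, zero-mean, unit-variance random variables; truncating the sum after a few dominant terms gives a low-dimensional uncertainty representation. How the expansion is constructed from the data misfit residuals in this project is detailed in the body of the report rather than in this summary.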
Though the method-of-moments implementation of the electric-field integral equation plays an important role in computational electromagnetics, it poses many code-verification challenges due to the different sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. In this report, we document our research to address these issues, as well as its implementation and testing in Gemma.
The DOE-NE NWM Cloud was designed to be a generic set of tools and applications for any nuclear waste management program. As policymakers continue to consider approaches that emphasize consolidated interim storage and transportation of spent nuclear fuel (SNF), a gap analysis comparing the tools and applications provided for SNF and high-level radioactive waste disposal with those needed for siting, licensing, and developing a consolidated interim storage facility and/or for a transportation campaign will help prepare DOE to implement such potential policy direction. This report evaluates the points of alignment and potential gaps between the applications on the NWM Cloud that supported the SNF disposal project and the applications needed to address QA requirements and other project support needs of an SNF storage project.
Virtual prototyping in engineering design relies on modern numerical models of contacting structures with accurate resolution of interface mechanics, which strongly affect the system-level stiffness and energy dissipation due to frictional losses. High-fidelity modeling within the localized interfaces is required to resolve local quantities of interest that may drive design decisions. The high-resolution finite element meshes necessary to resolve inter-component stresses tend to be computationally expensive, particularly when the analyst is interested in response time histories. The Hurty/Craig-Bampton (HCB) transformation is a widely used method in structural dynamics for reducing the interior portion of a finite element model while retaining all nonlinear contact degrees of freedom (DOF) in physical coordinates. These models may still require many DOF to adequately resolve the kinematics of the interface, leading to inadequate reduction and computational savings. This study proposes a novel interface reduction method to overcome these challenges by means of system-level characteristic constraint (SCC) modes and proper orthogonal interface modal derivatives (POIMDs) for transient dynamic analyses. Both SCC modes and POIMDs are computed using the reduced HCB mass and stiffness matrices, which can be obtained directly from many commercial finite element analysis codes. Comparison of time history responses to an impulse-type load in a mechanical beam assembly indicates that the interface-reduced model correlates well with the HCB truth model. Localized features like slip and contact area are well represented in the time domain when the beam assembly is loaded with a broadband excitation. The proposed method also yields reduced-order models with greater critical timestep lengths for explicit integration schemes.
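For readers unfamiliar with the HCB transformation the proposed method builds on, the minimal NumPy sketch below shows the standard reduction (fixed-interface normal modes plus static constraint modes) that produces the reduced mass and stiffness matrices from which the SCC modes and POIMDs are subsequently computed. The matrix partitioning and sizes here are illustrative assumptions, not the report's implementation.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal Hurty/Craig-Bampton (HCB) reduction sketch: interior DOF are condensed to a
# handful of fixed-interface modes while all boundary (interface) DOF are retained in
# physical coordinates. M and K are full symmetric mass/stiffness matrices with DOF
# ordered [interior, boundary]; n_i is the number of interior DOF.

def hcb_reduce(M, K, n_i, n_modes):
    Mii = M[:n_i, :n_i]
    Kii, Kib = K[:n_i, :n_i], K[:n_i, n_i:]
    n_b = K.shape[0] - n_i

    # Static constraint modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: lowest eigenvectors of the interior partition.
    _, vecs = eigh(Kii, Mii)
    Phi = vecs[:, :n_modes]

    # HCB transformation maps [modal, boundary] coordinates to physical DOF.
    T = np.block([[Phi, Psi],
                  [np.zeros((n_b, n_modes)), np.eye(n_b)]])
    return T.T @ M @ T, T.T @ K @ T, T
```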
High-performance radiation detection materials are an integral part of national security, medical imaging, and nuclear physics applications. Those that offer compositional and manufacturing versatility are of particular interest. Here, we report a new family of radiological particle-discriminating scintillators containing bis(9,9-dimethyl-9H-fluoren-2-yl)diphenylsilane (compound 'P2') and in situ polymerized vinyltoluene (PVT) that is phase stable and mechanically robust at any blend ratio. The gamma-ray light yield increases nearly linearly across the composition range, reaching 16,400 photons/MeV at 75 wt.% P2. These materials are also capable of performing γ/n pulse shape discrimination (PSD), and between 20% and 50% P2 loading their PSD quality is competitive with that of commercially available plastic scintillators. The 137Cs scintillation rise and decay times are sensitive to P2 loading and approach the values for 'pure' P2. Additionally, the radiation detection performance of P2-PVT blends can be made stable in 60 °C air for at least 1.5 months by applying a thin film of poly(vinyl alcohol) to the scintillator surfaces.
Thermographic phosphors have been employed for temperature sensing in challenging environments, such as on surfaces or within solid samples exposed to dynamic heating, because of the high temporal and spatial resolution that can be achieved using this approach. Typically, UV light sources are employed to induce temperature-sensitive spectral responses from the phosphors. However, it would be beneficial to explore x-rays as an alternate excitation source to facilitate simultaneous x-ray imaging of material deformation and temperature of heated samples and to reduce UV absorption within solid samples being investigated. The phosphors BaMgAl10O17:Eu (BAM), Y2SiO5:Ce, YAG:Dy, La2O2S:Eu, ZnGa2O4:Mn, Mg3F2GeO4:Mn, Gd2O2S:Tb, and ZnO were excited in this study using incident synchrotron x-ray radiation. These materials were chosen to include conventional thermographic phosphors as well as x-ray scintillators (with crossover between these two categories). X-ray-induced thermographic behavior was explored through the measurement of visible spectral response with varying temperature. The incident x-rays were observed to excite the same electronic energy level transitions in these phosphors as UV excitation. Similar shifts in the spectral response of BAM, Y2SiO5:Ce, YAG:Dy, La2O2S:Eu, ZnGa2O4:Mn, Mg3F2GeO4:Mn, and Gd2O2S:Tb were observed when compared to their response to UV excitation reported in the literature. Some phosphors were observed to thermally quench in the temperature ranges tested here, while the response from others did not rise above background noise levels. This may be attributed to the increased probability of non-radiative energy release from these phosphors due to the high energy of the incident x-rays. These results indicate that x-rays can serve as a viable excitation source for phosphor thermometry.
Ultra-low voltage drop tunnel junctions (TJs) were utilized to enable multi-active-region blue light emitting diodes (LEDs) with up to three active regions in a single device. The multi-active-region blue LEDs were grown monolithically by metal-organic chemical vapor deposition (MOCVD) without growth interruption. This is the first demonstration of an MOCVD-grown triple-junction LED. Optimized TJ design enabled near-ideal voltage and external quantum efficiency (EQE) scaling close to the number of junctions. This work demonstrates that, with proper TJ design, improvements in wall-plug efficiency at high output power operation are possible by cascading multiple III-nitride based LEDs.
U.S. critical infrastructure assets are often designed to operate for decades, and yet long-term planning practices have historically ignored climate change. With the current pace of changing operational conditions and severe weather hazards, research is needed to improve our ability to translate complex, uncertain risk assessment data into actionable inputs that improve decision-making for infrastructure planning. Decisions made today need to explicitly account for climate change – the chronic stressors, the evolution of severe weather events, and the wide-ranging uncertainties. If done well, decision-making with climate in mind will result in increased resilience and decreased impacts to our lives, economies, and national security. We present a three-tier approach to create the research products needed in this space: bringing together climate projection data, severe weather event modeling, asset-level impacts, and context-specific decision constraints and requirements. At each step, it is crucial to capture uncertainties and to communicate those uncertainties to decision-makers. While many components of the necessary research are mature (e.g., climate projection data), there has been little effort to develop proven tools for long-term planning in this space. The combination of chronic and acute stressors, spatial and temporal uncertainties, and interdependencies among infrastructure sectors coalesces into a complex decision space. By applying known methods from decision science and data analysis, we can work to demonstrate the value of an interdisciplinary approach to climate-hazard decision-making for long-term infrastructure planning.
Due to their recent increases in performance, machine learning and deep learning models are being increasingly adopted across many domains for visual processing tasks. One such domain is international nuclear safeguards, which seeks to verify the peaceful use of commercial nuclear energy across the globe. Despite recent impressive performance results from machine learning and deep learning algorithms, there is always at least some small level of error. Given the significant consequences of international nuclear safeguards conclusions, we sought to characterize how incorrect responses from a machine or deep learning-assisted visual search task would cognitively impact users. We found not only that some types of model errors have larger negative impacts on human performance than others, but also that the scale of those impacts changes depending on the accuracy of the model with which they are presented, and that the impacts persist in scenarios of evenly distributed errors and single-error presentations. Further, we found that experiments conducted using a common visual search dataset from the psychology community have similar implications to a safeguards-relevant dataset of images containing hyperboloid cooling towers when the cooling tower images are presented to expert participants. While novice performance was considerably different (and worse) on the cooling tower task, we saw increased novice reliance on the most challenging cooling tower images compared to experts. These findings are relevant not just to the cognitive science community but also to developers of machine and deep learning systems that will be implemented in multiple domains. For safeguards, this research provides key insights into how machine and deep learning projects should be implemented, considering the domain's special requirement that information not be missed.
Highlights: Battery energy storage may improve the energy efficiency and reliability of hybrid energy systems composed of diesel and solar photovoltaic power generators serving isolated communities. In projects aiming to update power plants serving electrically isolated communities with redundant diesel generation, battery energy storage can improve the overall economic performance of the power supply system by reducing fuel usage, decreasing capital costs by replacing redundant diesel generation units, and increasing generator system life by shortening yearly runtime. Fast-acting battery energy storage systems with grid-forming inverters have the potential to drastically improve the reliability indices of isolated communities currently supplied by diesel generation. Abstract: This paper highlights unique challenges and opportunities with regard to energy storage utilization in remote, self-sustaining communities. The energy management of such areas has unique concerns. Diesel generation is often the go-to power source in these scenarios, but these systems are not devoid of issues. Without dedicated maintenance crews as in large, interconnected network areas, minor interruptions can be frequent and invasive, not only for those who lose power but also for those in the community who must then correct any faults. Although the immediate financial benefits are perhaps not readily apparent, energy storage could be used to address concerns related to reliability, automation, fuel supply, generator degradation, solar utilization, and, yes, fuel costs, to name a few. These ideas are shown through a case study of the Levelock Village of Alaska. Currently, the community is faced with high diesel prices and a difficult supply chain, which makes temporary loss of power very common and reductions in fuel consumption very impactful. This study investigates the benefits that an energy storage system could bring to the overall system life, fuel costs, and reliability of the power supply. The variable efficiency of the generators, the impact of the startup/shutdown process, and low-load operation concerns are considered. The technological benefits of the combined system are explored for various scenarios of future diesel prices and technology maintenance/replacement costs, as well as for the avoidance of power interruptions that are currently so common in the community. Discussion: In several cases, energy storage can provide a means to promote energy equity by improving remote communities’ power supply reliability to levels closer to what the average urban consumer experiences, at a reduced cost compared to transmission buildout. Furthermore, energy equity represents a hard-to-quantify benefit of integrating energy storage into the isolated power systems of under-served communities, which suggests that the financial aspects of such projects should be questioned as the main performance criterion. To improve battery energy storage system valuation for diesel-based power systems, integration analysis must be holistic and go beyond fuel savings to capture every value stream possible.
This project aimed to identify the performance-limiting mechanisms in mid- to far-infrared (IR) sensors by probing photogenerated free carrier dynamics in model detector materials using scanning ultrafast electron microscopy (SUEM). SUEM is a recently developed method based on using ultrafast electron pulses in combination with optical excitations in a pump-probe configuration to examine charge dynamics with high spatial and temporal resolution and without the need for microfabrication. Five material systems were examined using SUEM in this project: polycrystalline lead zirconium titanate (a pyroelectric), polycrystalline vanadium dioxide (a bolometric material), GaAs (near IR), InAs (mid IR), and the Si/SiO2 system as a prototypical system for interface charge dynamics. The report provides detailed results for the Si/SiO2 and lead zirconium titanate systems.
Given the prevalent role of metals in a variety of industries, schemes to integrate the corresponding constitutive models in finite element applications have long been studied. A number of formulations have been developed to accomplish this task, each with its own advantages and costs. Often the focus has been on ensuring the accuracy and numerical stability of these algorithms to enable robust integration. While important, emphasis on these performance metrics may come at the cost of computational expense, potentially neglecting the needs of individual problems. In the current work, the performance of two of the most common integration methods for anisotropic plasticity, the convex cutting plane (CCP) and the closest point projection (CPP), is assessed across a variety of metrics, including accuracy and cost. A variety of problems are considered, ranging from single elements to large representative simulations, including both implicit quasistatic and explicit transient dynamic responses. The relative performance of each scheme in the different instances is presented with an eye toward guidance on when the different algorithms may be beneficial.
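As a concrete illustration of the closest point projection structure discussed above, the sketch below implements the simplest case, isotropic J2 plasticity with linear hardening, where the projection reduces to the classical radial return. The anisotropic models assessed in this work require an iterative local solve, but they share the same elastic-predictor/plastic-corrector pattern; all material parameters here are generic placeholders.

```python
import numpy as np

def radial_return(strain_inc, stress_n, eqps_n,
                  E=200e9, nu=0.3, sigma_y=250e6, H=1e9):
    """Return-mapping update for one strain increment (3x3 tensors, SI units)."""
    G = E / (2.0 * (1.0 + nu))          # shear modulus
    K = E / (3.0 * (1.0 - 2.0 * nu))    # bulk modulus
    I = np.eye(3)

    # Elastic predictor (trial state).
    dev_eps = strain_inc - np.trace(strain_inc) / 3.0 * I
    stress_trial = stress_n + 2.0 * G * dev_eps + K * np.trace(strain_inc) * I
    s_trial = stress_trial - np.trace(stress_trial) / 3.0 * I
    s_norm = np.linalg.norm(s_trial)

    # Yield check against the current flow stress.
    f_trial = s_norm - np.sqrt(2.0 / 3.0) * (sigma_y + H * eqps_n)
    if f_trial <= 0.0:
        return stress_trial, eqps_n     # purely elastic step

    # Plastic corrector: the scalar consistency equation is closed-form here.
    dgamma = f_trial / (2.0 * G + 2.0 / 3.0 * H)
    n_dir = s_trial / s_norm
    stress_new = stress_trial - 2.0 * G * dgamma * n_dir
    eqps_new = eqps_n + np.sqrt(2.0 / 3.0) * dgamma
    return stress_new, eqps_new
```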
We describe a novel pulsed magnetic gradiometer based on the optical interference of sidebands generated using two spatially separated alkali vapor cells. In contrast to traditional magnetic gradiometers, our approach provides a direct readout of the gradient field without the intermediate step of subtracting the outputs of two spatially separated magnetometers. Operation of the gradiometer in multiple field orientations is discussed. The noise floor is measured to be as low as $25\,\mathrm{fT/(cm\sqrt{Hz})}$ in a room without magnetic shielding.
Our primary aim in this work is to understand how to efficiently obtain reliable uncertainty quantification in automatic learning algorithms with limited training datasets. Standard approaches rely on cross-validation to tune hyperparameters. Unfortunately, when our datasets are too small, holdout datasets become unreliable—albeit unbiased—measures of prediction quality due to the lack of adequate sample size. We should not place confidence in holdout estimators under conditions wherein the sample variance is both large and unknown. More poignantly, our training experiments on limited data (Duersch and Catanach, 2021) show that even if we could improve estimator quality under these conditions, the typical training trajectory may never even encounter generalizable models.
Greater utilization of subsurface reservoirs perturbs in-situ chemical-mechanical conditions, with wide-ranging consequences from decreased performance to project failure. Understanding the chemical precursors to rock deformation is critical to reducing the risks of these activities. To address this need, we investigated the coupled flow-dissolution-precipitation-adsorption reactions involving calcite and environmentally relevant solid phases. Experimentally, we quantified (1) stable isotope fractionation processes for strontium during calcite nucleation and growth, and during reactive fluid flow; and (2) the consolidation behavior of calcite assemblages in common brines. Numerically, we quantified water weakening of calcite using molecular dynamics simulations and quantified the impact of calcite dissolution rate on macroscopic fracturing using finite element models. With microfluidic experiments and modeling, we show the effect of local flow fields on the dissolution kinetics of calcite. Taken together, across a wide range of scales and methods, our studies allow us to separate the effects of reaction, flow, and transport on calcite fracturing and the evolution of strontium isotopic signatures in the reactive fluids.
The typical topology optimization workflow uses a design domain that does not change during the optimization process. Consequently, features of the design domain, such as the location of loads and constraints, must be determined in advance and are not optimizable. A method is proposed herein that allows the design domain to be optimized along with the topology. This approach uses topology and shape derivatives to guide nested optimizers to the optimal topology and design domain. The details of the method are discussed, and examples are provided that demonstrate the utility of this approach.
The ability to engineer the genome of a bacterial strain, not as an isolate, but while present among other microbes in a microbiome, would open new technological possibilities in the areas of medicine, energy and biomanufacturing. Our approach is to develop sets of phages (bacterial viruses) active on the target strain and themselves engineered to act not as killers but as vectors for gene delivery. This approach is rooted in our bioinformatic tools that map prophages accurately within bacterial genomes. We present new bioinformatic results in cross-contig search, design of phage genome assemblies, satellites that embed within prophages, alignment of large numbers of biological sequences, and improvement of reference databases for prophage discovery. We targeted a Pseudomonas putida strain within a lignin-degrading microbiome, but were unable to obtain active phages, and turned toward a defined microbiome of the mouse gut.
While it is likely practically a bad idea to shrink a transistor to the size of an atom, there is no arguing that it would be fantastic to have atomic-scale control over every aspect of a transistor – a kind of crystal ball to understand and evaluate new ideas. This project showed that it was possible to take a niche technique used to place dopants in silicon with atomic precision and apply it broadly to study opportunities and limitations in microelectronics. In addition, it laid the foundation to attaining atomic-scale control in semiconductor manufacturing more broadly.
Typical approaches to classifying scenes from light convert the light field to electrons to perform the computation in the digital electronic domain. This conversion and the downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems to classify optical fields at lower energy and higher speed. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
Previous strain development efforts for cyanobacteria have failed to achieve the necessary productivities needed to support economic biofuel production. We proposed to develop CRISPR Engineering for Rapid Enhancement of Strains (CERES). We developed genetic and computational tools to enable future high-throughput screening of CRISPR interference (CRISPRi) libraries in the cyanobacterium Synechococcus sp. PCC 7002, including: (1) Operon-SEQer: an ensemble of algorithms for predicting operon pairs using RNA-seq data, (2) experimental characterization and machine learning prediction of gRNA design rules for CRISPRi, and (3) a shuttle vector for gene expression. These tools lay the foundation for CRISPR library screening to develop cyanobacterial strains that are optimized for growth or metabolite production under a wide range of environmental conditions. The optimization of cyanobacterial strains will directly advance U.S. energy and climate security by enabling domestic biofuel production while simultaneously mitigating atmospheric greenhouse gases through photoautotrophic fixation of carbon dioxide.
All disciplines that use models to predict the behavior of real-world systems need to determine the accuracy of the models’ results. Techniques for verification, validation, and uncertainty quantification (VVUQ) focus on improving the credibility of computational models and assessing their predictive capability. VVUQ emphasizes rigorous evaluation of models and how they are applied to improve understanding of model limitations and quantify the accuracy of model predictions.
Laser powder bed fusion (LPBF) additive manufacturing (AM) has attracted interest as an agile method of building production metal parts to reduce design-build-test cycle times for systems. However, predicting part performance is difficult due to inherent process variabilities, which makes qualification challenging. Computational process models have attempted to address some of these challenges, including mesoscale, full-physics models and reduced-fidelity conduction models. The goal of this work is credible multi-fidelity modeling of the LPBF process by investigating methods for estimating the error between models of two different fidelities. Two methods of error estimation are investigated: adjoint-based error estimation and Bayesian calibration. Adjoint-based error estimation is found to effectively bound the error between the two models, but with very conservative bounds, making predictions highly uncertain. Bayesian parameter calibration applied to the conduction model's heat source parameters is found to effectively bound the observed error between the models for melt pool morphology quantities of interest. However, the calibrations do not effectively bound the error in heat distribution.
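As a hedged illustration of the Bayesian calibration step (not the actual conduction model or its parameters), the toy sketch below uses a random-walk Metropolis sampler to calibrate a single made-up heat-source parameter against synthetic melt-pool-width observations.

```python
import numpy as np

rng = np.random.default_rng(1)

def melt_width(power, eta):
    # Toy surrogate relating laser power and an effective absorptivity eta to
    # melt pool width (arbitrary units); a stand-in for the conduction model.
    return 50.0 * np.sqrt(eta * power)

power = np.linspace(100.0, 300.0, 8)                                   # laser power settings
obs = melt_width(power, eta=0.4) + rng.normal(0.0, 5.0, power.size)    # synthetic "observations"

def log_post(eta, sigma=5.0):
    if not 0.0 < eta < 1.0:                      # uniform prior on (0, 1)
        return -np.inf
    resid = obs - melt_width(power, eta)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood, up to a constant

samples, eta = [], 0.5
lp = log_post(eta)
for _ in range(20000):
    prop = eta + 0.05 * rng.standard_normal()    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        eta, lp = prop, lp_prop
    samples.append(eta)

post = np.array(samples[5000:])                  # discard burn-in
print(f"posterior eta: mean={post.mean():.3f}, "
      f"95% CI=({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```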