We propose spaND (sparsified Nested Dissection), a new algorithm for the fast solution of large, sparse, symmetric positive-definite linear systems. It is based on nested dissection, sparsification, and low-rank compression. After eliminating all interiors at a given level of the elimination tree, the algorithm sparsifies all separators corresponding to those interiors. This operation reduces the size of the separators by eliminating some degrees of freedom without introducing any fill-in, at the expense of a small and controllable approximation error. The result is an approximate factorization that can be used as an efficient preconditioner. We then perform several numerical experiments to evaluate the algorithm. We demonstrate that a version using orthogonal factorization and block-diagonal scaling takes fewer CG iterations to converge than previous similar algorithms on various kinds of problems. Furthermore, this algorithm is provably guaranteed never to break down, and the matrix stays symmetric positive-definite throughout the process. We evaluate the algorithm on some large problems and show that it exhibits near-linear scaling: the factorization time is roughly O(N), and the number of iterations grows slowly with N.
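As a generic illustration of the workflow this abstract describes (an approximate factorization reused as a CG preconditioner), the sketch below substitutes an incomplete LU factorization for the spaND hierarchy. It is not the paper's algorithm, only a minimal stand-in showing how an approximate factorization accelerates CG on a sparse SPD system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A 2-D Laplacian: large, sparse, symmetric positive-definite.
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Stand-in for spaND's approximate factorization: an incomplete LU.
# Its approximate solve is wrapped as the preconditioner M ~ A^-1.
ilu = spla.spilu(A, drop_tol=1e-3)
M = spla.LinearOperator(A.shape, ilu.solve)

iters = 0
def count(_xk):
    global iters
    iters += 1

x, info = spla.cg(A, b, M=M, callback=count)   # info == 0 on convergence
```

The cheaper and more accurate the approximate factorization, the fewer CG iterations are needed; spaND's contribution is making that factorization near-linear in cost for large sparse systems.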
The dynamic behavior of metamaterials and metastructures is often modeled using finite elements; however, these models can become quite large and therefore computationally expensive to simulate. Traditionally, large models are made smaller using any of the array of model reduction methods, such as Guyan or Craig-Bampton reduction. The regularized nature of metamaterials makes them excellent candidates for reduced-order modeling because the system essentially consists of a repeating pattern of unit cell components. These unit cell components can be reduced and then assembled to form a reduced-order system-level model with equivalent dynamics. The process is demonstrated on a finite element model of a 1-D axially vibrating metamaterial bar using Guyan, SEREP, and Craig-Bampton reduction methods. The process is shown to provide substantial reduction in the time needed to simulate the dynamic response of a representative metamaterial while maintaining the dynamics of the system and resonators.
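Guyan (static) condensation, the first of the reduction methods named above, can be sketched in a few lines. The 4-DOF spring-mass chain and the choice of master DOFs below are illustrative, not the paper's model.

```python
import numpy as np

# Hypothetical 4-DOF spring-mass chain, grounded at one end;
# DOFs 0 and 3 are kept as masters, 1 and 2 are condensed out.
k = 1.0e4
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)
M = np.diag([1.0, 1.0, 1.0, 1.0])
m, s = [0, 3], [1, 2]

# Guyan transformation: slaves follow masters statically,
# u_s = -K_ss^-1 K_sm u_m.
T = np.zeros((4, 2))
T[m] = np.eye(2)
T[s] = -np.linalg.solve(K[np.ix_(s, s)], K[np.ix_(s, m)])

Kr = T.T @ K @ T    # reduced stiffness: exact for static master loads
Mr = T.T @ M @ T    # reduced mass: approximate for dynamics
```

Because Kr is the Schur complement of the slave partition, static responses at the master DOFs are reproduced exactly; the dynamic approximation degrades as modal content of the condensed DOFs becomes important, which is where SEREP and Craig-Bampton improve on Guyan.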
Design of multiple-input/multiple-output vibration experiments, such as impedance-matched multi-axis testing and multi-shaker testing, relies on a force estimation calculation that is typically executed using a direct inverse approach. Force estimation can be performed in multiple ways, each method providing a different tradeoff between response accuracy and input forces. Additionally, regularization techniques can improve the numerics of the problem by reducing errors incurred from poor conditioning of the system frequency response matrix. This paper explores several different force estimation methods and compares several regularization approaches using a simple multiple-input/multiple-output dynamic system, demonstrating the effects on the predicted inputs and responses.
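A minimal sketch of the direct-inverse force estimate and a Tikhonov-regularized counterpart at a single frequency line follows. The FRF matrix, noise level, and regularization parameter here are hypothetical, chosen only to mimic a poorly conditioned system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-response / 2-input FRF matrix at one frequency line,
# with nearly parallel columns to mimic poor conditioning.
H = np.array([[1.0, 0.999],
              [1.0, 1.001],
              [0.5, 0.499],
              [0.2, 0.201]]).astype(complex)

f_true = np.array([1.0 + 0.5j, -0.8 + 0.2j])          # "true" input forces
noise = 1e-3 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
x = H @ f_true + noise                                 # measured responses

# Direct inverse: unregularized least-squares force estimate.
f_pinv = np.linalg.pinv(H) @ x

# Tikhonov regularization: f = (H^H H + lam I)^-1 H^H x,
# trading a little response accuracy for a smaller input force.
lam = 1e-4
f_tik = np.linalg.solve(H.conj().T @ H + lam * np.eye(2), H.conj().T @ x)
```

The regularized estimate always has a smaller norm than the direct-inverse one, which is exactly the accuracy-versus-effort tradeoff the abstract describes.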
Many in the structural dynamics community are currently researching a range of multiple-input/multiple-output problems and largely rely on commercially available closed-loop controllers to execute their experiments. Generally, these commercially available control systems are robust and prove adequate for a wide variety of testing. However, with the development of new techniques in this field, researchers will want to exercise these new techniques in laboratory tests. For example, modifying the control or input estimation method can improve the accuracy of control or provide a higher response for a given input. Modification of the control methods is not typically possible in commercially available control systems; it is therefore desirable to have some methodology available which allows researchers to synthesize input signals for multiple-input/multiple-output experiments. Here, methods for synthesizing multiply-correlated time histories based on desired cross spectral densities are demonstrated and then explored to understand the effects of various parameters on the resulting signals, their statistics, and their relation to the specified cross spectral densities. This paper aims to provide researchers with a simple, step-by-step process which can be implemented to generate input signals for open-loop multiple-input/multiple-output experiments.
The ability to extrapolate response data to unmeasured locations has obvious benefits for a range of lab and field experiments. This is typically done using an expansion process utilizing some type of transformation matrix, which typically comes from mode shapes of a finite element model. While methods exist to perform expansion, it is still not commonplace, perhaps due to a lack of experience using expansion tools or a lack of understanding of how sensitive the results are to the problem setup. To assess the applicability of expansion in a variety of real-world test scenarios, it is necessary to determine the level of perturbation or error the finite element model can sustain while maintaining accuracy in the expanded results. To this end, the structure model’s boundary conditions, joint stiffness, and material properties were altered to determine the range of discrepancies allowable before the expanded results differed significantly from the measurements. The effect of improper implementations of the expansion procedure on accuracy is also explored. This study allows for better insights on prospective use cases and possible pitfalls when implementing the expansion procedure.
Bolted interfaces are a major source of uncertainty in the dynamic behavior of built-up assemblies. The contact pressure distribution from a bolt’s preload governs the stiffness of the interface. These quantities are sensitive to the true curvature, or flatness, of the surface geometries and thus limit the predictive capability of models based on nominal drawing tolerances. Fabricated components inevitably deviate from their idealized geometry; nominally flat surfaces, for example, exhibit measurable variation about the desired level plane. This study aims to develop a predictive, high-fidelity finite element model of a bolted beam assembly to determine the modal characteristics of the preloaded assembly designed with nominally flat surfaces. The surface geometries of the beam interface are measured with an optical interferometer to reveal the amount of deviation from the nominally flat surface. These measurements are used to perturb the interface nodes in the finite element mesh to account for the true interface geometry. A nonlinear quasi-static preload analysis determines the contact area when the bolts are preloaded, and the model is linearized about this equilibrium state to estimate the modal characteristics of the assembly. The linearization assumes that nodes/faces in contact do not move relative to each other; this assumption is enforced through multi-point constraints. The structure’s natural frequencies and mode shapes predicted by the model are validated by experimental measurements of the actual structure.
Understanding the dynamic response of a structure is critical to design. This is of extreme importance in high-consequence systems on which human life can depend. Historically, these structures have been modeled as linear, where response scales proportionally with excitation amplitude. However, most structures are nonlinear to the extent that linear models are no longer sufficient to adequately capture important dynamics. Sources of nonlinearity include, but are not limited to: large deflections (so-called geometric nonlinearities), complex materials, and frictional interfaces/joints in assemblies between subcomponents. Joint nonlinearities usually cause the natural frequency to decrease and the effective damping ratio to increase with response amplitude due to microslip effects. These characteristics can drastically alter the dynamics of a structure and, if not well understood, could lead to unforeseen failure or unnecessarily over-designed features. Nonlinear structural dynamics has been a subject of study for many years, and several works provide a summary of recent developments and discoveries in this field. One topic discussed in these papers is nonlinear normal modes (NNMs), which are periodic solutions of the underlying conservative system. They provide a theoretical framework for describing the energy dependence of natural frequencies and mode shapes of nonlinear systems, and they lead to a promising method to validate nonlinear models. In prior work, a force appropriation testing technique was developed which allowed for the experimental tracking of undamped NNMs by achieving phase quadrature between the excitation and response. These studies considered damping to be small to moderate, and constant. Nonlinear damping of an NNM was later studied using power-based quantities for a structure with a discrete, single-bolt interface.
In this work, the force appropriation technique, in which phase quadrature is achieved between force and response, is applied to a target mode of a structure with two bolted joints, one of which comprises a large, continuous interface. This is a preliminary investigation which includes a study of nonlinear natural frequency, mode shape, and damping trends extracted from the measured data.
Engineering designers are responsible for designing parts, components, and systems that perform required functions in their intended field environment. To determine if their design will meet its requirements, the engineer must run a qualification test. For shock and vibration environments, the component or unit under test is connected to a shaker table or shock apparatus and is imparted with a load to simulate the mechanical stress from vibration. A difficulty in this approach arises when the stresses in the unit under test cannot be generated by a fixed base boundary condition, i.e., the approximate boundary condition obtained when the unit under test is affixed to a stiff test fixture and shaker table. To correct for this error, a flexible fixture needs to be designed to account for the stresses that the unit under test will experience in the field. This paper will use topology optimization to design a test fixture that will minimize the difference between the mechanical impedance of the next level of assembly and the test fixture. The optimized fixture will be compared to the rigid fixture with respect to the test’s ability to produce the field stresses.
Lacayo et al. (Mechanical Systems and Signal Processing, 118: 133–157, 2019) recently proposed a fast model updating approach for finite element models that include Iwan models to represent mechanical joints. The joints are defined by using RBE3 averaging constraints or RBAR rigid constraints to tie the contact surface nodes to a single node on each side, and these nodes are then connected with discrete Iwan elements to capture tangential frictional forces that contribute to the nonlinear behavior of the mechanical interfaces between bolted joints. Linear spring elements are used in the remaining directions to capture the joint stiffness. The finite element model is reduced using a Hurty/Craig-Bampton approach such that the physical interface nodes are preserved, and the Quasi-Static Modal Analysis approach is used to quickly predict the effective natural frequency and damping ratio as a function of vibration amplitude for each mode of interest. Model updating is then used to iteratively update the model such that it reproduces the correct natural frequency and damping at each amplitude level of interest. In this paper, Lacayo’s updating approach is applied to the S4 Beam (Singh et al., IMAC XXXVI, 2018) giving special attention to the size and type of the multi-point constraints used to connect the structures, and their effect on the linear and nonlinear modal characteristics.
Multi-axis testing is growing in popularity in the testing community due to its ability to better match a complex three-dimensional excitation than a single-axis shaker test. However, with the ability to put a large number of shakers anywhere on the structure, the design space of such a test is enormous. This paper aims to investigate strategies for placement of shakers for a given test using a complex aerospace structure controlled to real environment data. Initially, shakers were placed using engineering judgement, and this was found to perform reasonably well. To find shaker setups that improved upon engineering judgement, impact testing was performed at a large number of candidate excitation locations to generate frequency response functions that could be used to perform virtual control studies. In this way, a large number of shaker positions could be evaluated without needing to reposition the shakers each time. A brute force computation of all possible shaker setups was performed to find the set with the lowest error, but the computational cost of this approach is prohibitive for very large candidate shaker sets. Instead, an iterative approach was derived that found a suboptimal set that was nearly as good as the brute force calculation. Finally, an investigation into the number of shakers used for control was performed, which could help determine how many shakers might be necessary to perform a given test.
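The iterative approach described above resembles greedy forward selection over candidate shaker columns of an FRF matrix: instead of enumerating all subsets, add one shaker at a time, keeping whichever most reduces the control error. The sketch below uses random data in place of measured FRFs and is not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical FRF data at one frequency: 20 response DOFs and
# 12 candidate shaker locations, with a target "environment" response.
H = rng.standard_normal((20, 12)) + 1j * rng.standard_normal((20, 12))
x_target = rng.standard_normal(20) + 1j * rng.standard_normal(20)

def control_error(cols):
    """Least-squares response error using only the chosen shaker columns."""
    Hs = H[:, cols]
    f, *_ = np.linalg.lstsq(Hs, x_target, rcond=None)
    return np.linalg.norm(Hs @ f - x_target)

# Greedy forward selection: at each step add the shaker whose inclusion
# most reduces the control error (cheap, but only suboptimal).
chosen = []
for _ in range(4):
    best = min((c for c in range(12) if c not in chosen),
               key=lambda c: control_error(chosen + [c]))
    chosen.append(best)
```

Greedy selection evaluates O(n·k) subsets for k shakers from n candidates, versus O(n choose k) for brute force, which is the cost saving the abstract alludes to.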
The root mean square (RMS) von Mises stress is a criterion used for assessing the reliability of structures subject to stationary random loading. This work investigates error in RMS von Mises stress and its relationship to the error in acceleration for random vibration analysis. First, a theoretical development of stress-acceleration error is introduced for a simplified problem based on modal stress analysis. Using results from the example as a basis, a similar error relationship is determined for random vibration problems. Finite element analyses of test structures subject to an input acceleration auto-spectral density are performed and results from parametric studies are used to determine error. For a given error in acceleration, a relationship to the error in RMS von Mises stress is established. The resulting relation is used to calculate a bound on the RMS von Mises stress based on the computed accelerations. This error bound is useful in vibration analysis, especially where uncertainty and variability must be thoroughly considered.
Current quantification of margin and uncertainty (QMU) guidance lacks a consistent framework for communicating the credibility of analysis results. Recent efforts at providing QMU guidance have pushed for broadening the analyses supporting QMU results beyond extrapolative statistical models to include a more holistic picture of risk, including information garnered from both experimental campaigns and computational simulations. Credibility guidance would assist in the consideration of belief-based aspects of an analysis. Such guidance exists for presenting computational simulation-based analyses and is under development for the integration of experimental data into computational simulations (calibration or validation), but is absent for the ultimate QMU product resulting from experimental or computational analyses. A QMU credibility assessment framework composed of five elements is proposed: requirement definitions and quantity of interest selection, data quality, model uncertainty, calibration/parameter estimation, and validation. Through considering and reporting on these elements during a QMU analysis, the decision-maker will receive a more complete description of the analysis and be better positioned to understand the risks involved with using the analysis to support a decision. A molten salt battery application is used to demonstrate the proposed QMU credibility framework.
Structures are subject to many environments in the lifetime of an assembly, and mechanical environments such as vibration are particularly significant when considering structural integrity. In the early development cycle, mechanical environment test specifications are often derived from assemblies with simplified “mass mock” components. The assumptions for these simplified components generally mimic total mass and center of gravity, but do not always capture moments of inertia. Historically, environments for mass mock components are enveloped and used for future iterations of the true component’s qualification. This work aims to understand and characterize differences in dynamic response due to changes in inertial properties of a component. The FEM of a test structure for this work includes a system level model with true components that will be compared to a FEM with mass mock components. Both versions of the structure will be evaluated based on dynamic response at the component and system levels. The validity and limitations of using mass mock components with approximate inertial properties for deriving environmental specifications will be explored.
In the past year, resonant plate tests designed to excite all three axes simultaneously have become increasingly popular at Sandia National Labs. Historically, only one axis was tested at a time, but unintended off-axis responses were generated. In order to control the off-axis motion so that the off-axis responses satisfy appropriate test specifications, the test setup had to be iteratively modified until the coupling between axes was as desired. The iterative modifications were done with modeling and simulation. To model the resonant plate test, an accurate forcing function must be specified. For resonant plate shock experiments, the input force of the projectile impacting the plate is prohibitively difficult to measure in situ. To improve on current simulation results, a method using contact forces from an explicit simulation as an input load was implemented. This work covers an overview and background of three-axis resonant plate shock tests, their design, their value in experiments, and the difficulties faced in simulating them. The work also summarizes the contact force implementation in an explicit dynamics code and how it is used to evaluate an input force for a three-axis resonant plate simulation. The results show 3D finite element projectile and impact block interactions as well as simulated shock response data compared to experimental shock response data.
The Sandia National Laboratories Human Factors team designed and executed an experiment to quantify the differences between 2D and 3D reference materials with respect to task performance and cognitive workload. A between-subjects design was used in which 27 participants were randomly assigned to either the 2D or 3D reference material condition (14 and 13 participants, respectively). The experimental tasks required participants to interpret, locate, and report dimensions on their assigned reference material. Performance was measured by accuracy of task completion and time-to-complete. After all experimental tasks were completed, cognitive workload data were collected. Response times were longer in the 3D condition than in the 2D condition. However, no differences were found between conditions with respect to response accuracy and cognitive workload, which may indicate no negative cognitive impacts from the sole use of 3D reference materials in the workplace. This paper concludes with possible future efforts to address the limitations of this experiment and to explore the mechanisms behind the findings of this work.
Many problems in engineering and sciences require the solution of large scale optimization constrained by partial differential equations (PDEs). Though PDE-constrained optimization is itself challenging, most applications pose additional complexity, namely, uncertain parameters in the PDEs. Uncertainty quantification (UQ) is necessary to characterize, prioritize, and study the influence of these uncertain parameters. Sensitivity analysis, a classical tool in UQ, is frequently used to study the sensitivity of a model to uncertain parameters. In this article, we introduce "hyper-differential sensitivity analysis" which considers the sensitivity of the solution of a PDE-constrained optimization problem to uncertain parameters. Our approach is a goal-oriented analysis which may be viewed as a tool to complement other UQ methods in the service of decision making and robust design. We formally define hyper-differential sensitivity indices and highlight their relationship to the existing optimization and sensitivity analysis literatures. Assuming the presence of low rank structure in the parameter space, computational efficiency is achieved by leveraging a generalized singular value decomposition in conjunction with a randomized solver which converts the computational bottleneck of the algorithm into an embarrassingly parallel loop. Two multi-physics examples, consisting of nonlinear steady state control and transient linear inversion, demonstrate efficient identification of the uncertain parameters which have the greatest influence on the optimal solution.
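The randomized solver mentioned above belongs to the family of randomized range-finder methods. The sketch below shows the generic randomized SVD idea (not the paper's generalized singular value decomposition) on a synthetic low-rank operator; the matvecs in the sketching step are the embarrassingly parallel part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitivity operator with low-rank structure (rank 5).
m, n, r = 200, 150, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Randomized range finder: sketch the range of A with a random test
# matrix, then orthonormalize (each column is an independent matvec).
k = 10                                     # sketch size > expected rank
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))

# Small dense SVD of the projected operator; the singular values rank
# the dominant parameter directions (sensitivity indices in spirit).
U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
U = Q @ U_hat
```

When the parameter space has low-rank structure, a sketch size modestly larger than the numerical rank recovers the operator to near machine precision while requiring only k applications of A.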
Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME
Weir, Nathan A.; Alleyne, Andrew G.
Due to the unique structure of two-input single-output (TISO) feedback systems, several closed-loop properties can be characterized using the concepts of plant and controller "directions" and "alignment." Poor plant/controller alignment indicates significant limitations in terms of closed-loop performance. In general, it is desirable to design a controller that is well aligned with the plant in order to minimize the size of the closed-loop sensitivity functions and closed-loop interactions. Although the concept of alignment can be a useful analysis tool for a given plant/controller pair, it is not obvious how a controller should be designed to achieve good alignment. We present a new controller design approach, based on the PQ method (Schroeck et al., 2001, "On Compensator Design for Linear Time-Invariant Dual-Input Single-Output Systems," IEEE/ASME Trans. Mechatronics, 6(1), pp. 50-57), which explicitly incorporates knowledge of alignment into the design process. This is accomplished by providing graphical information about the alignment angle on the Bode plot of the PQ frequency response. We show the utility of this approach through a design example.
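The alignment angle between plant and controller "directions" at each frequency can be computed as the angle between two complex vectors. The TISO plant and controller transfer functions below are hypothetical examples for illustration, not those used in the paper.

```python
import numpy as np

w = np.logspace(-1, 2, 200)                # frequency grid, rad/s
s = 1j * w

# Hypothetical TISO plant and controller "direction" vectors (2 x Nfreq).
P = np.stack([1.0 / (s + 1.0), 5.0 / (s**2 + 0.4 * s + 4.0)])
C = np.stack([2.0 / s + 1.0, np.full_like(s, 0.5)])

# Alignment angle: angle between the complex plant and controller
# vectors at each frequency; 0 deg means perfectly aligned.
num = np.abs(np.sum(P.conj() * C, axis=0))
den = np.linalg.norm(P, axis=0) * np.linalg.norm(C, axis=0)
theta = np.degrees(np.arccos(np.clip(num / den, 0.0, 1.0)))
```

Plotting theta over the same frequency axis as the Bode plot of the PQ frequency response is the kind of graphical alignment information the design approach exploits.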
At the molecular level, resonant coupling of infrared radiation with oscillations of the electric dipole moment determines the absorption cross section, σ. The parameter σ relates the bond density to the total integrated absorption. In this work, σ was measured for the Si-N asymmetric stretch mode in SiNx thin films of varying composition and thickness. Thin films were deposited by low pressure chemical vapor deposition at 850 °C from mixtures of dichlorosilane and ammonia. σ for each film was determined from Fourier transform infrared spectroscopy and ellipsometric measurements. Increasing the silicon content from 0% to 25% volume fraction amorphous silicon led to increased optical absorption and a corresponding systematic increase in σ from 4.77 × 10⁻²⁰ to 6.95 × 10⁻²⁰ cm², which is consistent with literature values. The authors believe that this trend is related to charge transfer induced structural changes in the basal SiNx tetrahedron as the volume fraction of amorphous silicon increases. Experimental σ values were used to calculate the effective dipole oscillating charge, q, for four films of varying composition. The authors find that q increases with increasing amorphous silicon content, indicating that compositional factors contribute to modulation of the Si-N dipole moment. Additionally, in the composition range investigated, the authors found that σ agrees favorably with trends observed in films deposited by plasma enhanced chemical vapor deposition.
Particle-in-cell (PIC) simulation methods are attractive for representing species distribution functions in plasmas. However, as a model, they introduce uncertain parameters, and for quantifying their prediction uncertainty it is useful to be able to assess the sensitivity of a quantity-of-interest (QoI) to these parameters. Such sensitivity information is likewise useful for optimization. However, computing sensitivity for PIC methods is challenging due to the chaotic particle dynamics, and sensitivity techniques remain underdeveloped compared to those for Eulerian discretizations. This challenge is examined from a dual particle–continuum perspective that motivates a new sensitivity discretization. Two routes to sensitivity computation are presented and compared: a direct fully-Lagrangian particle-exact approach provides sensitivities of each particle trajectory, and a new particle-pdf discretization, which is formulated from a continuum perspective but discretized by particles to take advantage of the same type of Lagrangian particle description leveraged by PIC methods. Since the sensitivity particles in this approach are only indirectly linked to the plasma-PIC particles, they can be positioned and weighted independently for efficiency and accuracy. The corresponding numerical algorithms are presented in mathematical detail. The advantage of the particle-pdf approach in avoiding the spurious chaotic sensitivity of the particle-exact approach is demonstrated for Debye shielding and sheath configurations. In essence, the continuum perspective makes implicit the distinctness of the particles, which circumvents the Lyapunov instability of the N-body PIC system. The cost of the particle-pdf approach is comparable to the baseline PIC simulation.
Accurate predictions of device performance in 14-MeV neutron environments rely upon understanding the recoil cascades that may be produced. Recoils from 14-MeV neutrons impinging on both gallium nitride (GaN) and gallium arsenide (GaAs) devices were modeled and compared to the recoil spectra of devices exposed to 14-MeV neutrons. Recoil spectra were generated using nuclear reaction modeling programs and converted into an ionizing energy loss (IEL) spectrum. We measured the recoil IEL spectra by capturing the photocurrent pulses produced by single neutron interactions with the device. Good agreement, within a factor of two, was found between the model and the experiment under strongly depleted conditions. However, the agreement between the model and the experiment degraded significantly when the bias was removed, indicating partial energy deposition, due to cascades escaping the active volume of the device, that is not captured by the model. Consistent event rates across multiple detectors confirm the reliability of our neutron recoil detection method.
Renewed interest in the development of molten salt reactors has created the need for analytical tools that can perform safeguards assessments on these advanced reactors. This work outlines a flexible framework to perform safeguards analyses on a wide range of advanced reactor designs. The framework consists of two parts, a process model and a safeguards tool. The process model, developed in MATLAB Simulink, simulates the flow of materials through a reactor facility. These models are linked to SCALE/TRITON and SCALE/ORIGEN to approximate depletion and decay of fuel salts but are flexible enough to accommodate higher fidelity tools if needed. The safeguards tool uses the process data to calculate common statistical quantities of interest such as material unaccounted for (MUF) and Page's trend test on the standardized independent transformed MUF (SITMUF). This paper documents the development of these tools.
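The two statistical quantities named above can be sketched directly: a per-period material balance for MUF, and a one-sided CUSUM (Page's test) on the standardized sequence. All numbers and thresholds below are illustrative, and for simplicity the test is applied to a standardized MUF sequence rather than a true SITMUF transform.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical balance data for 30 periods (arbitrary mass units).
periods = 30
receipts = rng.normal(10.0, 0.05, periods)
shipments = rng.normal(10.0, 0.05, periods)
inventory = 100.0 + rng.normal(0.0, 0.05, periods + 1)

# Material unaccounted for, per balance period:
# MUF = beginning inventory + receipts - shipments - ending inventory.
muf = inventory[:-1] + receipts - shipments - inventory[1:]

# Page's trend test: a one-sided CUSUM on the standardized sequence
# (applied in practice to SITMUF); alarm when the statistic exceeds h.
z = (muf - muf.mean()) / muf.std(ddof=1)
k, h = 0.5, 5.0                            # reference value, threshold
c, alarms = 0.0, []
for zt in z:
    c = max(0.0, c + zt - k)
    alarms.append(c > h)
```

A sustained positive shift in MUF (e.g., a slow diversion) accumulates in c and eventually crosses the threshold, which is what makes the trend test more sensitive to protracted loss than period-by-period MUF tests.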
Recent advancements in micro-scale additive manufacturing techniques have created opportunities for design of novel electrode geometries that improve battery performance by deviating from the traditional layered battery design. These 3D batteries typically exhibit interpenetrating anode and cathode materials throughout the design space, but the existing well-established porous electrode theory models assume only one type of electrode is present in each battery layer. We therefore develop and demonstrate a multielectrode volume-averaged electrochemical transport model to simulate transient discharge performance of these new interpenetrating electrode architectures. We implement the new reduced-order model in the PETSc framework and assess its accuracy by comparing predictions to corresponding mesoscale-resolved simulations that are orders of magnitude more computationally intensive. For simple electrode designs such as alternating plates or cylinders, the volume-averaged model predicts performance within ∼2% for electrode feature sizes comparable to traditional particle sizes (5–10 μm) at discharge rates up to 3C. When considering more complex geometries such as minimal surface designs (i.e., gyroid, Schwarz P), we show that using calibrated characteristic diffusion lengths for each design results in errors below 3% for discharge rates up to 3C. These comparisons verify that this novel model has made reliable cell-scale simulations of interpenetrating electrode designs possible.
Metal additive manufacturing (AM) allows for the freeform creation of complex parts. However, AM microstructures are highly sensitive to the process parameters used. Resulting microstructures vary significantly from typical metal alloys in grain morphology distributions, defect populations and crystallographic texture. AM microstructures are often anisotropic and possess three-dimensional features. These microstructural features determine the mechanical properties of AM parts. Here, we reproduce three “canonical” AM microstructures from the literature and investigate their mechanical responses. Stochastic volume elements are generated with a kinetic Monte Carlo process simulation. A crystal plasticity-finite element model is then used to simulate plastic deformation of the AM microstructures and a reference equiaxed microstructure. Results demonstrate that AM microstructures possess significant variability in strength and plastic anisotropy compared with conventional equiaxed microstructures.
Atomistic modeling of radiation damage through displacement cascades is deceptively non-trivial. Due to the high energy and stochastic nature of atomic collisions, individual primary knock-on atom (PKA) cascade simulations are computationally expensive and ill-suited for length and dose upscaling. Here, we propose a reduced-order atomistic cascade model capable of predicting and replicating radiation events in metals across a wide range of recoil energies. Our methodology approximates cascade and displacement damage production by modeling the cascade as a core-shell atomic structure composed of two damage production estimators, namely an athermal recombination corrected displacements per atom (arc-dpa) in the shell and a replacements per atom (rpa) representing atomic mixing in the core. These estimators are calibrated from explicit PKA simulations and a standard displacement damage model that incorporates cascade defect production efficiency and mixing effects. We illustrate the predictability and accuracy of our reduced-order atomistic cascade method for the cases of copper and niobium by comparing its results with those from full PKA simulations in terms of defect production as well as the resulting cascade evolution and structure. We provide examples for simulating high energy cascade fragmentation and large dose ion-bombardment to demonstrate its possible applicability. Finally, we discuss the various practical considerations and challenges associated with this methodology especially when simulating subcascade formation and dose effects.
In the resource-rich environment of data centers, most failures can quickly fail over to redundant resources. In contrast, failure in edge infrastructures with limited resources might require maintenance personnel to drive to the location in order to fix the problem. The operational cost of these “truck rolls” to locations at the edge infrastructure competes with the operational cost incurred by extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small “embedded” storage nodes provides higher availability due to a larger number of failure domains while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compared the possibility of data loss between two different types of storage systems: one constructed with general-purpose servers, and the other constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has a 7 to 20 times higher risk of losing data than the storage system constructed with embedded storage devices. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
Experiments with the lower divertor of DIII-D during the Metal Rings Campaign (MRC) show that the fraction F of atomic D in the total recycling flux is material-dependent and varies through the ELM cycle, which may affect divertor fueling. Between ELMs, F_C ∼ 10% and F_W ∼ 40%, consistent with expectations if all atomic recycling is due to reflections. During ELMs, F_C increases to 50% and F_W to 60%. In contrast, the total D recycling coefficient R, including atoms and molecules, stays close to unity near the strike point, where the surface is saturated with D. During ELMs, R can deviate from unity, increasing during high-energy ELM ion deposition (net D release) and decreasing at the end of the ELM, which enables the target to trap the ELM-deposited D. The increase of R above unity in response to an increase in ion impact energy E_i has been studied with small divertor target samples using the Divertor Materials Evaluation System (DiMES). An electrostatic bias was applied to DiMES to change E_i by 90 eV. On all studied materials, including C, Mo, uncoated and W-coated TZM (a >99% Mo alloy with Ti and Zr), W, and W fuzz, an increase of E_i transiently increased the D yield (and R) by ∼10%. On C there was also an increase in the molecular D2 yield, probably due to ion-induced D2 desorption. Despite the measured increase in F on W compared to C, attached H-mode shots with the OSP on W during the MRC did not demonstrate a higher pedestal density. An increase of about 8% in the edge density was seen only in attached L-mode scenarios. The difference can be explained by higher D trapping in the divertor and lower divertor fueling efficiency in H- versus L-mode.
We present a numerical method for the synchronous and concurrent solution of transient elastodynamics problems in which the computational domain is divided into subdomains that may reside on separate computational platforms. This work employs the variational multiscale discontinuous Galerkin (VMDG) method to develop interdomain transmission conditions for transient problems. The fine-scale modeling concept leads to variationally consistent coupling terms at the common interfaces. The method admits a large class of time discretization schemes, and decoupling of the solution for each subdomain is achieved by selecting any explicit algorithm. Numerical tests with a manufactured solution problem show optimal convergence rates. The energy history in a free vibration problem is in agreement with that of the solution from a monolithic computational domain.
In this study, we develop Gaussian process regression (GPR) models of isotropic hyperelastic material behavior. First, we consider the direct approach of modeling the components of the Cauchy stress tensor as a function of the components of the Finger stretch tensor in a Gaussian process. We then consider an improvement on this approach that embeds rotational invariance of the stress-stretch constitutive relation in the GPR representation. This approach requires fewer training examples and achieves higher accuracy while maintaining invariance to rotations exactly. Finally, we consider an approach that recovers the strain-energy density function and derives the stress tensor from this potential. Although the error of this model for predicting the stress tensor is higher, the strain-energy density is recovered with high accuracy from limited training data. The approaches presented here are examples of physics-informed machine learning. They go beyond purely data-driven approaches by embedding the physical system constraints directly into the Gaussian process representation of materials models.
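As a rough illustration of the invariance-embedding idea, the following minimal sketch (synthetic data, a bare-bones NumPy GP, and a hypothetical neo-Hookean-like energy standing in for the paper's trained models) regresses on the principal invariants of the Finger tensor rather than its components, which makes the prediction exactly rotation-invariant by construction:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    # Squared-exponential kernel between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_fit_predict(X_train, y_train, X_test, jitter=1e-6):
    # Zero-mean GP regression: posterior mean at the test inputs.
    K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train) @ alpha

def invariants(B):
    # Principal invariants of the Finger tensor B = F F^T; rotating the
    # configuration (B -> Q B Q^T) leaves these unchanged.
    I1 = np.trace(B)
    I2 = 0.5 * (I1**2 - np.trace(B @ B))
    I3 = np.linalg.det(B)
    return np.array([I1, I2, I3])

# Hypothetical ground truth (NOT the paper's data): a neo-Hookean-like
# strain-energy density W(I1, I3).
def energy(inv):
    I1, _, I3 = inv
    return 0.5 * (I1 - 3.0) - 0.5 * np.log(I3)

rng = np.random.default_rng(0)
Fs = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(40)]
X = np.array([invariants(F @ F.T) for F in Fs])
y = np.array([energy(x) for x in X])

F_new = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
x_new = invariants(F_new @ F_new.T)
pred = gp_fit_predict(X, y, x_new[None, :])[0]
```

Because the GP sees only (I1, I2, I3), any rigid rotation of the deformation maps to the same input point, so invariance holds exactly rather than being learned from augmented data.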
Many applications benefit from the ability of an RFID tag to operate both on and off a conducting ground plane. This paper presents an electrically small loop antenna at 433 MHz that passively maintains its free-space tune and match when located a certain distance away from a large conducting ground plane. The design achieves this using a single radiation mechanism (that of a loop) in both environments, without the use of a ground plane or EBG/AMC structure. An equivalent circuit model is developed that explains the dual-environment behavior and shows that the geometry balances the inductive and capacitive parasitics introduced by the ground plane such that the free-space loop reactance, and thus the resonant frequency, does not change. A design equation for balancing the inductive and capacitive parasitic effects is derived. Finally, experimental data are presented showing that the design eliminates ground-plane detuning in practice. The design is suitable for active, 'hard' RFID tag applications.
We investigate the spatial organization and temporal dynamics of large-scale, coherent structures in turbulent Rayleigh-Bénard convection via direct numerical simulation of a 6.3 aspect-ratio cylinder with Rayleigh and Prandtl numbers of and, respectively. Fourier modal decomposition is performed to investigate the structural organization of the coherent turbulent motions by analysing the length scales, time scales and the underlying dynamical processes that are ultimately responsible for large-scale structure formation and evolution. We observe a high level of rotational symmetry in the large-scale structure, which is well described by the first four azimuthal Fourier modes. Two different large-scale organizations are observed during the simulation, and these patterns are dominated spatially and energetically by azimuthal Fourier modes with wavenumbers of 2 and 3. Studies of the transition between these two large-scale patterns, of radial and vertical variations in the azimuthal energy spectra, and of the spatial and modal variations in the system's correlation time are conducted. Rotational dynamics are observed for individual Fourier modes and for the global structure, with strong similarities to the dynamics reported for unit aspect-ratio domains in prior works. It is shown that the large-scale structures have very long correlation time scales, on the order of hundreds to thousands of free-fall time units, and that they are the primary source of horizontal inhomogeneity within the system that can be observed during a finite but very long simulation or experiment.
Stochastic reaction network models are often used to explain and predict the dynamics of gene regulation in single cells. These models usually involve several parameters, such as the kinetic rates of chemical reactions, that are not directly measurable and must be inferred from experimental data. Bayesian inference provides a rigorous probabilistic framework for identifying these parameters by finding a posterior parameter distribution that captures their uncertainty. Traditional computational methods for solving inference problems, such as Markov chain Monte Carlo methods based on the classical Metropolis-Hastings algorithm, involve numerous serial evaluations of the likelihood function, which in turn requires expensive forward solutions of the chemical master equation (CME). We propose an alternate approach based on a multifidelity extension of the sequential tempered Markov chain Monte Carlo (ST-MCMC) sampler. This algorithm is built upon sequential Monte Carlo and solves the Bayesian inference problem by decomposing it into a sequence of efficiently solved subproblems that gradually increase both model fidelity and the influence of the observed data. We reformulate the finite state projection (FSP) algorithm, a well-known method for solving the CME, to produce a hierarchy of surrogate master equations to be used in this multifidelity scheme. To determine the appropriate fidelity, we introduce a novel information-theoretic criterion that seeks to extract the most information about the ultimate Bayesian posterior from each model in the hierarchy without inducing significant bias. This novel sampling scheme is tested with high-performance computing resources using biologically relevant problems.
TlBr can surpass CZT as the leading semiconductor for γ- and X-radiation detection. Unfortunately, the optimum properties of TlBr quickly decay when an operating electrical field is applied. Quantum mechanical studies indicated that if this property degradation comes from the conventional mechanism of ionic migration of vacancies, then an unrealistically high vacancy concentration is required to account for the rapid aging of TlBr seen in experiments. In this work, we have applied large-scale molecular dynamics simulations to study the effects of dislocations on ionic migration in TlBr crystals under electrical fields. We found that electrical fields can drive the motion of edge dislocations in both the slip and climb directions. These combined motions eject enormous numbers of vacancies in the dislocation trail. Together, dislocation motion and the resulting high vacancy concentration can account for the rapid aging of TlBr detectors. These findings suggest that strengthening methods to pin dislocations should be explored to increase the lifetimes of TlBr crystals.
We build upon a recently developed approach for solving stochastic inverse problems based on a combination of measure-theoretic principles and Bayes' rule. We propose a multi-fidelity method to reduce the computational burden of performing uncertainty quantification using high-fidelity models. This approach is based on a Monte Carlo framework for uncertainty quantification that combines information from solvers of various fidelities to obtain statistics on the quantities of interest of the problem. In particular, our goal is to generate samples from a high-fidelity push-forward density at a fraction of the cost of standard Monte Carlo methods, while maintaining flexibility in the number of random model input parameters. Key to this methodology is the construction of a regression model to represent the stochastic mapping between the low- and high-fidelity models, such that most of the computations can be offloaded to the low-fidelity model. To that end, we employ Gaussian process regression and present extensions to multi-level-type hierarchies as well as to the case of multiple quantities of interest. Finally, we demonstrate the feasibility of the framework in several numerical examples.
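A minimal sketch of the core idea, under stated assumptions (one-dimensional stand-in "solvers" and a bare-bones NumPy GP in place of the paper's general framework): learn the low-to-high-fidelity map from a few paired evaluations, then push many cheap low-fidelity samples through that map to approximate the high-fidelity push-forward density:

```python
import numpy as np

# Stand-in "solvers" for illustration only (not the paper's models): the
# low-fidelity model cheaply approximates the high-fidelity QoI.
def q_low(theta):
    return theta - 0.05 * theta**3

def q_high(theta):
    return theta + 0.2 * np.sin(2.0 * theta)

def rbf(a, b, ell=0.5):
    # 1-D squared-exponential kernel.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# Step 1: a small number of expensive paired evaluations to learn the
# stochastic map q_low -> q_high with GP regression.
theta_train = np.linspace(-2.0, 2.0, 15)
x_train = q_low(theta_train)
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, q_high(theta_train))

def gp_predict(x):
    # Posterior mean of the learned low-to-high map.
    return rbf(np.atleast_1d(x), x_train) @ alpha

# Step 2: push many cheap low-fidelity samples through the learned map to
# approximate the high-fidelity push-forward density.
rng = np.random.default_rng(0)
theta_mc = rng.uniform(-1.5, 1.5, 5000)
q_mf = gp_predict(q_low(theta_mc))
```

Only 15 high-fidelity evaluations are needed here; the 5000 Monte Carlo samples touch only the low-fidelity solver and the regression model.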
Ju, Zhaoyang; Xiao, Weihua; Yao, Xiaoqian; Tan, Xin; Simmons, Blake A.; Sale, Kenneth L.; Sun, Ning
Keggin-type polyoxometalate-derived ionic liquids (POM-ILs) have recently been presented as effective solvent systems for biomass delignification. To investigate the mechanism of lignin dissolution in POM-ILs, the system involving the POM-IL [C4C1Im]3[PW12O40] and guaiacyl glycerol-β-guaiacyl ether (GGE), which contains a β-O-4 bond (the most dominant bond moiety in lignin), was studied using quantum mechanical calculations and molecular dynamics simulations. These studies show that more stable POM-IL structures are formed when [C4C1Im]+ is anchored in the region connecting four terminal oxygens on the [PW12O40]3- surface. The cations in POM-ILs appear to stabilize the geometry by offering strong, positively charged sites, and the POM anion is a good H-bond acceptor. Calculations of the POM-IL interacting with GGE show that the POM anion interacts strongly with GGE through many H-bonds and π-π interactions; these are the main interactions between the POM-IL anion and GGE and are strong enough to force GGE into highly bent conformations. These simulations provide fundamental models of the dissolution mechanism of lignin by POM-ILs, which is promoted by strong interactions of the POM-IL anion with lignin.
One of the primary concerns with the long-term performance of storage systems for spent nuclear fuel (SNF) is the potential for corrosion due to deliquescence of salts deposited as aerosols on the surface of the canister, which is typically made of austenitic stainless steel. In regions of high residual weld stress, this may lead to localized stress-corrosion cracking (SCC). The ability to detect and image SCC at an early stage (long before the cracks are likely to propagate through the thickness of the canister wall and leaks of radioactive material may occur) is essential to the performance evaluation and licensing process of the storage systems. In this paper, we explore a number of nondestructive testing techniques to detect and image SCC in austenitic stainless steel. Our attention is focused on a small rectangular sample of 1 × 2 in² with two cracks of mm-scale size. The techniques explored in this paper include nonlinear resonant ultrasound spectroscopy (NRUS) for detection, and the Linear Elastodynamic Gradient Imaging Technique (LEGIT), ultrasonic C-scan, vibrothermography, and synchrotron X-ray diffraction for imaging. Results obtained from these techniques are compared. Cracks of mm-scale size can be detected and imaged with all the techniques explored in this study.
Inverters are a leading source of hardware failures and contribute to significant energy losses at photovoltaic (PV) sites. An understanding of failure modes within inverters requires evaluation of a dataset that captures insights from multiple characterization techniques (including field diagnostics, production data analysis, and current-voltage curves). One readily available dataset that can be leveraged to support such an evaluation is maintenance records, which are used to log all site-related technician activities but vary in how their information is structured. Using machine learning, this analysis evaluated a database of 55,000 maintenance records across 800+ sites to identify inverter-related records and consistently categorize them to gain insight into common failure modes within this critical asset. Communications, ground faults, heat management systems, and insulated gate bipolar transistors emerge as the most frequently discussed inverter subsystems. Further evaluation of these failure modes identified distinct variations in failure frequencies over time and across inverter types, with communication failures occurring more frequently in early years. Increased understanding of these failure patterns can inform ongoing PV system reliability activities, including simulation analyses, spare parts inventory management, cost estimates for operations and maintenance, and development of standards for inverter testing. Advanced implementations of machine learning techniques coupled with standardization of asset labels and descriptions can extend these insights into actionable information that can support development of algorithms for condition-based maintenance, which could further reduce failures and associated energy losses at PV sites.
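To make the record-to-subsystem mapping task concrete, here is a deliberately simple rule-based toy with hypothetical keywords; the analysis described above uses learned machine-learning models, not fixed rules, and the labels and keywords below are illustrative assumptions:

```python
# Hypothetical keyword map: subsystem label -> trigger phrases.
SUBSYSTEM_KEYWORDS = {
    "communications": ["comm", "scada", "modbus", "offline"],
    "ground fault": ["ground fault", "gfdi"],
    "heat management": ["fan", "overtemp", "cooling"],
    "igbt": ["igbt", "bridge"],
}

def categorize(record):
    # Return the first subsystem whose keywords appear in the record text.
    text = record.lower()
    for label, words in SUBSYSTEM_KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "other"

records = [
    "Inverter offline, SCADA comm loss at combiner 3",
    "Replaced failed IGBT bridge in inverter 12",
    "GFDI fuse blown, cleared ground fault",
]
labels = [categorize(r) for r in records]  # one subsystem label per record
```

A learned classifier replaces the hand-written keyword lists with patterns fitted to labeled records, which is what makes the categorization consistent across sites with differently structured logs.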
Harris, Oliver C.; Lin, Yuxiao; Qi, Yue; Leung, Kevin L.; Tang, Maureen H.
At high operating voltages, metals like Mn, Ni, and Co dissolve from Li-ion cathodes, deposit at the anode, and interfere with the performance of the solid-electrolyte interphase (SEI) to cause constant Li loss. The mechanism by which these metals disrupt SEI processes at the anode remains poorly understood. Experiments from Part I of this work demonstrate that Mn, Ni, and Co all affect the electronic properties of the SEI much more than the morphology, and that Mn is the most aggressively disruptive of the three metals. In this work we determine how a proposed electrocatalytic mechanism can explain why Mn contamination is uniquely detrimental to SEI passivation. We develop a microkinetic model of the redox cycling mechanism and apply it to experiments from Part I. The results show that the thermodynamic metal reduction potential does not explain why Mn is the most active of the three metals. Instead, kinetic differences between the three metals are more likely to govern their reactivity in the SEI. Our results emphasize the importance of local coordination environment and proximity to the anode within the SEI for controlling electron transfer and resulting capacity fade.
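To illustrate the kinetic argument in miniature, the sketch below integrates a two-state redox-cycling model in which each full metal turnover irreversibly consumes Li; the rate constants are hypothetical placeholders chosen only to show how kinetic differences, not reduction thermodynamics, can make one metal far more damaging (they are not values fitted in this work):

```python
# Hypothetical rate constants (1/s); illustrative values only.
params = {"Mn": {"k_red": 1e-2, "k_ox": 1e-2},
          "Ni": {"k_red": 1e-2, "k_ox": 1e-4},
          "Co": {"k_red": 1e-3, "k_ox": 1e-4}}

def cumulative_li_loss(k_red, k_ox, m_total=1.0, t_end=1e4, dt=1.0):
    """Euler integration of a two-state redox cycle.

    m_ox + e- -> m_red at rate k_red (electron leaked through the SEI);
    m_red -> m_ox at rate k_ox (re-oxidation regenerates the cycling metal).
    Each completed turnover is counted as one electron's worth of Li loss.
    """
    m_ox, m_red, li_lost = m_total, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r_red = k_red * m_ox
        r_ox = k_ox * m_red
        m_ox += dt * (r_ox - r_red)
        m_red += dt * (r_red - r_ox)
        li_lost += dt * r_ox  # every re-oxidation event costs Li
    return li_lost

loss = {metal: cumulative_li_loss(**p) for metal, p in params.items()}
```

With balanced reduction and re-oxidation rates, the "Mn" case cycles continuously and accumulates far more Li loss than the cases where one step is slow, even though all three start from the same metal inventory.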
Ultrathin epitaxial films of ferromagnetic insulators (FMIs) with Curie temperatures near room temperature are critically needed for use in dissipationless quantum computation and spintronic devices. However, such materials are extremely rare. Here, a room-temperature FMI is achieved in ultrathin La0.9Ba0.1MnO3 films grown on SrTiO3 substrates via an interface proximity effect. Detailed scanning transmission electron microscopy images clearly demonstrate that MnO6 octahedral rotations in La0.9Ba0.1MnO3 close to the interface are strongly suppressed. As determined from in situ X-ray photoemission spectroscopy, O K-edge X-ray absorption spectroscopy, and density functional theory, the realization of the FMI state arises from a reduction of the Mn eg bandwidth caused by the quenched MnO6 octahedral rotations. The emerging FMI state in La0.9Ba0.1MnO3, together with the necessary coherent interface achieved with the perovskite substrate, makes this system highly promising for future high-performance electronic devices.
The initial product specification for the H12 Universal Cartridge Carrier (UUC) was released in October 1952, and it is the twelfth piece of H-Gear (sequentially numbered) ever developed. It is the oldest piece of H-Gear currently in use. To gain perspective on the number of H-Gear items designed since, the most recently developed and deployed H-Gear is the H1768 Inspection Stand. The UUC (commonly referred to as just the "H12") has since been renamed the H12 Adjustable Hand Truck. It was developed to support various maintenance operations for ordnance assembly and disassembly. This paper will provide evidence (where available) for the H12's current state of reliability, maintainability, and sustainability (RMA). Where documented evidence is not available, conclusions will be drawn based on its continued effective use over the past 67 years of service.
The Arroyo Seco Improvement Program is being carried out at Sandia National Laboratories, California in order to address erosion and other streambed instability issues in the Arroyo Seco as it crosses the site. The work involves both repair of existing eroded areas and habitat enhancement. This work is being carried out under the requirements of Army Corps of Engineers permit 2006-400195S and California Regional Water Quality Control Board, San Francisco Bay Region Water Quality Certification Site No. 02-01-00987.
Sandia National Laboratories worked with NERC staff to provide stakeholder guidance in responding to a May 2018 NERC alert regarding dynamic performance and modeling issues for utility-scale inverter-based resources. The NERC alert resulted from event analyses for grid disturbances that occurred in southern California in August 2016 and October 2017. Those disturbances revealed the use of momentary cessation by transmission-connected inverter-based generation: a short time period when the inverters ceased to inject current into the grid, counter to desired transmission operation. The event analyses concluded that, in many cases, the Western Interconnection system models used to determine planning and operating criteria do not reflect the actual behavior of solar plants, resulting in overly optimistic planning assessments and substandard operational responses. This technical report summarizes the gaps between the models and the actual performance observed at those times, and the guidance that Sandia and NERC provided to owners of solar PV power plants, transmission planners, transmission operators, and planning/reliability coordinators to modify existing models to reflect that actual performance.
This report describes a structure to aid in the evaluation of release mitigation strategies across a range of reactor technologies. The assessment performed for example reactor concepts utilizes previous studies of postulated accident sequences for each reactor concept. This simplified approach classifies release mitigation strategies based on a range of barriers, physical attenuation processes, and system performance. It is not, however, intended to develop quantitative estimates of radiological release magnitudes and compositions to the environment. Rather, this approach is intended to identify the characteristics of a reactor design concept's release mitigation strategies that are most important to different classes of accident scenarios. It uses a scoping methodology to provide an approximate, order-of-magnitude estimate of the radiological release to the environment and the associated off-site consequences. This scoping method is applied to different reactor concepts, considering the performance of barriers to fission product release for these concepts under sample accident scenarios. The accident scenarios and sensitivity evaluations in this report are selected to evaluate the role of different fission product barriers in ameliorating the source term to the environment and the associated off-site consequences. This report applies this structure to characterize how release mitigation measures are integrated to define overall release mitigation strategies for High Temperature Gas Reactors (HTGRs), Sodium Fast Reactors (SFRs), and liquid-fueled Molten Salt Reactors (MSRs). To support this evaluation framework, the factors defining a chain of release attenuation stages, and thus an overall mitigation strategy, must be established through mechanistic source term calculations. This has typically required the application of an integral plant analysis code such as MELCOR.
At present, there is insufficient evidence to support a priori evaluation of the effectiveness of a release mitigation strategy for advanced reactor concepts across the spectrum of events that could challenge the radiological containment function. While it is clear that these designs have significant margin to radiological release to the environment for the scenarios comprising the design basis, detailed studies have not yet been performed to assess the risk profile for these plants. Such studies would require extensive evaluation across a reasonably complete spectrum of accident scenarios that could lead to radiological release to the environment.
Multivariate multiple regression models are applied to simplified pyrotechnic igniters for the first time to understand how changes in manufactured parameters can affect the output gas-dynamic response and the timing of ignition events. The statistical modeling technique is applied to quantify the effects of a set of independent variables measured in the as-fabricated igniters on a set of responses experimentally measured from the functioned igniters. Two independent process variables were intentionally varied following a full factorial experimental design, while several other independent variables varied within their normal manufacturing variability range. The four igniter performance responses consisted of the timing of sequential events during igniter function and the visual gas-dynamic output in the form of shock-wave strength observed with high-speed schlieren imaging. Linear regression models built using the measurements taken throughout the manufacturing processes and the output variance provide insight into the critical device parameters that dominate performance.
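The statistical machinery reduces to a single least-squares solve for a full coefficient matrix, one column per response. A minimal sketch with synthetic stand-in data (the variable counts and effect sizes below are hypothetical, not the igniter measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two intentionally varied process variables plus
# three nuisance variables (columns of X), and four performance responses.
n = 48
X = rng.standard_normal((n, 5))
B_true = np.zeros((6, 4))            # includes an intercept row
B_true[0] = [1.0, 2.0, 0.5, -1.0]    # intercepts
B_true[1] = [0.8, 0.0, 0.3, 0.0]     # process variable 1 effects
B_true[2] = [0.0, -0.6, 0.0, 0.4]    # process variable 2 effects
Xd = np.column_stack([np.ones(n), X])
Y = Xd @ B_true + 0.05 * rng.standard_normal((n, 4))

# Multivariate multiple regression: one least-squares solve returns the
# whole coefficient matrix; each column is one response's regression.
B_hat = np.linalg.lstsq(Xd, Y, rcond=None)[0]
```

Rows of `B_hat` with coefficients near zero across all responses flag variables that do not drive performance, which is the kind of insight the analysis above extracts from the as-fabricated measurements.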
Structural modeling and visualization of salt caverns requires three-dimensional representations. These representations are typically produced from sonar surveys conducted by companies that then produce a report of depths, distances, and volumes. There are multiple formats that are vendor dependent, and, as technology improves, there have been changes from only horizontal surveys to inclined shots for ceilings and floors to mid-cavern inclined shots. For geomechanical modeling, leaching predictions, and cavern stability visualizations, Sandia has previously written in-house software, called SONAR8, that created a consistent geometry format from the processed sonar reports. However, the increase in the need for mid-cavern inclined surveys led to the discovery of certain limitations in that code. This report describes methods used to process the multiple different formats to handle inclined shots in a consistent and accurate manner in our modeling efforts. A set of file formats and a database schema that was developed for this work is also documented in the appendices.
This document describes the requirements for a software tool that will enable FRMAC to simulate large sets of sample result data based realistically on simulated radionuclide deposition grids from NARAC. The users of this tool would be scientists involved in exercise and drill planning or members of the simulation cell of an exercise controller team. A key requirement is that this tool must be usable, with a reasonable amount of training and job aids, by any person within the Assessment, Laboratory Analysis, or Monitoring and Sampling divisions of the FRMAC to support any level of exercise, from a small IPX to a national-level full-scale exercise. This tool should be relatively lean and stand-alone so that the user can run it in the field with limited IT resources. This document describes the desired architecture, design characteristics, order of operations, and algorithms that can be given to a software development team to assist in project scoping, costing, and, eventually, development.
This report documents the results of analysis performed to investigate the impact of inverter-based resource (IBR) response to unbalanced faults on transmission system protective relay dependability and security. Electromagnetic transient (EMT) simulations were performed to simulate IBR response to these faults using existing manufacturer-developed EMT models for four separate IBRs. The study team was composed of IBR manufacturers, relay manufacturers, transmission providers, reliability coordinators and industry consultants with experience in EMT simulation and system protection. The results indicate that under certain conditions, IBR response can result in protective relay misoperations if current protection practices, which were developed based on conventional power sources, are not adapted to the characteristics of IBRs.
Griffin, Patrick J.; Trkov, A.; Simakov, S.P.; Greenwood, L.R.; Zolotarev, K.I.; Capote, R.; Destouches, C.; Kahler, A.C.; Konno, C.; Kostal, M.; Aldama, D.L.; Chechev, V.; Majerle, M.; Malambu, E.; Ohta, M.; Pronyaev, V.G.; Yashima, H.; White, M.; Wagemans, J.; Vavtar, I.; Simeckova, E.; Radulovic, V.; Sato, S.
High-quality nuclear data is the most fundamental underpinning for all neutron metrology applications. This paper describes the release of version II of the International Reactor Dosimetry and Fusion File (IRDFF-II), which contains a consistent set of nuclear data for fission and fusion neutron metrology applications up to 60 MeV neutron energy. The library is intended to support: a) applications in research reactors; b) safety and regulatory applications in nuclear power generation in commercial fission reactors; and c) material damage studies in support of the research and development of advanced fusion concepts. The paper describes the contents of the library, documents the thorough verification process used in its preparation, and provides an extensive set of validation data gathered from a wide range of neutron benchmark fields. The new IRDFF-II library includes 119 metrology reactions, four cover-material reactions to support self-shielding corrections, five metrology metrics used by the dosimetry community, and cumulative fission product yields for seven fission products in three different neutron energy regions. In support of characterizing the measurement of the residual nuclei from the dosimetry reactions and the fission product decay modes, the present document lists the recommended decay data and particle emission energies and probabilities for 68 activation products. It also includes neutron spectral characterization data for 29 neutron benchmark fields for the validation of the library contents. An additional six reference fields were assessed (four from plutonium critical assemblies and two measured fields for thermal-neutron-induced fission on 233U and 239Pu targets) but were not used for validation due to systematic discrepancies in C/E reaction-rate values or a lack of reaction-rate experimental data.
Another ten analytical functions are included that can be useful for calculating average cross sections, average energies, thermal-spectrum average cross sections, and resonance integrals. The IRDFF-II library and its comprehensive documentation are available online at www-nds.iaea.org/IRDFF/. Evaluated cross sections can be compared with experimental data and other evaluations at www-nds.iaea.org/exfor/endf.htm. The new library is expected to become the international reference in neutron metrology for multiple applications.
This Environmental Restoration Operations (ER) Consolidated Quarterly Report (ER Quarterly Report) provides the status of ongoing corrective action activities being implemented at Sandia National Laboratories, New Mexico (SNL/NM) during the July - September 2019 reporting period. Table I-1 lists the Solid Waste Management Units (SWMUs) and Areas of Concern (AOCs) currently identified for corrective action at SNL/NM. This section of the ER Quarterly Report summarizes the work completed during this quarterly reporting period at sites undergoing corrective action. Corrective action activities were conducted during this reporting period at the three groundwater AOCs (the Burn Site Groundwater [BSG] AOC, the Technical Area-V [TA-V] Groundwater [TAVG] AOC, and the Tijeras Arroyo Groundwater [TAG] AOC). Corrective action activities are deferred at the Long Sled Track (SWMU 83), the Gun Facilities (SWMU 84), and the Short Sled Track (SWMU 240) because these three sites are active mission facilities. These three active mission sites are located in Technical Area-III. There were no SWMUs or AOCs in the corrective action complete regulatory process during this quarterly reporting period.
Each year Wind Energy Technologies Dept. 08821 submits a memo through the Sandia National Labs Review and Approval (R&A) system to facilitate the release of the Scaled Wind Farm Technology (SWiFT) Facility raw logged data. This release of data explicitly does not cover specialized instruments, or guest researcher instruments (i.e. SpiDAR, SpinnerLidar), nor processed data.
This report describes the single test conducted during Phase III of the Pipe Overpack Container (POC) test campaign, presents preliminary results, and discusses implications for the Criticality Control Overpack (CCO). The goal of this test was to determine whether aerosol surrogate material inside the Criticality Control Container (CCC) is released when the drum lid of the CCO comes off during a thirty-minute, fully engulfing fire test. As expected from POC tests conducted in Phases I and II of this campaign, the CCO drum lid is ejected about one minute after the drum is exposed to fully engulfing flames. The remaining pressure inside the drum is high enough to eject the top plywood dunnage a considerable distance from the drum. Subsequently, most of the bottom plywood dunnage supporting the CCC burns off during and after the fire. High pressure buildup inside the CCC and inside the two primary containers holding the surrogate powder also results in damage to the filter media of the CCC and to the filter-house thread attachment of the primary canisters. No discernible release of surrogate powder material was detected from the two primary containers when pre- and post-test average masses were compared. However, when the average masses are corrected to account for possible uncertainties in mass measurements, error overlap does not preclude the possibility that some surrogate powder mass may have been lost from these primary canisters. Still, the post-test condition of the secondary canisters enclosing these two primary canisters suggests it is very unlikely that this mass loss would have escaped into the CCC.
Across many industries and engineering disciplines, physical components and systems of components are designed and deployed into their environment of intended use. It is the desire of the design agency to be able to predict whether their component or system will survive its physical environment or if it will fail due to mechanical stresses. One method to determine if the component will survive the environment is to expose the component to a simulation of the environment in a laboratory. One difficulty in doing this is that the component may not have the same boundary condition in the laboratory as in the field configuration. This paper presents a novel method of quantifying the error in the modal domain that arises from the impedance difference between the laboratory test fixture and the next level of assembly in the field configuration. The error is calculated from the projection of one mode shape space onto the other, and it is expressed in terms of each mode of the field configuration. This provides insight into the effectiveness of the test fixture with respect to its ability to recreate the mode shapes of the field configuration. A case study is presented to show that the error in the modal projection between two configurations is a lower limit for the error that can be achieved by a laboratory test.
This document provides a scanned version of a 1987 SAND report that was never formally published. However, this report was referenced within the MELCOR Reference Manual and, therefore, provides historical information and technical basis for the MELCOR code. This document is being made available to permit users of the MELCOR code access to the information. The title page has been edited to prevent any confusion with regard to possible documentation identifiers, such as the SAND report number or the intended date of publication. Beyond these modifications, a cover, distribution list, and back cover are prepended and appended to the document to conform to modern SAND report style guidelines. The first four chapters of this report were updated and released under the title "Fission Product Behavior During Severe LWR Accidents: Recommendations for the MELCOR Code System. Volume I" and were made available by the U.S. NRC through the ADAMS database under accession number ML19227A327. No prior release of the remaining content of this report has occurred.
The Navajo Nation covers about 27,000 square miles in the Southwestern United States with approximately 270 sunny days a year. Therefore, the Navajo Nation has the potential to develop utility-scale solar photovoltaic (PV) energy for the Navajo people and export electricity to major cities to generate revenue. In April 2019, the Navajo Nation issued a proclamation to increase residential and utility-scale renewable energy development on the Navajo Nation. In response, this research assesses the potential for utility-scale solar energy development on the Navajo Nation using criteria such as access to roads, transmission lines, slope/terrain data, aspect/direction, and culturally sensitive sites. These datasets are applied as layers in ArcGIS to identify regions with good potential for utility-scale solar PV installations. Land availability on the Navajo Nation has been an issue for developing utility-scale solar PV, so this study proposes potential locations for solar PV and estimates how much energy these potential sites could generate. Furthermore, two coal-fired power plants, the Navajo Generating Station (NGS) and the San Juan Generating Station (SJGS), will close soon and impact the Navajo Nation's energy supply and economy. This study seeks to answer two main questions: whether utility-scale solar energy could replace the energy generated by both coal-fired power plants, and what percentage of the Navajo Nation's energy demands could be met by utility-scale solar energy development. Economic development is a major concern; therefore, this study also examines what utility-scale solar development would mean for the Navajo Nation economy. The results of this study show that the Navajo Nation has a potential PV capacity of 45,729 MW to 91,459 MW. Even with the lowest calculated capacity, utility-scale solar PV has the potential to generate more than 11 times the power of the NGS and SJGS combined.
Wind energy can provide renewable and sustainable electricity to Native American reservations, including rural homes, and power schools and businesses on reservations. It can also provide tribes with a source of income and economic development. The purpose of this paper is to determine the potential for deploying community- and utility-scale wind energy technologies on the Turtle Mountain Band of Chippewa tribal lands. Ideal areas for wind technology development were investigated based on annual wind resources, terrain, land usage, and other factors such as culturally sensitive sites. The result is a preliminary assessment of wind energy potential on Turtle Mountain lands, which can be used to justify further investigation and investment into determining the feasibility of future wind technology projects.
The Navajo Nation consists of about 55,000 residential homes spread across 27,000 square miles of trust land in the Southwest region of the United States. The Navajo Tribal Utility Authority (NTUA) reports that approximately 15,000 homes on the reservation do not have electricity due to the high costs of connecting rural homes located miles from utility distribution lines. In order to give these rural homeowners access to electricity, NTUA and other Native-owned companies are examining small-scale renewable energy systems to provide power for necessary usage such as lighting and refrigeration. The goal of this study is to evaluate the current renewable deployment efforts and provide additional considerations for photovoltaic (PV) systems that will optimize performance and improve efficiency to reduce costs. Three case studies are presented in different locations on the Navajo Nation with varying solar resources and energy load requirements. For each location, an assessment is completed that includes environmental parameters of the site-specific landscape and a system performance analysis of an off-grid residential PV system. The technical process, repeated for each location, demonstrates how the variance and uniqueness of each household can impact the system requirements after optimizations are applied. Therefore, household variabilities and differences in location must be considered. The differing results of each case study suggest that additional analysis is needed for designing small-scale PV systems, taking a home-land-family specific approach to allow for better efficiency and more flexibility for future solar innovations to be considered for overall cost reductions.
The use of the Monte Carlo N-Particle Transport Code (MCNP) to calculate detector sensitivity for Self-Powered Neutron Detectors (SPNDs) in the Annular Core Research Reactor (ACRR) could be a vital tool in the effort to optimize the design of next-generation SPNDs. Next-generation SPND designs, which consider specific materials and geometry, may provide experimenters with capabilities for advanced mixed-field dosimetry. These detectors will need to be optimized for configuration, materials, and geometry, and the ability to model and iterate must be available in order to decide on the ideal SPND design. SPNDs were modeled in MCNP with dimensions and locations closely resembling those of actual detectors used in the ACRR. Tallies were used to calculate detector sensitivity. Using metrics from a previous report, oscilloscope data from pulses were processed in a Matrix Laboratory computing environment (MATLAB) script to calculate experimental detector sensitivity. This report outlines the process by which experimental data from ACRR pulses verified results from tallies in an MCNP ACRR model. The sensitivity values from experiments and MCNP calculations agreed within one standard deviation. Parametric studies were also performed with MCNP to investigate the effects of materials and dimensions of different SPNDs.
Reactor pulse characterization at the Annular Core Research Reactor (ACRR) at Sandia Technical Area V (TA-V) is commonly done with photoconductive detectors (PCDs) and calorimeters. Each offers a mode of analyzing a digital signal, with different advantages for determining integrated dose or time-based metrology. This report outlines a method and code that takes the millions of data points from such detectors and delivers a characteristic pulse trendline through two main methods: digital signal filtration and machine learning, in particular Support Vector Machines (SVMs). Each method's endpoint is to deliver a characteristic curve for the many bucket environments of the ACRR while considering other points of interest, including delayed gamma removal for prompt dose metrics. This work draws on and extends previous work detailing the delayed gamma fraction contributions from CINDER simulations of the ACRR. Results from this project show a method to determine characteristic curves in a way that has previously been limited by data set size.
Information theory provides a mathematical foundation to measure uncertainty in belief. Belief is represented by a probability distribution that captures our understanding of an outcome's plausibility. Information measures based on Shannon's concept of entropy include realization information, Kullback-Leibler divergence, Lindley's information in experiment, cross entropy, and mutual information. We derive a general theory of information from first principles that accounts for evolving belief and recovers all of these measures. Rather than simply gauging uncertainty, information is understood in this theory to measure change in belief. We may then regard entropy as the information we expect to gain upon realization of a discrete latent random variable. This theory of information is compatible with the Bayesian paradigm in which rational belief is updated as evidence becomes available. Furthermore, this theory admits novel measures of information with well-defined properties, which we explored in both analysis and experiment. This view of information illuminates the study of machine learning by allowing us to quantify information captured by a predictive model and distinguish it from residual information contained in training data. We gain related insights regarding feature selection, anomaly detection, and novel Bayesian approaches.
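As a minimal illustration of two of the measures named above (Shannon entropy and Kullback-Leibler divergence for discrete distributions, not the paper's generalized theory itself), a short sketch:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in nats: the information we expect to
    gain upon realization of a discrete random variable."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q): information gained when
    belief is updated from q to p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A uniform belief over four outcomes carries log(4) nats of expected information.
uniform = [0.25, 0.25, 0.25, 0.25]
posterior = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))                    # log(4) ~ 1.386 nats
print(kl_divergence(posterior, uniform))   # > 0: evidence changed belief
```

In the change-in-belief reading described in the abstract, the divergence term measures the information carried by the update from the uniform prior to the posterior.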
Experiments carried out on DIII-D using a novel setup of isotopic tungsten (W) sources in the outer divertor have characterized how the W leakage from this region depends on both the exact source location and edge-localized mode (ELM) behavior. The sources are toroidally symmetric and poloidally localized to two regions: (1) the outer strike point (OSP) with natural abundance of W isotopes; and (2) the far-target with highly enriched 182W isotopes. With the use of a dual-faced collector probe (CP) in the main scrape-off layer (SOL) near the outside midplane and source-rate spectroscopy, a proxy for divertor impurity leakage is developed. Using this proxy, it is found that for the OSP W location, there is a nearly linear increase of leakage with the power across the separatrix (PSEP), which is consistent with the effect of an increased upstream ion temperature parallel gradient force in the near-SOL; trends in the pedestal density and collisionality are also seen. Conversely, it is found that for the far-target W location, leakage falls off rapidly as PSEP increases and ELM size decreases, suggesting that ELM size plays a role in the leakage from this location. Main SOL W contamination is evidenced by the measurement of large deposition asymmetries on the two opposite CP faces. These measurements are coupled with interpretive modeling showing SOL W accumulation near the separatrix furthest from both targets, driven by forces parallel to the magnetic field. This experimental setup, together with the target and upstream W measurements, provides information on the transport and leakage from different divertor W source locations. These studies help to elucidate the physics driving divertor impurity source rates and leakage, with and without ELMs, and provide better insight into the link in the chain connecting wall impurity sources to core impurity levels in magnetic fusion devices.
We invert infrasound signals for an equivalent seismoacoustic source function using different atmospheric models to produce the necessary Green's functions. The infrasound signals were produced by a series of underground chemical explosions as part of the Source Physics Experiment (SPE). In a previous study, we inverted the infrasound data using so-called predictive atmospheric models, which were based on historic, regionally scaled, publicly available weather observations interpolated onto a 3D grid. For the work presented here, we invert the same infrasound data, but using atmospheric models based on weather data collected in a time window that includes the approximate time of the explosion experiments, which we term postdictive models. We build two versions of the postdictive models for each SPE event: one based solely on the regionally scaled observations, and one based on the regionally scaled observations combined with on-site observations obtained by a weather sonde released at the time of the SPE. We then invert the observed data set three times, once for each atmospheric model type. We find that the estimated seismoacoustic source functions are relatively similar in waveform shape regardless of which atmospheric model we used to construct the Green's functions. However, we find that the amplitude of the estimated source functions is systematically dependent on the atmospheric model type: using the predictive atmospheric models to invert the data generally yields estimated source functions that are larger in amplitude than those estimated using the postdictive models.
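The inversion itself is not specified in this abstract. As a hedged sketch only, an equivalent-source estimate from an observed waveform and a Green's function can be posed as Tikhonov-regularized least squares over a discrete convolution; the function name, regularization, and toy Green's function below are illustrative assumptions, not the SPE processing chain:

```python
import numpy as np

def invert_source(observed, green, alpha=1e-6):
    """Estimate a discrete source time function s from observed data d,
    assuming d = g * s (causal convolution with Green's function g),
    by solving the Tikhonov-regularized normal equations
    (G^T G + alpha I) s = G^T d."""
    n = len(observed)
    G = np.zeros((n, n))
    for i in range(n):                # lower-triangular Toeplitz matrix of g
        for j in range(i + 1):
            if i - j < len(green):
                G[i, j] = green[i - j]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ observed)

# Toy demonstration: a decaying Green's function and an impulsive source.
g = np.array([1.0, 0.5, 0.25])
true_s = np.zeros(32)
true_s[5] = 1.0
d = np.convolve(g, true_s)[:32]
estimate = invert_source(d, g)
```

Because the amplitude of the Green's function scales the recovered source inversely, this formulation also makes plain why different atmospheric models yield systematically different source amplitudes.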
Additive manufacturing (AM) includes a diverse suite of innovative manufacturing processes for producing near-net shape components, typically from powder or wire feedstock. Reported mechanical properties of AM materials vary significantly depending on the details of the manufacturing process and the characteristics of the processing defects (namely, lack-of-fusion defects). However, an excellent combination of strength, ductility, and fracture resistance can be achieved in AM-type 304L and 316L austenitic stainless steels by minimizing processing defects. It is important to recognize that localized solidification processing during AM produces microstructures more analogous to weld microstructures than wrought microstructures. Consequently, the mechanical behavior of AM austenitic stainless steels in harsh environments can diverge from the performance of wrought materials. This report provides an overview of the fracture and fatigue response of type 304L materials from both directed energy deposition and powder bed fusion techniques. In particular, the mechanical performance of these materials is considered for high-pressure hydrogen applications by evaluating fatigue and fracture resistance after thermally precharging test specimens in high-pressure gaseous hydrogen. The mechanical behaviors are considered with respect to previous reports on hydrogen-assisted fracture of austenitic stainless steel welds and the unique characteristics of the AM microstructures. Fatigue crack growth can be relatively insensitive to processing defects, displaying behavior similar to that of wrought materials. In contrast, fracture resistance of dense AM austenitic stainless steel is more consistent with weld metal than with compositionally similar wrought materials. Hydrogen effects in the AM materials generally are more severe than in wrought materials but are comparable to measurements on welded austenitic stainless steels in hydrogen environments.
Although hydrogen-assisted fracture manifests differently in welded and AM austenitic stainless steel, the fracture process appears to have a common origin in the compositional microsegregation intrinsic to solidification processes.
This paper presents an in-depth review of ongoing experimental research efforts to fundamentally understand the strong near-field enhancement of radiative heat transfer and make use of the underlying physics for various novel applications. Compared to theoretical studies on near-field radiative heat transfer (NFRHT), its experimental demonstration was not explored as much until recently due to technical challenges in precision gap control and heat transfer measurement. However, recent advances in micro-/nanofabrication and nanoscale instrumentation/control techniques, as well as unprecedented growth in materials science and engineering, have created remarkable opportunities to overcome the existing challenges in the measurement and engineering of NFRHT. Beginning with the pioneering works of the 1960s, this paper tracks the past and current experimental efforts in NFRHT in three different configurations (i.e., sphere-plane, plane-plane, and tip-plane). In addition, remarks on how to address the remaining challenges in experimental NFRHT research are briefly discussed.
Ohta, Taisuke; Berg, Morgann; Liu, Fangze; Smith, Sean; Copeland, R.G.; Chan, Calvin K.; Mohite, Aditya D.; Beechem, Thomas E.
Imaging of fabricated nanostructures or nanomaterials covered by dielectrics is highly sought after for diagnostics of optoelectronics components. We show imaging of atomically thin MoS2 flakes grown on SiO2-covered Si substrates and buried beneath HfO2 overlayers up to 120 nm in thickness using photoemission electron microscopy with deep-UV photoexcitation. Comparison of photoemission yield (PEY) to modeled optical absorption evinced the formation of optical standing waves in the dielectric stacks (i.e., cavity resonances of HfO2 and SiO2 layers on Si). The presence of atomically thin MoS2 flakes modifies the optical properties of the dielectric stack locally. Accordingly, the cavity resonance condition varies between the sample locations over buried MoS2 and surrounding areas, resulting in image contrast with submicron lateral resolution. This subsurface sensitivity underscores the role of optical effects in photoemission imaging with low-energy photons. This approach can be extended to nondestructive imaging of buried interfaces and subsurface features needed for analysis of microelectronic circuits and nanomaterial integration into optoelectronic devices.
Melia, Michael A.; Percival, Stephen J.; Qin, Shuang; Barrick, Erin; Spoerke, Erik; Grunlan, Jaime; Schindelholz, Eric J.
In this work, the influence of clay platelet size on the corrosion barrier performance of highly aligned polymer clay nanocomposite (PCN) thin films was examined. Films of alternating branched polyethylenimine (PEI) and either laponite (LAP), montmorillonite (MMT), or vermiculite (VMT) clay platelets were assembled on mild steel plates by layer-by-layer (LbL) deposition to obtain 20-bilayer (BL) films, which were cross-linked using glutaraldehyde after deposition. The clay platelets were chosen based on their aspect ratios, approximately 30:1, 400:1, and 2000:1, respectively. Electrochemical impedance spectroscopy of the coated steel plates during immersion showed that corrosion rates and coating permeability followed LAP > MMT > VMT for up to 7 days of exposure in 0.6 M NaCl. The PEI/VMT films, ~250 nm thick, slowed corrosion by a factor of >1000 compared to bare steel. The results support the premise that high aspect ratio clay platelets can improve the corrosion barrier efficacy of LbL PCN films by decreasing film permeability, and they provide exceptional protection to steel in saline environments compared to other thin multilayer coatings and pretreatments.
This report is a condensed version of previous reports identifying technical gaps that, if addressed, could be used to ensure the continued safe storage of SNF for extended periods and support licensing activities. This report includes updated gap priority assessments because the previous gap priorities were based on R&D performed through 2017. Much important work has been done since 2017, which requires a change in a few of the priority rankings to better focus the near-term R&D program. Background material, regulatory positions, operational and inventory status, and prioritization schemes are discussed in detail in Hanson et al. (2012) and Hanson and Alsaed (2019) and are not repeated in this report. One exception is an overview of the prioritization criteria, included for reference to give the reader an appreciation of the framework for prioritizing the identified gaps. A complete discussion of the prioritization scheme is provided in Hanson and Alsaed (2019).
Oxidative decomposition of organic-solvent-based liquid electrolytes at cathode material interfaces has been identified as the main reason for rapid capacity fade in high-voltage lithium ion batteries. The evolution of "cathode electrolyte interphase" (CEI) films, partly or completely consisting of electrolyte decomposition products, has also recently been demonstrated to correlate with battery cycling behavior at high potentials. Using density functional theory calculations, the hybrid PBE0 functional, and the (001) surfaces of spinel oxides as models, we examine these two interrelated processes. Consistent with previous calculations, ethylene carbonate (EC) solvent molecules are predicted to be readily oxidized on the LixMn2O4 (001) surface at modest operational voltages, forming adsorbed organic fragments. Further oxidative decomposition of such CEI fragments to release CO2 gas is however predicted to require higher voltages consistent with LixNi0.5Mn1.5O4 (LNMO) at smaller x values. We argue that multistep reactions, involving first formation of CEI films and then further oxidization of CEI at higher potentials, are most relevant to capacity fade. Mechanisms associated with dissolution or oxidation of native Li2CO3 films, which are removed before the electrolyte is in contact with oxide surfaces, are also explored.
The data from the multi-modal transportation test conducted in 2017 demonstrated that the inputs from the shock events during all transport modes (truck, rail, and ship) were amplified from the cask to the spent commercial nuclear fuel surrogate assemblies. These data do not support the common assumption that the cask content experiences the same accelerations as the cask itself. This was one of the motivations for conducting 30 cm drop tests. The goal of the 30 cm drop test is to measure accelerations and strains on the surrogate spent nuclear fuel assembly and to determine whether the fuel rods can maintain their integrity inside a transportation cask when dropped from a height of 30 cm. The 30 cm drop is the remaining NRC normal-conditions-of-transport regulatory requirement (10 CFR 71.71) for which there are no data on the actual surrogate fuel. Because the full-scale cask and impact limiters were not available (and their cost was prohibitive), it was proposed to achieve this goal by conducting three separate tests. This report describes the first two tests: the 30 cm drop test of the 1/3-scale cask (conducted in December 2018) and the 30 cm drop of the full-scale dummy assembly (conducted in June 2019). The dummy assembly represents the mass of a real spent nuclear fuel assembly. The third test (to be conducted in the spring of 2020) will be the 30 cm drop of the full-scale surrogate assembly. The surrogate assembly represents a real full-scale assembly in physical, material, and mechanical characteristics, as well as in mass.
A tethered-balloon system (TBS) has been developed and is being operated by Sandia National Laboratories (SNL) on behalf of the U.S. Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) User Facility in order to collect in situ atmospheric measurements within mixed-phase Arctic clouds. Periodic tethered-balloon flights have been conducted since 2015 within restricted airspace at ARM's Advanced Mobile Facility 3 (AMF3) in Oliktok Point, Alaska, as part of the AALCO (Aerial Assessment of Liquid in Clouds at Oliktok), ERASMUS (Evaluation of Routine Atmospheric Sounding Measurements using Unmanned Systems), and POPEYE (Profiling at Oliktok Point to Enhance YOPP Experiments) field campaigns. The tethered-balloon system uses helium-filled 34 m3 helikites and 79 and 104 m3 aerostats to suspend instrumentation that is used to measure aerosol particle size distributions, temperature, horizontal wind, pressure, relative humidity, turbulence, and cloud particle properties and to calibrate ground-based remote sensing instruments.
Supercooled liquid water content (SLWC) sondes using the vibrating-wire principle, developed by Anasphere Inc., were operated at Oliktok Point at multiple altitudes on the TBS within mixed-phase clouds for over 200 h. Sonde-collected SLWC data were compared with liquid water content derived from a microwave radiometer, Ka-band ARM zenith radar, and ceilometer at the AMF3, as well as liquid water content derived from AMF3 radiosonde flights. The in situ data collected by the Anasphere sensors were also compared with data collected simultaneously by an alternative SLWC sensor developed at the University of Reading, UK; both vibrating-wire instruments were typically observed to shed their ice quickly upon exiting the cloud or reaching maximum ice loading. Distributed temperature measurements made using fiber optics along the tethered balloons were also compared with AMF3 radiosonde temperature measurements. Combined, the results indicate that TBS-distributed temperature sensing and supercooled liquid water measurements are in reasonably good agreement with remote sensing and radiosonde-based measurements of both properties. From these measurements and sensor evaluations, tethered-balloon flights are shown to offer an effective method of collecting data to inform and constrain numerical models, calibrate and validate remote sensing instruments, and characterize the flight environment of unmanned aircraft, circumventing the difficulties of in-cloud unmanned aircraft flights such as limited flight time and in-flight icing.
Here, we propose a dislocation adsorption-based mechanism for void growth in metals, wherein a void grows as dislocations from the bulk annihilate at its surface. The basic process is governed by glide and cross-slip of dislocations at the surface of a void. Using molecular dynamics simulations we show that when dislocations are present around a void, growth occurs more quickly and at much lower stresses than when the crystal is initially dislocation-free. Finally, we show that adsorption-mediated growth predicts an exponential dependence on the hydrostatic stress, consistent with the well-known Rice-Tracey equation.
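For reference, the Rice-Tracey relation cited above predicts a void growth increment that depends exponentially on the ratio of mean (hydrostatic) stress to equivalent stress. A minimal sketch of that classical relation follows; the prefactor 0.283 is the standard Rice-Tracey value and is used here purely for illustration, not as a fit to the simulations in the abstract:

```python
import math

def rice_tracey_growth_rate(sigma_mean, sigma_eq, d_eps=1.0, prefactor=0.283):
    """Rice-Tracey void growth increment per unit equivalent plastic strain:
    dR / R = C * exp(1.5 * sigma_m / sigma_eq) * d_eps, showing the
    exponential dependence on hydrostatic stress."""
    return prefactor * math.exp(1.5 * sigma_mean / sigma_eq) * d_eps

# Growth accelerates sharply as stress triaxiality sigma_m / sigma_eq rises.
rates = [rice_tracey_growth_rate(t, 1.0) for t in (0.5, 1.0, 2.0)]
print(rates)
```

The exponential stress dependence reported for the adsorption-mediated mechanism is consistent with this functional form.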
The Waste Isolation Pilot Plant (WIPP) is a geologic repository for defense-related nuclear waste. If left undisturbed, the virtually impermeable rock salt surrounding the repository will isolate the nuclear waste from the biosphere. If humans accidentally intrude into the repository in the future, then the likelihood of a radionuclide release to the biosphere will depend significantly on the porosity and permeability of the repository itself. Room ceilings and walls at the WIPP tend to collapse over time, causing rubble piles to form on the floors of empty rooms. The surrounding rock formation will gradually compact these rubble piles until they eventually become solid salt, but the length of time for a rubble pile to reach a certain porosity and permeability is unknown. This report details the first efforts to build models to predict the porosity and permeability evolution of an empty room as it closes. Conventional geomechanical numerical methods would struggle to model empty room collapse and rubble pile consolidation, so three different meshless methods were assessed: the Immersed Isogeometric Analysis Meshfree method, the Reproducing Kernel Particle Method (RKPM), and the Conformal Reproducing Kernel method. First, the meshless methods and the finite element method each simulated gradual room closure, without ceiling or wall collapse. All three methods produced equivalent room closure predictions with comparable computational speed. Second, the Immersed Isogeometric Analysis Meshfree method and RKPM simulated two-dimensional empty room collapse and rubble pile consolidation. Both methods successfully simulated large viscoplastic deformations, fracture, and rubble pile rearrangement to produce qualitatively realistic results. In addition to geomechanical simulations, the flow channels in damaged salt and crushed salt were measured using micro-computed tomography and used as input to a computational fluid dynamics simulation to predict the salt's permeability.
Although room for improvement exists, the current simulation approaches appear promising.
Gas ingested into the sac of a fuel injector after the injector needle valve closes is known to have crucial impacts on initial spray formation and plume growth in the following injection cycle. Yet little research has been attempted to understand the fate of sac gases during the pressure expansion and compression typical of an engine. This study investigated cavitation and bubble processes in the sac, including the effect of chamber pressure decrease and increase consistent with an engine cycle. A single axial-hole transparent nozzle based on the Engine Combustion Network (ECN) Spray D nozzle geometry was mounted in a vessel filled with nitrogen, and the nitrogen gas pressure was cycled after the end of injection. Interior nozzle phenomena were visualized by high-speed long-distance microscopy with nanosecond pulsed LED back-illumination. Experimental results showed that the volume of gas in the sac after the needle closes depends upon the vessel gas pressure. Higher back pressure results in less cavitation and a smaller volume of non-condensable gas in the sac. But a pressure decrease mimicking the expansion stroke causes the gas within the sac to expand significantly, proportional to the pressure decrease, while also evacuating liquid in front of the bubble. The volume of gas in the sac increases during the expansion cycle due both to isothermal expansion and to desorption of dissolved gas inherent in the fuel. During the compression cycle, the volume of bubbles decreases and additional non-condensable ambient gas is ingested into the sac. As the liquid fuel is nearly incompressible, the combined volume of liquid and gas remains essentially constant during compression.
At the molecular level, resonant coupling of infrared radiation with oscillations of the electric dipole moment determines the absorption cross section, σ. The parameter σ relates the bond density to the total integrated absorption. In this work, σ was measured for the Si–N asymmetric stretch mode in SiNx thin films of varying composition and thickness. Thin films were deposited by low-pressure chemical vapor deposition at 850 °C from mixtures of dichlorosilane and ammonia. σ for each film was determined from Fourier transform infrared spectroscopy and ellipsometric measurements. Increasing the silicon content from 0% to 25% volume fraction amorphous silicon led to increased optical absorption and a corresponding systematic increase in σ from 4.77 × 10⁻²⁰ to 6.95 × 10⁻²⁰ cm², which is consistent with literature values. The authors believe that this trend is related to charge-transfer-induced structural changes in the basal SiNx tetrahedron as the volume fraction of amorphous silicon increases. Furthermore, experimental σ values were used to calculate the effective dipole oscillating charge, q, for four films of varying composition. The authors find that q increases with increasing amorphous silicon content, indicating that compositional factors contribute to modulation of the Si–N dipole moment. Additionally, in the composition range investigated, the authors found that σ agrees favorably with trends observed in films deposited by plasma-enhanced chemical vapor deposition.
The Chemkin-Pro Advanced Programming Interface (API) was used to implement surface-kinetics user-routines that expand current aerosol dynamics models. Phase-change mechanisms were expanded to include homogeneous nucleation in supersaturated environments and particle-size-dependent vapor condensation and evaporation. Homogeneous nucleation of water droplets was modeled with Classical Nucleation Theory (CNT) and a modified form of nucleation theory published by Dillmann and Meier. The Chemkin-Pro homogeneous nucleation module developed in this work was validated against published data for nucleation fluxes at varying pressures, temperatures, and vapor concentrations. A newly released feature in Chemkin-Pro enabled particle-size-dependent surface reaction rates. A Chemkin-Pro vapor condensation and evaporation module was written and verified against the formulation published in Hinds. Lastly, Chemkin-Pro results for coagulation in the transition regime were verified against the semi-implicit method developed by Jacobson. Good performance was observed for all three Chemkin-Pro modules. This work illustrates the utility of the Chemkin-Pro API and the flexibility with which models can be developed using surface-kinetics user-routines. It also illustrates that Chemkin-Pro can be extended to include more physically representative aerosol dynamics processes in which rates are defined from physical and chemical parameters rather than Arrhenius expressions. The methods and modules developed in this work can be applied to industrial problems such as material synthesis (e.g., powder production), processes involving phase change such as heat exchangers, as well as more fundamental scientific processes such as cloud physics.
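The CNT rate referenced above has a standard textbook form, J = K·exp(−ΔG*/k_BT), with a barrier set by surface tension and supersaturation. The following is a minimal self-contained sketch of that form, not the Chemkin-Pro user-routine itself; the water property values used as defaults are illustrative assumptions:

```python
import math

def cnt_nucleation_rate(S, T, sigma=0.072, v_m=2.99e-29, p_sat=3.17e3):
    """Classical Nucleation Theory rate for water droplets (illustrative).

    S: supersaturation ratio (>1), T: temperature [K],
    sigma: surface tension [N/m], v_m: molecular volume [m^3],
    p_sat: saturation vapor pressure at T [Pa] (assumed values).
    Returns nucleation flux J [droplets / (m^3 s)].
    """
    k_B = 1.380649e-23          # Boltzmann constant [J/K]
    m = 2.99e-26                # mass of one water molecule [kg]
    n1 = S * p_sat / (k_B * T)  # monomer number density [1/m^3]
    # Free-energy barrier for forming a critical cluster
    dG = 16.0 * math.pi * sigma**3 * v_m**2 / (3.0 * (k_B * T * math.log(S))**2)
    # Standard CNT kinetic prefactor
    prefactor = n1**2 * v_m * math.sqrt(2.0 * sigma / (math.pi * m))
    return prefactor * math.exp(-dG / (k_B * T))
```

The strong sensitivity of J to supersaturation comes from the 1/ln²S dependence inside the exponential, which is why validation against measured nucleation fluxes over ranges of pressure, temperature, and vapor concentration is a meaningful test.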
High-temperature optical analysis of three different InGaN/GaN multiple quantum well (MQW) light-emitting diode (LED) structures (peak wavelength λp = 448, 467, and 515 nm) is conducted for possible integration as an optocoupler emitter in high-density power electronic modules. The commercially available LEDs, used primarily in display (λp = 467 and 515 nm) and lighting (λp = 448 nm) applications, are studied and compared to evaluate whether they can satisfy the light output requirements of optocouplers at high temperatures. The temperature- and intensity-dependent electroluminescence (T-IDEL) measurement technique is used to study the internal quantum efficiency (IQE) of the LEDs. All three LEDs exhibit above 70% IQE at 500 K and stable operation at 800 K without flickering or failure. At 800 K, a promising IQE of above 40% is observed for the blue-for-display (BD) (λp = 467 nm) and green-for-display (GD) (λp = 515 nm) samples. The blue-for-light (BL) (λp = 448 nm) sample shows 24% IQE at 800 K.
The realization of metamaterials or metasurfaces with simultaneous electric and magnetic response and low loss is generally very difficult at optical frequencies. Traditional approaches using nanoresonators made of noble metals, while suitable for the microwave and terahertz regimes, fail at frequencies above the near-infrared due to prohibitively high dissipative losses and the breakdown of scaling resulting from the electron mass contribution (kinetic inductance) to the effective reactance of these plasmonic meta-atoms. The alternative route based on Mie resonances of high-index dielectric particles normally leads to structure sizes that tend to break the effective-medium approximation. Here, we propose a subwavelength dark-state-based metasurface, which enables configurable simultaneous electric and magnetic responses with low loss. Proof-of-concept metasurface samples, specifically designed around telecommunication wavelengths (i.e., λ ≈ 1.5 μm), were fabricated and investigated experimentally to validate our theoretical concept. Because the electromagnetic field energy is localized and stored predominantly inside a dark resonant dielectric bound state, the proposed metasurfaces can overcome the loss issue associated with plasmonic resonators made of noble metals and enable scaling to very high operating frequencies without suffering from saturation of the resonance frequency due to the kinetic inductance of the electrons.
Fies, Whitney A.; First, Jeremy T.; Dugger, Jason W.; Doucet, Mathieu; Browning, James F.; Webb, Lauren J.
Establishing how water, or the absence of water, affects the structure, dynamics, and function of proteins in contact with inorganic surfaces is critical to developing successful protein immobilization strategies. In this work, the quantity of water hydrating a monolayer of helical peptides covalently attached to self-assembled monolayers (SAMs) of alkyl thiols on Au was measured using neutron reflectometry (NR). The peptide sequence was composed of repeating LLKK units in which the leucines were aligned to face the SAM. When immersed in water, NR measured 2.7 ± 0.9 water molecules per thiol in the SAM layer and between 75 ± 13 and 111 ± 13 waters around each peptide. The quantity of water in the SAM was nearly twice that measured prior to peptide functionalization, suggesting that the peptide disrupted the structure of the SAM. To identify the location of water molecules around the peptide, we compared our NR data to previously published molecular dynamics simulations of the same peptide on a hydrophobic SAM in water, revealing that 49 ± 5 of 95 ± 8 total nearby water molecules were directly hydrogen-bound to the peptide. Finally, we show that immersing the peptide in water compressed its structure into the SAM surface. Together, these results demonstrate that there is sufficient water to fully hydrate a surface-bound peptide even at hydrophobic interfaces. Given the critical role that water plays in biomolecular structure and function, these results are expected to be informative for a broad array of applications involving proteins at the bio/abio interface.
In this work, we characterized calcium carbonate (CaCO3) precipitates over time caused by reaction-driven precipitation and dissolution in a micromodel. Reactive solutions were continuously injected through two separate inlets, resulting in transverse-mixing-induced precipitation during the precipitation phase. Subsequently, a dissolution phase was conducted by injecting clean water (pH = 4). The evolution of precipitates was imaged in two and three dimensions (2-D, 3-D) at selected times using optical and confocal microscopy. With estimated reactive surface areas, effective precipitation and dissolution rates can be quantitatively compared to results from previous work. Our comparison indicates that the spatial and temporal variations of effective reactive areas can be evaluated more mechanistically in the microfluidic system using only knowledge of the local hydrodynamics and polymorphs, together with comprehensive image analysis. Our analysis clearly highlights the feedback mechanisms between reactions and hydrodynamics. Pore-scale modeling results during the dissolution phase were used to account for experimental observations of dissolved CaCO3 plumes associated with dissolution of the unstable phase of CaCO3. Mineral precipitation and dissolution induce complex, dynamic pore structures, thereby impacting pore-scale fluid dynamics. Pore-scale analysis of the evolution of precipitates can reveal the significance of chemical and pore-structural controls on reaction and fluid migration.
A series of titanium alkoxides ([Ti(OR)4] (OR = OCH(CH3)2 (OPri), OC(CH3)3 (OBut), and OCH2C(CH3)3 (ONep)) were modified with a set of substituted hydroxyl-benzaldehydes [HO-BzA-Lx: x = 1, 2-hydroxybenzaldehyde (L = H), 2-hydroxy-3-methoxybenzaldehyde (OMe-3), 5-bromo-2-hydroxybenzaldehyde (Br-5), 2-hydroxy-5-nitrobenzaldehyde (NO2-5); x = 2, 3,5-di-tert-butyl-2-hydroxybenzaldehyde (But-3,5), 2-hydroxy-3,5-diiodobenzaldehyde (I-3,5)] in pyridine (py). Instead of the expected simple substitution, each of the HO-BzA-Lx modifiers were reduced to their respective diol [(py)(OR)2Ti(κ2(O,μ-O')(OC6H4–x(CH2O)-2)(L)x] (OR = OPri, x = 1, L = H (1a), OMe-3 (2a), Br-5 (3a·py), NO2-5 (4a·4py); x = 2, But-3,5 (5a), I-3,5 (6a), ONep; x = 1, L = H (1b), OMe-3 (2b), Br-5 (3b·py), NO2-5 (4b); x = 2, But-3,5 (5b), I-3,5 (6b·py)), as identified by single crystal X-ray studies. The 1H NMR spectral data were complex at room temperature but simplified at high temperatures (70 °C). Diffusion ordered spectroscopy (DOSY) NMR experiments indicated that 2a maintained the dinuclear structure in a solution independent of the temperature, whereas 2b appears to be monomeric over the same temperature range. On the basis of additional NMR studies, the mechanism of the reduction of the HO-BzA-Lx to the dioxide ligand was thought to occur by a Meerwein–Pondorf–Verley (MPV) mechanism. The structures of 1a–6b appear to be the intermediate dioxide products of the MPV reduction, which became “trapped” by the Lewis basic solvate.
We demonstrate, on a scramjet combustion problem, a constrained probabilistic learning approach that augments physics-based datasets with realizations that adhere to underlying constraints and scatter. The constraints are captured and delineated through diffusion maps, while the scatter is captured and sampled through a projected stochastic differential equation. The objective function and constraints of the optimization problem are then efficiently framed as non-parametric conditional expectations. Different spatial resolutions of a large-eddy simulation filter are used to explore the robustness of the model to the training dataset and to gain insight into the significance of spatial resolution on optimal design.
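A basic diffusion-maps embedding of the kind used above to capture the constraint manifold can be sketched as follows. This is a generic textbook construction, not the authors' implementation; the Gaussian kernel bandwidth `eps` is an assumed free parameter:

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Diffusion-maps embedding of points X with shape (n_samples, n_features)."""
    # Pairwise squared distances and Gaussian kernel
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalize the kernel into a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecompose; the leading eigenpair (eigenvalue 1, constant vector)
    # is trivial and carries no geometric information
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: nontrivial eigenvectors scaled by eigenvalues
    return vals[1:n_components + 1] * vecs[:, 1:n_components + 1]
```

In the constrained-learning setting, new realizations are then sampled by a stochastic differential equation projected onto the span of these diffusion coordinates, so that samples stay consistent with the structure of the training data.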
Since the landmark development of the Scherrer method a century ago, multiple generations of width methods for X-ray diffraction have been developed to non-invasively and rapidly characterize the property-controlling sizes of nanoparticles, nanowires, and nanocrystalline materials. However, the predictive power of this approach suffers from inconsistencies among the numerous methods and from misinterpretations of their results. Therefore, we systematically evaluated twenty-two width methods on a representative nanomaterial subjected to thermal and mechanical loads. To bypass experimental complications and enable a one-to-one comparison between ground truths and the results of width methods, we produced virtual X-ray diffractograms from atomistic simulations. These simulations realistically captured the trends that we observed in experimental synchrotron diffraction. To comprehensively survey the width methods and to guide future investigations, we introduced a consistent, descriptive nomenclature. Alarmingly, our results demonstrated that popular width methods, especially the Williamson-Hall methods, can produce dramatically incorrect trends. We also showed that the simple Scherrer methods and the rare Energy methods can well characterize unloaded and loaded states, respectively. Overall, this work improves the utility of X-ray diffraction for experimentally evaluating a variety of nanomaterials by guiding the selection and interpretation of width methods.
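For reference, the classic Scherrer method estimates crystallite size from diffraction peak broadening as D = Kλ/(β cos θ). A minimal sketch of that relation follows; the Cu Kα wavelength and shape factor K = 0.9 are conventional defaults, not values taken from this study:

```python
import math

def scherrer_size(beta_deg, theta_deg, wavelength=1.5406e-10, K=0.9):
    """Scherrer crystallite size D = K * lambda / (beta * cos(theta)).

    beta_deg: peak full width at half maximum [degrees],
    theta_deg: Bragg angle [degrees],
    wavelength: X-ray wavelength [m] (default: Cu K-alpha),
    K: dimensionless shape factor (commonly ~0.9).
    Returns an estimated crystallite size in meters.
    """
    beta = math.radians(beta_deg)     # FWHM in radians
    theta = math.radians(theta_deg)   # Bragg angle in radians
    return K * wavelength / (beta * math.cos(theta))
```

For example, a 0.5° FWHM peak at θ = 20° gives a size on the order of 17 nm. The survey above is a reminder that such a single-peak estimate mixes size and strain broadening, which is exactly where the more elaborate width methods can go wrong.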
Hermetic microcircuit packaging was the dominant method of protecting semiconductor devices in the 1960s and 1970s. Although it has lost most market sectors to plastic-encapsulated microelectronics over the last few decades, hermetic packaging remains the preferred method of protecting semiconductor devices for critical applications in the military, space, and medical fields, where components and systems are required to serve for several decades. MEMS devices impose additional packaging challenges by requiring specific internal cavity pressures to function properly or to deliver the needed quality (Q) factors. In MEMS multichip modules, internal pressure requirement conflicts arise when different MEMS devices require different internal gases and pressures. The authors developed a closed-form equation to model pressure changes of hermetic enclosures due to gas ingression. This article expands the authors' mathematical model to calculate the gas pressure of a MEMS multichip module package as well as those of the MEMS devices inside the multichip module package. These equations are not only capable of calculating service lifetimes of MEMS devices and multichip modules but can also help develop MEMS device packaging strategies to extend the service life of MEMS multichip modules.
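Closed-form gas-ingression models of the kind referenced above are commonly written as a first-order exponential approach of the internal partial pressure toward the external partial pressure, with a time constant set by the leak rate and cavity volume. This is a hedged sketch of that standard form, not the authors' exact equations; parameter names are illustrative:

```python
import math

def internal_pressure(t, P_ext, L, V, P0=1.0, P_init=0.0):
    """Internal partial pressure of an ambient gas leaking into a sealed cavity.

    Assumed first-order model: dP/dt = (L / (V * P0)) * (P_ext - P).
    t: elapsed time [s], P_ext: external partial pressure [atm],
    L: standard leak rate [atm*cm^3/s], V: cavity volume [cm^3],
    P0: reference pressure (1 atm), P_init: initial internal pressure [atm].
    """
    tau = V * P0 / L                               # time constant [s]
    return P_ext + (P_init - P_ext) * math.exp(-t / tau)
```

Service-lifetime estimates follow by inverting this relation for the time at which the internal pressure of a critical gas crosses a device's allowable limit; smaller cavities (small V) reach that limit sooner for the same leak rate, which is why multichip-module cavities and individual MEMS device cavities must be modeled separately.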
Hypersonic Vehicle (HV) development has been pursued since the late 1950s. These vehicles could significantly cut the cost of accessing space, reduce flight time to anywhere on the planet to two to three hours, and serve as weapons that would be extremely difficult to intercept. Although considerable progress has been made, hypersonic flight remains in the development and testing phase.
The failure of 304L laser welds is of interest to system and component designers due to nuclear safety requirements for abnormal environments. Accurately modeling laser weld behavior in full system and component models has proven especially challenging due to three factors: the large variability observed in laser weld characterization tests; the difficulty of isolating the weld material for characterization and of modeling the weld material behavior; and the disparate scales associated with modeling laser welds in large systems. Recent work has shown that meso-scale geometric features of laser welds, such as pores and weld root tortuosity, are critical to accurately predicting the structural performance of welds. The challenge with modeling these welds is that the geometric features driving their structural performance are generally on the order of tens to hundreds of microns, yet they affect the responses of interest in systems and components on the order of centimeters to meters.
Cytoskeletal filaments and motor proteins are critical components in the transport and reorganization of membrane-based organelles in eukaryotic cells. Previous studies have recapitulated the microtubule-kinesin transport system in vitro to dynamically assemble large-scale nanotube networks from multilamellar liposomes and polymersomes. Moving toward more biologically relevant systems, the present work examines whether lipid nanotube (LNT) networks can be generated from giant unilamellar vesicles (GUVs) and subsequently characterizes how the lipid composition may be tuned to alter the dynamics, structure, and fluidity of networks. Here, we describe a two-step process in which microtubule motility (i) drives the transport and aggregation of GUVs to form structures with a decreased energy barrier for LNT formation and (ii) extrudes LNTs without destroying parent GUVs, allowing for the formation of large LNT networks. We further show that the lipid composition of the GUV influences formation and morphology of the extruded LNTs and associated networks. For example, LNTs formed from phase-separated GUVs (e.g., liquid-solid phase-separated and coexisting liquid-ordered and liquid-disordered phase-separated) display morphologies related to the specific phase behavior reflective of the parent GUVs. Overall, the ability to form nanotubes from compositionally complex vesicles opens the door to generating lipid networks that more closely mimic the structure and function of those found in cellular systems.
Soft matter has historically been an unlikely candidate for investigation by electron microscopy techniques due to damage by the electron beam as well as inherent instability under a high-vacuum environment. Characterization of soft matter has often relied on ensemble scattering techniques. The recent development of cryogenic transmission electron microscopy (cryo-TEM) provides the soft matter community with an exciting opportunity to probe the structure of soft materials in real space. Cryo-TEM reduces beam damage and allows for characterization in a native, frozen-hydrated state, providing a direct visual representation of soft structure. This article reviews cryo-TEM in soft materials characterization and illustrates how it has provided unique insights not possible with traditional ensemble techniques. Soft matter systems that have benefited from the use of cryo-TEM include biologically based “soft” nanoparticles (e.g., viruses and conjugates), synthetic polymers, supramolecular materials, as well as the organic–inorganic interface of colloidal nanoparticles. We conclude that while many challenges remain, such as combining structural and chemical analyses, the opportunity for soft matter research to leverage newly developed cryo-TEM techniques continues to excite.
Bennett, Nicole; Cuneo, Michael E.; Yu, Edmund; Jennings, Christopher A.; Laity, George; Hutsel, Brian T.; Peterson, Kyle; Welch, Dale R.; Rose, David V.; Hess, Mark H.; Moore, James M.
A challenge for the TW-class accelerators driving Z-pinch experiments, such as Sandia National Laboratories’ Z machine, is to efficiently couple power from multiple storage banks into a single multi-MA transmission line. The physical processes that lead to current loss are identified in new large-scale, multidimensional simulations of the Z machine. Kinetic models follow the range of physics occurring during a pulse, from vacuum pulse propagation to charged-particle emission and magnetically insulated current flow to electrode plasma expansion. Simulations demonstrate that current is diverted from the load through a combination of standard transport (uninsulated charged-particle flows) and anomalous transport. Standard transport occurs in regions where the electrode current density is a few 10⁴–10⁵ A/cm², and current is diverted from the load via transport without magnetic insulation. In regions with electrode current density >10⁶ A/cm², electrode surface plasmas develop velocity-shear instabilities and a Hall-field-related transport which scales with electron density and may, therefore, lead to increased current loss.
Impacts of silicon, carbon, and oxygen interfacial impurities on the performance of high-voltage vertical GaN-based p–n diodes are investigated. The results indicate that moderate levels (≈5 × 10¹⁷ cm⁻³) of all interfacial impurities lead to reverse blocking voltages (Vb) greater than 200 V at 1 μA cm⁻² and forward leakage of less than 1 μA cm⁻² at 1.7 V. At higher interfacial impurity levels, the performance of the diodes becomes compromised. Herein, it is concluded that each impurity has a different effect on device performance. For example, a high carbon spike at the junction correlates with high off-state leakage current in forward bias (≈100× higher forward leakage current compared with a reference diode), whereas the reverse bias behavior is not severely affected (>200 V at 1 μA cm⁻²). High silicon and oxygen spikes at the junction strongly affect the reverse leakage currents (≈1–10 V at 1 μA cm⁻²). Regrown diodes with impurity (silicon, oxygen, and carbon) levels below 5 × 10¹⁷ cm⁻³ show forward and reverse results comparable to those of the reference continuously grown diodes. The effect of the regrowth interface position relative to the metallurgical junction on diode performance is also discussed.
X.509 certificate revocation defends against man-in-the-middle attacks involving a compromised certificate. Certificate revocation strategies face scalability, effectiveness, and deployment challenges as HTTPS adoption rates have soared. We propose Certificate Revocation Table (CRT), a new revocation strategy that is competitive with or exceeds alternative state-of-the-art solutions in effectiveness, efficiency, certificate growth scalability, mass revocation event scalability, revocation timeliness, privacy, and deployment requirements. The CRT design assumes that locality of reference applies to the certificates accessed by an organization. The CRT periodically checks the revocation status of X.509 certificates recently used by the organization. Pre-checking the revocation status of certificates the clients are likely to use avoids the security problems of on-demand certificate revocation checking. To validate both the effectiveness and efficiency of our approach, we simulated a CRT using 60 days of TLS traffic logs from Brigham Young University to measure the effects of actively refreshing revocation status information for various certificate working set window lengths. A working set window size of 45 days resulted in an average of 99.86% of the TLS handshakes having revocation information cached in advance. The CRT storage requirements are small. The initial revocation status information requires downloading a 6.7 MB file, and subsequent updates require only 205.1 KB of bandwidth daily. Updates that include only revoked certificates require just 215 bytes of bandwidth per day.
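The working-set idea behind the CRT can be sketched as a small cache that records the certificates recently seen in an organization's TLS handshakes and periodically refreshes their revocation status, so clients never block on an on-demand check. This is an illustrative reconstruction, not the authors' implementation; `revocation_oracle` stands in for whatever OCSP/CRL lookup backs the table:

```python
import time
from collections import OrderedDict

class CertificateRevocationTable:
    """Sketch of a CRT-style working-set revocation cache (illustrative)."""

    def __init__(self, window_days=45):
        self.window = window_days * 86400   # working-set window [s]
        self.seen = OrderedDict()           # cert fingerprint -> last-seen time
        self.revoked = set()                # fingerprints known to be revoked

    def observe(self, fingerprint, now=None):
        """Record a certificate used in a TLS handshake."""
        self.seen[fingerprint] = now if now is not None else time.time()
        self.seen.move_to_end(fingerprint)

    def refresh(self, revocation_oracle, now=None):
        """Periodic job: drop stale entries, re-check the working set's status."""
        now = now if now is not None else time.time()
        stale = [fp for fp, ts in self.seen.items() if now - ts > self.window]
        for fp in stale:
            del self.seen[fp]
        self.revoked = {fp for fp in self.seen if revocation_oracle(fp)}

    def is_revoked(self, fingerprint):
        """Local, pre-fetched answer; no network call at handshake time."""
        return fingerprint in self.revoked
```

The 45-day window reported in the paper corresponds to choosing `window_days` so that nearly all handshakes (99.86% in the BYU trace) hit a fingerprint whose status was pre-checked by `refresh`.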
Herein is presented the synthesis and characterization of copper-intercalated zirconium pentatelluride (ZrTe5). ZrTe5:Cu0.05 crystals are synthesized by the chemical vapor transport method in a vacuum. X-ray diffraction and elemental analysis techniques are utilized to validate the synthesis. The results indicate that the intercalation of the layered Zr/Te structure with copper atoms causes the contraction of the unit cell along all three crystalline directions, the shrinkage of the overall volume of the unit cell, and the distortion of the unit cell. A single crystal was isolated, mechanically exfoliated, and used for the measurements of intercalation strains in a Hall bar device. Electronic transport studies indicate that an anomalous resistance drop is observed at T = 19 K. Furthermore, Rxx and Rxy results, respectively, indicate a probable disorder-induced localization effect and electron-type carriers.
Coherent elastic neutrino-nucleus scattering (CEvNS) is calculated to be the dominant neutrino scattering channel for neutrinos of energy Eν < 100 MeV. We report a limit for this process from data collected in an engineering run of the 29 kg CENNS-10 liquid argon detector located 27.5 m from the pion decay-at-rest neutrino source at the Oak Ridge National Laboratory Spallation Neutron Source (SNS) with 4.2 × 10²² protons on target. The dataset provided constraints on beam-related backgrounds critical for future measurements and yielded <7.4 candidate CEvNS events, which implies a cross section for the process, averaged over the SNS pion decay-at-rest flux, of <3.4 × 10⁻³⁹ cm², a limit within twice the Standard Model prediction. This is the first limit on CEvNS from an argon nucleus and confirms the earlier CsI[Na] nonstandard neutrino interaction constraints from the collaboration. This run demonstrated the feasibility of the ongoing experimental effort to detect CEvNS with liquid argon.
The objective of this project was to increase the rate at which video data is processed using temporal frequency analysis. A common solution for increasing the speed of data processing is to increase the computing power of the system; however, size, weight, and power (SWAP) constraints require computing power to be limited. This project focused on increasing the processing speed by reducing the expense of computing the Fourier Transform (FT).
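Temporal frequency analysis of video amounts to an FFT along the time axis for each pixel; one standard way to cut its cost for real-valued pixel data is the real FFT, which computes only the non-redundant half of the spectrum. A minimal NumPy sketch of this baseline (illustrative, not the project's implementation):

```python
import numpy as np

def temporal_spectrum(frames, fps):
    """Per-pixel temporal frequency content of a video stack.

    frames: array of shape (n_frames, H, W), real-valued pixel data.
    fps: frame rate [frames/s].
    Returns (freqs [Hz], magnitude spectrum of shape (n_freqs, H, W)).
    """
    # rfft exploits real input: roughly half the output (and work) of a
    # full complex FFT along the time axis
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs, np.abs(spectrum)
```

Under SWAP constraints, further savings typically come from transforming short sliding windows, pruning frequency bins of interest, or using lower-precision arithmetic rather than computing the full spectrum as above.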
In this technical note, we present the analysis and results of neutron data collected in 2018 at the Spallation Neutron Source (SNS) by the MARS neutron detector and spectrometer. MARS has been deployed in the SNS "neutrino alley" basement with the purpose of monitoring and characterizing the neutron backgrounds for the COHERENT collaboration. The measured beam neutron rates at the MARS deployment location near some of the COHERENT neutrino detectors are presented, and we discuss what the measured rates and spectra can tell us about the incoming beam neutron flux and energy distribution.
Li, Qiang; Xue, Sichuang; Price, Patrick M.; Sun, Xing; Ding, Jie; Shang, Zhongxia; Fan, Zhe; Wang, Han; Zhang, Yifan; Chen, Youxing; Wang, Haiyan; Hattar, Khalid M.; Zhang, Xinghang
High-density growth nanotwins enable high strength and good ductility in metallic materials. However, twinning propensity is greatly reduced in metals with high stacking fault energy. In this study, we adopted a hybrid technique coupled with a template-directed heteroepitaxial growth method to fabricate single-crystal-like, nanotwinned (nt) Ni. The nt Ni primarily contains hierarchical twin structures consisting of coherent and incoherent twin boundary segments with few conventional grain boundaries. In situ compression studies show that the nt Ni has a high flow strength of ~2 GPa and good deformability. Moreover, the nt Ni has superb corrosion behavior, owing to its unique twin structure, in comparison to coarse-grained and nanocrystalline counterparts. The hybrid technique opens the door to the fabrication of a wide variety of single-crystal-like nt metals with unique mechanical and chemical properties.
Plastic deformations in metals are dissipative. Some fraction of the dissipated mechanical energy (plastic work) is converted into thermal energy and serves as a heat source. In cases where the heat cannot be readily transferred to the environment, the local temperature will increase, thereby producing variations in mechanical behavior associated with temperature-dependent properties (e.g., thermal softening due to decreasing yield strengths). This issue is often referred to as "adiabatic heating," as an adiabatic temperature condition corresponds to the limiting case where no heat transfer takes place. The impact of converting plastic work into heat on the mechanical response of metals has long been studied. Nonetheless, it remains an issue. For instance, with respect to ductile failure, the second Sandia Fracture Challenge noted that accounting for plastic heat generation was necessary for predictions under dynamic loading conditions. Furthermore, both experimental and modeling efforts continue to be pursued to better describe and understand the effect of plastic work conversion into heat on structural responses. Noting the need for capturing plastic work conversion into heat in structural analyses, a simple and fairly traditional representation of these responses has been added to existing modular plasticity models in the Library of Advanced Materials for Engineering (LAME). Here, these capabilities are briefly described, with the underlying theory and numerical implementation discussed in Sections 2 and 3, respectively. Examples of syntax are given in Section 4, and some verification exercises are found in Section 5. Simple structural analyses are presented in Section 6 to briefly highlight the impact of these features, and concluding thoughts are given.
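The traditional representation alluded to above assumes a constant fraction of plastic work converts to heat (a Taylor-Quinney coefficient β), so the adiabatic temperature rise follows directly from the plastic work increment. A minimal sketch of that estimate; the generic steel properties and β = 0.9 are illustrative assumptions, not values from LAME:

```python
def adiabatic_temp_rise(stress, plastic_strain, beta=0.9, rho=7850.0, c_p=470.0):
    """Adiabatic temperature rise from plastic work (textbook estimate).

    dT = beta * sigma * eps_p / (rho * c_p), with a constant
    Taylor-Quinney coefficient beta (fraction of plastic work to heat).
    stress [Pa], plastic_strain [-], rho [kg/m^3], c_p [J/(kg K)].
    Default property values are generic for steel (assumed).
    """
    plastic_work = stress * plastic_strain        # work per unit volume [J/m^3]
    return beta * plastic_work / (rho * c_p)      # temperature rise [K]
```

For a flow stress of 500 MPa and 50% plastic strain, this gives a rise of roughly 60 K, which is large enough to matter when yield strength is temperature dependent; in a coupled analysis the increment would feed back into the plasticity model at each step.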
Sandia National Laboratories will, as time and budget allow, perform the following tasks as part of a New Mexico Small Business Assistance (NMSBA) Program project for Management Sciences, Inc. (MSI): 1. Set up a thermal radar in Sandia's Technical Area (TA) III test bed. 2. Collect alarm data from the thermal radar and a radio frequency (RF) radar during simulated intrusion tests to estimate detection performance, based on the Department of Energy (DOE) threat definition. 3. Collect nuisance alarm data caused by weather and non-intruder-related stimuli, to estimate nuisance alarm rate performance. 4. Provide data to the requester, allowing them to process the data. 5. Provide the requester, as a stretch goal, with live radar and thermal radar sensor feeds, allowing real-time processing of the RF radar and the thermal radar. A key technical issue that will influence the success of this activity is the range accuracy of the thermal radar. If issues are encountered, Sandia will work with the requester to correct the range issues.
Through the use of advanced control techniques, wave energy converters (WECs) can achieve substantial increases in energy absorption. The motion of the WEC device is a significant contributor to the energy absorbed by the device. Reactive (complex conjugate) control maximizes the energy absorption through impedance matching. The issue with complex conjugate control is that, in general, the controller is noncausal, which requires prediction of the incoming waves. This article explores the potential of employing system identification techniques to build a causal transfer function that approximates the complex conjugate controller over a finite frequency band of interest. This approach is quite viable given the band-limited nature of ocean waves. The resulting controller is stable, and the average efficiency of the power captured by the causal controller in realistic ocean waves is 99% when compared to the noncausal complex conjugate controller.
Low energy ion scattering (LEIS) and direct recoil spectroscopy (DRS) are among the few experimental techniques that allow for the direct detection of hydrogen on a surface. The interpretation of LEIS and DRS measurements, however, is often made difficult by complexities that can arise from complicated scattering processes. Previously, these complexities were successfully navigated to identify the exact binding configurations of hydrogen on a few surfaces using a simple channeling model for the projectile ion along the surface. For the W(111) surface structure, this simple channeling model breaks down due to the large lateral atomic spacing on the surface and small interlayer spacing. Instead, our observed hydrogen recoil signal can only be explained by considering not just channeling along the surface but also scattering from subsurface atoms. Using this more complete model, together with molecular dynamics (MD) simulations, we determine that hydrogen adsorbs to the bond-centered site for the W(111)+H(ads) system. Additional MD simulations were performed to further constrain the adsorption site to a height h=1.0±0.1Å and a position dBC=1.6±0.1Å along the bond between neighbors in first and second layers. Our determination of the hydrogen adsorption site is consistent with density functional theory simulation results in the literature.
Conventional electrolytes made by mixing simple Mg2+ salts and aprotic solvents, analogous to those in Li-ion batteries, are incompatible with Mg anodes because Mg metal readily reacts with such electrolytes, producing a passivation layer that blocks Mg2+ transport. In this paper, we report that, through tuning a conventional electrolyte—Mg(TFSI)2 (TFSI– is N(SO2CF3)2–)—with an Mg(BH4)2 cosalt, highly reversible Mg plating/stripping with a high Coulombic efficiency is achieved by neutralizing the first solvation shell of Mg cationic clusters between Mg2+ and TFSI– and enhanced reductive stability of free TFSI–. A critical adsorption step between Mg0 atoms and active Mg cation clusters involving BH4– anions is identified to be the key enabler for reversible Mg plating/stripping through analysis of the distribution of relaxation times (DRT) from operando electrochemical impedance spectroscopy (EIS), operando electrochemical X-ray absorption spectroscopy (XAS), nuclear magnetic resonance (NMR), and density functional theory (DFT) calculations.
An increased demand for privacy in Internet communications has resulted in privacy-centric enhancements to the Domain Name System (DNS), including the use of Transport Layer Security (TLS) and Hypertext Transfer Protocol Secure (HTTPS) for DNS queries. In this paper, we seek to answer questions about their deployment, including their prevalence and their characteristics. Our work includes an analysis of DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH) availability at open resolvers and authoritative DNS servers. We find that DoT and DoH services exist on just a fraction of open resolvers, but among them are the major vendors of public DNS services. We also analyze the state of TCP Fast Open (TFO), which is considered key to reducing the latency associated with TCP-based DNS queries, required by DoT and DoH. The uptake of TFO is extremely low, both on the server side and the client side, and it must be improved to avoid performance degradation with continued adoption of DNS Privacy enhancements.
Trusting simulation output is crucial for Sandia’s mission objectives. We rely on these simulations to perform our high-consequence mission tasks given national treaty obligations. Other science and modeling applications, while they may have high-consequence results, still require the strongest levels of trust to enable using the results as the foundation for both practical applications and future research. To this end, the computing community has developed workflow and provenance systems to aid both in automating simulation and modeling execution and in determining exactly how some output was created, so that conclusions can be drawn from the data. Current approaches for workflow and provenance systems are all at the user level and have little to no system-level support, making them fragile, difficult to use, and incomplete solutions. The introduction of container technology is a first step toward encapsulating and tracking the artifacts used in creating data and resulting insights, but its current implementation is focused solely on making it easy to deploy an application in an isolated “sandbox” and maintaining a strictly read-only mode to avoid any potential changes to the application. All storage activities still use system-level shared storage. This project explores extending the container concept to include storage as a new container type we call data pallets. Data pallets are potentially writeable, auto-generated by the system based on I/O activities, and usable as a way to link the contained data back to the application and input deck used to create it.
When designing or analyzing a mechanical system, energy quantities provide insight into the severity of shock and vibration environments; however, the energy methods in the literature do not address localized behavior because energy quantities are usually computed for an entire structure. The main objective of this paper is to show how to compute the energy in the components of a mechanical system. The motivation for this work is that most systems fail functionally due to component failure, not because the primary structure was overloaded, and the ability to easily compute the spatial distribution of energy helps identify failure-sensitive components. The quantity of interest is input energy. That input energy can be decoupled modally is well known; what is less appreciated is that input energy can be computed at the component level exactly, using the component effective modal mass. We show that the steady-state input energy can be decomposed both spatially and modally and computed using input power spectra. A numerical example illustrates the spatial and modal decomposition of input energy and its utility in identifying components at risk of damage in random vibration and shock environments. Our work shows that the modal properties of the structure and the spectral content of the input must be considered together to assess damage risk. Because input energy includes absorbed energy as well as relative kinetic energy and dissipated energy, it is the recommended energy quantity for assessing the severity of both random vibration and shock environments on a structure.
Subsidence monitoring is a crucial component of understanding the cavern integrity of salt storage caverns. This report reviews the historical and current subsidence monitoring program and interprets the data from the West Hackberry Strategic Petroleum Reserve and LA Storage sites. Given data from current level-and-rod surveys, GPS, and tiltmeters, we do not believe there are any structural integrity issues at the West Hackberry DOE and LA Storage sites.
Sandia National Laboratories (also known as Sandia Labs) is a government-owned, contractor-operated facility. Sandia's mission is to develop advanced technologies to ensure global peace. The laboratory began in 1945 as a division of Los Alamos National Laboratory and did not become its own laboratory until 1948. A descendant of the Manhattan Project, Sandia became part of the Department of Energy (DOE) laboratory complex about 20 years later.
The populations of flaws in individual layers of microelectromechanical systems (MEMS) structures are determined and verified using a combination of specialized specimen geometry, recent probabilistic analysis, and topographic mapping. Strength distributions of notched and tensile bar specimens are analyzed assuming a single flaw population set by fabrication and common to both specimen geometries. Both the average spatial density of flaws and the flaw size distribution are determined and used to generate quantitative visualizations of specimens. Scanning probe-based topographic measurements are used to verify the flaw spacings determined from strength tests and support the idea that grain boundary grooves on sidewalls control MEMS failure. The findings here suggest that strength controlling features in MEMS devices increase in separation, i.e., become less spatially dense, and decrease in size, i.e., become less potent flaws, as processing proceeds up through the layer stack. The method demonstrated for flaw population determination is directly applicable to strength prediction for MEMS reliability and design.
The Oxford MinION, the first commercial nanopore sequencer, is also the first to implement molecule-by-molecule real-time selective sequencing or “Read Until”. As DNA transits a MinION nanopore, real-time pore current data can be accessed and analyzed to provide active feedback to that pore. Fragments of interest are sequenced by default, while DNA deemed non-informative is rejected by reversing the pore bias to eject the strand, providing a novel means of background depletion and/or target enrichment. In contrast to the previously published pattern-matching Read Until approach, our RUBRIC method is the first example of real-time selective sequencing where on-line basecalling enables alignment against conventional nucleic acid references to provide the basis for sequence/reject decisions. We evaluate RUBRIC performance across a range of optimizable parameters, apply it to mixed human/bacteria and CRISPR/Cas9-cut samples, and present a generalized model for estimating real-time selection performance as a function of sample composition and computing configuration.
Cryogenic transmission electron microscopy is simply transmission electron microscopy conducted on specimens that are cooled in the microscope. The target temperature of the specimen might range from just below ambient temperature to less than 4 K. In general, as the temperature decreases, cost increases, especially below 77 K, when liquid He is required. There are two reasons for wanting to cool the specimen: improving the stability of the material, or observing a material whose properties change at lower temperatures. Both types of study have a long history. The cause of excitement in this field today is a perfect storm of research activity: electron microscopes are highly stable with minimal drift (and we can correct what drift there is), we can prepare specimens from the bulk or build them up, we have spherical-aberration-corrected lenses and monochromated beams, we have direct-electron-detector cameras, and computers are becoming powerful enough to handle all the data we produce.
Charge noise can be detrimental to the operation of quantum dot (QD) based semiconductor qubits. We study low-frequency charge noise through charge offset drift measurements on Si-MOS devices with donors intentionally implanted near the QDs. We show that the MOS system exhibits non-equilibrium drift characteristics, in the form of transients and discrete jumps, that do not depend on the properties of the donor implants. The equilibrium charge noise exhibits a 1/f dependence, with a noise strength as low as 1 μeV/√Hz, comparable to that reported in the model GaAs and Si/SiGe systems (which have not been implanted). We demonstrate that implanted qubits can therefore be fabricated without detrimental effects on long-term drift or 1/f noise for devices with fewer than 50 implanted donors near the qubit.
Algae ponds used in industrial biomass production are susceptible to pathogen or grazer infestation, resulting in pond crashes with high economic costs. Current methods to monitor and mitigate unhealthy ponds are hindered by a lack of early indicators that precede culture crash. We used solid-phase microextraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS) to identify volatiles emitted from healthy and rotifer-infested cultures of Microchloropsis salina. After 48 hours of algal growth, marine rotifers, Brachionus plicatilis, were added to the algae cultures, and volatile organic compounds (VOCs) were sampled from the headspace using SPME fibers. A GC-MS approach was used in an untargeted analysis of the VOCs, followed by preliminary identification. The addition of B. plicatilis to healthy cultures of M. salina resulted in decreased algal cell numbers, relative to uninfested controls, and generated trans-β-ionone and β-cyclocitral, which were attributed to carotenoid degradation. The abundances of the carotenoid-derived VOCs increased with rotifer consumption of algae. Our results indicate that specific VOCs released by infested algae cultures may be early indicators of impending pond crashes, providing a useful tool for monitoring algal biomass production and preventing pond crashes.
Caldwell, Peter M.; Mametjanov, Azamat; Tang, Qi; Van Roekel, Luke P.; Golaz, Jean C.; Lin, Wuyin; Bader, David C.; Keen, Noel D.; Feng, Yan; Jacob, Robert; Maltrud, Mathew E.; Roberts, Andrew F.; Taylor, Mark A.; Veneziani, Milena; Wang, Hailong; Wolfe, Jonathan D.; Balaguru, Karthik; Cameron-Smith, Philip; Dong, Lu; Klein, Stephen A.; Leung, L.R.; Li, Hong Y.; Li, Qing; Liu, Xiaohong; Neale, Richard B.; Pinheiro, Marielle; Qian, Yun; Ullrich, Paul A.; Xie, Shaocheng; Yang, Yang; Zhang, Kai; Zhou, Tian
This study provides an overview of the coupled high-resolution Version 1 of the Energy Exascale Earth System Model (E3SMv1) and documents the characteristics of a 50-year-long high-resolution control simulation with time-invariant 1950 forcings following the HighResMIP protocol. In terms of global root-mean-squared error metrics, this high-resolution simulation is generally superior to results from the low-resolution configuration of E3SMv1 (due to resolution, tuning changes, and possibly initialization procedure) and compares favorably to models in the CMIP5 ensemble. Ocean and sea ice simulation is particularly improved, due to better resolution of bathymetry, the ability to capture more variability and extremes in winds and currents, and the ability to resolve mesoscale ocean eddies. The largest improvement in this regard is an ice-free Labrador Sea, which is a major problem at low resolution. Interestingly, several features found to improve with resolution in previous studies are insensitive to resolution or even degrade in E3SMv1. Most notable in this regard are warm bias and associated stratocumulus deficiency in eastern subtropical oceans and lack of improvement in El Niño. Another major finding of this study is that resolution increase had negligible impact on climate sensitivity (measured by net feedback determined through uniform +4K prescribed sea surface temperature increase) and aerosol sensitivity. Cloud response to resolution increase consisted of very minor decrease at all levels. Large-scale patterns of precipitation bias were also relatively unaffected by grid spacing.
The SPARC and SPARC V&V teams successfully presented their work at the Dec 11-12 L1 milestone mid-year review. The teams received overwhelmingly positive feedback at the review.
Twitchell, Jeremy B.; O'Neil, Rebecca S.; Cooke, A.L.; Passell, Howard D.
On July 17-18, 2019, the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), Pacific Northwest National Laboratory (PNNL), and Southern Research (SR) conducted the Southeastern Energy Storage Symposium and Workshop, a two-day event on energy storage technologies in Birmingham, AL. The first day of the event (Symposium) was open to all interested parties; the second day (Workshop) was open only to employees of state energy regulatory agencies. The event was conducted as part of the Energy Storage Program within the DOE's Office of Electricity.
The Gamma Detector Response and Analysis Software Detector Response Function (GADRAS-DRF) application computes the response of gamma-ray and neutron detectors to incoming radiation and provides analysis on measured spectra. This manual provides step-by-step procedures to acquaint new users with the use of the application. The capabilities include characterization of detector response parameters, plotting and viewing measured and computed spectra, analyzing spectra to identify isotopes, estimating source energy distributions from measured spectra, and creating inject data. GADRAS-DRF can compute and provide detector responses quickly and accurately, giving users the ability to obtain usable results in a timely manner (a matter of seconds or minutes).
The early contributions of female researchers such as Marie Curie and Lisa Meitner to physics—and ultimately to the Manhattan Project—have been widely recognized and documented. In addition, numerous historical accounts have revealed the significant impacts of other female scientists, engineers, and technologists during the Manhattan Project. Despite the strong role of women in the Manhattan Project, the momentum has not continued into the present day, as reflected by the current demographics of the Department of Energy (DOE) National Laboratories. Although the overall U.S. workforce is about 50% female, the workforce at the DOE National Labs is only about 30% female. The statistics for technical management and research staff at the DOE National Labs are even more dire; women make up only about 18% of these ranks in contrast to the percentages of women in computer science (25%) and physical science (39%) in the U.S. workforce. These current statistics are not the desired state for the DOE National Labs and contrast sharply with the long history of accomplishments by women at the Labs. We believe the DOE National Labs should lead the charge on diversity and inclusion (D&I) and serve as a model enterprise for bringing women into our scientific and technical workforce.
United States Department of Energy (DOE) O 436.1, Departmental Sustainability, requires each DOE site to develop and commit to implementing an annual Site Sustainability Plan (SSP) that identifies that site's contributions toward meeting DOE sustainability goals. These sustainability goals are reinforced by Executive Order (EO) 13834, Efficient Federal Operations. Sandia personnel conduct mission activities at four primary locations: Sandia National Laboratories/New Mexico (SNL/NM); SNL/California (SNL/CA); SNL/Tonopah Test Range (SNL/TTR) in Nevada; and SNL/Kauai Test Facility (SNL/KTF) in Hawaii. Sandia personnel also conduct mission activities at other locations, including Carlsbad, New Mexico, and Amarillo, Texas. Each location has unique energy, water, and transportation fuel resource management challenges. SNL/NM and SNL/CA account for most of Sandia's total energy, water, and transportation fuel use and building square footage. Therefore, although the goals and targets of this plan include all locations, sustainability activities focus predominantly on the SNL/NM and SNL/CA locations.
Hoffman, Matthew J.; Asay-Davis, Xylar; Price, Stephen F.; Fyke, Jeremy; Perego, Mauro P.
Modeling and observations suggest that Thwaites Glacier, West Antarctica, has begun unstable retreat. Concurrently, oceanographic observations have revealed substantial multiyear variability in the temperature of the ocean water driving retreat through melting of the ice shelf that restrains inland glacier flow. Using an ensemble of 72 ice-sheet model simulations that include an idealized representation of ocean temperature variability, we find that variable ice-shelf melting causes delays in grounding line retreat, mass loss, and sea level contribution relative to steady forcing. Modeled delays are up to 43 years after 500 years of simulation, corresponding to a 10% reduction in glacier mass loss. Delays are primarily caused by asymmetric melt forcing in the presence of variability. For the “warm cavity” conditions beneath Thwaites Ice Shelf, increases in access of warm, deeper water are unable to raise water temperatures in the cavity by much, whereas increases in access of significantly colder, shallow water reduce cavity water temperatures substantially. This leads to lowered mean melt rates under variable ocean temperature forcing. Additionally, about one quarter of the mass loss delay is caused by a nonlinear ice dynamic response to varying ice-shelf thinning rate, which is amplified during the initial phases of unstable, bed-topography-driven retreat. Mass loss rates under variability differ by up to 50% from ensemble mean values at any given time. Our results underscore the need for taking climate variability into account when modeling ice sheet evolution and for continued efforts toward the coupling of ice sheet models to ocean and climate models.
Helicity plays a unique role as an integral invariant of a dynamical system. In this paper, the concept of helicity in the general setting of Hamiltonian dynamics is discussed. It is shown, through examples, how the conservation of overall helicity can imply a bound on other quantities of the motion in a nontrivial way.
This report describes an adhesively bonded, Asymmetric Double Cantilever Beam (ADCB) fracture specimen that has been expressly developed to measure the toughness of an alumina (Al2O3)/epoxy interface. The measured interfacial fracture toughness quantifies resistance to crack growth along an interface with the stipulation that crack-tip yielding is limited and localized to the crack tip. An ADCB specimen is a variant of the well-known double cantilever beam specimen, but in the ADCB specimen the two beams have different bending stiffnesses. This report begins with a brief overview of how crack-tip mode mixity (i.e., a measure of shear-to-normal stress at the crack tip) is a distinguishing feature of interfacial fracture, followed by a detailed description of the relevant design, fabrication, testing, and associated data analysis techniques. The report concludes by presenting illustrative results that compare the measured interfacial toughness of an alumina/epoxy interface when the alumina is silane-coated and when it is not.
Hsieh, Mei L.; Bur, James A.; Wang, Xuanjie; Narayanan, Shankar; Luk, Ting S.
In this paper, we report direct imaging of narrow-band super-Planckian thermal radiation in the far field, emitted from a resonant-cavity/tungsten photonic crystal (cavity/W-PC). A spectroscopic study of the cavity/W-PC shows a distinct resonant peak at λ ∼ 1.7 μm. Furthermore, an infrared CCD camera was used to record the radiation image at λ ∼ 1.7 μm of the cavity/W-PC and a carbon-nanotube (CNT) black reference emitted from the same sample. The recorded image displays higher brightness from the cavity/W-PC region than from the blackbody region at all temperatures tested, T = 530-650 K. This observation is in sharp contrast to the common understanding of equilibrium thermal radiation, namely that a blackbody has unit absorptance and unit emittance and should emit the strongest radiation. Since the image was taken from a single sample and the temperature difference across the W-PC/CNT boundary is less than 0.1 K, the observed image contrast provides convincing evidence of super-Planckian behavior in our sample. The discovery of super-intense, narrow-band radiation from a heated W-PC could open a new door to realizing narrow-band infrared emitters. The W-PC filament could also be very useful for efficient energy applications such as thermophotovoltaics, waste-heat recycling, and radiative cooling.
Hole spins have recently emerged as attractive candidates for solid-state qubits for quantum computing. Their state can be manipulated electrically by taking advantage of the strong spin-orbit interaction (SOI). Crucially, these systems promise longer spin coherence lifetimes than electron spin qubits, owing to their weak interactions with nuclear spins. Here we measure the spin relaxation time T1 of a single hole in a GaAs gated lateral double quantum dot device. We propose a protocol converting the spin state into long-lived charge configurations by SOI-assisted spin-flip tunneling between dots. By interrogating the system with a charge detector, we extract a magnetic-field dependence of T1 ∝ B−5 for fields larger than B = 0.5 T, suggesting the phonon-assisted Dresselhaus SOI as the relaxation channel. This coupling limits the measured values of T1, which range from ~400 ns at B = 1.5 T up to ~60 μs at B = 0.5 T.
The historic city of Saint Petersburg is full of memorial plaques—ballet dancers, literary giants, composers, war heroes, and even mathematicians. Here, if you go to the metro station Petrogradskaya, cross the bridge over the tiny Karpovka River, and reach ulitsa Professora Popova—Professor Popov Street—then almost surely you are going to one of two destinations. First, perhaps you are going to the Saint Petersburg Electrotechnical University, colloquially known as LETI. Second, you may be going for a stroll in the botanical garden of the V. L. Komarov Institute of the Russian Academy of Sciences.
Flame detectors provide an important layer of protection for personnel in petrochemical plants, but effective placement can be challenging. A mixed-integer nonlinear programming formulation is proposed for the optimal placement of flame detectors that accounts for non-uniform probabilities of detection failure. We show that this approach allows for the placement of fire detectors under a fixed sensor budget and outperforms models that do not account for imperfect detection. We develop a linear relaxation of the formulation and an efficient solution algorithm that achieves global optimality with reasonable computational effort. We integrate this formulation into the Python package Chama and demonstrate its effectiveness on a small test case and on two real-world case studies using the fire and gas mapping software Kenexis Effigy.
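To make the imperfect-detection objective concrete, the toy sketch below minimizes the expected number of undetected fire scenarios when each candidate site detects each scenario only with some probability. This is a hypothetical illustration: it uses exhaustive search in place of the paper's MINLP and linear relaxation, the sites, scenarios, and probabilities are invented, and it is not Chama's API.

```python
import itertools

# Toy objective: a scenario goes undetected only if EVERY placed detector
# misses it; with independent failures the miss probability multiplies.
def expected_undetected(placement, p, scenarios):
    total = 0.0
    for s in scenarios:
        miss = 1.0
        for i in placement:
            miss *= 1.0 - p[i][s]   # detector i misses scenario s
        total += miss
    return total

# Exhaustive search over placements of size `budget` (for clarity only;
# the paper solves this at scale via a MINLP and a linear relaxation).
def best_placement(sites, scenarios, p, budget):
    return min(itertools.combinations(sites, budget),
               key=lambda c: expected_undetected(c, p, scenarios))

# Invented detection probabilities p[site][scenario].
p = {0: {"a": 0.9, "b": 0.1},
     1: {"a": 0.1, "b": 0.9},
     2: {"a": 0.5, "b": 0.5}}
print(best_placement([0, 1, 2], ["a", "b"], p, budget=2))  # -> (0, 1)
```

With a budget of two, the complementary pair of specialists beats any combination involving the mediocre all-rounder, which is the kind of trade-off a perfect-detection (coverage-only) model cannot see.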
Zachman, Michael J.; De Jonge, Niels; Fischer, Robert; Perea, Daniel E.; Jungjohann, Katherine L.
We report new cryogenic characterization techniques for exploring the nanoscale structure and chemistry of intact solid–liquid interfaces. These techniques provide high-resolution information about buried interfaces from large samples or devices that cannot be obtained by other means. These advancements were enabled by the development of instrumentation for cryogenic focused-ion-beam liftout, which allows intact solid–liquid interfaces to be extracted from large samples and thinned to electron-transparent thicknesses for characterization by cryogenic scanning transmission electron microscopy or atom probe tomography. Future implementation of these techniques will complement current strides in imaging materials in fluid environments by in situ liquid-phase electron microscopy, providing a more complete understanding of the morphology, surface chemistry, and dynamic processes at solid–liquid interfaces.
The energy grid becomes more complex with increasing penetration of renewable resources, distributed energy storage, distributed generators, and more diverse loads such as electric vehicle charging stations. The presence of distributed energy resources (DERs) requires directional protection because of the added potential for energy to flow in both directions down a line. Additionally, contingency requirements for critical loads within a microgrid may result in looped or meshed systems. The computation speed of the iterative methods required to coordinate loops is improved by starting with a minimum breakpoint set (MBPS) of relays. A breakpoint set (BPS) is a set of breakers that, when opened, breaks all loops in a meshed grid, creating a radial system. An MBPS is a BPS consisting of the minimum possible number of relays required to accomplish this goal. In this paper, a method is proposed in which a minimum spanning tree is computed to indirectly break all loops in the system, and a set difference is used to identify the MBPS. The proposed method is found to minimize the cardinality of the BPS, achieving an MBPS.
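The tree-and-set-difference step lends itself to a compact sketch. The following is an illustrative implementation under simplifying assumptions, not the authors' code: a plain spanning tree built with union-find stands in for the minimum spanning tree (edge weights do not affect which loops are broken), and the grid is modeled as an undirected graph of buses and lines.

```python
# Sketch: the edges NOT in a spanning tree (its chords) form a breakpoint
# set; opening them leaves exactly the tree, i.e., a radial system.
def minimum_breakpoint_set(nodes, edges):
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = set()
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:           # edge joins two components -> keep in tree
            parent[ru] = rv
            tree.add((u, v))
    return set(edges) - tree   # set difference identifies the breakpoint set

# Example: a 4-bus ring with one tie line contains two independent loops,
# so two breakers must open to make the system radial.
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
mbps = minimum_breakpoint_set(nodes, edges)
print(len(mbps))  # -> 2
```

For a connected system the returned set has exactly |E| − |V| + 1 breakers, the cycle rank of the graph, which is the minimum possible cardinality.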
Dev, Sukrith; Wang, Yinan; Kim, Kyounghwan; Zamiri, Marziyeh; Kadlec, Clark; Goldflam, Michael; Hawkins, Samuel; Shaner, Eric; Kim, Jin; Krishna, Sanjay; Allen, Monica; Allen, Jeffery; Tutuc, Emanuel; Wasserman, Daniel
The measurement of minority carrier lifetimes is vital to determining the material quality and operational bandwidth of a broad range of optoelectronic devices. Typically, these measurements are made by recording the temporal decay of a carrier-concentration-dependent material property following pulsed optical excitation. Such approaches require some combination of efficient emission from the material under test, specialized collection optics, large sample areas, spatially uniform excitation, and/or the fabrication of ohmic contacts, depending on the technique used. In contrast, here we introduce a technique that provides electrical readout of minority carrier lifetimes using a passive microwave resonator circuit. We demonstrate a >10^5 improvement in sensitivity compared with traditional photoemission decay experiments, as well as the ability to measure carrier dynamics in micron-scale volumes, much smaller than is possible with other techniques. The approach presented is applicable to a wide range of 2D, micro-, or nano-scale materials, as well as weak emitters or non-radiative materials.
Foteinopoulou, Stavroula; Devarapu, Ganga C.R.; Subramania, Ganapathi S.; Krishna, Sanjay; Wasserman, Daniel
Here, we review the progress and most recent advances in phonon-polaritonics, an emerging and growing field that has brought about a range of powerful possibilities for mid- to far-infrared (IR) light. These extraordinary capabilities are enabled by the resonant coupling between the impinging light and the vibrations of the material lattice, known as phonon-polaritons (PhPs). These PhPs yield a characteristic optical response in certain materials, occurring within an IR spectral window known as the reststrahlen band. In particular, these materials transition in the reststrahlen band from a high refractive-index behavior, to a near-perfect metal behavior, to a plasmonic behavior - typical of metals at optical frequencies. When anisotropic they may also possess unconventional photonic constitutive properties thought of as possible only with metamaterials. The recent surge in two-dimensional (2D) material research has also enabled PhP responses with atomically-thin materials. Such vast and extraordinary photonic responses can be utilized for a plethora of unusual effects for IR light. Examples include sub-diffraction surface wave guiding, artificial magnetism, exotic photonic dispersions, thermal emission enhancement, perfect absorption and enhanced near-field heat transfer. Finally, we discuss the tremendous potential impact of these IR functionalities for the advancement of IR sources and sensors, as well as for thermal management and THz-diagnostic imaging.
Knowledge graph embedding (KGE) learns latent vector representations of the named entities (i.e., vertices) and relations (i.e., edge labels) of knowledge graphs. Herein, we address two problems in KGE. First, relations may belong to one or multiple categories, such as functional, symmetric, transitive, and reflexive; relation categories are thus not exclusive, and some of them pose non-trivial challenges for KGE. Second, we found that zero gradients occur frequently in many translation-based embedding methods such as TransE and its variations. To solve these problems, we propose i) converting a knowledge graph into a bipartite graph (although we do not physically convert the graph, but rather use an equivalent trick); ii) using multiple vector representations for a relation; and iii) using a new hinge loss based on the energy ratio (rather than the energy gap) that does not cause zero gradients. We show that our method significantly improves the quality of embedding.
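The gap-versus-ratio distinction can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact loss: the margins `gamma`, the stabilizer `eps`, and the TransE-style energy are assumptions chosen for the example.

```python
import numpy as np

def transe_energy(h, r, t):
    # TransE models a true triple (h, r, t) as h + r ≈ t; the energy is the
    # norm of the residual, so low energy means a plausible triple.
    return np.linalg.norm(h + r - t)

def gap_hinge(e_pos, e_neg, gamma=1.0):
    # Classic margin-based ranking loss on the energy GAP: it saturates at
    # zero (zero gradient) as soon as e_neg exceeds e_pos by the margin.
    return max(0.0, gamma + e_pos - e_neg)

def ratio_hinge(e_pos, e_neg, gamma=3.0, eps=1e-8):
    # Hinge on the energy RATIO: it keeps pushing until the negative energy
    # is gamma times the positive energy, so it stays sensitive to the
    # relative separation even when the absolute gap is already large.
    return max(0.0, gamma - e_neg / (e_pos + eps))

# With e_pos = 1.0 and e_neg = 2.1, the gap loss has already saturated at
# zero while the ratio loss is still positive and providing a gradient.
print(gap_hinge(1.0, 2.1), ratio_hinge(1.0, 2.1))
```

Any hinge has an inactive region; the point of the example is only that the two losses place that region differently, so a ratio-based margin can remain active where a gap-based one has gone flat.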
Electrical tunability of the g-factor of a confined spin is a long-standing goal of the spin qubit field. Here we utilize electric dipole spin resonance (EDSR) to demonstrate such tunability in a gated GaAs double-dot device confining a hole. This tunability is a consequence of the strong spin-orbit interaction (SOI) in the GaAs valence band. The SOI enables spin-flip interdot tunneling, which, in combination with simple spin-conserving charge transport, leads to the formation of tunable hybrid spin-orbit molecular states. EDSR is used to demonstrate that the gap separating the two lowest energy states changes its character from a charge-like to a spin-like excitation as a function of interdot detuning or magnetic field. In the spin-like regime, the gap can be characterized by the effective g-factor, which differs from the bulk value owing to spin-charge hybridization and can be tuned smoothly and sensitively by gate voltages.
We present an approach to uncoupling the pair of transient governing equations used in electrokinetics (i.e., streaming potential and electroosmosis). This approach allows for the solution of two uncoupled "intermediate" equations, after which the physical solution is found by recombining these intermediate potentials through a matrix multiplication. We present numerically stable expressions for the coefficients and an example showing electrokinetics arising from pumping a fully penetrating well in a confined aquifer surrounded by insulating aquicludes. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. (SAND2019-8712 A)
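The uncouple-then-recombine idea can be illustrated on a generic coupled transient system. The sketch below is a toy under stated assumptions: a 2x2 coupling matrix `A` with invented coefficients and a single spatial mode `k2` stand in for the paper's electrokinetic operators, and ordinary eigen-decomposition plays the role of the intermediate-potential transformation.

```python
import numpy as np

# Toy coupled system du/dt = -k2 * A @ u: diagonalize the coupling matrix,
# solve each scalar "intermediate" equation alone, then recombine the
# physical solution with a matrix multiplication.
A = np.array([[1.0, 0.3],
              [0.2, 2.0]])            # assumed coupling coefficients
lam, P = np.linalg.eig(A)             # A = P @ diag(lam) @ inv(P)

u0 = np.array([1.0, -0.5])            # initial condition for the two fields
w0 = np.linalg.solve(P, u0)           # transform to intermediate potentials

t, k2 = 0.1, 4.0                      # time and one spatial-mode eigenvalue
w = w0 * np.exp(-lam * k2 * t)        # each uncoupled equation solved alone
u = P @ w                             # recombination -> physical solution

# Sanity check: u(t) should satisfy the ORIGINAL coupled equation.
dt = 1e-6
u_dt = P @ (w0 * np.exp(-lam * k2 * (t + dt)))
dudt = (u_dt - u) / dt                # finite-difference time derivative
print(np.allclose(dudt, -k2 * A @ u, rtol=1e-3))  # -> True
```

The recombination step is exactly the matrix multiplication `P @ w`; the same structure carries over when the scalar exponentials are replaced by the solutions of the two uncoupled transient boundary-value problems.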