Numerically modeling the chatter behavior of small electrical components embedded within larger components is challenging. Reduced order models (ROMs) have been developed to assess these components’ chatter behavior in vibration and shock environments. These ROMs require experimental validation to instill confidence that the components meet their performance requirements. Although the ROMs yield conservative results, experimental validation is still required, especially because the ROMs neglect the viscous damping effects of the fluid that surrounds these particular components within their system. Dynamic ring-down data of the electrical receptacles in air will be explored and assessed as to whether those data provide a validation data set for this ROM. Additional data will be examined in which dynamic ring-down measurements were taken on the receptacle while submerged in oil, a unique experimental setup that should serve as a proof of concept for this type of testing on small components in unique environments.
Historically, control systems have primarily depended upon their isolation from the Internet and from traditional information technology (IT) networks as a means of maintaining secure operation in the face of potential remote attacks over computer networks. However, these networks are incrementally being upgraded and are becoming more interconnected with external networks so they can be effectively managed and configured remotely. Examples of control systems include the electrical power grid, smart grid networks, microgrid networks, oil and natural gas refineries, water pipelines, and nuclear power plants. Given that these systems are becoming increasingly connected, computer security is an essential requirement as compromises can result in consequences that translate into physical actions and significant economic impacts that threaten public health and safety. Moreover, because the potential consequences are so great and these systems are remotely accessible due to increased interconnectivity, they become attractive targets for adversaries to exploit via computer networks. Several examples of attacks on such systems that have received a significant amount of attention include the Stuxnet attack, the US-Canadian blackout of 2003, the Ukraine blackout in 2015, and attacks that target control system data itself. Improving the cybersecurity of electrical power grids is the focus of our research.
Experimental modal analysis via shaker testing introduces errors in the measured structural response that can be attributed to the force transducer assembly fixed on the vibrating structure. Previous studies developed transducer mass-cancellation techniques for systems with translational degrees of freedom; however, studies addressing this problem when rotations cannot be neglected are sparse. In situations where rotations cannot be neglected, the apparent mass of the transducer is dependent on its geometry and is not the same in all directions. This paper investigates a method for correcting the measured system response that is contaminated with the effects of the attached force transducer mass and inertia. Experimental modal substructuring facilitated estimations of the translational and rotational mode shapes at the transducer connection point, thus enabling removal of an analytical transducer model from the measured test structure resulting in the corrected response. A numerical analysis showed the feasibility of the proposed approach in estimating the correct modal frequencies and forced response. To provide further validation, an experimental analysis showed the proposed approach applied to results obtained from a shaker test more accurately reflected results obtained from a hammer test.
Joining technologies such as welds, adhesives, and bolts are nearly ubiquitous and often lead to concentrated stresses, making them key in analyzing the failure of a structure. While high-fidelity models for fasteners have been developed, they are impractical for use in full system or component analyses, which may involve hundreds of fasteners undergoing mixed loading. Other failure models for fasteners that use specialized boundary conditions, e.g., spot welds, do well in replicating the load-displacement response of a fastener in a mesh-independent manner, but are limited in their ability to transmit a bending moment and require constitutive assumptions when experimental data are lacking. A reduced-order finite element model using cohesive surface elements to model fastener failure is developed. A cohesive zone allows the fracture of the fastener to be represented more explicitly, rather than simply specifying a load-displacement relationship between two surfaces as in the spot weld. This fastener model is assessed and calibrated against tensile and shear loading data and compared to a traditional spot weld approach. The cohesive zone model can reproduce the experimental data, demonstrating its viability as a reduced-order model of fastener behavior.
Qualification of complex systems often involves shock and vibration testing at the component level to ensure each component is robust enough to survive the specified environments. In order for the component testing to adequately satisfy the system requirements, the component must exhibit a similar dynamic response between the laboratory component test and system test. There are several aspects of conventional testing techniques that may impair this objective. Modal substructuring provides a framework to accurately assess the level of impairment introduced in the laboratory setup. If the component response is described in terms of fixed-base modes in both the laboratory and system configurations, we can gain insight into whether the laboratory test is exercising the appropriate damage potential. Further, the fixed-base component response in the system can be used to determine the correct rigid body laboratory fixture input to overcome the errors seen in the standard component test. In this paper, we investigate the effectiveness of reproducing a system shock environment on a simple beam model with an essentially rigid fixture.
This work introduces a new method to efficiently solve optimization problems constrained by partial differential equations (PDEs) with uncertain coefficients. The method leverages two sources of inexactness that trade accuracy for speed: (1) stochastic collocation based on dimension-adaptive sparse grids (SGs), which approximates the stochastic objective function with a limited number of quadrature nodes, and (2) projection-based reduced-order models (ROMs), which generate efficient approximations to PDE solutions. These two sources of inexactness lead to inexact objective function and gradient evaluations, which are managed by a trust-region method that guarantees global convergence by adaptively refining the SG and ROM until a proposed error indicator drops below a tolerance specified by trust-region convergence theory. A key feature of the proposed method is that the error indicator, which accounts for errors incurred by both the SG and ROM, must be only an asymptotic error bound, i.e., a bound that holds up to an arbitrary constant that need not be computed. This enables the method to be applicable to a wide range of problems, including those where sharp, computable error bounds are not available; this distinguishes the proposed method from previous works. Numerical experiments performed on a model problem from optimal flow control under uncertainty verify global convergence of the method and demonstrate the method's ability to outperform previously proposed alternatives.
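To make the interplay of adaptive refinement and trust-region control concrete, the following minimal sketch illustrates the algorithmic pattern described above; the model object, its refine() method, and the err() error indicator are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def trust_region(x, model, eta=0.1, delta=1.0, kappa=0.5, tol=1e-6, max_iter=50):
    """Trust-region loop with inexact evaluations from a refinable model."""
    for _ in range(max_iter):
        g = model.gradient(x)
        if np.linalg.norm(g) < tol:
            break
        # Refine the SG/ROM approximation until the (asymptotic) error
        # indicator is small relative to the trust-region quantities.
        while model.err(x) > kappa * min(np.linalg.norm(g), delta):
            model.refine()
            g = model.gradient(x)
        s = -delta * g / np.linalg.norm(g)      # Cauchy-point step
        ared = model.f(x) - model.f(x + s)      # actual reduction
        pred = -g @ s                           # predicted reduction
        if ared / pred > eta:                   # accept step, grow radius
            x = x + s
            delta *= 2.0
        else:                                   # reject step, shrink radius
            delta *= 0.25
    return x
```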
A new apparatus – “Dropkinson Bar” – has been successfully developed for material property characterization at intermediate strain rates. This Dropkinson bar combines a drop table and a Hopkinson bar. The drop table was used to generate a relatively long and stable low-speed impact to the specimen, whereas the Hopkinson bar principle was applied to measure the load history while accounting for inertia effects in the system. A pulse shaping technique was also applied to the Dropkinson bar to facilitate uniform stress and strain as well as constant strain rate in the specimen. The Dropkinson bar was then used to characterize 304L stainless steel and 6061-T6 aluminum at a strain rate of ∼600 s−1. The experimental data obtained from the Dropkinson bar tests were compared with the data obtained from conventional Kolsky tensile bar tests of the same material at similar strain rates. Both sets of experimental results were consistent, showing that the newly developed Dropkinson bar apparatus is reliable and repeatable.
There are several methodologies for modeling fasteners in finite element analyses. This work examines the effect of four predominant fastener modeling methods on the fatigue of mock hardware that requires fasteners. Typical fastener modeling methods explored in this work consist of a spring method with no preload, a beam method with no preload, a beam method with a preload, and a solid model representation of the fastener with preload. It is found that the different fastener modeling methods produce slightly different fatigue damage predictions, and that this uncertainty in modeling is insignificant compared to uncertainty in the input. Consequently, any of these methods is considered appropriate. In order to make this assertion, multiaxial fatigue methods are investigated, and a proportional method is selected based on a biaxiality metric.
U.S. nuclear power plants are seeking to implement wireless communications for cost-effective operations. New technology introduced into power plants must not introduce security concerns into critical plant functions. This paper describes the potential for new security concerns with proposed nuclear power plant wireless system implementations and methods of evaluation. While two aspects of concern are introduced, only one (cyber attack vulnerability) is expanded with a description of test setup and methods. A novel method of cyber vulnerability discovery is also described. The goal of this research is to establish wireless technology as a part of a secure operations architecture that brings increased efficiency without introducing new security concerns.
We use Bayesian data analysis to predict dengue fever outbreaks and quantify the link between outbreaks and meteorological precursors tied to the breeding conditions of vector mosquitos. We use Hamiltonian Monte Carlo sampling to estimate a seasonal Gaussian process modeling infection rate, and aperiodic basis coefficients for the rate of an “outbreak level” of infection beyond seasonal trends across two separate regions. We use this outbreak level to estimate an autoregressive moving average (ARMA) model from which we extrapolate a forecast. We show that the resulting model has useful forecasting power in the 6–8 week range. The forecasts are not significantly more accurate with the inclusion of meteorological covariates than with infection trends alone.
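As an illustration of the forecasting step, the sketch below fits an ARMA model to a placeholder "outbreak level" series with statsmodels and extrapolates an 8-week forecast; the series is synthetic, and the GP/HMC stage that would produce it is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

# Placeholder "outbreak level" series with known ARMA(1,1) structure.
np.random.seed(0)
series = ArmaProcess(ar=[1, -0.6], ma=[1, 0.3]).generate_sample(nsample=200)

fit = ARIMA(series, order=(2, 0, 1)).fit()   # ARMA(2,1): differencing d = 0
fc = fit.get_forecast(steps=8)               # the paper's 6-8 week horizon
print(fc.predicted_mean)
print(fc.conf_int())                         # forecast uncertainty bands
```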
Some of the primary barriers to widespread adoption of metal additive manufacturing (AM) are persistent defect formation in built components, high material costs, and lack of consistency in powder feedstock. To generate more reliable, complex-shaped metal parts, it is crucial to understand how feedstock properties change with reuse and how that affects build mechanical performance. Powder particles interacting with the energy source, yet not consolidated into an AM part can undergo a range of dynamic thermal interactions, resulting in variable particle behavior if reused. In this work, we present a systematic study of 316L powder properties from the virgin state through thirty powder reuses in the laser powder bed fusion process. Thirteen powder characteristics and the resulting AM build mechanical properties were investigated for both powder states. Results show greater variability in part ductility for the virgin state. The feedstock exhibited minor changes to size distribution, bulk composition, and hardness with reuse, but significant changes to particle morphology, microstructure, magnetic properties, surface composition, and oxide thickness. Additionally, sieved powder, along with resulting fume/condensate and recoil ejecta (spatter) properties were characterized. Formation mechanisms are proposed. It was discovered that spatter leads to formation of single crystal ferrite through large degrees of supercooling and massive solidification. Ferrite content and consequently magnetic susceptibility of the powder also increases with reuse, suggesting potential for magnetic separation as a refining technique for altered feedstock.
Many test articles exhibit slight nonlinearities which result in natural frequencies shifting between data from different references. This shifting can confound mode fitting algorithms because a single mode can appear as multiple modes when the data from multiple references are combined in a single data set. For this reason, modal test engineers at Sandia National Laboratories often fit data from each reference separately. However, this creates complexity when selecting a final set of modes, because a given mode may be fit from a number of reference data sets. The color-coded complex mode indicator function was developed as a tool that could be used to reduce a complex data set into a manageable figure that displays the number of modes in a given frequency range and also the reference that best excites the mode. The tool is wrapped in a graphical user interface that allows the test engineer to easily iterate on the selected set of modes, visualize the MAC matrix, quickly resynthesize data to check fits, and export the modes to a report-ready table. This tool has proven valuable, and has been used on very complex modal tests with hundreds of response channels and a handful of reference locations.
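A minimal sketch of the underlying computation, assuming an FRF array H of shape (n_freq, n_out, n_ref) with at least as many outputs as references: the CMIF curves are the singular values of the FRF matrix at each spectral line, and each point can be colored by the reference with the largest participation in the corresponding right singular vector. This is a generic reconstruction of the idea, not Sandia's tool.

```python
import numpy as np
import matplotlib.pyplot as plt

def color_coded_cmif(H, freqs):
    """Plot CMIF curves colored by the best-exciting reference (n_out >= n_ref)."""
    n_freq, _, n_ref = H.shape
    for k in range(n_ref):
        sv = np.empty(n_freq)
        dominant = np.empty(n_freq, dtype=int)
        for i in range(n_freq):
            _, s, Vh = np.linalg.svd(H[i])
            sv[i] = s[k]                            # k-th CMIF curve
            dominant[i] = np.argmax(np.abs(Vh[k]))  # reference with max participation
        plt.scatter(freqs, sv, c=dominant, s=2, cmap="tab10")
    plt.yscale("log")
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("CMIF")
    plt.show()
```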
Small components are becoming increasingly prevalent in today’s society. Springs are a commonly found piece-part in many mechanisms, and as these components become smaller, so do the springs inside of them. Because of their size, small manufacturing defects or other damage to the spring may become significant: a tiny gouge might end up being a significant portion of the cross-sectional area of the wire. However, their small size also makes it difficult to detect such flaws and defects in an efficient manner. This work aims to investigate the effectiveness of using dynamic measurements to detect damage to a miniature spring. Due to their small size, traditional instrumentation cannot be used to take measurements on the spring. Instead, the non-contact Laser Doppler Vibrometry technique is investigated. Natural frequencies and operating shapes are measured for a number of springs. These results are compared against springs that have been intentionally flawed to determine if the change in dynamic properties is a reasonable metric for damage detection.
Many methods have been proposed for updating finite element matrices using experimentally derived modal parameters. By using these methods, a finite element model can be made to exactly match the experiment. These techniques have not achieved widespread use in finite element modeling because they introduce non-physical matrices. Recently, Scanning Laser Doppler Vibrometry (SLDV) has enabled finer measurement point resolution and more accurate measurement point placement with no mass loading compared to traditional accelerometer or roving hammer tests. Therefore, it is worth reinvestigating these updating procedures with high-resolution data inputs to determine if they are able to produce finite element models that are suitable for substructuring. A rough finite element model of an Ampair Wind Turbine Blade was created, and an SLDV measurement was performed that measured three-dimensional data at every node on one surface of the blade. This data was used to update the finite element model so that it exactly matched test data. A simple substructuring example of fixing the base of the blade was performed and compared to previously measured fixed-base data.
Experiments are a critical part of the model validation process, and the credibility of the resulting simulations is itself dependent on the credibility of the experiments. The impact of experimental credibility on model validation occurs at several points throughout the model validation and uncertainty quantification (MVUQ) process. Many aspects of experiments involved in the development and verification and validation (V&V) of computational simulations will impact the overall simulation credibility. In this document, we define experimental credibility in the context of model validation and decision making. We summarize possible elements for evaluating experimental credibility, sometimes drawing from existing and preliminary frameworks developed for evaluation of computational simulation credibility. The proposed framework is an expert elicitation tool for planning, assessing, and communicating the completeness and correctness of an experiment (“test”) in the context of its intended use—validation. The goals of the assessment are (1) to encourage early communication and planning between the experimentalist, computational analyst, and customer, and (2) to communicate experimental credibility. This assessment tool could also be used to decide between potential existing data sets to be used for validation. The evidence and story of experimental credibility will support the communication of overall simulation credibility.
Differences in impedance are usually observed when components are tested in fixtures at lower levels of assembly from those in which they are fielded. In this work, the Kansas City National Security Campus (KCNSC) test bed hardware geometry is used to explore how the form of the objective function affects the adequate reproduction of relevant response characteristics at the next level of assembly. Inverse methods within Sandia National Laboratories’ Sierra/SD code suite along with the Rapid Optimization Library (ROL) are used for identifying an unknown material (variable shear and bulk modulus) distributed across a predefined fixture volume. Comparisons of the results between time-domain based objective functions are presented. The development of the objective functions, solution sensitivity, and solution convergence will be discussed in the context of the practical considerations required for creating a realizable set of test hardware based on the variable-modulus optimized solutions.
In the past decade, multi-axis vibration testing has progressed from its early research stages towards becoming a viable technology which can be used to simulate more realistic environmental conditions. The benefits of multi-axis vibration simulation over traditional uniaxial testing methods have been demonstrated by numerous authors. However, many challenges still exist to best utilize this new technology. Specifically, methods to obtain accurate and reliable multi-axis vibration specifications based on data acquired from field tests are of great interest. Traditional single axis derivation approaches may be inadequate for multi-axis vibration as they may not constrain profiles to adhere to proper cross-axis relationships—they may introduce behavior that is neither controllable nor representative of the field environment. A variety of numerical procedures have been developed and studied by previous authors. The intent of this research is to benchmark the performance of these different methods in a well-controlled lab setting to provide guidance for their usage in a general context. Through a combination of experimental and analytical work, the primary questions investigated are as follows: (1) In the absence of part-to-part variability and changes to the boundary condition, which specification derivation method performs the best? (2) Is it possible to optimize the sensor selection from field data to maximize the quality/accuracy of derived multi-axis vibration specifications? (3) Does the presence of response energy in field data which did not originate due to rigid body motion degrade the accuracy of multi-axis vibration specifications obtained via these derivation methods?
Random vibration tests have been conducted for over five decades using vibration machines which excite a test item in uniaxial motion. With the advent of multi-shaker test systems, excitation in multiple axes and/or at multiple locations is feasible. For random vibration testing, both the auto spectrum of the individual controls and the cross spectrum, which defines the relationship between the controls, define the test environment. This is a striking contrast to uniaxial testing, where only the control auto spectrum is defined. In a vibration test the energy flow proceeds from drive excitation voltages to control acceleration auto and cross spectral densities and, finally, to response auto and cross spectral densities. This paper examines these relationships, which are encoded in the frequency response function. Following the presentation of a complete system diagram, the relationships between the excitation and control spectral density matrices are clarified. It is generally assumed that the control auto spectra are known from field measurements, but the control cross spectra may be unknown or uncertain. Given these constraints, control algorithms often prioritize replication of the field auto spectrum, and the system dynamics determine the cross spectrum. The Nearly Independent Drive Algorithm, described herein, is one such approach. A further issue in Multi-Input Multi-Response testing is the link between the cross spectra at one set of locations and the auto spectra at a second set of locations. The effect of excitation cross spectra on control auto spectra is one important case, encountered in every test. The effect of control cross spectra on response auto spectra is important since we may desire to adjust control cross spectra to achieve some desired response auto spectra. The relationships between cross spectra at one set of locations and auto spectra at another set of locations are examined with the goal of elucidating the advantages and limitations of using control cross spectra to define response auto spectra.
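The central relationship discussed above can be stated compactly: at each frequency line, the response spectral density matrix follows from the input spectral density matrix through the FRF, Syy = H Sxx H^H, so the auto spectra (diagonals) at one set of locations are coupled to the cross spectra (off-diagonals) at another. A small numerical illustration with made-up FRF and input values:

```python
import numpy as np

def response_spectral_density(H, Sxx):
    """Syy = H Sxx H^H for one frequency line; H is n_out x n_in."""
    return H @ Sxx @ H.conj().T

H = np.array([[1.0 + 0.2j, 0.3], [0.1, 0.8 - 0.1j]])    # illustrative FRF
Sxx = np.array([[2.0, 0.5 + 0.5j], [0.5 - 0.5j, 1.0]])  # Hermitian input SDM
Syy = response_spectral_density(H, Sxx)
print(np.real(np.diag(Syy)))   # response auto spectra (depend on input cross terms)
```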
Surmick, David M.; Dagel, Daryl; Parigger, Christian G.
Spatially resolved, line-of-sight measurements of aluminum monoxide emission spectra in laser ablation plasma are used with Abel inversion techniques to extract radial plasma temperatures. Contour mapping of the radially deconvolved signal intensity shows a ring of AlO formation near the plasma boundary with the ambient atmosphere. Simulations of the molecular spectra were coupled with the line profile fitting routines. Temperature results are presented with simultaneous inferences from lateral, asymmetric radial, and symmetric radial AlO spectral intensity profiles. This analysis indicates that shockwave phenomena in the radial profiles, including a temperature drop behind the blast wave created during plasma initiation, were measured.
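For readers unfamiliar with the reduction step, the sketch below implements a basic numerical Abel inversion, recovering a radial profile f(r) from a lateral line-of-sight profile F(y) via f(r) = -(1/pi) * integral_r^R F'(y)/sqrt(y^2 - r^2) dy. It is a generic textbook scheme, not the authors' fitting routine, checked against the known Gaussian transform pair.

```python
import numpy as np

def _trapz(g, x):
    # Portable trapezoidal rule (avoids np.trapz/np.trapezoid naming drift).
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

def abel_invert(F, y):
    """Crude two-point Abel inversion on a uniform lateral grid y >= 0."""
    dFdy = np.gradient(F, y)
    f = np.zeros_like(F)
    for i, r in enumerate(y[:-1]):
        yy = y[i + 1:]   # start past y = r (integrable singularity there)
        f[i] = -_trapz(dFdy[i + 1:] / np.sqrt(yy**2 - r**2), yy) / np.pi
    return f

# Sanity check with a known pair: f(r) = exp(-r^2) <-> F(y) = sqrt(pi) exp(-y^2)
y = np.linspace(0.0, 4.0, 400)
F = np.sqrt(np.pi) * np.exp(-y**2)
f = abel_invert(F, y)
print(np.max(np.abs(f[:300] - np.exp(-y[:300]**2))))  # modest error from the crude singularity handling
```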
Trajectories of unique particles were tracked using spatially and temporally interlaced single-shot images from multiple views. Synthetic data were investigated to verify the ability of the technique to track particles in three dimensions and time. The synthetic data were composed of four images from unique perspectives at four instances in time. The analysis presented verifies that under certain circumstances particle trajectories can be mapped in three dimensions from a minimal amount of information, i.e., one image per viewing angle. These results can enable four-dimensional measurements where they may otherwise prove infeasible.
A collaborative testing and analysis effort investigating the effects of threaded fastener size on load-displacement behavior and failure was conducted to inform the modeling of threaded connections. A series of quasistatic tension tests was performed on #00, #02, #04, #06, and #4 (1/4”) A286 stainless steel fasteners (NAS1351N00-4, NAS1352N02-6, NAS1352N04-8, NAS1352N06-10, and NAS1352N4-24, respectively) to provide calibration and validation data for the analysis portion of the study. The data obtained from the testing series reveal that the size of the fastener may influence the characteristic stress-strain response, as the failure strains and ultimate loads varied between the smaller (#00 and #02) and larger (#04, #06, and #4) fasteners. These results motivated the construction of high-fidelity finite element models to investigate the underlying mechanics of these responses. Two threaded fastener models, one with axisymmetric threads and the other with full 3D helical threads, were calibrated to subsets of the data to compare modeling approaches, analyze fastener material properties, and assess how well these calibrated properties extend to fasteners of varying sizes and whether trends exist that can inform future best modeling practices. The modeling results are complemented with a microstructural analysis to further investigate the root cause of the size effects observed in the experimentally obtained load-displacement curves. These analyses are intended to inform and guide reduced-order modeling approaches that can be incorporated in system-level analyses of abnormal environments, where modeling fidelity is limited and each component is not always testable, but models must still capture fastener behavior up to and including failure. This complementary testing and analysis study identifies differences in the characteristic stress-strain response of varying sized fasteners, provides microstructural evidence to support these variations, evaluates our ability to extrapolate calibrated properties to different sized fasteners, and ultimately further educates the analysis community on the robustness of fastener modeling.
Mazumdar, Yi C.; Heyborne, Jeffery D.; Guildenbecher, Daniel R.; Smyser, Michael E.; Slipchenko, Mikhail N.
Digital in-line holography techniques for coherent imaging are important for object sizing and tracking applications in multiphase flows and combustion systems. In explosive, supersonic, or hypersonic environments, however, gas-phase shocks impart phase distortions that obscure objects. In this work, we implement phase-conjugate digital in-line holography (PCDIH) with both a picosecond laser and a nanosecond pulse-burst laser for reducing the phase distortions caused by shock-waves. The technique operates by first passing a forward beam of coherent light through the shock-wave phase-distortion. The light then enters a phase-conjugate mirror, created via a degenerate four-wave-mixing process, to produce a return beam with the opposite phase delay to the forward beam. By passing the return beam back through the phase-distortion, the phase delays are canceled, producing phase-distortion-free images. This technique enables the measurement of the three-dimensional position and velocity of objects through shock-wave distortions at rates up to 500 kHz. The method is demonstrated in a variety of experiments, including imaging supersonic shock-waves, sizing objects through laser-spark plasma-generated shock-waves, and tracking explosively-generated hypersonic fragments.
The simulation of various structural systems often requires accounting for the fasteners holding the distinct parts together. When fasteners are not expected to yield, simple reduced representations like linear springs can be used. However, in analyses of abnormal environments where fastener failure must be accounted for, fasteners are often represented with more detail. One common approach is to mesh the head and the shank of the fastener as smooth cylinders, neglecting the threads (referred to as a plug model). The plug can elicit a nonlinear mechanical response by using an elastic-plastic material model, which can be calibrated to experimental load-displacement curves, typically in pure tension. Fasteners rarely fail exclusively in pure tension, so the study presented here considers current plug modeling practice at multiaxial loadings. Comparisons of this plug model are made to experimental data as well as a higher fidelity model that includes the threads of the fastener. For both models, a multilinear elastic-plastic constitutive model is used, and two different failure models are explored to capture the ultimate failure of the fastener. The load-displacement curves of all three sets of data (the plug model, threaded model, and the experiments) are compared. The comparisons between simulations and experiments contribute to understanding the role of multiaxial loading on fastener response, and motivate future work on improving fastener models that can accurately capture multiaxial failure.
In computational structural dynamics problems, the ability to calibrate numerical models to physical test data often depends on determining the correct constraints within a structure with mechanical interfaces. These interfaces are defined as the locations within a built-up assembly where two or more disjoint structures are connected. In reality, the normal and tangential forces arising from contact and friction, respectively, are the only means of transferring loads between structures. In linear structural dynamics, a typical modeling approach is to linearize the interface using springs and dampers to connect the disjoint structures, then tune the coefficients to obtain sufficient accuracy between numerically predicted and experimentally measured results. This work explores the use of a numerical inverse method to predict the area of the contact patch located within a bolted interface by defining multi-point constraints. The presented model updating procedure assigns contact definitions (fully stuck, slipping, or no contact) in a finite element model of a jointed structure as a function of contact pressure computed from a nonlinear static analysis. The contact definitions are adjusted until the computed modes agree with experimental test data. The methodology is demonstrated on a C-shape beam system with two bolted interfaces, and the calibrated model predicts modal frequencies with <3% total error summed across the first six elastic modes.
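The updating loop can be summarized in sketch form; the fem object and its methods below are hypothetical stand-ins (not Sierra/SD or any specific code's API) used only to show the pressure-thresholded assignment of multi-point constraints and the modal-frequency-error criterion.

```python
def calibrate_contact_patch(fem, test_freqs, thresholds):
    """Pick the contact-pressure threshold whose stuck patch best matches test modes."""
    best = None
    for p_stick in thresholds:
        pressure = fem.nonlinear_static_contact_pressure()   # from nonlinear statics
        stuck = [n for n in fem.interface_nodes if pressure[n] >= p_stick]
        fem.apply_mpc_ties(stuck)                            # tie only the stuck patch
        freqs = fem.modal_frequencies(n_modes=6)
        err = sum(abs(f - t) / t for f, t in zip(freqs, test_freqs))
        if best is None or err < best[0]:
            best = (err, p_stick)
    return best   # (summed frequency error, calibrated threshold)
```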
Reza, Shahed; Klein, Brianna A.; Baca, Albert G.; Armstrong, Andrew M.; Allerman, Andrew A.; Douglas, Erica A.; Kaplar, Robert J.
The emerging Al-rich AlGaN-channel AlxGa1-xN/AlyGa1-yN high-electron-mobility transistors (HEMTs) with 0.7 ≤ y < x ≤ 1.0 have the potential to greatly exceed the power handling capabilities of today's GaN HEMTs, possibly by five times. This projection is based on the expected 4× enhancement of the critical electric field, the 2× enhancement of sheet carrier density, and the parity of the electron saturation velocity for Al-rich AlGaN-channel HEMTs relative to GaN-channel HEMTs. In this paper, the expected increase in RF power density in Al-rich AlGaN-channel HEMTs is calculated by theoretical analysis and computer simulations, based on existing data on long-channel AlGaN devices. It is shown that a saturated power density of 18 W/mm, a power-added efficiency of 55%, and an output third-order intercept point over 40 dB can be achieved for this technology. The method for large-signal RF performance estimation presented in this paper is generic and can be applied to other novel high-power device technologies at the early stages of development.
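For orientation, a standard class-A load-line estimate (a textbook relation, not necessarily the authors' full analysis) connects the quantities mentioned above: saturated output power per mm of gate width scales as Imax(Vbr - Vknee)/8, so a 4× breakdown-voltage enhancement at comparable current density directly multiplies the achievable power density. The numbers below are illustrative only, not from the paper.

```python
def loadline_power_density(i_max_a_per_mm, v_breakdown, v_knee):
    """Class-A load-line estimate of saturated RF power density (W/mm)."""
    return i_max_a_per_mm * (v_breakdown - v_knee) / 8.0

# Illustrative (assumed) values: 1.2 A/mm, 100 V breakdown, 5 V knee
print(loadline_power_density(1.2, 100.0, 5.0))   # ~14 W/mm
```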
This paper details a joint numerical and experimental investigation of transition-delaying roughness. A numerical simulation was undertaken to design a surface roughness configuration that would suppress Mack’s second-mode instability in order to maintain laminar flow over a Mach 8 hypersonic blunt cone. Following the design process, the roughness configuration was implemented on a hypersonic cone test article. Multiple experimental runs at the Mach 8 condition with different Reynolds numbers were performed, as well as runs at an off-design Mach 5 condition. The roughness did appear to delay transition in the Mach 8 case as intended, but did not appear to delay transition in the Mach 5 case. Concurrently, simulations of the roughness configuration were also computed for both Mach cases utilizing the experimental conditions. Linear stability theory was applied to the simulations in order to determine their boundary layer stability characteristics. This investigation of multiple cases helps to validate the numerical code with real experimental results as well as provide physical evidence for the transition-delaying roughness phenomenon.
Al-rich AlGaN-channel high electron mobility transistors with 80-nm long gates and 85% (70%) Al in the barrier (channel) were evaluated for RF performance. The dc characteristics include a maximum current of 160 mA/mm with a transconductance of 24 mS/mm, limited by source and drain contacts, and an on/off current ratio of 10^9. fT of 28.4 GHz and fMAX of 18.5 GHz were determined from small-signal S-parameter measurements. Output power density of 0.38 W/mm was realized at 3 GHz in a power sweep using on-wafer load pull techniques.
A controlled between-groups experiment was conducted to demonstrate the value of human factors for process design. Twenty-four Sandia National Laboratories employees completed a simple visual inspection task simulating receipt inspection. The experimental group process was designed to conform to human factors and visual inspection principles, whereas the control group process was designed without consideration of such principles. Results indicated the experimental group exhibited superior performance accuracy, lower workload, and more favorable usability ratings as compared to the control group. The study provides evidence to help human factors experts revitalize the critical message regarding the benefits of human factors involvement for a new generation of systems engineers.
With the growth of light field imaging as an emerging diagnostic tool for the measurement of 3D particle fields, various algorithms for 3D particle measurements have been developed. These methods have exploited both the computational refocusing and perspective-shift capabilities of plenoptic imaging. This work continues the development of a 3D particle location method based on perspective-shifted plenoptic images. Specific focus is placed on adaptations that provide increased robustness to variations in size and shape characteristics, and improved measurement of them, thus allowing measurements of fragment fields. An experimental data set of non-spherical fragment simulants is studied to examine how the uncertainty of this perspective-shift-based processing method depends on particle shape, and to quantify the uncertainty of fragment size measurements. Synthetic data sets are examined to provide metrics relating the measurement uncertainty achievable with this method, particle density, and processing time requirements.
Poisson's ratio is a material constant representing the compressibility of a material's volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may change rather significantly, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of the Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. The Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.
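The data reduction implied above is simple enough to state directly: the instantaneous Poisson's ratio is the negative ratio of the measured radial to axial engineering strain. A minimal sketch, guarding against division by near-zero axial strain:

```python
import numpy as np

def poissons_ratio(eps_radial, eps_axial, eps_min=1e-4):
    """nu(t) = -eps_radial(t) / eps_axial(t) from simultaneous strain histories."""
    eps_axial = np.asarray(eps_axial, dtype=float)
    nu = np.full_like(eps_axial, np.nan)
    ok = np.abs(eps_axial) > eps_min            # avoid divide-by-near-zero early in the record
    nu[ok] = -np.asarray(eps_radial, dtype=float)[ok] / eps_axial[ok]
    return nu
```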
Understanding the dynamic behavior of geomaterials is critical for refining modeling and simulation of applications that involve impacts or explosions. Obtaining material properties of geomaterials is challenging, particularly in tension, due to the brittle and low-strength nature of such materials. The dynamic split tension technique (also called the dynamic Brazilian test) has been employed in recent decades to determine the dynamic tensile strength of geomaterials, primarily because the split tension method is relatively straightforward to implement in a Kolsky compression bar. Typically, investigators use the peak load reached by the specimen to calculate the tensile strength of the specimen material, which is valid when the specimen is compressed at a quasi-static strain rate. However, the same assumption cannot be safely made at dynamic strain rates due to wave propagation effects. In this study, the dynamic split tension (or Brazilian) test technique is revisited. High-speed cameras and digital image correlation (DIC) were used to image the failure of the Brazilian-disk specimen to discover when the first crack occurred relative to the measured peak load during the experiment. Differences in first-crack location and time on either side of the sample were compared. The strain rate when the first crack initiated was also compared to the traditional estimate of strain rate based on the specimen stress history.
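For reference, the quasi-static reduction that the study re-examines is the standard Brazilian-disk relation: the indirect tensile strength at peak load P for a disk of diameter D and thickness t is sigma_t = 2P/(pi*D*t). A one-function sketch with illustrative numbers:

```python
import math

def brazilian_tensile_strength(P_peak, D, t):
    """Indirect tensile strength of a Brazilian disk, sigma_t = 2P/(pi*D*t)."""
    return 2.0 * P_peak / (math.pi * D * t)

# Illustrative values: 15 kN peak load, 50 mm diameter, 25 mm thickness
print(brazilian_tensile_strength(P_peak=15e3, D=0.05, t=0.025))  # ~7.6 MPa
```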
To better understand the factors contributing to electromagnetic (EM) observables in developed field sites, we examine in detail through finite element analysis the specific effects of casing completion design. The presence of steel casing has long been exploited for improved subsurface interrogation, and there is growing interest in remote methods for assessing casing integrity across a range of geophysical scenarios related to resource development and sequestration/storage activities. Accurate modeling of the casing response to EM stimulation is recognized as relevant, but it is a difficult computational challenge because of the casing's high conductivity contrast with geomaterials and its relatively small volume fraction over the field scale. We find that casing completion design can have a significant effect on the observed EM fields, especially at zero frequency. This effect appears to originate in the capacitive coupling between the inner production casing and the outer surface casing. Furthermore, we show that an equivalent “effective conductivity” for the combined surface/production casing system is inadequate for replicating this effect, regardless of whether the casings are grounded to one another or not. Lastly, we show that in situations where this coupling can be ignored and knowledge of casing currents is not required, simplifying the casing as a perfectly conducting line can be an effective strategy for reducing the computational burden in modeling field-scale response.
We seek to develop a fundamental understanding of dynamic strain aging through discovery experiments to inform the development of a dislocation-based micromechanical constitutive model that can tie to existing continuum-level plasticity and failure analysis tools. Dynamic strain aging (DSA) occurs when dislocation motion is hindered by the repetitive interaction of solute atoms, most frequently interstitials, with dislocation cores. At temperatures where the interstitials are mobile enough, the atmospheres can repeatedly reform, lock, and release dislocations, producing a characteristic serrated flow curve. This phenomenon can produce reversals in the expected mechanical behavior of materials with varying strain rate or temperature. Loss of ductility can also occur. Experiments were conducted on various forms of 304L stainless steel over a range of temperatures and strain rates, along with temporally extreme measurements to capture information from the data signals during serrated flow. The experimental approach and observations for some of the test conditions are described herein.
Polymeric foams have been extensively used in shock isolation applications because of their superior shock or impact energy absorption capability. In order to meet shock isolation requirements, polymeric foams need to be experimentally characterized and numerically modeled in terms of material response under shock/impact loading and then evaluated with experimental, analytical, and/or numerical efforts. Measurement of the dynamic compressive stress-strain response of polymeric foams is therefore fundamental to assessing shock isolation performance. However, radial inertia is a severe issue when characterizing soft materials, and it is even more complicated and difficult to address in soft polymeric foams. In this study, we developed an analytical method to calculate the additional stress induced by radial inertia in a polymeric foam specimen. The effect of the changing profile of Poisson’s ratio during deformation on radial inertia was investigated. The analytical results were also compared with experimental results obtained from Kolsky compression bar tests on a silicone foam.
Delaminations are of great concern to any fiber reinforced polymer composite (FRPC) structure. In order to develop the most efficient structure, designers may incorporate hybrid composites to either mitigate the weaknesses in one material or take advantage of the strengths of another. When these hybrid structures are used at service temperatures outside of the cure temperature, residual stresses can develop at the dissimilar interfaces. These residual stresses impact the initial stress state at the crack tip of any flaw in the structure and govern whether microcracks, or other defects, grow into large-scale delaminations. Recent experiments have shown that for certain hybrid layups used to determine the strain energy release rate, G, there may be significant temperature dependence of the apparent toughness. While Nairn and Yokozeki believe that this effect may be attributed solely to the release of stored strain energy in the specimen as the crack grows, others point to a change in the inherent mode mixity of the test, as in the classic solution for an interface crack between two elastic layers given by Suo and Hutchinson. When a crack forms at the interface of two dissimilar materials, the stress field at the crack tip produces a mixed-mode failure even when the external loading, as in the case of a double cantilever beam (DCB), is pure mode I. Perhaps a change in apparent toughness with temperature is the result of an increase in mode mixity. This study serves to investigate whether the residual stress formed at the bimaterial interface produces a noticeable shift in the strain energy release rate-mode mixity curve.
The concept of progressive failure modeling is an ongoing concern within the composite community. A common approach is to employ a building block approach, where constitutive material properties lead to lamina-level predictions, which then lead to laminate predictions and then up to structural predictions. There are advantages to such an approach: developments can be made within each step, and the whole workflow can be updated. However, advancements made at higher length scales can be hampered by insufficient modeling at lower length scales, which can make industry-wide evaluations of methodologies more complicated. For instance, significant advances have been made in recent years to strain-rate-independent failure theories at the lamina level. However, since the Northwestern Theory is stress dependent, for adequate use in a progressive damage model, a similarly robust constitutive model must also be employed to calculate these lamina-level stresses. An improper constitutive model could easily cause a valid failure model to produce incorrect results. Also, any global strain rate applied to a multi-directional laminate will produce a spectrum of local lamina-level strain rates, so it is important for the constitutive law to account for strain-rate-dependent deformation.
As the complexity of composite laminates rises, with the use of hybrid structures and multi-directional laminates and with large operating temperature ranges, process-induced residual stresses become a significant factor in the design. In order to properly model the initial stress state of a structure, the stress-free temperature (the temperature at which the initial crosslinks are formed) as well as the contribution of cure shrinkage must be measured. Many in industry have moved towards using complex cure kinetics models with the assistance of commercial software packages such as COMPRO. However, in this study a simplified residual stress model is used, based on the coefficient of thermal expansion (CTE) mismatch and the change in temperature from the stress-free temperature. The limits of this simplified model can only be adequately tested using an accurate measure of the stress-free temperature. Various methods were used in this study to determine the stress-free temperature, and their results are used to validate one another. Two approaches were taken, both involving either carbon fiber reinforced polymer (CFRP) or glass fiber reinforced polymer (GFRP) cobonded to aluminum. The first method used a composite-aluminum plate which was allowed to warp due to the residual stress. The other involved producing a geometrically stable hybrid composite-aluminum cylinder which was then cut open to allow it to spring in. Both methods placed the specimens within an environmental chamber and tracked the residual-stress-induced deformation as the temperature was ramped beyond the stress-free temperature. Both methods revealed a similar stress-free temperature that can be used in future cure modeling simulations.
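The simplified model described above reduces to a one-line relation; the sketch below states it for a constrained layer under biaxial stress, as one common form of the CTE-mismatch estimate rather than the study's exact calculation. All property values are illustrative assumptions.

```python
def cte_mismatch_stress(E, nu, alpha_layer, alpha_sub, T, T_sf):
    """Biaxial thermal stress in a constrained layer:
    sigma = E * (alpha_sub - alpha_layer) * (T - T_sf) / (1 - nu)."""
    return E * (alpha_sub - alpha_layer) * (T - T_sf) / (1.0 - nu)

# Illustrative (assumed) values: transverse CFRP ply on aluminum,
# cooled from a 120 C stress-free temperature to room temperature.
print(cte_mismatch_stress(E=8e9, nu=0.3, alpha_layer=30e-6,
                          alpha_sub=23e-6, T=20.0, T_sf=120.0))  # ~8 MPa
```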
Poisson’s ratio of soft, hyperelastic foam materials such as silicone foam is typically assumed to be both constant and a small number near zero. However, when silicone foam is subjected to large deformation into densification, the Poisson’s ratio may change significantly, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios. The evolution of the Poisson’s ratio of foam materials has not yet been characterized. In this study, radial and axial measurements of specimen strain are made simultaneously during quasi-static and dynamic compression tests on a silicone foam. The Poisson’s ratio was found to exhibit a transition from compressible to nearly incompressible based on strain level, and it reached different values at quasi-static and dynamic rates.
The Sandia Fracture Challenges provide the mechanics community a forum for assessing its ability to predict ductile fracture through a blind, round-robin format in which computationalists are asked to predict the deformation and failure of an arbitrary geometry given experimental calibration data. This presentation will cover the three Sandia Fracture Challenges, with emphasis on the third. The third Challenge, issued in 2017, consisted of an additively manufactured 316L stainless steel tensile bar with through holes and internal cavities that could not have been conventionally machined. The volunteer prediction teams were provided extensive materials data, ranging from tensile tests of specimens printed on the same build tray to electron backscatter diffraction microstructural maps and micro-computed tomography scans of the Challenge geometry. The teams were asked a variety of questions, including predictions of variability in the resulting fracture response, as the basis for assessment of their predictive capabilities. This presentation will describe the Challenges and compare the experimental results to the predictions, identifying gaps in capabilities, both experimentally and computationally, to inform future investments. The Sandia Fracture Challenge has evolved into the Structural Reliability Partnership, in which researchers will create several blind challenges covering a wider variety of topics in structural reliability. This presentation will also describe this new venture.
The Tularosa study was designed to understand how defensive deception, including both cyber and psychological deception, affects cyber attackers. Over 130 red teamers participated in a network penetration task over two days in which we controlled both the presence of deceptive defensive techniques and the explicit mention of them. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The study included a battery of questionnaires (e.g., experience, personality, etc.) and cognitive tasks (e.g., fluid intelligence, working memory, etc.), allowing for the characterization of a “typical” red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate, etc.) to be correlated with the cyber events. This paper focuses on the design, implementation, data, and population characteristics, and begins to examine preliminary results.
This report examines the role of interfaces in electronic packaging applications with the focus placed on soldering technology. Materials and processes are described with respect to their roles on the performance and reliability of associated interfaces. The discussion will also include interface microstructures created by coatings and finishes that are frequently used in packaging applications. Numerous examples are cited to illustrate the importance of interfaces in physical and mechanical metallurgy as well as the engineering function of interconnections. Regardless of the specific application, interfaces are non-equilibrium structures, which has important ramifications for the long-term reliability of electronic packaging.
A procedure for determining the joint uncertainty of Arrhenius parameters across multiple combustion reactions of interest is demonstrated. This approach is capable of constructing the joint distribution of the Arrhenius parameters arising from the uncertain measurements performed in specific target experiments without having direct access to the underlying experimental data. The method involves constructing an ensemble of hypothetical data sets with summary statistics consistent with the available information reported by the experimentalists, followed by a fitting procedure that learns the structure of the joint parameter density across reactions using this consistent hypothetical data as evidence. The procedure is formalized in a Bayesian statistical framework, employing maximum-entropy and approximate Bayesian computation methods and utilizing efficient Markov chain Monte Carlo techniques to explore data and parameter spaces in a nested algorithm. We demonstrate the application of the method in the context of experiments designed to measure the rates of selected chain reactions in the H2-O2 system and highlight the utility of this approach for revealing the critical correlations between the parameters within a single reaction and across reactions, as well as for maximizing consistency when utilizing rate parameter information in predictive combustion modeling of systems of interest.
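A toy rejection-ABC sketch conveys the core idea on a single Arrhenius reaction, k(T) = A exp(-Ea/(R*T)): parameters are accepted when predicted rates match summary statistics of (here synthetic) data within a tolerance, and the accepted sample exposes the strong ln A-Ea correlation the paper emphasizes. The nested MCMC/maximum-entropy machinery of the actual method is not reproduced.

```python
import numpy as np

R = 8.314                                  # J/(mol K)
T = np.array([1000.0, 1200.0, 1500.0])     # measurement temperatures
lnk_obs = np.log(1e10 * np.exp(-1.5e5 / (R * T)))   # synthetic "reported" rates

rng = np.random.default_rng(1)
accepted = []
for _ in range(200_000):
    lnA = rng.uniform(20, 26)              # broad priors on Arrhenius parameters
    Ea = rng.uniform(1e5, 2e5)
    lnk = lnA - Ea / (R * T)
    if np.max(np.abs(lnk - lnk_obs)) < 0.05:   # ABC tolerance on log-rates
        accepted.append((lnA, Ea))
post = np.array(accepted)
print(post.shape[0], np.corrcoef(post.T)[0, 1])   # near-unity lnA-Ea correlation
```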
A general formulation of silicon damage metrics and associated energy-dependent response functions relevant to the radiation effects community is provided. Using this formulation, a rigorous quantitative treatment of the energy-dependent uncertainty contributors is performed. This resulted in the generation of a covariance matrix for the displacement kerma, the Norgett-Robinson-Torrens-based damage energy, and the 1-MeV(Si)-equivalent damage function. When a careful methodology is used to apply a reference 1-MeV damage value, the systematic uncertainty in the fast fission region is seen to be removed, and the uncertainty for integral metrics in broad-based fission-based neutron fields is demonstrated to be significantly reduced.
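For context, the 1-MeV(Si)-equivalent reduction (in the style of ASTM E722) folds the neutron spectrum with the silicon displacement-damage response and normalizes by a reference 1-MeV damage value; a minimal sketch:

```python
import numpy as np

def fluence_1mev_eq(E, phi, D, D_1mev_ref):
    """Equivalent monoenergetic 1-MeV(Si) fluence from spectrum phi(E)
    and damage function D(E), via a trapezoidal folding integral."""
    damage = np.sum(0.5 * (phi[1:] * D[1:] + phi[:-1] * D[:-1]) * np.diff(E))
    return damage / D_1mev_ref
```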
Silicon-on-insulator latch designs and layouts that are robust to multiple-node charge collection are introduced. A general Monte Carlo radiative energy deposition (MRED) approach is used to identify potential single-event susceptibilities associated with different layouts prior to fabrication. MRED is also applied to bound single-event testing responses of standard and dual interlocked cell latch designs. Heavy ion single-event testing results validate new latch designs and demonstrate bounds for standard latch layouts.
With the growing interest in exploring Jupiter's moons, technologies with greater than 10 Mrad(Si) tolerance are now needed to survive the Jovian environment. Conductive-bridging random access memory (CBRAM) is a nonvolatile memory that has shown a high tolerance to total ionizing dose (TID). However, it is not well understood how CBRAM behaves in an energetic ion environment where displacement damage (DD) effects may also be an issue. In this paper, the response of CBRAM to 100-keV Li, 1-MeV Ta, and 200-keV Si ion irradiations is examined. Ion bombardment was performed with increasing fluence steps until the CBRAM devices failed to hold their programmed state. The TID and DD dose (DDD) at the fluence of failure were calculated and compared across the tested ion species. Results indicate that failures are more highly correlated with TID than DDD. DC cycling tests were performed during 100-keV Li irradiations, and evidence was found that the mobile Ag ion supply diminished with increasing fluence. The cycling results, in addition to prior 14-MeV neutron work, suggest that DD may play a role in the eventual failure of a CBRAM device in a combined radiation environment.
Sampling of drinking water distribution systems is performed to ensure good water quality and protect public health. Sampling also satisfies regulatory requirements and is done to respond to customer complaints or emergency situations. Water distribution system modeling techniques can be used to plan and inform sampling strategies. However, a high degree of accuracy and confidence in the hydraulic and water quality models is required to support real-time response. One source of error in these models is related to uncertainty in model input parameters. Effective characterization of these uncertainties and their effect on contaminant transport during a contamination incident is critical for providing confidence estimates in model-based design and evaluation of different sampling strategies. In this paper, the effects of uncertainty in customer demand, isolation valve status, bulk reaction rate coefficient, contaminant injection location, start time, duration, and rate on the size and location of the contaminant plume are quantified for two example water distribution systems. Results show that the most important parameter was the injection location. The size of the plume was also affected by the reaction rate coefficient, injection rate, and injection duration, whereas the exact location of the plume was additionally affected by the isolation valve status. Uncertainty quantification provides a more complete picture of how contaminants move within a water distribution system and more information when using modeling results to select sampling locations.
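A minimal sketch of the scenario-sampling step such a study implies, using Latin hypercube sampling over the uncertain injection and reaction parameters named above; the bounds and the five-parameter layout are illustrative assumptions, and each row would drive one hydraulic/water-quality simulation (e.g., with EPANET or the WNTR toolkit):

```python
import numpy as np
from scipy.stats import qmc

# Uncertain inputs (illustrative): injection node index, start time (h),
# duration (h), rate (kg/h), and bulk reaction rate coefficient (1/day).
l_bounds = [0,   0.0,  0.5,  0.1, -1.0]
u_bounds = [99, 24.0, 12.0, 10.0,  0.0]

sampler = qmc.LatinHypercube(d=5, seed=0)
unit = sampler.random(n=200)                     # 200 scenarios in [0, 1]^5
scenarios = qmc.scale(unit, l_bounds, u_bounds)  # rescale to physical bounds
scenarios[:, 0] = np.floor(scenarios[:, 0])      # discretize the node index

# Each row is one contamination scenario to feed the network model; plume
# size and location would be recorded per scenario to build the statistics.
print(scenarios[:3])
```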
We present an approach to the development and evaluation of environmental stress screening (ESS) for a dormant-storage, multi-shot component. The ESS is developed to precipitate and detect latent manufacturing defects without significantly degrading the component's probability of successful function under normal operating environments. The evaluation of the ESS is achieved by using an additional strength of screen (SOS) operation to test for escapes from the screen. The resulting data are pass/fail data only, because the characteristics of this type of component do not allow a standard 'time to failure' analysis. The calculated SOS efficiency f is then used to estimate initial field 'reliability.' We illustrate the use of the methodology with a case study involving an electrical component manufactured within the Nuclear Security Enterprise (NSE). In development and qualification, twelve failures were detected by the ESS, and the SOS operations detected one escape. The resulting analysis showed the SOS efficiency to be approximately 92%, adequate for the component reliability goal. The resulting initial field reliability was estimated to be 99.3%, acceptable for this electrical component. Failure investigations were conducted to determine the root cause of each of these failures. Information from these investigations resulted in changes to the manufacturing process to eliminate or minimize the recurrence of these failures. The number of ESS failures has been reduced, and no additional failures have been observed at the SOS operation.
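A worked reading of the reported counts (an assumption about how f relates to the pass/fail data, not necessarily the paper's exact estimator): twelve ESS detections against one SOS-detected escape give a screen efficiency near the quoted 92%:

```python
# Screen efficiency from pass/fail counts (illustrative reading of the abstract):
# defects caught by the ESS versus the one escape caught downstream at the SOS.
caught_by_ess = 12
escapes_at_sos = 1
f = caught_by_ess / (caught_by_ess + escapes_at_sos)
print(f"estimated screen efficiency f = {f:.3f}")  # ~0.923, i.e. ~92%
```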
Apparent char kinetic rates are commonly used to predict pulverized coal char burning rates. These kinetic rates quantify the char burning rate based on the temperature of the particle and the oxygen concentration at the external particle surface, inherently neglecting the impact of variations in the internal diffusion rate and penetration of oxygen. To investigate the impact of bulk gas diffusivity on these phenomena during Zone II burning conditions, experimental measurements of char particle combustion temperature and burnout were performed for a subbituminous coal burning in an optical entrained flow reactor with helium and nitrogen diluents. The combination of much higher thermal conductivity and mass diffusivity in the helium environments resulted in cooler char combustion temperatures than in equivalent N2 environments. Measured char burnout was similar in the two environments for a given bulk oxygen concentration but was approximately 60% higher in helium environments for a given char combustion temperature. To augment the experimental measurements, detailed particle simulations of the experimental conditions were conducted with the SKIPPY code. These simulations also showed a 60% higher burning rate in the helium environments for a given char particle combustion temperature. To differentiate the effect of enhanced diffusion through the external boundary layer from the effect of enhanced diffusion through the particle, additional SKIPPY simulations were conducted under selected conditions in N2 and He environments for which the temperature and concentrations of reactants (oxygen and steam) were identical on the external char surface. Under these conditions, which yield matching apparent char burning rates, the computed char burning rate for He was 50% larger, demonstrating the potential for significant errors with the apparent kinetics approach. However, for the specific application to oxy-fuel combustion in CO2 environments, these results suggest the error may be as low as 3% when applying apparent char burning rates from nitrogen environments.
In the context of the DARPA-funded project SEQUOIA, we are interested in the design under uncertainty of a jet engine nozzle subject to the performance requirements of a reconnaissance mission for a small unmanned military aircraft. This design task involves complex and expensive aero-thermo-structural computational analyses in which it is of paramount importance to also include the effect of the uncertain variables to obtain reliable predictions of the device's performance. In this work we focus on the forward propagation analysis, which is a key part of the design under uncertainty workflow. This task cannot be tackled directly by means of single-fidelity approaches due to the prohibitive computational cost associated with each realization. We report here a summary of our latest advances regarding several multilevel and multifidelity strategies designed to alleviate these challenges. The overall goal of these techniques is to reduce the computational cost of analyzing a high-fidelity model by resorting to less accurate, but less computationally demanding, lower-fidelity models. The features of these multifidelity UQ approaches are initially illustrated and demonstrated on several model problems and afterward on the aero-thermo-structural analysis of the jet engine nozzle.
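To fix ideas, a minimal two-fidelity Monte Carlo control-variate estimator, the basic mechanism behind the multilevel/multifidelity strategies described above; the models, sample sizes, and coefficient choice are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def f_hi(x):   # expensive high-fidelity model (stand-in)
    return np.sin(x) + 0.05 * x**2

def f_lo(x):   # cheap, correlated low-fidelity model (stand-in)
    return np.sin(x)

N_hi, N_lo = 50, 5000                        # few HF runs, many cheap LF runs
x_hi = rng.uniform(0, 1, N_hi)
x_lo = rng.uniform(0, 1, N_lo)

# Paired evaluations on shared inputs give the control-variate coefficient.
y_hi, y_lo_paired = f_hi(x_hi), f_lo(x_hi)
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired)

# HF mean corrected by the (well-resolved) LF mean: lower variance than
# the plain N_hi-sample HF average at nearly the same HF cost.
est = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
print("two-fidelity estimate of E[f_hi]:", est)
```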
The generation of optimal trajectories for test flights of hypersonic vehicles with highly nonlinear dynamics and complicated physical and path constraints is often time consuming and sometimes intractable for high-fidelity, software-in-the-loop vehicle models. Practical use of hypersonic vehicles requires the ability to rapidly generate a feasible and robust optimal trajectory. We propose a solution that involves interaction between an optimizer using a low fidelity 3-DOF vehicle model and feedback from vehicle simulations of varying fidelities, with the goal of rapidly converging to a solution trajectory for a hypersonic vehicle mission. Further computational efficiency is sought using aerodynamic surrogate models in place of aerodynamic coefficient look-up tables. We address the need for rapidly converging optimization by analyzing how model fidelity choice impacts the quality and speed of the resulting guidance solution.
Chhantyal-Pun, Rabi; Shannon, Robin J.; Tew, David P.; Caravan, Rebecca L.; Duchi, Marta; Wong, Callum; Ingham, Aidan; Feldman, Charlotte; Mcgillen, Max R.; Khan, M.U.; Antonov, Ivan O.; Rotavera, Brandon; Ramasesha, Krupa; Osborn, David L.; Taatjes, Craig A.; Percival, Carl J.; Shallcross, Dudley E.; Orr-Ewing, Andrew J.
Ammonia and amines are emitted into the troposphere by various natural and anthropogenic sources, where they have a significant role in aerosol formation. Here, we explore the significance of their removal by reaction with Criegee intermediates, which are produced in the troposphere by ozonolysis of alkenes. Rate coefficients for the reactions of two representative Criegee intermediates, formaldehyde oxide (CH2OO) and acetone oxide ((CH3)2COO), with NH3 and CH3NH2 were measured using cavity ring-down spectroscopy. Temperature-dependent rate coefficients, k(CH2OO + NH3) = (3.1 ± 0.5) × 10^-20 T^2 exp[(1011 ± 48)/T] cm^3 s^-1 and k(CH2OO + CH3NH2) = (5 ± 2) × 10^-19 T^2 exp[(1384 ± 96)/T] cm^3 s^-1, were obtained in the 240 to 320 K range. Both reactions of CH2OO were found to be independent of pressure in the 10 to 100 Torr (N2) range, and average rate coefficients k(CH2OO + NH3) = (8.4 ± 1.2) × 10^-14 cm^3 s^-1 and k(CH2OO + CH3NH2) = (5.6 ± 0.4) × 10^-12 cm^3 s^-1 were deduced at 293 K. An upper limit of ≤2.7 × 10^-15 cm^3 s^-1 was estimated for the rate coefficient of the (CH3)2COO + NH3 reaction. Complementary measurements were performed with mass spectrometry using synchrotron radiation photoionization, giving k(CH2OO + CH3NH2) = (4.3 ± 0.5) × 10^-12 cm^3 s^-1 at 298 K and 4 Torr (He). Photoionization mass spectra indicated production of NH2CH2OOH and CH3N(H)CH2OOH functionalized organic hydroperoxide adducts from the CH2OO + NH3 and CH2OO + CH3NH2 reactions, respectively. Ab initio calculations performed at the CCSD(T)(F12*)/cc-pVQZ-F12//CCSD(T)(F12*)/cc-pVDZ-F12 level of theory predicted pre-reactive complex formation, consistent with previous studies. Master equation simulations of the experimental data using the ab initio computed structures identified submerged barrier heights of -2.1 ± 0.1 kJ mol^-1 and -22.4 ± 0.2 kJ mol^-1 for the CH2OO + NH3 and CH2OO + CH3NH2 reactions, respectively. The reactions of NH3 and CH3NH2 with CH2OO are not expected to compete with its removal by reaction with (H2O)2 in the troposphere. Similarly, losses of NH3 and CH3NH2 by reaction with Criegee intermediates will be insignificant compared with reactions with OH radicals.
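The reported parameters can be cross-checked by evaluating the modified Arrhenius form k(T) = A T² exp(B/T) at 293 K; a quick sketch using the nominal central values reproduces the quoted room-temperature averages to within the stated uncertainties:

```python
import numpy as np

def k_mod_arrhenius(T, A, B):
    """k(T) = A * T^2 * exp(B / T), in cm^3 s^-1 for A in cm^3 s^-1 K^-2."""
    return A * T**2 * np.exp(B / T)

# Nominal central values reported above for the two CH2OO reactions:
print(k_mod_arrhenius(293.0, 3.1e-20, 1011.0))  # ~8.4e-14, matches NH3 average
print(k_mod_arrhenius(293.0, 5e-19, 1384.0))    # ~4.8e-12 vs (5.6 +/- 0.4)e-12,
                                                # consistent given A = (5 +/- 2)e-19
```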
The excitation of Mg3F2GeO4:Mn thermographic phosphors using a UV LED centered at 365 nm is explored. Two different LED drivers, one available commercially and one built at Sandia National Laboratories (SNL), were used and assessed for their viability for phosphor thermometry utilizing LED excitation and intensified, high-speed, CMOS camera data collection. The SNL-driven LED was then utilized as an excitation source for Mg3F2GeO4:Mn-phosphor calibration and demonstration experiments measuring the temperature of a silicon carbide heating rod using the time-decay method. The results presented here serve as a step toward determining the application space wherein SNL-driven LED excitation would be preferable over the use of laser systems for thermographic phosphor measurements.
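A minimal sketch of the analysis step in the time-decay method: fit a single-exponential decay to a luminescence trace and extract the lifetime, which a calibration curve then maps to temperature. The trace below is synthetic, and the single-exponential model is a deliberate simplification:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, I0, tau, bg):
    """Single-exponential luminescence decay with a constant background."""
    return I0 * np.exp(-t / tau) + bg

# Synthetic intensity trace (stand-in for intensified high-speed camera data).
rng = np.random.default_rng(3)
t = np.linspace(0, 10e-3, 500)                 # s
true_tau = 2.0e-3                              # s, temperature-dependent lifetime
signal = decay(t, 1.0, true_tau, 0.02) + 0.01 * rng.standard_normal(t.size)

(fit_I0, fit_tau, fit_bg), _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
print(f"fitted lifetime tau = {fit_tau*1e3:.2f} ms")
# Temperature then follows from a lifetime-vs-temperature calibration curve.
```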
Montmorillonite, with an empirical formula of Na0.2Ca0.1Al2Si4O10(OH)2(H2O)10, is a di-octahedral smectite. Montmorillonite-rich bentonite is a primary buffer candidate for high level nuclear waste (HLW) and used nuclear fuel to be disposed in mild environments. In such environments, temperatures are expected to be ≤ 90 °C, the solutions are of low ionic strength, and the pH is close to neutral. Outside these parameter ranges, the performance of montmorillonite-rich bentonite deteriorates because swelling particles collapse as a result of illitization and the swelling clay minerals dissolve, followed by precipitation of non-swelling minerals. It is well known that tri-octahedral smectites such as saponite, with an ideal formula of Mg3(Si, Al)4O10(OH)2•4H2O for the Mg-end member (saponite-15A), are less susceptible to alteration under harsh conditions. Recently, Mg-bearing saponite has been favorably considered as a preferable engineered buffer material for the Swedish very deep holes (VDH) disposal concept in crystalline rock formations. In the VDH concept, HLW is disposed in deep holes at depths between 2,000 m and 4,000 m. At such deployment depths, temperatures are expected to be between 100 °C and 150 °C, and the groundwater is of high ionic strength. Harsh chemical conditions of high pH are also introduced by repository designs in which concretes and cements are used as plugs and buffers. In addition, harsh chemical conditions introduced by high ionic strength solutions are present in repository designs in salt formations and sedimentary basins. For instance, the two brines associated with the salt formations of the Waste Isolation Pilot Plant (WIPP) in the USA have ionic strengths of 5.82 mol·kg^-1 (ERDA-6) and 8.26 mol·kg^-1 (GWB). At the Asse site, proposed for a geological repository in salt formations in Germany, the Q-brine has an ionic strength of ~13 mol·kg^-1. In this work, we present our investigations of the stability of saponite under hydrothermal conditions in harsh environments.
Geogenic noble gases are contained in crustal rocks at inter- and intracrystalline sites. In this study, bedded rock salt from southern New Mexico was deformed in a variety of triaxial compression states while the release of naturally contained helium and argon was measured using mass spectrometry. Noble gas release is empirically correlated to volumetric strain and acoustic emissions. At low confining pressures, rock salt deforms primarily by microfracturing, rupturing crystal grains and releasing helium and argon with a large number of acoustic emissions, both measured in real time. At higher confining pressures, microfracturing is reduced and the rock salt is presumed to deform more by intracrystalline flow, releasing smaller amounts of noble gases with fewer acoustic emissions. Our work implies that geogenic gas release during deformation may provide an additional signal containing information on the type and amount of deformation occurring in a variety of earth systems.
Falling particle receivers (FPRs) are an important component of future falling particle concentrating solar power plants for next-generation energy production. High thermal efficiency in an FPR is required to achieve high thermodynamic efficiency of the overall system. External winds can significantly impact the thermal performance of cavity-type FPRs, primarily by changing the air flow in and out of the aperture. A numerical parametric study is performed in this paper to quantify the effect of wind on the thermal performance of an FPR. Wind direction was found to be a significant parameter that can affect the receiver thermal efficiency. The particle mass flow rate did not significantly change the overall effect of wind on the receiver. The receiver efficiency was a strong function of the particle diameter, but this was primarily a result of varying curtain opacity with different diameters and not of varying effects with wind. Finally, the model was used to demonstrate that receiver efficiencies of 90% are achievable under the assumption that the effect of wind/advective losses is mitigated.
33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019
We examine the problem of crawling the community structure of a multiplex network containing multiple layers of edge relationships. While there has been a great deal of work examining community structure in general, and some work on the problem of sampling a network to preserve its community structure, to the best of our knowledge this is the first work to consider this problem on multiplex networks. We consider the specific case in which the layers of a multiplex network have different query (collection) costs and reliabilities, and a data collector is interested in identifying the community structure of the most expensive layer. We propose MultiComSample (MCS), a novel algorithm for crawling a multiplex network. MCS uses multiple levels of multi-armed bandits to determine the best layers, communities, and node roles for selecting nodes to query. We test MCS against six baseline algorithms on real-world multiplex networks and achieve large gains in performance. For example, after consuming a budget equivalent to sampling 20% of the nodes in the expensive layer, we observe that MCS outperforms the best baseline by up to 49%.
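The abstract does not spell out MCS's bandit internals, so the following is a generic UCB1 sketch of the kind of layer/role selection rule such a crawler could build on; the arm definitions and reward signal are placeholders:

```python
import math
import random

class UCB1:
    """UCB1 bandit: balances exploring arms with exploiting high-reward ones."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        t = sum(self.counts) + 1
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm                     # try every arm once first
        ucb = [v + math.sqrt(2 * math.log(t) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental running mean of observed rewards for this arm.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: arms could be (layer, node-role) choices; the reward could be the
# marginal gain in recovered community structure per unit query cost.
bandit = UCB1(n_arms=3)
for _ in range(100):
    arm = bandit.select()
    reward = random.random() * (arm + 1) / 3   # stand-in reward signal
    bandit.update(arm, reward)
print(bandit.counts)                           # pulls concentrate on the best arm
```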
In order to improve the accuracy of subsurface target classification with ground penetrating radar (GPR) systems, it is desirable to transmit and receive ultra-wideband pulses with varying combinations of polarization (a technique referred to as polarimetry). The sinuous antenna exhibits such desirable properties as ultra-wide bandwidth, polarization diversity, and a low-profile form factor, making it an excellent candidate for the radiating element of such systems. However, sinuous antennas are dispersive since the active region moves with frequency along the structure, resulting in the distortion of radiated pulses. This distortion may be compensated in signal processing with accurately simulated or measured antenna phase information. However, in a practical GPR, the antenna performance may deviate from that simulated, accurate measurements may be impractical, and/or the dielectric loading of the environment may cause deviations. In such cases, it may be desirable to employ a simple dispersion model based on antenna design parameters that may be optimized in situ. This paper explores the dispersive properties of the sinuous antenna and presents a simple, adjustable model that may be used to correct dispersed pulses. The dispersion model is successfully applied to both simulated and measured scenarios, thereby enabling the use of sinuous antennas in polarimetric GPR applications.
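A hedged sketch of frequency-domain dispersion compensation: apply a single-parameter phase model (here a log-frequency phase giving a ~1/f group delay, a plausible placeholder for a log-periodic structure, not the paper's exact model) and then correct with its conjugate:

```python
import numpy as np

fs = 20e9                                      # sample rate (Hz)
t = np.arange(2048) / fs
pulse = np.exp(-((t - 20e-9) / 1e-9) ** 2)     # transmitted UWB pulse (stand-in)

f = np.fft.rfftfreq(t.size, 1 / fs)
f[0] = f[1]                                    # avoid log(0) at DC

# Single-parameter placeholder dispersion model: a log-periodic structure's
# active region yields a group delay ~ a/omega, i.e., phase phi = -a*ln(f/f_ref);
# 'a' is the adjustable knob that would be tuned in situ.
a = 40.0
phi = -a * np.log(f / f[1])

dispersed = np.fft.irfft(np.fft.rfft(pulse) * np.exp(1j * phi), n=t.size)
corrected = np.fft.irfft(np.fft.rfft(dispersed) * np.exp(-1j * phi), n=t.size)
print("max residual after correction:", np.abs(corrected - pulse).max())
```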
The wave energy resource for U.S. coastal regions has been estimated at approximately 1,200 TWh/yr (EPRI 2011), a magnitude comparable to that of natural gas and coal energy generation. Although the wave energy industry is relatively new from a commercial perspective, wave energy conversion (WEC) technology is developing at an increasing pace. Ramping up to commercial-scale deployment of WEC arrays requires demonstration of performance that is economically competitive with other energy generation methods. The International Electrotechnical Commission has provided technical specifications for developing wave energy resource assessments and characterizations, but it is ultimately up to developers to create pathways for making a specific site competitive. The present study uses example sites to evaluate the annual energy production of different wave energy conversion strategies and examines pathways available to make WEC deployments competitive. The wave energy resource is evaluated for sites along the U.S. coast, and combinations of wave modeling and basic resource assessments determine factors affecting the cost of energy at these sites. The results of this study advance the understanding of wave resource and WEC device assessment required to evaluate commercial-scale deployments.
Falling particle receivers are an emerging technology for use in concentrating solar power systems. In this work, quartz tubes cut in half to form tube shells (referred to as quartz half-shells) are investigated for use as a full or partial aperture cover to reduce radiative and advective losses from the receiver. A receiver subdomain and surrounding air volume are modeled using ANSYS® Fluent®. The model is used to simulate fluid dynamics and heat transfer for the following cases: (1) open aperture, (2) aperture fully covered by quartz half-shells, and (3) aperture partially covered by quartz half-shells. We compare the percentage of total incident solar power lost due to conduction through the receiver walls, advective losses through the aperture, and radiation exiting out of the aperture. Contrary to expected outcomes, simulation results using the simplified receiver subdomain show that quartz aperture covers can increase radiative losses and, in the partially covered case, also increase advective losses. These increased heat losses are driven by elevated quartz half-shell temperatures and have the potential to be mitigated by active cooling and/or material selection.
Advances in machine intelligence have sparked interest in hardware accelerators to implement these algorithms, yet embedded electronics have stringent power and area budgets and speed requirements that may limit nonvolatile memory (NVM) integration. In this context, the development of fast nanomagnetic neural networks using minimal training data is attractive. Here, we extend an inference-only proposal using the intrinsic physics of domain-wall magnetic tunnel junction (DW-MTJ) neurons for online learning to implement fully unsupervised pattern recognition, using winner-take-all networks that contain either random or plastic synapses (weights). Meanwhile, a read-out layer trains in a supervised fashion. We find that our proposed design can approach state-of-the-art success on the task relative to competing memristive neural network proposals, while eliminating much of the area and energy overhead that would typically be required to build the neuronal layers with CMOS devices.
Sandia National Laboratories has developed a model characterizing the nonlinear encoding operator of the world's first hyperspectral X-ray computed tomography (H-CT) system as a sequence of discrete-to-discrete, linear image system matrices across unique and narrow energy windows. In fields such as national security, industry, and medicine, H-CT has various applications in the non-destructive analysis of objects, such as material identification, anomaly detection, and quality assurance. However, many approaches to computed tomography (CT) make gross assumptions about the image formation process to apply post-processing and reconstruction techniques, leading to inferior data and resulting in faulty measurements, assessments, and quantifications. To abate this challenge, Sandia National Laboratories has modeled the H-CT system through a set of point response functions, which can be used for calibration and analysis of the real-world system. This work presents the numerical method used to produce the model through the collection of data needed to describe the system; the parameterization used to compress the model; and the decompression of the model for computation. By using this linear model, large amounts of accurate synthetic H-CT data can be efficiently produced, greatly reducing the costs associated with physical H-CT scans. Furthermore, successfully approximating the encoding operator for the H-CT system enables quick assessment of H-CT behavior for various applications in high-performance reconstruction, sensitivity analysis, and machine learning.
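A minimal sketch of the discrete-to-discrete linear forward model described above, one system matrix per narrow energy window; the sizes and matrices are random stand-ins for the measured point-response-function model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_energy, n_pix, n_meas = 8, 256, 512      # illustrative sizes only

# Hypothetical per-energy-window system matrices (in practice assembled from
# measured point response functions and stored in a compressed parameterization).
A = [rng.normal(0, 1.0 / n_pix, (n_meas, n_pix)) for _ in range(n_energy)]
x = rng.random((n_energy, n_pix))           # object's energy-resolved attenuation

# Synthetic hyperspectral sinogram: one linear projection per energy window.
y = np.stack([A[e] @ x[e] for e in range(n_energy)])
print(y.shape)                              # (8, 512)
```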
Ionic liquids are a unique class of materials with several potential applications in electrochemical energy storage. When used in electrolytes, these highly coordinating solvents can influence device performance through their high viscosities and strong solvation behaviors. In this work, we explore the effects of pyrrolidinium cation structure and Li+ concentration on transport processes in ionic liquid electrolytes. We present correlated experimental measurements and molecular simulations of Li+ mobility and O2 diffusivity, and connect these results to dynamic molecular structural information and device performance. In the context of Li-O2/Li-air battery chemistries, we find that Li+ mobility is largely influenced by Li+-anion coordination, but that both Li+ and O2 diffusion may be affected by variations of the pyrrolidinium cation and Li+ concentration.
Batteries for grid storage applications must be inexpensive, safe, and reliable, as well as have a high energy density. Here, we utilize the high capacity of sulfur (S) (1675 mAh g-1, based on the idealized redox couple of S2-/S) to demonstrate, for the first time, a reversible high-capacity solid-state S-based cathode for alkaline batteries. To maintain S in the solid state, it is bound to copper (Cu), initially in its fully reduced state as the sulfide. Upon charging, the sulfide is oxidized to a polysulfide species that is captured and maintained in the solid state by the Cu ions. This solid-state sulfide/polysulfide cathode was analyzed versus a zinc (Zn) anode, which gives a nominal >1.2 V cell voltage based on the sulfide/polysulfide redox cathode chemistry. It was found that, for the S cathode to have the best cycle life in the solid state, it must be bound not merely to Cu ions but to Cu ions in the +1 valence state, forming Cu2S as a discharge product. Zn/Cu2S batteries cycled between 1.45 V and 0.4 V vs. Zn displayed capacities of 1500 mAh g-1 (based on mass of S) or ~300 mAh g-1 (based on mass of Cu2S) and high areal (>23 mAh cm-2) and energy densities (>135 Wh L-1), but suffered from moderate cycle lives (<250 cycles). The failure mechanism of this electrode was found to be disproportionation of the charged S species into irreversible sulfite, releasing the bound Cu ions. The Cu ions become free to perform Cu-specific redox reactions, which slowly changes the battery redox chemistry from that of S to that of Cu with a S additive. Batteries utilizing the Cu2S cathode and a 50% depth of charge (DOC) cathode cycling protocol, with 5 wt% Na2S added to the electrolyte, retained a cathode capacity of 838 mAh g-1 (based on the mass of S) or 169 mAh g-1 (based on mass of Cu2S) after 450 cycles with >99.7% coulombic efficiency. These Zn/Cu2S batteries provided a grid-storage-relevant energy density of >42 Wh L-1 (at 65 wt% Cu2S loading), despite only using a 3% depth of discharge (DOD) for the Zn anode. This work opens the way to a new class of energy-dense grid storage batteries based on high-capacity solid-state S-based cathodes.
Historically, “skill-of-the-craft” was the single measure of job qualification. In those days, no one gave workers a procedure to follow. Today, large complex industries rely on procedures as a way of ensuring the job will be performed reliably and safely. Typically, these procedures provide a layer of protection to mitigate the severity of an accident or prevent it from happening. While paper-based procedures have long been the standard way of doing business, there is increasing interest in replacing this format with Computer-Based Procedures. However, the transition from paper to paperless can be more problematic than it seems. Some issues that have led to these problems are discussed here. It is hoped that, by knowing what these issues are, the same mistakes will not be repeated in the future. Mistake avoidance begins with a well-defined set of user requirements for the proposed system. In addition, it is important to realize that Computer-Based Procedures are likely to be placed in a facility that has never used this type of technology before. As with any new technology, a new way of thinking must come with it. Otherwise, if attempts are made to intermingle old ideas with new ways of doing business, problems are destined to occur.
Neuromorphic computers could overcome efficiency bottlenecks inherent to conventional computing through parallel programming and readout of artificial neural network weights in a crossbar memory array. However, selective and linear weight updates and <10-nanoampere read currents are required for learning that surpasses conventional computing efficiency. We introduce an ionic floating-gate memory array based on a polymer redox transistor connected to a conductive-bridge memory (CBM). Selective and linear programming of a redox transistor array is executed in parallel by overcoming the bridging threshold voltage of the CBMs. Synaptic weight readout with currents <10 nanoamperes is achieved by diluting the conductive polymer with an insulator to decrease the conductance. The redox transistors endure >1 billion write-read operations and support >1-megahertz write-read frequencies.
When attempting to integrate single-molecule fluorescence microscopy with microfabricated devices such as microfluidic channels, fabrication constraints may prevent using traditional coverslips. Instead, the fabricated devices may require imaging through material with a different thickness or index of refraction. Altering either can easily reduce the quality of the image formation (measured by the Strehl ratio) by a factor of 2 or more, reducing the signal-to-noise ratio accordingly. In such cases, successful detection of single-molecule fluorescence may prove difficult or impossible. Here we provide software to calculate the effect of non-design materials upon the Strehl ratio or ensquared energy and explore the impact of common materials used in microfabrication.
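For intuition, a small sketch using the Maréchal approximation, which relates the Strehl ratio to RMS wavefront error; the example numbers are illustrative, and the released software referenced above performs a fuller calculation:

```python
import numpy as np

def strehl_marechal(rms_wavefront_error, wavelength):
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2).

    Valid for small aberrations; sigma is the RMS wavefront error in the
    same units as the wavelength.
    """
    return float(np.exp(-(2 * np.pi * rms_wavefront_error / wavelength) ** 2))

lam = 532e-9                            # emission wavelength (m), illustrative
# ~lambda/14 RMS error: the classical "diffraction-limited" criterion.
print(strehl_marechal(lam / 14, lam))   # ~0.8
# ~lambda/5 RMS of extra aberration from a non-design substrate thickness:
print(strehl_marechal(lam / 5, lam))    # ~0.2, i.e. the >2x loss noted above
```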
International nuclear safeguards inspectors visit nuclear facilities to assess their compliance with international nonproliferation agreements. Inspectors note whether anything unusual is happening in the facility that might indicate the diversion or misuse of nuclear materials, or anything that changed since the last inspection. They must complete inspections under restrictions imposed by their hosts, regarding both their use of technology or equipment and the time allotted. Moreover, because inspections are sometimes completed by different teams months apart, it is crucial that their notes accurately facilitate change detection across a delay. The current study addressed these issues by investigating how note-taking methods (e.g., digital camera, hand-written notes, or their combination) impacted memory in a delayed recall test of a complex visual array. Participants studied four arrays of abstract shapes and industrial objects using a different note-taking method for each, then returned 48–72 h later to complete a memory test using their notes to identify objects changed (e.g., location, material, orientation). Accuracy was highest for both conditions using a camera, followed by hand-written notes alone, and all were better than having no aid. Although the camera-only condition benefitted study times, this benefit was not observed at test, suggesting drawbacks to using just a camera to aid recall. Change type interacted with note-taking method; although certain changes were overall more difficult, the note-taking method used helped mitigate these deficits in performance. Finally, elaborative hand-written notes produced better performance than simple ones, suggesting strategies for individual note-takers to maximize their efficacy in the absence of a digital aid.
This work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply (Petrov-)Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time Gauss-Newton with Approximated Tensors (GNAT) variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include a reduction of both the spatial and temporal dimensions of the dynamical system, and a priori error bounds that bound the solution error by the best space-time approximation error and whose stability constants exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy for a fixed spatio-temporal discretization.
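A linear, dense toy version of the core ST-LSPG step (not the paper's nonlinear, hyper-reduced implementation): build a space-time basis from an SVD of stacked trajectories, then minimize the full space-time residual over the reduced coordinates:

```python
import numpy as np

rng = np.random.default_rng(5)
n_x, n_t, k = 40, 25, 5                       # space dofs, time steps, basis size

# Training snapshots: each column stacks one full space-time trajectory.
training = rng.standard_normal((n_x * n_t, 20))
Phi, _, _ = np.linalg.svd(training, full_matrices=False)
Phi = Phi[:, :k]                              # space-time trial basis

# Linear stand-in for the space-time residual r(w) = b - A @ (Phi @ w);
# in the method, A encodes the time-discretized dynamics over all steps.
A = np.eye(n_x * n_t) + 0.01 * rng.standard_normal((n_x * n_t, n_x * n_t))
b = training[:, 0]

# ST-LSPG: minimize ||b - A Phi w||_2 over the k reduced coordinates.
w, *_ = np.linalg.lstsq(A @ Phi, b, rcond=None)
u_rom = Phi @ w                               # reconstructed space-time solution
print("relative residual:", np.linalg.norm(b - A @ u_rom) / np.linalg.norm(b))
```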
Mining rock salt results in subsurface damage, which may affect the strength because of applied stress, anisotropy, and deformation rate. In this study, we used a Kolsky compression bar to measure the high strain rate response of bedded and domal salt at strain rates up to approximately 50 s−1 in parallel and perpendicular directions to bedding or foliation direction depending on rock salt type. Both types of salt exhibited a negative strain rate effect wherein a decrease in strength was observed with increasing strain rate compared to strength measured in the quasi-static regime. Both materials exhibited strength anisotropy. Fracturing and microfracturing were the dominant deformation mechanisms. High pore pressures and frictional heating due to the high loading rate may have contributed to reduction in strength.
Photoelectrochemical (PEC) water splitting has the potential to significantly reduce the costs associated with electrochemical hydrogen production through the direct utilization of solar energy. Many PEC cells utilize liquid electrolytes that are detrimental to the durability of the photovoltaic (PV) or photoactive materials at the heart of the device. The membrane-electrode-assembly (MEA) style PEC cell presented herein is a deviation from that paradigm: a solid electrolyte is used, which allows the use of a water vapor feed. The result is a corresponding reduction in the amount of liquid and electrolyte contact with the PV, thereby opening the possibility of longer PEC device lifetimes. In this study, we demonstrate the operation of a liquid- and vapor-fed PEC device utilizing a commercial III-V photovoltaic that achieves a solar-to-hydrogen (STH) efficiency of 7.5% (12% as a PV-electrolyzer). While device longevity using liquid water was limited to less than 24 hours, replacement of the reactant with water vapor permitted 100 hours of continuous operation under steady-state conditions and diurnal cycling. Key findings include the observations that the exposure of the PV to bulk water or water vapor must be minimized, and that operating in a mass-transport-limited regime gave preferable performance.
Advancements in micro-scale additive manufacturing techniques have made it possible to fabricate intricate architectures including 3D interpenetrating electrode microstructures. A mesoscale electrochemical lithium-ion battery model is presented and implemented in the PETSc software framework using a finite volume scheme. The model is used to investigate interpenetrating 3D electrode architectures that offer potential energy density and power density improvements over traditional particle bed battery geometries. Using the computational model, a variety of battery electrode geometries are simulated and compared across various battery discharge rates and length scales to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density vs. power density relationship of the electrode microstructures is compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle-bed electrode designs are predicted, and electrode microarchitectures derived from minimal surfaces are shown to be superior under a minimum feature size constraint, especially when subjected to high discharge currents. An average Thiele modulus formulation is presented as a back-of-the-envelope calculation to predict the performance trends of microbattery electrode geometries.
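The closing back-of-the-envelope idea can be illustrated with the classic 1-D Thiele analysis; the rate constant, effective diffusivity, and feature sizes below are illustrative only, not the paper's calibrated values:

```python
import numpy as np

def thiele_effectiveness(k, D_eff, L):
    """Classic 1-D Thiele analysis: phi = L*sqrt(k/D_eff), eta = tanh(phi)/phi.

    k     : volumetric reaction-rate constant (1/s)
    D_eff : effective diffusivity in the porous electrode (m^2/s)
    L     : characteristic diffusion length of the feature (m)
    """
    phi = L * np.sqrt(k / D_eff)
    return phi, np.tanh(phi) / phi

# Illustrative numbers: shrinking the feature size sharply improves utilization,
# the qualitative trend driving the minimal-surface architectures above.
for L in (100e-6, 50e-6, 25e-6):
    phi, eta = thiele_effectiveness(k=10.0, D_eff=1e-10, L=L)
    print(f"L = {L*1e6:5.0f} um  phi = {phi:5.1f}  effectiveness = {eta:.3f}")
```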
Caswell, Jacob; Gans, Jason D.; Generous, Nicholas; Hudson, Corey M.; Merkley, Eric; Johnson, Curtis; Oehmen, Christopher; Omberg, Kristin; Purvine, Emilie; Taylor, Karen; Ting, Christina L.; Wolinsky, Murray; Xie, Gary
Progress in modern biology is being driven, in part, by the large amounts of freely available data in public resources such as the International Nucleotide Sequence Database Collaboration (INSDC), the world's primary database of biological sequence (and related) information. INSDC and similar databases have dramatically increased the pace of fundamental biological discovery and enabled a host of innovative therapeutic, diagnostic, and forensic applications. However, as high-value, openly shared resources with a high degree of assumed trust, these repositories share compelling similarities to the early days of the Internet. Consequently, as public biological databases continue to increase in size and importance, we expect that they will face the same threats as undefended cyberspace. There is a unique opportunity, before a significant breach and loss of trust occurs, to ensure they evolve with quality and security as a design philosophy rather than costly "retrofitted" mitigations. This Perspective surveys some potential quality assurance and security weaknesses in existing open genomic and proteomic repositories, describes methods to mitigate the likelihood of both intentional and unintentional errors, and offers recommendations for risk mitigation based on lessons learned from cybersecurity.
DNA structural transitions facilitate genomic processes, mediate drug-DNA interactions, and inform the development of emerging DNA-based biotechnology such as programmable materials and DNA origami. While some features of DNA conformational changes are well characterized, fundamental information such as the orientations of the DNA base pairs is unknown. Here, we use concurrent fluorescence polarization imaging and DNA manipulation experiments to probe the structure of S-DNA, an elusive, elongated conformation that can be accessed by mechanical overstretching. To this end, we directly quantify the orientations and rotational dynamics of fluorescent DNA-intercalated dyes. At extensions beyond the DNA overstretching transition, intercalators adopt a tilted (θ ~ 54°) orientation relative to the DNA axis, distinct from the nearly perpendicular orientation (θ ~ 90°) normally assumed at lower extensions. These results provide the first experimental evidence that S-DNA has substantially inclined base pairs relative to those of the standard (Watson-Crick) B-DNA conformation.
Gold deposition on rotating disk electrodes, Bi3+ adsorption on planar Au films, and superconformal Au filling of trenches up to 45 μm deep are examined in Bi3+-containing Na3Au(SO3)2 electrolytes with pH between 9.5 and 11.5. Higher pH is found to increase the potential-dependent rate of Bi3+ adsorption on planar Au surfaces, shortening the incubation period that precedes active Au deposition on planar surfaces and bottom-up filling in patterned features. Decreased contact angles between the Au seeded sidewalls and bottom-up growth front also suggest improved wetting. The bottom-up filling dynamic in trenches is, however, lost at pH 11.5. The impact of Au concentration, 80 mmol/L versus 160 mmol/L Na3Au(SO3)2, on bottom-up filling is examined in trenches up to ≈ 210 μm deep with aspect ratio of depth/width ≈ 30. The microstructures of void-free, bottom-up filled trench arrays used as X-ray diffraction gratings are characterized by scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD), revealing marked spatial variation of the grain size and orientation within the filled features.
Levine, Edlyn V.; Turner, Matthew J.; Kehayias, Pauli; Hart, Connor A.; Langellier, Nicholas; Trubko, Raisa; Glenn, David R.; Fu, Roger R.; Walsworth, Ronald L.
We provide an overview of the experimental techniques, measurement modalities, and diverse applications of the quantum diamond microscope (QDM). The QDM employs a dense layer of fluorescent nitrogen-vacancy (NV) color centers near the surface of a transparent diamond chip on which a sample of interest is placed. NV electronic spins are coherently probed with microwaves and optically initialized and read out to provide spatially resolved maps of local magnetic fields. NV fluorescence is measured simultaneously across the diamond surface, resulting in a wide-field, two-dimensional magnetic field image with adjustable spatial pixel size set by the parameters of the imaging system. NV measurement protocols are tailored for imaging of broadband and narrowband fields, from DC to GHz frequencies. Here we summarize the physical principles common to diverse implementations of the QDM and review example applications of the technology in geoscience, biology, and materials science.
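As a concrete example of the field-map step, a sketch converting per-pixel ODMR resonance splittings to the field projection along an NV axis via the NV gyromagnetic ratio (~28 GHz/T); the resonance maps below are synthetic stand-ins for fitted wide-field spectra:

```python
import numpy as np

GAMMA_NV = 28.024e9  # NV gyromagnetic ratio, Hz/T

def field_from_splitting(f_plus, f_minus):
    """Magnetic field projection on the NV axis from the ODMR splitting.

    The ms = +/-1 resonances split symmetrically about the zero-field
    splitting, with f_plus - f_minus = 2 * gamma * B_parallel.
    """
    return (f_plus - f_minus) / (2 * GAMMA_NV)

rng = np.random.default_rng(6)
f0 = 2.870e9                                   # zero-field splitting, Hz
b_true = 50e-6 * rng.random((64, 64))          # up to 50 uT across the image
f_plus = f0 + GAMMA_NV * b_true                # per-pixel resonance maps
f_minus = f0 - GAMMA_NV * b_true

b_map = field_from_splitting(f_plus, f_minus)  # T, per pixel
print(np.allclose(b_map, b_true))              # True: field map recovered
```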
Tin-germanium alloys are increasingly of interest as optoelectronic and thermoelectric materials as well as materials for Li/Na ion battery electrodes. However, the lattice incompatibility of bulk Sn and Ge makes creating such alloys challenging. By exploiting the unique strain tolerance of nanosized crystals, we have developed a facile synthetic method for homogeneous SnxGe1-x alloy nanocrystals with composition varying from essentially pure Ge to 95% Sn while still maintaining the cubic structure.
An electrodeposition process for void-free bottom-up filling of sub-millimeter-scale through-silicon vias (TSVs) with Cu is detailed. The 600 μm deep and nominally 125 μm diameter metallized vias were filled with Cu in less than 7 hours under potentiostatic control. The electrolyte is composed of 1.25 mol/L CuSO4 - 0.25 mol/L CH3SO3H with polyether and halide additions that selectively suppress metal deposition on the free surface and sidewalls. A brief qualitative discussion of the procedures used to identify and optimize bottom-up, void-free feature filling is presented.
Understanding the impact of distributed photovoltaic (PV) resources on various elements of the distribution feeder is imperative for their cost-effective integration. A year-long quasi-static time series (QSTS) simulation at 1-second granularity is often necessary to fully study these impacts. However, the significant computational burden associated with running QSTS simulations is a major challenge to their adoption. In this paper, we propose a fast, scalable QSTS simulation algorithm based on a linear sensitivity model for estimating voltage-related PV impact metrics of a three-phase unbalanced, nonradial distribution system with various discrete step control elements, including tap-changing transformers and capacitor banks. The algorithm relies on computing voltage sensitivities while taking into account all the effects of discrete controllable elements in the circuit. Consequently, the proposed sensitivity model can accurately estimate the state of controllers at each time step and the number of control actions throughout the year. For the test case of a real distribution feeder with 2969 buses (5469 nodes), 6 load/PV time series power profiles, and 9 voltage regulating elements including controller delays, the proposed algorithm demonstrates a dramatic time reduction, more than 180 times faster than traditional QSTS techniques.
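A toy illustration of the linear-sensitivity idea (not the paper's algorithm or feeder data): voltages are updated as V ≈ V₀ + SΔP, and a discrete tap controller is emulated against a deadband at each step:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50                                        # monitored nodes (illustrative)

V0 = 1.0 + 0.01 * rng.standard_normal(n)      # base-case voltages (pu)
S = 0.05 * rng.random((n, 2))                 # dV/dP sensitivities to 2 injections

v_band = (0.975, 1.025)                       # regulator deadband (pu)
tap, tap_step = 0, 0.00625                    # LTC tap state and per-tap step (pu)

for step in range(24):                        # stand-in for a 1-s, year-long series
    dP = np.array([np.sin(step / 24 * 2 * np.pi), rng.random()])  # load/PV deltas
    V = V0 + S @ dP + tap * tap_step          # linear estimate incl. tap position
    # Discrete control emulation: move the tap when the regulated node (here
    # node 0) leaves the deadband; counting these moves gives control actions.
    if V[0] < v_band[0]:
        tap += 1
    elif V[0] > v_band[1]:
        tap -= 1
print("final tap position:", tap)
```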
The curing of diglycidyl ether of bisphenol A (DGEBA) epoxy with diethanolamine (DEA) is studied. DEA has three reactive groups: a secondary amine hydrogen and two hydroxyls. The secondary amine reacts rapidly, forming an adduct containing tertiary amines, epoxides, and hydroxyls. The epoxides and hydroxyls then react in the presence of the amines to crosslink and vitrify the epoxy in the “gelation” reaction. The gelation reaction, the subject of this study, is not simple: it exhibits unusual dependencies on both temperature and degree of cure. Previously, the general mechanisms of this curing process were explored by a number of us. In the present paper, both differential scanning calorimetry (DSC) and isothermal microcalorimetry (IMC) are used to determine a number of characteristic times associated with the reaction. The characteristic times show that the reaction rate has different functional forms at different temperatures and extents of reaction. This results from the reaction rate not depending solely upon the temperature and overall extent of reaction. The concentration of a number of auxiliary reactive species generated in the course of the reaction (as well as their mobility and steric hindrance) appears to be a key factor in defining the reaction kinetics. The dependence of the final network structure on cure schedule for this type of tertiary-amine-activated reaction is then discussed in the context of the literature. Finally, in the Supplementary Material, Kamal-like functions are fit to the isothermal reaction kinetics, with the reader cautioned about applying the functions to non-isothermal cures.
Widespread permafrost thaw in response to changing climate conditions has the potential to dramatically impact ecosystems, infrastructure, and the global carbon budget. Ambient seismic noise techniques allow passive subsurface monitoring that could provide new insights into permafrost vulnerability and active-layer processes. Using nearly 2 years of continuous seismic data recorded near Fairbanks, Alaska, we measured relative velocity variations that showed a clear seasonal cycle reflecting active-layer freeze and thaw. Relative to January 2014, velocities increased up to 3% through late spring, decreased to −8% by late August, and then gradually returned to the initial values by the following winter. Velocities responded rapidly (over ~2 to 7 days) to discrete hydrologic events and temperature forcing and indicated that spring snowmelt and infiltration events from summer rainfall were particularly influential in propagating thaw across the site. Velocity increases during the fall zero-curtain captured the refreezing process and incremental ice formation. Looking across multiple frequency bands (3–30 Hz), negative relative velocities began at higher frequencies earlier in the summer and then shifted lower when active-layer thaw deepened, suggesting a potential relationship between frequency and thaw depth; however, this response was dependent on interstation distance. Bayesian tomography returned 2-D time-lapse images identifying zones of greatest velocity reduction concentrated in the western side of the array, providing insight into the spatial variability of thaw progression, soil moisture, and drainage. This study demonstrates the potential of passive seismic monitoring as a new tool for studying site-scale active-layer and permafrost thaw processes at high temporal and spatial resolution.
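A minimal sketch of one standard way to measure such relative velocity variations, the stretching method, in which the reference coda is time-stretched until it best matches the current trace; whether the authors used stretching or moving-window cross-spectra is not stated above, so this is purely illustrative:

```python
import numpy as np

def stretching_dvv(ref, cur, t, trials=np.linspace(-0.05, 0.05, 201)):
    """Estimate dv/v by the stretching method.

    The reference waveform is evaluated on stretched time axes t*(1+eps);
    the eps maximizing the correlation with the current trace equals dv/v
    (a velocity decrease dilates travel times, dt/t = -dv/v).
    """
    best_eps, best_cc = 0.0, -np.inf
    for eps in trials:
        stretched = np.interp(t * (1 + eps), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic check: a coda recorded after a 1% velocity decrease (arrivals late).
t = np.linspace(0, 10, 4000)
ref = np.sin(2 * np.pi * 8 * t) * np.exp(-0.3 * t)
cur = np.interp(t * (1 - 0.01), t, ref)       # dilated coda, dv/v = -1%
dvv, cc = stretching_dvv(ref, cur, t)
print(f"estimated dv/v = {dvv:+.3%} (cc = {cc:.3f})")
```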
Model error estimation remains one of the key challenges in uncertainty quantification and predictive science. For computational models of complex physical systems, model error, also known as structural error or model inadequacy, is often the largest contributor to the overall predictive uncertainty. This work builds on a recently developed framework of embedded, internal model correction in order to represent and quantify structural errors, together with model parameters, within a Bayesian inference context. We focus specifically on a polynomial chaos representation with additive modification of existing model parameters, enabling a nonintrusive procedure for efficient approximate likelihood construction, model error estimation, and disambiguation of model and data errors' contributions to predictive uncertainty. The framework is demonstrated on several synthetic examples, as well as on a chemical ignition problem.
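A minimal sketch of the additive embedding idea: a model parameter λ is replaced by λ + αξ with ξ ~ N(0,1), and pushforward moments supply an approximate Gaussian likelihood (one common choice within this class of frameworks; the details here are deliberately simplified):

```python
import numpy as np

rng = np.random.default_rng(8)

def model(x, lam):
    """A deliberately misspecified model of some underlying 'truth'."""
    return np.exp(-lam * x)

def pushforward(x, lam, alpha, n_mc=2000):
    """Monte Carlo moments of the model with lam -> lam + alpha*xi, xi~N(0,1)."""
    xi = rng.standard_normal(n_mc)
    ys = model(x[None, :], (lam + alpha * xi)[:, None])
    return ys.mean(axis=0), ys.std(axis=0)

def log_like(params, x, y_obs, sigma_data=0.01):
    """Approximate Gaussian likelihood built from pushforward moments."""
    lam, alpha = params
    mu, sd = pushforward(x, lam, abs(alpha))
    var = sd**2 + sigma_data**2              # model-error + data-noise variance
    return -0.5 * np.sum((y_obs - mu) ** 2 / var + np.log(2 * np.pi * var))

x = np.linspace(0, 2, 20)
y_obs = np.exp(-0.9 * x) * (1 + 0.05 * x)    # "truth" the model cannot match
# A nonzero embedded alpha absorbs the structural discrepancy:
print(log_like((0.9, 0.05), x, y_obs), log_like((0.9, 0.0), x, y_obs))
```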
Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. In addition, we show that the energy cost associated with our approach is competitive to that of the sequential method.
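A sketch of the sub-linear acquisition idea using generic compressed sensing: random ±1 summations, the kind of kernel a resistive crossbar can evaluate in a single operation, recovered with orthogonal matching pursuit. The sizes and the use of scikit-learn's OMP are illustrative choices, not necessarily the paper's exact method:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(9)
n, m, k = 256, 64, 8                   # array size, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # sparse stored data

# Each measurement is one analog summation over a random +/-1 row selection:
# m << n crossbar reads instead of n sequential reads.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
print("max reconstruction error:", np.abs(omp.coef_ - x).max())  # typically tiny
```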
Golaz, Jean C.; Caldwell, Peter M.; Van Roekel, Luke P.; Petersen, Mark R.; Tang, Qi; Wolfe, Jonathan D.; Abeshu, Guta; Anantharaj, Valentine; Asay-Davis, Xylar S.; Bader, David C.; Baldwin, Sterling A.; Bisht, Gautam; Bogenschutz, Peter A.; Branstetter, Marcia; Brunke, Michael A.; Brus, Steven R.; Burrows, Susannah M.; Cameron-Smith, Philip J.; Donahue, Aaron S.; Deakin, Michael; Easter, Richard C.; Evans, Katherine J.; Feng, Yan; Flanner, Mark; Foucar, James G.; Fyke, Jeremy G.; Griffin, Brian M.; Hannay, Cecile; Harrop, Bryce E.; Hunke, Elizabeth C.; Jacob, Robert L.; Jacobsen, Douglas W.; Jeffery, Nicole; Jones, Philip W.; Keen, Noel D.; Klein, Stephen A.; Larson, Vincent E.; Leung, L.R.; Li, Hong Y.; Lin, Wuyin; Lipscomb, William H.; Ma, Po L.; Mahajan, Salil; Maltrud, Mathew E.; Mametjanov, Azamat; Mcclean, Julie L.; Mccoy, Renata B.; Neale, Richard B.; Price, Stephen F.; Qian, Yun; Rasch, Philip J.; Reeves Eyre, J.E.J.; Riley, William J.; Ringler, Todd D.; Roberts, Andrew F.; Roesler, Erika L.; Salinger, Andrew G.; Shaheen, Zeshawn; Shi, Xiaoying; Singh, Balwinder; Tang, Jinyun; Taylor, Mark A.; Thornton, Peter E.; Turner, Adrian K.; Veneziani, Milena; Wan, Hui; Wang, Hailong; Wang, Shanlin; Williams, Dean N.; Wolfram, Phillip J.; Worley, Patrick H.; Xie, Shaocheng; Yang, Yang; Yoon, Jin H.; Zelinka, Mark D.; Zender, Charles S.; Zeng, Xubin; Zhang, Chengzhu; Zhang, Kai; Zhang, Yuying; Zheng, Xue; Zhou, Tian; Zhu, Qing
This work documents the first version of the U.S. Department of Energy's (DOE) new Energy Exascale Earth System Model (E3SMv1). We focus on the standard resolution of the fully coupled physical model designed to address DOE mission-relevant water cycle questions. Its components include atmosphere and land (110-km grid spacing), ocean and sea ice (60 km in the midlatitudes and 30 km at the equator and poles), and river transport (55 km) models. This base configuration will also serve as a foundation for additional configurations exploring higher horizontal resolution as well as augmented capabilities in the form of biogeochemistry and cryosphere configurations. The performance of E3SMv1 is evaluated by means of a standard set of Coupled Model Intercomparison Project Phase 6 (CMIP6) Diagnosis, Evaluation, and Characterization of Klima simulations consisting of a long preindustrial control, historical simulations (ensembles of fully coupled and prescribed SSTs), as well as idealized CO2 forcing simulations. The model performs well overall, with biases typical of other CMIP-class models, although the simulated Atlantic Meridional Overturning Circulation is weaker than in many CMIP-class models. While the E3SMv1 historical ensemble captures the bulk of the observed warming between preindustrial (1850) and present day, the trajectory of the warming diverges from observations in the second half of the twentieth century, with a period of delayed warming followed by an excessive warming trend. Using a two-layer energy balance model, we attribute this divergence to the model's strong aerosol-related effective radiative forcing (ERFari+aci = −1.65 W/m2) and high equilibrium climate sensitivity (ECS = 5.3 K).
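For readers who want to reproduce the flavor of that attribution, a minimal two-layer energy balance model of the type cited; all parameter values and the forcing trajectory below are illustrative, not the calibrated E3SMv1 values:

```python
import numpy as np

def two_layer_ebm(F, lam=1.0, gamma=0.7, C=8.0, C_d=100.0, dt_yr=1.0):
    """Two-layer energy balance model (Held et al. 2010 form).

    C  dT/dt  = F - lam*T - gamma*(T - T_d)
    Cd dTd/dt = gamma*(T - T_d)
    Heat capacities in W yr m^-2 K^-1; lam, gamma in W m^-2 K^-1.
    """
    T, T_d, out = 0.0, 0.0, []
    for f in F:
        dT = (f - lam * T - gamma * (T - T_d)) / C
        dT_d = gamma * (T - T_d) / C_d
        T, T_d = T + dt_yr * dT, T_d + dt_yr * dT_d
        out.append(T)
    return np.array(out)

# Illustrative historical-like forcing: ramping greenhouse forcing partly
# offset by strong mid-century aerosol forcing, qualitatively like the story
# told above (delayed warming, then an excessive late-century trend).
yr = np.arange(1850, 2015)
F_ghg = 2.5 * (yr - 1850) / 165.0
F_aer = -1.65 * np.clip((yr - 1900) / 80.0, 0, 1)
print(two_layer_ebm(F_ghg + F_aer)[-1], "K of warming by 2014 (illustrative)")
```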
In magneto-inertial-fusion experiments, energy losses such as radiation need to be well controlled in order to maximize the compressional work done on the fuel and achieve thermonuclear conditions. One possible cause of high radiation losses is high-Z material mixing from the target components into the fuel. In this work, we analyze the effects of mix on target performance in Magnetized Liner Inertial Fusion (MagLIF) experiments at Sandia National Laboratories. Our results show that mix is likely produced from a variety of sources, approximately half of which originates during the laser heating phase and the remainder near stagnation, likely from the liner deceleration. By changing the "cushion" component of MagLIF targets from Al to Be, we achieved a 10× increase in neutron yield, a 60% increase in ion temperature, and an ∼50% increase in fuel energy at stagnation.
Flame-sampling experiments, especially in conjunction with laminar low-pressure premixed flames, are routinely used in combustion chemistry studies to unravel the identities and quantities of key intermediates and their pathways. In many instances, however, an unambiguous interpretation of the experimental and modeling results is hampered by the uncertainties about the probe-induced, perturbed temperature profile. To overcome this limitation, two-dimensional perturbations of the temperature field caused by sampling probes with different geometries have been investigated using synchrotron-based X-ray fluorescence spectroscopy. In these experiments, which were performed at the 7-BM beamline of the Advanced Photon Source (APS) at the Argonne National Laboratory, a continuous beam of hard X-rays at 15 keV was used to excite krypton atoms that were added at a concentration of 5 vol% to the unburnt gas mixture, and the resulting krypton fluorescence at 12.65 keV was subsequently collected. The highly spatially resolved signal was converted into the local flame temperature to obtain temperature fields at various burner-probe separations as functions of the distance to the burner surface and the radial distance from the centerline. Multiple measurements were performed with different probe geometries, and because of the observed impact on the temperature profiles the results clearly revealed the need to specify the sampling probe design to enable quantitative and meaningful comparisons of modeling results with flame-sampled mole fraction data.
The term photonic wire laser is now widely used for lasers with transverse dimensions much smaller than the wavelength. As a result, a large fraction of the mode propagates outside the solid core. Here, we propose and demonstrate a scheme to form a coupled cavity by taking advantage of this unique feature of photonic wire lasers. In this scheme, we used quantum cascade lasers with an antenna-coupled third-order distributed feedback grating as the platform. Inspired by the chemistry of hybridization, our scheme phase-locks multiple such lasers by π coupling. With the coupled-cavity laser, we demonstrated several performance metrics that are important for various applications in sensing and imaging: a continuous electrical tuning of ~10 GHz at ~3.8 THz (fractional tuning of ~0.26%), a good level of output power (~50–90 mW of continuous-wave power), and tight beam patterns (~10° beam divergence).
He, Xu; Li, Yankai; Sjoberg, Carl M.; Vuilleumier, David; Ding, Carl P.; Liu, Fushui; Li, Xiangrong
A late-injection strategy is typically adopted in stratified-charge direct injection spark ignition (DISI) engines to improve combustion stability for lean operation, but this may induce wall wetting on the piston surface and result in high soot emissions. E30 fuel, i.e., gasoline with 30% ethanol, is a potential alternative fuel that can offer a high Research Octane Number. However, the relatively high ethanol content increases the heat of vaporization, potentially exacerbating wall-wetting issues in DISI engines. In this study, the Refractive Index Matching (RIM) technique is used to measure fuel wall films in the piston bowl. The RIM implementation uses a novel LED illumination, integrated in the piston assembly and providing side illumination of the piston-bowl window. This RIM diagnostic, in combination with high-speed imaging, was used to investigate the impact of coolant temperature on the characteristics of wall wetting and combustion in an optical DISI engine fueled with E30. The experiments reveal that the smoke emissions increase drastically from 0.068 FSN to 1.14 FSN when the coolant temperature is reduced from 90 °C to 45 °C. Consistent with this finding, natural flame luminosity imaging reveals elevated soot incandescence with a reduction of the coolant temperature, indicative of pool fires. The RIM diagnostics show that a lower coolant temperature also leads to increased fuel film thickness, area, and volume, explaining the onset of pool fires and smoke.
As batteries become more prevalent in grid energy storage applications, the controllers that decide when to charge and discharge become critical to maximizing their utilization. Controller design for these applications is based on models that mathematically represent the physical dynamics and constraints of batteries. Unrepresented dynamics in these models can lead to suboptimal control. Our goal is to examine the state of the art with respect to the models used in optimal control of battery energy storage systems (BESSs). This review helps engineers navigate the range of available design choices and helps researchers by identifying gaps in the state of the art. BESS models can be classified by physical domain: state-of-charge (SoC), temperature, and degradation. SoC models can be further classified by the units they use to define capacity: electrical energy, electrical charge, and chemical concentration. Most energy-based SoC models are linear, with variations in how they represent efficiency and the limits on power. Charge-based SoC models include many variations of equivalent circuits for predicting battery string voltage. SoC models based on chemical concentrations use material properties and physical parameters of the cell design to predict battery voltage and charge capacity. Temperature is modeled through a combination of heat generation and heat transfer. Heat is generated through changes in entropy, overpotential losses, and resistive heating. Heat is transferred through conduction, radiation, and convection. Variations among thermal models lie in which generation and transfer mechanisms are represented and in the number and physical significance of finite elements in the model. Battery degradation can be modeled empirically or from underlying physical mechanisms. Empirical stress-factor models isolate the impacts of time, current, SoC, temperature, and depth-of-discharge (DoD) on battery state-of-health (SoH). Through a few simplifying assumptions, these stress factors can be represented using regularization norms. Physical degradation models can be further classified into models of side reactions and models of material fatigue. This article demonstrates the importance of model selection to optimal control by providing several example controller designs. Simpler models may overestimate or underestimate the capabilities of the battery system. Adding detail can improve accuracy at the expense of model complexity and computation time. Our analysis identifies seven gaps: deficiency of real-world data in the control literature, lack of understanding of how to balance modeling detail with the number of representative cells, underdeveloped model-uncertainty-based risk-averse and robust control of BESSs, underdevelopment of nonlinear energy-based SoC models, lack of hysteresis in voltage models used for control, lack of entropy heating and cooling in thermal modeling, and deficiency of knowledge of which combination of empirical degradation stress factors is most accurate. These gaps are opportunities for future research.
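As a concrete illustration of the simplest model class surveyed above, a minimal sketch of a linear, energy-based SoC model with charge/discharge efficiencies and a power limit follows (the parameter values are hypothetical placeholders, not drawn from the review):

```python
import numpy as np

def simulate_soc(p_batt, dt_h, e_cap_kwh, eta_c=0.95, eta_d=0.95,
                 p_max_kw=50.0, soc0=0.5):
    """Linear energy-based SoC model.

    p_batt : array of battery power setpoints [kW]; >0 means charging.
    Returns the SoC trajectory, clipped to physical limits.
    """
    soc = np.empty(len(p_batt) + 1)
    soc[0] = soc0
    for k, p in enumerate(np.clip(p_batt, -p_max_kw, p_max_kw)):
        # Charging stores eta_c * p; discharging draws p / eta_d from storage.
        e_flow = eta_c * p if p >= 0 else p / eta_d
        soc[k + 1] = np.clip(soc[k] + e_flow * dt_h / e_cap_kwh, 0.0, 1.0)
    return soc

# Example: one hour charging at 20 kW, then one hour discharging at 20 kW.
print(simulate_soc(np.array([20.0, -20.0]), dt_h=1.0, e_cap_kwh=100.0))
```

The asymmetric efficiency treatment here is one of the representational variations the review describes; nonlinear efficiency maps or SoC-dependent power limits would replace the two constants.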
Various versions of deep borehole nuclear waste disposal have been proposed in which effective sealing of the borehole after waste emplacement is generally required. In a high-temperature disposal mode, the sealing function is fulfilled by melting the ambient granitic rock with waste decay heat or an external heating source, creating a melt that encapsulates waste containers or plugs a portion of the borehole above a stack of containers. However, there are certain drawbacks associated with natural materials, such as high melting temperatures, inefficient consolidation, slow crystallization kinetics, porous sealing products with low mechanical strength, insufficient adhesion to waste container surfaces, and a lack of flexibility for engineering controls. In this study, we show that natural granitic materials can be purposefully engineered through chemical modification to enhance their sealing capability for deep borehole disposal. The present work systematically explores the effect of chemical modification and crystallinity (amorphous vs. crystalline) on the melting and crystallization processes of a granitic rock system. The approach can be applied to modify granites excavated from different geological sites. Several engineered granitic materials have been identified that possess significantly lower processing and densification temperatures than natural granites. These new materials consolidate more efficiently by viscous flow and accelerated recrystallization without compromising their mechanical integrity and properties.
This chapter considers the collection of sparse samples in electron microscopy, either by modification of the sampling methods utilized on existing microscopes, or with new microscope concepts that are specifically designed and optimized for collection of sparse samples. It explores potential embodiments of a multi-beam compressive sensing electron microscope. Sparse measurement matrices offer an advantage of efficient image recovery, since each iteration of the process becomes a simple multiplication by a sparse matrix. Electron microscopy is well suited to compressed or sparse sampling due to the difficulty of building electron microscopes that can accurately record more than one electron signal at a time. Sparse sampling in electron microscopy has been considered for dose reduction, improving three-dimensional reconstructions and accelerating data acquisition. For sparse sampling, variations of scanning transmission electron microscopy (STEM) are typically used. In STEM, the electron probe is scanned across the specimen, and the detector measurement is recorded as a function of probe location.
Understanding fluid flow and transport in shale is of great importance to the development of unconventional hydrocarbon reservoirs and nuclear waste repositories. Tracer techniques have proven to be a useful tool for gaining such understanding. Shale is characterized by the presence of nanometer-sized pores and the resulting extremely low permeability. Chemical species confined in nanopores can behave drastically differently from those in a bulk system, and their interaction with pore surfaces is much enhanced due to the high surface-to-fluid-volume ratio; both effects can influence tracer migration and chromatographic differentiation in shale. Nanoconfinement also accentuates the discrete nature of fluid molecules in transport, thereby enhancing mass-dependent isotope fractionation. Combined, these effects lead to a distinct set of tracer signatures that may not be observed in a conventional hydrocarbon reservoir or a highly permeable groundwater aquifer. These signatures can be used to delineate flow regimes, trace fluid sources, and quantify the rate and extent of a physical or chemical process. Such signatures can be used for the evaluation of cap rock structural integrity, the post-closure monitoring of a geologic repository, or the detection of possible contamination of a water aquifer by shale oil/gas extraction.
The goal of this work is to build credibility of a structural dynamics model by comparing simulated responses to measured responses in random vibration environments, with limited knowledge of the true test input. Off-axis excitations are often introduced during single-axis vibration testing in the laboratory due to shaker or test fixture dynamics and interface variation. Model credibility cannot be improved by comparing predicted responses to measured responses with unknown excitation profiles. In the absence of sufficient time-domain response measurements, the true multi-degree-of-freedom (MDOF) input cannot be exactly characterized for a fair comparison between the model and experiment. Methods exist, however, to estimate the MDOF inputs required to replicate field test data in the laboratory (Ross et al., "6-DOF Shaker Test Input Derivation from Field Test," in Proceedings of the 35th IMAC, A Conference and Exposition on Structural Dynamics, Bethel, 2017). This work focuses on utilizing one of these methods to approximately characterize the off-axis excitation present during laboratory random vibration testing. The method selects a subset of the experimental output spectral density matrix, in combination with the system transmissibility matrix, to estimate the input spectral density matrix required to drive the selected measurement responses. Using the estimated MDOF input generated from this method, the error between simulated predictions and measured responses was significantly reduced across the frequency range of interest, compared to the error between the experimental data and simulated responses generated assuming single-axis excitation.
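As a hedged sketch of the estimation step described above (the paper's exact estimator may differ in detail), with $S_{xx}(\omega)$ the selected output spectral density sub-matrix and $T(\omega)$ the corresponding transmissibility matrix, a standard pseudo-inverse formulation gives the input spectral density estimate

$$\hat{S}_{ff}(\omega) = T^{+}(\omega)\, S_{xx}(\omega)\, \big[T^{+}(\omega)\big]^{H},$$

where $T^{+}$ denotes the Moore-Penrose pseudo-inverse and $(\cdot)^{H}$ the Hermitian transpose; the estimate is computed frequency line by frequency line across the band of interest.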
Multi-degree-of-freedom testing is growing in popularity and in practice, largely due to its inherent benefits in reproducing the realistic stresses that a test article experiences in its working environment and the efficiency of testing all axes at once instead of individually. However, deriving and applying the "correct" inputs to a test has been a challenge. This paper explores a recently developed theory for deriving rigid body accelerations as inputs to a test article through substructuring techniques. The theory develops a transformation matrix that separates the complete system dynamics into two substructures: the test article and the next-level assembly. The transformation does this by segregating the test article's fixed-base modal coordinates from the next-level assembly's free modal coordinates. This transformation provides insight into the damage that the test article acquires from its excited fixed-base shapes and into how to properly excite the test article by observing the next-level assembly's rigid body motion. This paper examines using the next-level assembly's rigid body motion as a direct input in a multi-degree-of-freedom test to excite the test article's fixed-base shapes in the same way as the working environment.
Contact in structures with mechanical interfaces can significantly influence the system dynamics, such that the energy dissipation and resonant frequencies vary as a function of response amplitude. Finite element analysis is commonly used to study the physics of such problems, particularly when examining the local behavior at the interfaces. These high-fidelity, nonlinear models are computationally expensive to run with time-stepping solvers due to their large mesh densities at the interface and the high expense of updating the tangent operators. Hurty/Craig-Bampton substructuring and interface reduction techniques are commonly utilized to reduce computation time for jointed structures. In the past, these methods have only been applied to substructures rigidly attached to one another, resulting in a linear model. The present work explores the performance of a particular interface reduction technique, system-level characteristic constraint modes, on a nonlinear model with node-to-node contact for a benchmark structure consisting of two C-shaped beams bolted together at each end.
3D scanning laser Doppler vibrometry (LDV) systems are well established for modal testing of articles whose dynamic properties are time-invariant over the duration of all scans. However, several test situations can arise in which the modal parameters of an article change over the course of a typical LDV scan. One such instance is considered in this work: the internal state of a thermal battery changes at different rates over its activation lifetime, and these changes substantially alter its dynamic properties as a function of time. Due to the extreme external temperatures of the battery, non-contact LDV was the preferred method of response measurement. However, scanning such an object is not optimal because a scanning LDV does not capture a full data set simultaneously. Nonetheless, by carefully considering the test configuration, the hardware and software setup, and the data acquisition and processing methods, it was possible to use a scanning LDV system to collect sufficient information to provide a measure of the time-varying dynamic characteristics of the test article. This work demonstrates the techniques used, presents the acquired results, and discusses the technical issues encountered.
Finite element models are regularly used in many disciplines to predict the dynamic behavior of a structure under certain loads and boundary conditions, particularly when analytical models cannot be used due to geometric complexity. One such example is a structure with an entrained fluid cavity. To assist an experimental study of the acoustoelastic effect, numerical studies of an enclosed cylinder were performed to design the test hardware. With a system that demonstrates acoustoelastic coupling, it was then desired to decouple the structure from the fluid by making changes to either the fluid or the structure. In this paper, simulation is used to apply various changes and observe their effects on the structural response in order to choose an effective decoupling approach for the experimental study.
Acoustoelastic coupling occurs when a hollow structure's in-vacuo mode aligns with an acoustic mode of its internal cavity. The impact of this coupling on the total dynamic response of the structure can be quite severe, depending on the similarity of the modal frequencies and shapes. Typically, acoustoelastic coupling is not a design feature but an unfortunate result that must be remedied, as modal tests are often used to correlate or validate finite element models of the uncoupled structure. Here, however, a test structure is intentionally designed such that multiple structural and acoustic modes are well aligned, resulting in a coupled system that allows for an experimental investigation. Coupling in the system is first identified using a measure termed the magnification factor, and the structural-acoustic interaction for a target mode is then measured. Modifications to the system demonstrate the dependency of the coupling on changes in mode shape and frequency proximity. This includes an investigation of several practical techniques used to decouple the system by altering the internal acoustic cavity, as well as the structure itself. Ultimately, acoustic absorption material effectively decoupled the system, while structural modifications, in their current form, proved unsuccessful. The most effective acoustic absorption method consisted of randomly distributing typical household paper towels in the acoustic cavity, a method that introduces negligible mass to the structural system with the additional advantages of being inexpensive and readily available.
Time-resolved particle image velocimetry was conducted at 40 kHz using a pulse-burst laser in the supersonic wake of a wall-mounted hemisphere. Velocity fields suggest a recirculation region with two lobes, in which flow moves away from the wall near the centerline and recirculates back toward the hemisphere off the centerline, contrary to transonic configurations. Spatio-temporal cross-correlations and conditional ensemble averages relate the characteristic behavior of the unsteady shock motion to the flapping of the shear layer. At Mach 1.5, oblique shocks develop in association with vortical structures in the shear layer and convect downstream in tandem with them; a weak periodicity is observed. Shock motion at Mach 2.0 appears somewhat different: multiple weak disturbances propagate from shear-layer turbulent structures to form an oblique shock that ripples as these vortices pass by. Bifurcated shock feet coalesce and break apart without evident periodicity. Power spectra show a preferred frequency of shear-layer flapping and shock motion at Mach 1.5, but at Mach 2.0 a weak preferred frequency at the same Strouhal number of 0.32 is found only for oblique shock motion and not for shear-layer unsteadiness.
Understanding and quantifying the relative importance of premixed and non-premixed reaction zones within turbulent partially premixed flames is an important issue for multi-regime combustion. In the present work, the recently developed method of gradient-free regime identification (GFRI) is applied to instantaneous 1D Raman/Rayleigh measurements of temperature and major species from two turbulent lifted methane/air flames. Local premixed and non-premixed reaction zones are identified using criteria based on the mixture fraction, the chemical explosive mode, and the heat release rate, the latter two being calculated from an approximation of the full thermochemical state of each measured sample. A chemical mode (CM) zero-crossing is a previously documented marker for a premixed reaction zone. Results from the lifted flames show strong correlations among the mixture fraction at the CM zero-crossing, the magnitude of the change in CM at the zero-crossing, and the local heat release rate at the CM zero-crossing relative to the maximum heat release rate. The trends are confirmed through a comparable analysis of numerical simulations of two laminar triple flames. These newly documented trends are associated with the transition from dominantly premixed to dominantly non-premixed flame structures. The methods introduced for assessing the relative importance of local premixed and non-premixed reaction zones have potential for application to a broad range of turbulent flames.
International nuclear safeguards inspectors are tasked with verifying that nuclear materials in facilities around the world are not misused or diverted from peaceful purposes. They must conduct detailed inspections in complex, information-rich environments, but there has been relatively little research into the cognitive aspects of their jobs. We posit that the speed and accuracy of the inspectors can be supported and improved by designing the materials they take into the field such that the information is optimized to meet their cognitive needs. Many in-field inspection activities involve comparing inventory or shipping records to other records or to physical items inside of a nuclear facility. The organization and presentation of the records that the inspectors bring into the field with them could have a substantial impact on the ease or difficulty of these comparison tasks. In this paper, we present a series of mock inspection activities in which we manipulated the formatting of the inspectors’ records. We used behavioral and eye tracking metrics to assess the impact of the different types of formatting on the participants’ performance on the inspection tasks. The results of these experiments show that matching the presentation of the records to the cognitive demands of the task led to substantially faster task completion.
Here, we apply density functional theory (DFT) to investigate rare-earth metal organic frameworks (RE-MOFs), RE12(μ3-OH)16(C8O6H4)8(C8O6H5)4 (RE = Y, Eu, Tb, Yb), and characterize the level of theory needed to accurately predict structural and electronic properties in MOF materials with 4f-electrons. A two-step calculation approach is investigated: geometry optimization with spin-restricted DFT and large-core potentials (LCPs), followed by detailed electronic-structure calculations with spin-unrestricted DFT using a full valence potential plus a Hubbard U correction. Spin-restricted DFT with LCPs yielded good agreement between experimental lattice parameters and optimized geometries, while a full valence potential is necessary for an accurate representation of the electronic structure. The electronic structure of the Eu-DOBDC MOF showed a strong dependence on the treatment of the highly localized 4f-electrons and spin polarization, as well as variation across a range of Hubbard corrections (U = 1-9 eV). For Hubbard-corrected spin-unrestricted calculations, a U value of 1-4 eV maintains the non-metallic character of the band gap with slight deviations in f-orbital energetics. Comparison with experimentally reported results highlights the importance of the full valence calculation and the Hubbard correction in correctly predicting the electronic structure.
In its simplest implementation, the patent-protected AeroMINE consists of two opposing foils that generate a low-pressure zone between them. The low pressure draws fluid through orifices in the foil surfaces from plenums inside the foils, which are connected to ambient pressure. If an internal turbine-generator is placed in the path of the flow into the plenums, energy can be extracted. The fluid transports the energy through the plenums, so the turbine-generator can be located at ground level, inside a controlled environment, for easy access and to avoid inclement weather or harsh environments. This contained internal turbine-generator holds the only moving parts in the system, isolated from people, birds, and other wildlife. AeroMINEs could be used in distributed wind energy settings, with the stationary foil pairs located, for example, on warehouse rooftops. Flow created by several such foil pairs could be combined to drive a common turbine-generator.
The degradation of a chemical warfare agent simulant by a catalytically active Zr-based metal-organic framework (MOF) was investigated as a function of solvent system. Complementary molecular modelling studies indicate that the differences in the degradation rates are related to the increasing size of the nucleophile, which hinders the rotation of the product molecule during degradation. Methanol was identified as an appropriate solvent for non-aqueous degradation applications and was demonstrated to support the MOF-based destruction of both sarin and soman.
Developers of optical systems are seeking lighter, cheaper, and rapidly developed systems. The design, fabrication, and testing of a 10x dual-focus telescope are presented, utilizing additive manufacturing, active alignment, and image-correction algorithms.
Controlling the microscopic morphology of energetic materials is of significant interest for improving their performance and production consistency. As an important insensitive high-explosive material, triaminotrinitrobenzene (TATB) has attracted tremendous research effort for military-grade explosives and propellants. In this study, a new, rapid, and inexpensive synthesis method for monodispersed TATB microparticles based on micelle-confined precipitation was developed. A surfactant with the proper hydrophilic-lipophilic balance value was found to be critical to the success of this synthesis. The morphology of the TATB microparticles can be tuned between quasi-spherical and faceted by controlling the speed of recrystallization.
Vibrational properties of phyllosilicate edges were observed via a combined molecular modeling and experimental approach. Deuterium exchange was utilized to isolate edge vibrational modes from their internal counterparts. The appearance of a specific peak within the broader D2O band indicates deuteration of the edge surface, and this peak is confirmed by the simulated spectra. These results are the first to unambiguously identify spectroscopic features of phyllosilicate edge sites.
The performance of five commercial anion exchange membranes is compared in aqueous soluble-organic redox flow batteries (RFBs) containing the TEMPO and methyl viologen (MV) redox pair. Capacities of RFBs with different membranes are found to vary by >50% of theoretical capacity after 100 cycles. This capacity loss is attributed to crossover of TEMPO and MV across the membrane and is dominated by diffusion, migration, or electroosmotic drag, depending on the membrane. Counterintuitively, the worst-performing membranes display the lowest diffusion coefficients for TEMPO and MV, instead exhibiting high crossover fluxes due to electroosmotic drag. This trend is rationalized in terms of the ion exchange capacity and water content of these membranes. Decreasing these values in an effort to minimize diffusion of the redox-active species while the RFB rests can inadvertently exacerbate conditions for electroosmotic drag when the RFB operates. Using fundamental membrane properties, it is demonstrated that the relative magnitudes of crossover and capacity loss during RFB operation may be understood.
Multiple fastener reduced-order models and fitting strategies are applied to a multiaxial dataset, and these models are further evaluated using a high-fidelity analysis model to demonstrate how well these strategies predict load-displacement behavior and failure. Two common reduced-order modeling approaches, the plug and the spot weld, are calibrated, assessed, and compared to a more intensive approach: a "two-block" plug calibrated to multiple datasets. An optimization workflow leveraging a genetic algorithm was exercised on a set of quasistatic test data, in which fasteners were pulled at angles from 0° to 90° in 15° increments, to obtain material parameters for a fastener model that best capture the load-displacement behavior of the chosen datasets. The one-block plug is calibrated to the tension (0°) data only, the spot weld to the tension (0°) and shear (90°) data, and the two-block plug to all available data (0°-90°). These calibrations are further assessed by incorporating the models into a high-fidelity analysis model of the test setup and comparing the load-displacement predictions to the raw test data.
Austenitic stainless steels (Fe-Cr-Ni) are resistant to hydrogen embrittlement but have not been studied using molecular dynamics simulations due to the lack of an Fe-Cr-Ni-H interatomic potential. Herein we describe our recent progress toward molecular dynamics studies of hydrogen effects in Fe-Cr-Ni stainless steels. We first describe our Fe-Cr-Ni-H interatomic potential and demonstrate its characteristics relevant to mechanical properties. We then demonstrate that the potential can be used in molecular dynamics simulations to derive an Arrhenius equation for hydrogen diffusion and to reveal twinning and phase-transformation deformation mechanisms in stainless steels.
Zhang, Xin; Wang, Q.J.; Harrison, Katharine L.; Jungjohann, Katherine; Boyce, Brad L.; Roberts, Scott A.; Attia, Peter M.; Harris, Stephen J.
We offer an explanation for how dendrite growth can be inhibited when Li metal pouch cells are subjected to external loads, even for cells using soft, thin separators. We develop a contact mechanics model for tracking Li surface and sub-surface stresses where the electrodes have realistically rough (micron-scale) surfaces. Existing models examine a single, micron-scale Li metal protrusion under a fixed local current density that presses more or less conformally against a separator or stiff electrolyte. At the larger, sub-mm scales studied here, contact between the Li metal and the separator is heterogeneous and far from conformal for surfaces with realistic roughness: the load is carried at just the tallest asperities, where stresses reach tens of MPa, while most of the Li surface feels no force at all. Yet dendrite growth is suppressed over the entire Li surface. To explain this suppression, our electrochemical/mechanics model suggests that Li avoids plating at the tips of growing dendrites where there is sufficient local stress; that local contact stresses may be high enough to close separator pores so that incremental Li+ ions plate elsewhere; and that creep ensures that Li protrusions are gradually flattened. These mechanisms cannot be captured by single-dendrite-scale analyses.
Dichroic coatings have been developed for high transmission at 527 nm and high reflection at 1054 nm for laser operations in the nanosecond pulse regime. The coatings consist of HfO2 and SiO2 layers deposited with e-beam evaporation, and laser-induced damage thresholds as high as 12.5 J/cm2 were measured at 532 nm with 3.5 ns pulses (22.5 degrees angle of incidence, in S-polarization). However, laser damage measurements at the single wavelength of 532 nm do not adequately characterize the laser damage resistance of these coatings, since they were designed to operate at dual wavelengths simultaneously. This became apparent after one of the coatings damaged prematurely at a lower fluence in the beam train, which inspired further investigations. To gain a more complete understanding of the laser damage resistance, results of a dual-wavelength laser damage test performed at both 532 nm and 1064 nm are presented.
Spectral linewidths are used to assess a variety of physical properties, yet spectral overlap introduces uncertainty that makes quantitative extraction of linewidths difficult. This uncertainty, in turn, can be minimized by choosing appropriate experimental conditions for spectral collection. We therefore assess the experimental factors dictating uncertainty in the quantification of linewidth from a Raman experiment, highlighting the comparative influence of (1) spectral resolution, (2) signal-to-noise ratio, and (3) relative peak intensity (RPI) of the overlapping peaks. Practically, Raman spectra of SiGe thin films were obtained experimentally and simulated virtually under a variety of conditions. RPI is found to be the most impactful parameter in specifying linewidth, followed by spectral resolution and signal-to-noise ratio. While developed for Raman experiments, the results are generally applicable to spectroscopic linewidth studies, illuminating the experimental trade-offs inherent in quantification.
Microtubules are stiff biopolymers that self-assemble via the addition of GTP-tubulin (the αβ-dimer bound to GTP), but hydrolysis of GTP- to GDP-tubulin within the tubules destabilizes them toward catastrophically fast depolymerization. The molecular mechanisms and the features of the individual tubulin proteins that drive such behavior are still not well understood. Using molecular dynamics simulations of whole microtubules built from a coarse-grained model of tubulin, we demonstrate how conformational shape changes (i.e., deformations) in subunits that frustrate tubulin-tubulin binding within microtubules drive depolymerization of stiff tubules via unpeeling "ram's horns," consistent with experiments. We calculate the sensitivity of these behaviors to the length scales and strengths of the binding attractions and to varying degrees of binding frustration driven by subunit shape change, and we demonstrate that the dynamic instability and mechanical properties of microtubules can be produced with either balanced or imbalanced strengths of lateral and vertical binding attractions. Finally, we show how catastrophic depolymerization can be interrupted by small regions of the microtubule containing undeformed dimers, corresponding to incomplete lattice hydrolysis. The results demonstrate a mechanism by which microtubule rescue can occur.
This paper investigates a method to find the cost function, or weight matrices, to be used in model predictive control (MPC) such that the MPC has the same performance as a predesigned linear state-feedback controller when constraints are not active. This is useful when a successful linear controller already exists and it is necessary to incorporate the constraint-handling capabilities of MPC. Such is the case for a wave energy converter (WEC), where the maximum power transfer law is well understood. In addition to solutions based on numerical optimization, a simple analytical solution is derived for cases with a short prediction horizon. These methods are applied to the control of an empirically based WEC model. The results show that the MPC can be successfully tuned to follow an existing linear control law and to comply with both input and state constraints, such as actuator force and actuator stroke.
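A minimal numpy sketch of the unconstrained finite-horizon MPC gain that such a tuning procedure would match against a predesigned linear gain (the system matrices below are hypothetical toy values, not the empirically based WEC model):

```python
import numpy as np

def mpc_uncon_gain(A, B, Q, R, N):
    """First-move gain of unconstrained linear MPC: u0 = K_mpc @ x0."""
    n, m = B.shape
    # Stacked prediction matrices: X = F x0 + G U over horizon N.
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)   # block-diagonal state weights
    Rbar = np.kron(np.eye(N), R)   # block-diagonal input weights
    H = G.T @ Qbar @ G + Rbar
    K_all = -np.linalg.solve(H, G.T @ Qbar @ F)
    return K_all[:m, :]            # receding horizon: apply first move only

# Toy second-order system, horizon N = 3:
A = np.array([[1.0, 0.1], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
K_mpc = mpc_uncon_gain(A, B, Q=np.eye(2), R=np.array([[0.01]]), N=3)
print(K_mpc)  # adjust Q and R until this matches the target linear gain
```

Tuning then amounts to searching over Q and R (numerically, or analytically for short horizons as the paper notes) until K_mpc reproduces the existing control law.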
Cross Reality (XR) immersive environments offer challenges and opportunities in designing for cognitive aspects (e.g., learning, memory, and attention) of information design and interaction. Information design is a multidisciplinary endeavor involving data science, communication science, cognitive science, media, and technology. In this paper, the holodeck metaphor is extended to illustrate how information design practices and some qualities of this imaginary computationally augmented environment (a.k.a. the holodeck) may be achieved in XR environments to support information-rich storytelling and real-life, face-to-face, and virtual collaborative interactions. The Simulation Experience Design Framework & Method is introduced to organize challenges and opportunities in the design of information for XR. The notion of carefully blending real and virtual spaces to achieve total immersion is discussed as the reader moves through the elements of the cyclical framework. A solution space leveraging cognitive science, information design, and transmedia learning highlights key challenges facing contemporary XR designers. Challenges include, but are not limited to, interleaving information, technology, and media into the human storytelling process, and supporting narratives in a way that is memorable, robust, and extendable.
This paper describes the fissile mass and concentration necessary for a critical event to occur outside containers disposed of in a bedded salt repository. The criticality limits are based on modeling mixtures of water, salt, dolomite, concrete, rust, and fissile material using a neutron/photon transport computational code. Several idealized depositional configurations of fissile material in the host rock are analyzed: homogeneous spheres and heterogeneous arrangements of plate fractures in regular arrays. Deposition of large masses and concentrations is required for criticality to occur at low 235U enrichment. Homogeneous mixtures with deposition in all of the porosity are more reactive at high enrichments of 235U and 239Pu. However, unlike typical engineered systems, heterogeneous configurations can be more reactive than homogeneous systems at high enrichment when deposition occurs in only a portion of the porosity and the total porosity is small, because the relationship between the porosity of the fractures and that of the matrix also strongly influences the results.
Weiland, Nathan T.; Lance, Blake; Pidaparti, Sandeep R.
Supercritical CO2 (sCO2) power cycles find potential application with a variety of heat sources, including nuclear, concentrated solar (CSP), coal, natural gas, and waste heat sources, and consequently cover a wide range of scales. Most studies to date have focused on the performance of sCO2 power cycles, while economic analyses have been less prevalent, due in large part to the relative scarcity of reliable cost estimates for sCO2 power cycle components. Further, the accuracy of existing sCO2 techno-economic analyses suffers from the small sample set of vendor-based component costs in any given study. Improved accuracy of sCO2 component cost estimation is desired to enable a shift in focus from plant efficiency to economics as a driver for commercialization of sCO2 technology. This study reports on sCO2 component cost scaling relationships that have been developed collaboratively from an aggregate set of vendor quotes, cost estimates, and published literature. The Department of Energy (DOE) National Laboratories, collectively among the world's largest supporters of sCO2 research and development, have access to a considerable pool of vendor component costs that span multiple applications specific to each National Laboratory's mission, including fossil-fueled sCO2 applications at the National Energy Technology Laboratory (NETL), CSP at the National Renewable Energy Laboratory (NREL), and CSP, nuclear, and distributed energy sources at Sandia National Laboratories (SNL). The resulting cost correlations are relevant to sCO2 components in all of these applications and for scales ranging from 5 to 750 MWe. This work builds upon prior work at SNL, in which sCO2 component cost models were developed for CSP applications ranging from 1 to 100 MWe in size. As in the earlier SNL efforts, vendor confidentiality has been maintained throughout this collaboration and in the published results. Cost models for each component were correlated from 4 to 24 individual quotes from multiple vendors, although the individual cost data points are proprietary and not shown. Cost models are reported for radial and axial turbines, integrally geared and barrel-style centrifugal compressors, high-temperature and low-temperature recuperators, dry sCO2 coolers, and primary heat exchangers for coal and natural gas fuel sources. These models are applicable to sCO2-specific components used in a variety of sCO2 cycle configurations and include incremental cost factors for advanced, high-temperature materials for relevant components. Non-sCO2-specific costs for motors, gearboxes, and generators have been included to allow cycle designers to explore the cost implications of various turbomachinery configurations. Finally, the uncertainty associated with these component cost models is quantified using AACE International-style class ratings for vendor estimates, combined with component cost correlation statistics.
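Correlations of this type are conventionally reported as power laws in a capacity or duty parameter; a generic, hypothetical form (not the proprietary correlations themselves) is

$$C = a\, S^{b}\, f_{T},$$

where $C$ is the component cost, $S$ is a scaling parameter such as power rating or recuperator heat duty, $a$ and $b$ are fitted constants, and $f_{T} \ge 1$ is an incremental cost factor applied when high-temperature materials are required.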
Compact heat exchangers for supercritical CO2 (sCO2) service are often designed with external, semi-circular headers. Their design is governed by the ASME Boiler & Pressure Vessel Code (BPVC), whose equations were typically derived following Castigliano's theorems. However, there are no known validation experiments to support the code's pressure ratings or burst-pressure predictions, nor is there much information about how and where failures occur. This work includes high-pressure bursting of three semi-circular header prototypes to validate three aspects: (1) burst-pressure predictions from the BPVC, (2) strain predictions from finite element analysis (FEA), and (3) deformation predictions from FEA. The header prototypes were designed with geometry and weld specifications from BPVC Section VIII Division 1 and a design pressure typical of sCO2 service of 3,900 psi (26.9 MPa), and were built from 316 SS. Repeating the test in triplicate allows for greater confidence in the experimental results and enables data averaging. Burst-pressure predictions are compared with experimental results to assess their accuracy, and the prototypes are analyzed to understand their failure mechanisms and locations. Experimental strain and deformation measurements were obtained optically with Digital Image Correlation (DIC). This technique allows strain to be measured in two dimensions and also provides deformation measurements, all without contacting the prototype. Eight cameras were used for full coverage of both headers on each prototype. The rich data from this technique are an excellent validation source for FEA strain and deformation predictions; experimental data and simulation predictions are compared to assess simulation accuracy.
Probabilistic simulations of the post-closure performance of a generic deep geologic repository for commercial spent nuclear fuel in shale host rock provide a test case for comparing sensitivity analysis methods available in Geologic Disposal Safety Assessment (GDSA) Framework, the U.S. Department of Energy's state-of-the-art toolkit for repository performance assessment. Simulations assume a thick low-permeability shale with aquifers (potential paths to the biosphere) above and below the host rock. Multi-physics simulations on the 7-million-cell grid are run in a high-performance computing environment with PFLOTRAN. Epistemically uncertain inputs include properties of the engineered and natural systems. The output variables of interest, maximum I-129 concentrations (independent of time) at observation points in the aquifers, vary over several orders of magnitude. Variance-based global sensitivity analyses (i.e., calculations of sensitivity indices) conducted with Dakota use polynomial chaos expansion (PCE) and Gaussian process (GP) surrogate models. Results of analyses conducted with raw output concentrations and with log-transformed output concentrations are compared. Using log-transformed concentrations results in larger sensitivity indices for more influential input variables, smaller sensitivity indices for less influential input variables, and more consistent values for sensitivity indices between methods (PCE and GP) and between analyses repeated with samples of different sizes.
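A minimal sketch of the effect of a log transform on variance-based sensitivity indices, using the open-source SALib package and a made-up two-input model in place of Dakota and the PFLOTRAN repository simulations:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a repository model whose output spans orders of magnitude.
problem = {"num_vars": 2,
           "names": ["perm_exponent", "porosity"],
           "bounds": [[-20.0, -16.0], [0.05, 0.25]]}

X = saltelli.sample(problem, 1024)
Y = 10.0 ** X[:, 0] * (1.0 + X[:, 1])   # positive, highly skewed output

for label, y in [("raw", Y), ("log10", np.log10(Y))]:
    Si = sobol.analyze(problem, y)
    print(label, np.round(Si["S1"], 3))  # first-order sensitivity indices
```

The abstract's observation corresponds to the second pass: computing indices on log-transformed concentrations changes the variance decomposition and, for heavily skewed outputs, typically stabilizes the index estimates across sample sizes.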
The U.S. Department of Energy is conducting research and development on generic concepts for disposal of spent nuclear fuel and high-level radioactive waste in multiple lithologies, including salt, crystalline (granite/metamorphic), and argillaceous (clay/shale) host rock. These investigations benefit greatly from international experience gained in disposal programs in many countries around the world. The focus of this study is the post-closure degradation and radionuclide-release rates for tristructural-isotropic (TRISO) coated particle spent fuels in various generic geologic repository environments [1-3]. The TRISO particle coatings provide safety features during and after reactor operations, with the SiC layer representing the primary barrier. Three mechanisms that may lead to release of radionuclides from the TRISO particles are: (1) helium pressure buildup [4] that may eventually rupture the SiC layer, (2) diffusive transport through the layers (solid-state diffusion in reactor, aqueous diffusion in porous media at repository conditions), and (3) corrosion [5] of the layers in groundwater/brine. For TRISO particles in a graphite fuel element, degradation in an oxidizing geologic repository was concluded to depend directly on the oxidative corrosion rate of the graphite matrix [4], which was assessed to be much slower than SiC-layer corrosion processes. However, accumulated physical damage to the graphite fuel element may degrade its post-closure barrier capability more rapidly. Our initial performance model focuses on the TRISO particles and includes SiC failure from pressure increase via alpha-decay helium, as exacerbated by SiC-layer corrosion [5]. This corrosion mechanism is much faster than solid-state diffusion at repository temperatures, but the model includes no benefit of protection from the other outer layers, which may prolong lifetime. Our current model enhancements include constraining the material properties of the layers for porous-media diffusion analyses. In addition to evaluating the SiC-layer porosity structure, this work focuses on the inner and outer pyrolytic carbon layers (IPyC/OPyC) and the graphite compact, which are analyzed with the SiC layer in two modes: (a) an intact SiC barrier until corrosion failure and (b) SiC with porous-media transport. Our detailed performance analyses will consider these processes together with uncertainties in the properties of the layers to assess radionuclide release from TRISO particles and their graphite compacts.
Two surrogate models are under development to rapidly emulate the effects of the Fuel Matrix Degradation (FMD) model in GDSA Framework. One is a polynomial regression surrogate with linear and quadratic fits, and the other is a k-Nearest Neighbors regressor (kNNr) method that operates on a lookup table. Direct coupling of the FMD model to GDSA Framework is too computationally expensive. Preliminary results indicate these surrogate models will enable GDSA Framework to rapidly simulate spent fuel dissolution for each individual breached spent fuel waste package in a probabilistic repository simulation. This capability will allow uncertainties in spent fuel dissolution to be propagated and sensitivities in FMD inputs to be quantified and ranked against other inputs.
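A minimal sketch of the lookup-table kNN regression idea, using scikit-learn and a synthetic table (the inputs and rate expression are hypothetical placeholders, not the actual FMD model):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical FMD lookup table: environment inputs -> dissolution rate.
rng = np.random.default_rng(0)
X = rng.uniform([25.0, -6.0], [200.0, -3.0], size=(5000, 2))  # T [C], log10[H2]
y = 1e-7 * np.exp(-3000.0 / (X[:, 0] + 273.15)) * 10.0 ** (-0.3 * X[:, 1])

# Distance-weighted kNN interpolates between the nearest table entries,
# replacing a full mechanistic model call with a fast table lookup.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)
print(knn.predict([[90.0, -4.5]]))  # emulate one breached waste package
```

In a probabilistic repository simulation, a query of this kind would be issued for each breached package at each time step, which is what makes the surrogate's speed advantage over direct coupling decisive.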
Bedded salt contains interfaces between the host salt and other in situ materials, such as clay seams, or different materials such as anhydrite or polyhalite in contact with the salt. These inhomogeneities are thought to have first-order effects on the closure of nearby drifts and on potential roof collapses. Despite their importance, characterizations of the peak and residual shear strength of interfaces in salt are extremely rare in the published literature. This paper presents results from laboratory experiments designed to measure the mechanical behavior of a bedding interface or clay seam as it is sheared. The series of laboratory direct shear tests reported in this paper was performed on several samples of materials from the Permian Basin in New Mexico. These tests were conducted at several normal and shear loads up to the expected in situ pre-mining stress conditions. Tests were performed on samples with a halite/clay contact, a halite/anhydrite contact, and a halite/polyhalite contact, and on plain salt samples without an interface for comparison. Intact shear strength values were determined for all of the test samples, along with residual values for the majority of the tests. The test results indicated only minor variation in shear strength, at a given normal stress, across all samples. This result was surprising because sliding along clay seams is regularly observed underground, suggesting the clay seam interfaces should be weaker than plain salt. Post-test inspections of these samples noted that salt crystals were intrinsic to the structure of the seam, which probably increased the shear strength compared to a more typical clay seam.
Fluid injection into the subsurface perturbs the state of pore pressure and stress on pre-existing faults, potentially causing earthquakes. In a multiphase flow system, the contrast in fluid and rock properties between different structures produces changes in pressure gradients and, subsequently, in stress fields. Assuming two-phase (gas-water) flow and poroelasticity, we simulate a three-layered formation including a basement fault, in which injection-induced pressure reaches the fault directly under the given injection scenarios. A single-phase poroelasticity model with the same setting is also run to evaluate the effects of multiphase flow on the poroelastic response of the fault to gas injection. Sensitivity tests are performed by varying the fault permeability. The presence of the gaseous phase reduces the pressure buildup within the highly gas-saturated region, causing smaller Coulomb stress changes, whereas capillarity increases the pore pressure within the gas-water mixed region. Even though the gaseous plume does not reach the fault, poroelastic stressing can affect fault stability and, potentially, earthquake occurrence.
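The fault-stability metric invoked above is conventionally the Coulomb failure stress change; in the standard sign convention used in induced-seismicity studies,

$$\Delta \mathrm{CFS} = \Delta\tau + \mu\,(\Delta\sigma_{n} + \Delta P),$$

where $\Delta\tau$ is the change in shear stress resolved on the fault, $\Delta\sigma_{n}$ is the change in normal stress (positive in extension), $\Delta P$ is the pore-pressure change, and $\mu$ is the friction coefficient; a positive $\Delta \mathrm{CFS}$ moves the fault toward failure.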
The reaction network of the simplest Criegee intermediate (CI), CH2OO, has been studied experimentally during the ozonolysis of ethylene. The results provide valuable information about plasma- and ozone-assisted combustion processes and atmospheric aerosol formation. A network of CI reactions was identified, which can be described best by the sequential addition of CI with ethylene, water, formic acid, and other molecules containing hydroxy, aldehyde, and hydroperoxy functional groups. Species resulting from as many as four sequential CI addition reactions were observed, and these species are highly oxygenated oligomers that are known components of secondary organic aerosols in the atmosphere. Insights into these reaction pathways were obtained from a near-atmospheric pressure jet-stirred reactor coupled to a high-resolution molecular-beam mass spectrometer. The mass spectrometer employs single-photon ionization with synchrotron-generated, tunable vacuum-ultraviolet radiation to minimize fragmentation via near-threshold ionization and to observe mass-selected photoionization efficiency (PIE) curves. Species identification is supported by comparison of the mass-selected, experimentally observed photoionization thresholds with theoretical calculations of the ionization energies. A variety of multi-functional peroxide species are identified, including hydroxymethyl hydroperoxide (HOCH2OOH), hydroperoxymethyl formate (HOOCH2OCHO), methoxymethyl hydroperoxide (CH3OCH2OOH), ethoxymethyl hydroperoxide (C2H5OCH2OOH), 2-hydroxyethyl hydroperoxide (HOC2H4OOH), dihydroperoxy methane (HOOCH2OOH), and 1-hydroperoxypropan-2-one [CH3C(O)CH2OOH]. A semi-quantitative analysis of the signal intensities as a function of successive CI additions and temperature provides mechanistic insights and valuable information for future modeling work of the associated energy conversion processes and atmospheric chemistry. This work provides further evidence that the CI is a key intermediate in the formation of oligomeric species via the formation of hydroperoxides.
Atmospheric tracer transport is a computationally demanding component of the atmospheric dynamical core of weather and climate simulations. Simulations typically have tens to hundreds of tracers. A tracer field is required to preserve several properties, including mass, shape, and tracer consistency. To improve computational efficiency, it is common to apply different spatial and temporal discretizations to the tracer transport equations than to the dynamical equations. Using different discretizations increases the difficulty of preserving properties. This paper provides a unified framework to analyze the property preservation problem and classes of algorithms to solve it. We examine the primary problem and a safety problem; describe three classes of algorithms to solve these; introduce new algorithms in two of these classes; make connections among the algorithms; analyze each algorithm in terms of correctness, bound on its solution magnitude, and its communication efficiency; and study numerical results. A new algorithm, QLT, has the smallest communication volume, and in an important case it redistributes mass approximately locally. These algorithms are only very loosely coupled to the underlying discretizations of the dynamical and tracer transport equations and thus are broadly and efficiently applicable. In addition, they may be applied to remap problems in applications other than tracer transport.
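As one simple member of the class of property-preservation algorithms discussed here (a global clip-and-redistribute mass fixer, not the QLT algorithm itself), consider the following sketch:

```python
import numpy as np

def clip_and_redistribute(q, lo, hi, mass, w):
    """Enforce per-cell bounds lo <= q <= hi, then restore the total tracer
    mass sum(w * q) = mass by spreading the deficit over cells with remaining
    headroom toward the violated bound. w holds the cell air masses/weights.
    Assumes the safety problem has been solved, i.e. |deficit| <= capacity."""
    q = np.clip(q, lo, hi)
    deficit = mass - np.dot(w, q)
    room = (hi - q) if deficit > 0 else (q - lo)   # capacity toward the bound
    cap = np.dot(w, room)
    if cap > 0:
        q = q + np.sign(deficit) * room * (abs(deficit) / cap)
    return q

# Example: clipping destroyed 0.1 units of mass; the fixer restores it.
q = clip_and_redistribute(np.array([0.1, 0.9, 0.5]), 0.0, 1.0,
                          mass=1.6, w=np.ones(3))
print(q, q.sum())  # bounds respected, total mass back to 1.6
```

Redistribution proportional to headroom preserves shape (no new extrema), but it is global; the appeal of QLT noted above is that it achieves a comparable fix with the smallest communication volume and approximately local redistribution.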
The detection, location, and identification of suspected underground nuclear explosions (UNEs) are global security priorities that rely on integrated analysis of multiple data modalities for uncertainty reduction in event analysis. Vegetation disturbances may provide complementary signatures that can confirm or build on the observables produced by prompt sensing techniques such as seismic or radionuclide monitoring networks. For instance, the emergence of non-native species in an area may be indicative of anthropogenic activity, or changes in vegetation health may reflect changes in site conditions resulting from an underground explosion. Previously, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surroundings. These data consist of visible and near-infrared wavebands over 4.3 km2 of high desert terrain, along with high spatial resolution (2.5 cm) RGB context imagery. In this work, we employ various spectral detection and classification algorithms to identify and map vegetation species in an area of interest containing the legacy test site. We employed a frequentist framework for fusing multiple spectral detections across reference spectra captured at different times and sampled from multiple locations. The spatial distribution of vegetation species is compared to the location of the underground nuclear explosion. We find a difference in species abundance within a 130 m radius of the center of the test site.
In this work we investigate the Orowan hypothesis, that decreases in surface energy due to surface adsorbates lead directly to lowered fracture toughness, at an atomic/molecular level. We employ a Lennard-Jones system with a slit crack and an infiltrating fluid, nominally with gold-water properties, and explore steric effects by varying the soft radius of fluid particles and the influence of surface energy/hydrophobicity via the solid–fluid binding energy. Using previously developed methods, we employ the J-integral to quantify the sensitivity of fracture toughness to the influence of the fluid on the crack tip, and exploit dimensionless scaling to discover universal trends in behavior.
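For reference, the contour form of the J-integral used to quantify the crack-tip energetics is Rice's standard expression (for a crack lying along the $x$ axis),

$$J = \int_{\Gamma} \left( W\,\mathrm{d}y - T_{i}\,\frac{\partial u_{i}}{\partial x}\,\mathrm{d}s \right),$$

where $\Gamma$ is a contour enclosing the crack tip, $W$ is the strain energy density, $T_{i}$ is the traction vector on the contour, $u_{i}$ is the displacement field, and $\mathrm{d}s$ is the arc-length element; in the atomistic setting the fields are computed from per-atom quantities along the contour.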
We use dielectric metasurfaces made from direct-bandgap semiconductors to generate high harmonics and nonlinear mixing simultaneously, without the need for phase matching. Inclusion of broken-symmetry designs and quantum heterostructures can lead to even higher efficiencies.
Novel multilayered FeSiCrB-FexN (x = 2-4) metallic glass composites were fabricated using spark plasma sintering of FeSiCrB amorphous ribbons (Metglas 2605SA3 alloy) and FexN (x = 2-4) powder. Crystalline FexN can serve as a high-magnetic-moment, high-electrical-resistance binder and lamination material in the consolidation of amorphous and nanocrystalline ribbons, mitigating eddy currents while boosting magnetic performance and stacking factor in both wound and stacked soft magnetic cores. Stacking factors of nearly 100% can be achieved in an amorphous ribbon/iron nitride composite. FeSiCrB-FexN multilayered metallic glass composites prepared by spark plasma sintering have the potential to serve as a next-generation soft magnetic material in power electronics and electrical machines.
Aqueous dissolution of silicate materials exhibits complex temporal evolution and rich pattern formation. Mechanistic understanding of this process is critical for the development of a predictive model for the long-term performance assessment of silicate glass as a waste form for high-level radioactive waste disposal. Here we provide a summary of a recently developed nonlinear dynamic model for silicate material degradation in an aqueous environment. The model is based on a simple self-organizational mechanism: dissolution of the silica framework of a material is catalyzed by the cations released during degradation, which in turn accelerates further cation release. This model systematically predicts the key features observed in silicate glass dissolution, including the occurrence of a sharp corrosion front, oscillatory dissolution, multiple stages of the alteration process, wavy dissolution fronts, growth rings, incoherent bandings of alteration products, and corrosion pitting. This work provides a new perspective for understanding silicate material degradation and evaluating the long-term performance of these materials as waste forms for radioactive waste disposal.
Over the last 13 years at Sandia National Laboratories, we have applied the belief/plausibility measure from evidence theory to estimate the uncertainty for numerous safety and security issues for nuclear weapons. For such issues we have significant epistemic uncertainty and are unable to assign probability distributions. We have developed and applied custom software to implement the belief/plausibility measure of uncertainty. For safety issues we perform a quantitative evaluation, and for security issues (e.g., terrorist acts) we use linguistic variables (fuzzy sets) combined with approximate reasoning. We perform the following steps: (1) train subject matter experts (SMEs) on the assignment of evidence; (2) work with the SMEs to identify the concern(s), i.e., the top-level variable(s); and (3) work with the SMEs to identify the lower-level variables and their functional relationship(s) to the top-level variable(s). The SMEs then gather their state of knowledge (SOK) and assign evidence to the lower-level variables. Using this information, we evaluate the variables using custom software and produce an estimate for the top-level variable(s), including uncertainty. We have also extended the Kaplan-Garrick risk triplet approach to use the belief/plausibility measure of uncertainty.
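To make the evidence-theory quantities concrete, the following sketch (with a hypothetical three-element frame of discernment and made-up evidence masses, not from any actual SME elicitation) computes belief and plausibility as lower and upper bounds on the likelihood of a proposition:

```python
# Basic probability assignment over the frame {low, med, high}; the
# focal sets and masses below are hypothetical stand-ins for SME evidence.
m = {frozenset({"low"}): 0.2,
     frozenset({"low", "med"}): 0.5,
     frozenset({"low", "med", "high"}): 0.3}

def belief(A, m):        # total mass committed to subsets of A
    return sum(v for B, v in m.items() if B <= A)

def plausibility(A, m):  # total mass not contradicting A
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"low", "med"})
print(belief(A, m), plausibility(A, m))  # 0.7 1.0
```

The gap between belief and plausibility (here 0.7 to 1.0) is precisely the epistemic uncertainty that a single probability distribution would conceal.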
Haddock, Walker; Bangalore, Purushotham V.; Curry, Matthew L.; Skjellum, Anthony
Exascale computing demands high-bandwidth and low-latency I/O on the computing edge. Object storage systems can provide higher bandwidth and lower latencies than tape archives. File transfer nodes present a single point of mediation through which data moving between these storage systems must pass. By increasing the performance of erasure coding, stripes can be subdivided into large numbers of shards. This paper's contribution is a prototype nearline disk object storage system based on Ceph. We show that using general-purpose graphics processing units (GPGPUs) for erasure coding on file transfer nodes is effective when using a large number of shards. We describe an architecture for nearline disk archive storage for use with high performance computing (HPC) and demonstrate its performance with benchmarking results. We compare the benchmark performance of our design against the CPU-based erasure coding of the Intel® Storage Acceleration Library (ISA-L), as used by the native Ceph erasure coding feature.
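As a toy illustration of striping data into shards with parity (single-parity XOR only, not the Reed-Solomon coding that ISA-L and Ceph actually implement), consider:

```python
import numpy as np

def encode_stripe(data, k):
    """Split `data` into k equal-size data shards plus one XOR parity shard.
    Toy single-parity illustration; real erasure codes tolerate the loss of
    multiple shards by using Reed-Solomon arithmetic over a Galois field."""
    shards = np.array_split(np.frombuffer(data, dtype=np.uint8), k)
    size = max(s.size for s in shards)
    shards = [np.pad(s, (0, size - s.size)) for s in shards]  # zero-pad
    parity = np.bitwise_xor.reduce(shards)
    return shards + [parity]

shards = encode_stripe(b"exascale object stripe", k=4)

# Recover a single lost data shard by XORing all the survivors:
lost = shards[1]
rebuilt = np.bitwise_xor.reduce([s for i, s in enumerate(shards) if i != 1])
assert np.array_equal(rebuilt, lost)
```

Subdividing a stripe into many such shards is what exposes the data parallelism that makes GPGPU encoding on the file transfer nodes worthwhile.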
Seguin, Trevor J.; Hahn, Nathan T.; Zavadil, Kevin R.; Persson, Kristin A.
Rational design of novel electrolytes with enhanced functionality requires fundamental molecular-level understanding of structure-property relationships. Here we examine the suitability of a range of organic solvents for non-aqueous electrolytes in secondary magnesium batteries using density functional theory (DFT) calculations as well as experimental probes such as cyclic voltammetry and Raman spectroscopy. The solvents considered include ethereal solvents (e.g., glymes), sulfones (e.g., tetramethylene sulfone), and acetonitrile. Computed reduction potentials show that all solvents considered are stable against reduction by Mg metal. Additional computations were carried out to assess the stability of solvents in contact with partially reduced Mg cations (Mg2+ → Mg+) formed during cycling (e.g., deposition) by identifying reaction profiles of decomposition pathways. Most solvents, including some proposed for secondary Mg energy storage applications, exhibit decomposition pathways that are surprisingly exergonic. Interestingly, the stability of these solvents is largely dictated by the magnitude of the kinetic barrier to decomposition. This insight should be valuable toward the rational design of improved Mg electrolytes.
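As a rough illustration of the screening quantity involved, the sketch below converts a hypothetical computed free energy of reduction into a potential on the Mg scale; the free energy value is a placeholder, and the absolute-SHE shift is the common textbook approximation, not a number from this work.

```python
# Hedged sketch: mapping a computed free energy of reduction to a potential
# vs. Mg/Mg2+. The reaction free energy below is a placeholder, not a DFT
# result from this work.

F = 96485.0                  # Faraday constant, C/mol
n = 1                        # electrons transferred
dG = -250e3                  # hypothetical reduction free energy, J/mol

E_abs = -dG / (n * F)        # absolute reduction potential, V
E_she = E_abs - 4.44         # IUPAC absolute potential of the SHE (~4.44 V)
E_mg = E_she + 2.37          # Mg2+/Mg sits at -2.37 V vs. SHE
print(f"{E_mg:.2f} V vs. Mg/Mg2+")
```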
The MPI multithreading model has been historically difficult to optimize; the interface that it provides for threads was designed as a process-level interface. This model has led to implementations that treat function calls as critical regions and protect them with locks to avoid race conditions. We hypothesize that an interface designed specifically for threads can provide superior performance to current approaches and even outperform single-threaded MPI. In this paper, we describe a design for partitioned communication in MPI that we call finepoints. First, we assess the existing communication models for MPI two-sided communication and then introduce finepoints as a hybrid that combines the best features of each existing MPI communication model. In addition, partitioned communication created with finepoints leverages new network hardware features that cannot be exploited with current MPI point-to-point semantics, making this new approach innovative and useful both now and in the future. To demonstrate the validity of our hypothesis, we implement a finepoints library and show improvements against a state-of-the-art multithreaded optimized Open MPI implementation on a Cray XC40 with an Aries network. Our experiments demonstrate up to a 12× reduction in wait time for completion of send operations. This new model is shown working on a nuclear reactor physics neutron-transport proxy application, providing up to a 26.1% improvement in communication time and up to a 4.8% improvement in runtime over the best-performing MPI communication mode, single-threaded MPI.
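The finepoints idea later informed the partitioned communication interface standardized in MPI 4.0, and its core semantics can be sketched without MPI at all: each thread marks its own partition of a shared send buffer ready, and transmission can proceed partition by partition instead of behind a single lock. The following pure-Python threading sketch is conceptual only and is not an MPI binding; all names are hypothetical.

```python
# Conceptual sketch of partitioned-communication semantics: threads fill
# and flag their own buffer partitions; a progress engine ships each
# partition as soon as it is ready. Not an MPI binding.

import threading

N_PARTITIONS = 8
buffer = [None] * N_PARTITIONS
ready = [threading.Event() for _ in range(N_PARTITIONS)]

def worker(p):
    buffer[p] = f"payload-{p}"   # each thread fills only its own partition
    ready[p].set()               # cheap flag set, no global lock contention

def progress_engine():
    # Stand-in for the network: send each partition when it is ready,
    # rather than waiting for one thread to complete a whole send.
    for p in range(N_PARTITIONS):
        ready[p].wait()
        print(f"sent partition {p}: {buffer[p]}")

sender = threading.Thread(target=progress_engine)
workers = [threading.Thread(target=worker, args=(p,)) for p in range(N_PARTITIONS)]
sender.start()
for t in workers:
    t.start()
for t in workers:
    t.join()
sender.join()
```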
We use molecular simulations to provide a conceptual understanding of a crystalline-amorphous interface for a candidate negative thermal expansion (NTE) material. Specifically, classical molecular dynamics (MD) simulations were used to investigate the temperature and pressure dependence of the structural properties of ZrW2O8. Polarizability of the oxygen atoms was included to better account for the electronic charge distribution within the lattice. Constant-pressure simulations of cubic crystalline ZrW2O8 at ambient pressure reveal slight NTE behavior, characterized by a small structural rearrangement resulting in oxygen sharing between adjacent WO4 tetrahedra. Periodic quantum calculations confirm that the MD-optimized structure is lower in energy than the idealized structure obtained from neutron diffraction experiments. Additionally, simulations of pressure-induced amorphization of ZrW2O8 at 300 K indicate that an amorphous phase forms at pressures greater than 10 GPa and persists when the pressure is decreased to 1 bar. Simulations were then performed on a hybrid model consisting of amorphous ZrW2O8 in direct contact with the cubic crystalline phase. Upon equilibration at 300 K and 1 bar, the crystalline phase remains unchanged beyond a thin layer of disrupted structure at the amorphous interface. Detailed analysis reveals the transition in metal coordination across the interface.
Austenitic stainless steels are used extensively in hydrogen gas containment components due to their known resilience in hydrogen environments. Depending on the conditions, degradation can occur in austenitic stainless steels, but the materials typically retain sufficient mechanical properties in such extreme environments. In many hydrogen containment applications, it is necessary or advantageous to join components through welding, as welds ensure minimal gas leakage, unlike mechanical fittings, which can develop leak paths over time. Over the years, many studies have focused on the mechanical behavior of austenitic stainless steels in hydrogen environments and determined their properties to be sufficient for most applications. However, significantly less data have been generated on austenitic stainless steel welds, which can exhibit more degradation than the base material. In this paper, we assess the trends observed in austenitic stainless steel welds tested in hydrogen. Experiments on welds, including tensile and fracture toughness testing, are assessed, and comparisons to the behavior of base metals are discussed.
Uranyl ion, UO22+, and its aqueous complexes with organic and inorganic ligands are the dominant species for transport of naturally occurring uranium in Earth-surface environments. In nuclear waste management, uranyl ion and its aqueous complexes are expected to be responsible for uranium mobilization in disposal concepts where spent fuel is emplaced in oxidizing environments, such as unsaturated zones above the water table. In natural environments, oxalate in its fully deprotonated form, C2O42-, is ubiquitous, as oxalate is one of the most important degradation products of humic and fulvic acids. Oxalate is known to form aqueous complexes with uranyl ion that facilitate the transport of uranium. However, oxalate also forms solid phases with uranyl ion in certain environments, limiting the movement of uranium. Therefore, knowledge of the stability constants of aqueous and solid uranyl oxalate complexes is important not only for understanding the mobility of uranium in natural environments, but also for the performance assessment of radionuclides in geological repositories for spent nuclear fuel. In this work, we present the stability constants for UO2C2O4(aq) and UO2(C2O4)22- at infinite dilution based on our evaluation of the literature data over a wide range of ionic strengths up to 9.5 mol•kg-1. We also obtain the solubility constants at infinite dilution for the solid uranyl oxalates UO2C2O4•3H2O and UO2C2O4•H2O, based on solubility data over a wide range of ionic strengths up to 11 mol•kg-1. In our evaluation, we use the computer code EQ3/6 Version 8.0a. The resulting model is expected to enable researchers to accurately assess the role of oxalate in the mobilization/immobilization of uranium under various conditions, including those in geological repositories.
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers; however, machine learning classification algorithms do not require the same data representation used by humans. In this work we investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and an architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and tradeoffs of these compressive imaging systems built for compressed classification of the MNIST data set. To evaluate the tradeoffs of the two architectures, we present radiometric and raytrace models for each system. Additionally, we investigate the impact of system aberrations on the classification accuracy of each system. We compare the performance of these systems over a range of compression ratios. Classification performance, radiometric throughput, and optical design manufacturability are discussed.
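A minimal software analogue of compressed classification conveys the idea: project images through a random sensing matrix (standing in for the optimized matrix realized optically by the architectures above) and train a classifier directly on the measurements. The sketch below uses scikit-learn's 64-pixel digits set as a stand-in for MNIST; the matrix and compression ratio are illustrative.

```python
# Sketch of compressed classification: classify random linear measurements
# of images rather than the images themselves. Illustrative stand-in data.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 64-pixel images
rng = np.random.default_rng(0)
M = 16                                       # measurements: 4x compression
Phi = rng.normal(size=(M, X.shape[1]))       # random sensing matrix
Y = X @ Phi.T                                # compressive measurements

Xtr, Xte, ytr, yte = train_test_split(Y, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print(f"accuracy at {X.shape[1] // M}x compression: {clf.score(Xte, yte):.2f}")
```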
Campione, Salvatore; Warne, Larry K.; Halligan, Matthew; Lavrova, Olga; San Martin, Luis
We analytically model single-, two-, and three-wire lines above ground to determine the decay lengths of the common and differential modes induced by an E1 high-altitude electromagnetic pulse (HEMP) excitation. Decay length information is pivotal for determining whether any two nodes in the power grid may be treated as uncoupled. We employ a frequency-domain method based on transmission line theory, ATLOG (Analytic Transmission Line Over Ground), to model infinitely long and finite single wires, as well as to solve the eigenvalue problem of single-, two-, and three-wire systems. Our calculations show that a single, semi-infinite power line can be approximated by a 10 km section of line and that the second electrical reflection for all line lengths longer than the decay length is below half the rated operating voltage. Furthermore, our results show that the differential mode propagates over longer distances than the common mode in two- and three-wire systems, which should be taken into account when performing damage assessment for HEMP excitation. This analysis is a significant step toward simplifying the modeling of practical continental grid lengths while maintaining accuracy.
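For orientation, a modal decay length follows directly from transmission line theory: the propagation constant is γ = √((R + jωL)(G + jωC)), and the decay length is 1/Re(γ). The per-unit-length parameters below are illustrative values chosen for a power-line-like geometry, not the ATLOG inputs of this work.

```python
# Back-of-envelope decay length from transmission-line theory. Line
# parameters are illustrative, not the ATLOG values from the paper.

import numpy as np

f = 1e6                      # E1 HEMP energy is broadband; take 1 MHz
w = 2 * np.pi * f
R, L = 0.1, 2e-6             # series resistance (ohm/m) and inductance (H/m)
G, C = 1e-9, 8e-12           # shunt conductance (S/m) and capacitance (F/m)

gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
decay_length = 1.0 / gamma.real
print(f"decay length ~ {decay_length / 1e3:.1f} km")   # ~10 km here
```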
Additive manufacturing (AM) offers the potential for increased design flexibility in the low-volume production of complex engineering components for hydrogen service. However, the suitability of AM materials for such extreme service environments remains to be evaluated. This work examines the effects of internal and external hydrogen on AM type 304L austenitic stainless steels fabricated via directed-energy deposition (DED) and powder bed fusion (PBF) processes. Under ambient test conditions, AM materials with minimal manufacturing defects exhibit excellent combinations of tensile strength, tensile ductility, and fatigue resistance. To probe the effects of extreme hydrogen environments on the AM materials, tensile and fatigue tests were performed either after thermal precharging in high-pressure gaseous hydrogen (internal H) or while in high-pressure gaseous hydrogen (external H). Hydrogen appears to have a comparable influence on the AM 304L as on wrought materials, although the micromechanisms of tensile fracture and fatigue crack growth appear distinct. Specifically, microstructural characterization implicates the unique solidification microstructure of AM materials in the propagation of cracks under conditions of tensile fracture with hydrogen. These results highlight the need to establish comprehensive microstructure-property relationships for AM materials to ensure their suitability for use in extreme hydrogen environments.
High-quality image products in an X-Ray Phase Contrast Imaging (XPCI) system can be produced with proper system hardware and data acquisition. However, it may be possible to further increase the quality of the image products by addressing subtleties and imperfections in both the hardware and the data acquisition process. Because addressing these issues entirely in hardware and data acquisition may not be practical, a more prudent approach is to determine the balance between how the apparatus may reasonably be improved and what can be accomplished with image post-processing techniques. Given a proper signal model for XPCI data, image processing techniques can be developed to compensate for many of the image quality degradations associated with higher-order hardware and data acquisition imperfections. However, processing techniques also have limitations and cannot entirely compensate for sub-par hardware or inaccurate data acquisition practices. Understanding system and image processing limitations enables balancing between hardware, data acquisition, and image post-processing. In this paper, we present some of the higher-order image degradation effects we have found to be associated with subtle imperfections in both hardware and data acquisition. We also discuss and demonstrate how a combination of hardware, data acquisition processes, and image processing techniques can increase the quality of XPCI image products. Finally, we assess the requirements for high-quality XPCI images, propose reasonable system hardware modifications, and identify the limits of certain image processing techniques.
It is well known that a slotted resonant cavity with a high quality factor exhibits interior electromagnetic (EM) fields that may be even larger than the external field. The authors aim to reduce the cavity's EM fields and quality factor over a frequency band analytically, numerically, and experimentally by introducing microwave absorbing materials into the cavity. A perturbation model was developed to estimate the quality factor of loaded cavities, and it is validated against full-wave simulations and experiments. Placing 78.7 mil (2 mm) thick ECCOSORB-MCS absorber on the inside cavity wall above and below the aperture slot (occupying only 0.026% of the cavity volume) yields a reduction in shielding effectiveness of >19 dB and a reduction in quality factor of >91%, confirming the efficacy of this approach.
The flow rates and aerosol transmission properties were evaluated for an engineered microchannel with characteristic dimensions similar to those of stress corrosion cracks (SCCs) capable of forming in dry cask storage systems (DCSS) for spent nuclear fuel. Pressure differentials covering the upper limit of commercially available DCSS were also examined. These preliminary data sets are intended to demonstrate a new capability to characterize SCCs under well-controlled boundary conditions.
We have investigated cubic zirconium tungstate (ZrW2O8) using density functional perturbation theory (DFPT), along with experimental characterization to assess and validate the computational results. Cubic zirconium tungstate is among the few known materials exhibiting isotropic negative thermal expansion (NTE) over a broad temperature range, including room temperature, where it occurs metastably. Isotropic NTE materials are important for technological applications requiring thermal-expansion compensators in composites designed to have overall zero or adjustable thermal expansion. While cubic zirconium tungstate has attracted considerable attention experimentally, very few computational studies have been dedicated to this well-known NTE material. Here, spectroscopic, mechanical, and thermodynamic properties have been derived from DFPT calculations. A systematic comparison of the calculated infrared, Raman, and phonon density-of-states spectra has been made with Fourier transform far-/mid-infrared and Raman data collected in this study, as well as with available inelastic neutron scattering measurements. The thermal evolution of the lattice parameter computed within the quasi-harmonic approximation yields a negative expansion coefficient below the Debye temperature, consistent with the observed negative thermal expansion of cubic α-ZrW2O8. These results show that this DFPT approach can be used for studying the spectroscopic, mechanical, and thermodynamic properties of prospective NTE ceramic waste forms for the encapsulation of radionuclides produced during the nuclear fuel cycle.
Pyrolysis of materials at high heat fluxes is less well studied because the high-heat-flux regime is not common in practical fire applications. The fire behavior of organic materials in such an environment needs further characterization in order to construct models that predict the dynamics of this regime. The test regime is complicated by the temperatures achieved and the speed at which materials decompose under the flux condition. A series of tests has been performed exposing a variety of materials to this environment. The resulting imagery provides unique insights into the behavior of various materials under these conditions. Furthermore, the experimental and processing techniques suggest analytical methods that can be employed to extract quantitative information from pyrolysis experiments.
A variety of energy sources produce intense radiative flux (≫100 kW/m2), well beyond that typical of fire environments. Such energy sources include directed energy, nuclear weapons, and propellant fires. Studies of material response to irradiation typically focus on much lower heat flux; characterization of materials at extreme flux is limited. Various common cellulosic and synthetic-polymer materials were exposed to intense irradiation (up to 3 MW/m2) using the Solar Furnace at Sandia National Laboratories. When irradiated, these materials typically pyrolyzed and ignited after a short time (<1 s). The mass loss for each sample was recorded; the topology of the pyrolysis crater was reconstructed using a commercial three-dimensional scanner. The scans spatially resolved the volumetric displacement, mapping this response to the radially varying flux and fluence. These experimental data better characterize material properties and responses, such as the pyrolysis efflux rate, aiding the development of pyrolysis and ignition models at extreme heat flux.
The DOE and industry collaborators have initiated the high burn-up demonstration project to evaluate the effects of drying and long-term dry storage on high burn-up fuel. Fuel was transferred to a dry storage cask, which was then dried using standard industry vacuum-drying techniques and placed on a storage pad, to be opened and the fuel examined in 10 years. Helium fill gas samples were collected 5 hours, 5 days, and 12 days after closure. The samples were analyzed for fission gases (85Kr) as an indicator of damaged or leaking rods, and then analyzed to determine water content and concentrations of other trace gases. Gamma-ray spectroscopy found no detectable 85Kr. Sample water contents proved difficult to measure, requiring heating to desorb water from the inner surfaces of the sampling bottles. Final results indicated that water in the cask gas phase built up over 12 days to 17,400 ppmv ±10%, equivalent to ∼100 ml of water within the cask gas phase. Trace gases were measured by direct gas mass spectrometry. Carbon dioxide built up over two weeks to 930 ppmv, likely due to breakdown of hydrocarbon contaminants (possibly vacuum pump oil) in the cask. Hydrogen built up to nearly 500 ppmv and may be attributable to water radiolysis and/or to metal corrosion in the cask.
For long-term storage, spent nuclear fuel (SNF) is placed in dry storage systems, commonly consisting of welded stainless steel canisters enclosed in ventilated overpacks. Chloride-induced stress corrosion cracking (CISCC) of these canisters may occur due to the deliquescence of sea-salt aerosols as the canisters cool. Current experimental and modeling efforts to evaluate canister CISCC assume that the deliquescent brines, once formed, persist on the metal surface without changing chemical or physical properties. Here we present data showing that magnesium chloride-rich brines, which form first as the canisters cool and sea-salts deliquesce, are not stable at elevated temperatures; they degas HCl and convert to solid carbonate and hydroxychloride phases, thus limiting the conditions for corrosion. Moreover, once pitting corrosion begins on the metal surface, oxygen reduction in the cathodic region surrounding the pits produces hydroxide ions, increasing the pH, which under some experimental conditions leads to precipitation of magnesium hydroxychloride hydrates. Because magnesium carbonates and hydroxychloride hydrates are less deliquescent than magnesium chloride, precipitation of these compounds reduces the brine volume on the metal surface, potentially limiting the extent of corrosion. If taken to completion, such reactions may lead to brine dry-out and cessation of corrosion.
Sodium-cooled Fast Reactors (SFRs) have an extended operational history that can be leveraged to accelerate the licensing process for modern designs. Sandia National Laboratories has recently reconstituted the United States SFR data from the Centralized Reliability Data Organization (CREDO) into a new, modern database called the Sodium System Component Reliability Database (NaSCoRD). NaSCoRD contains records of 117 pumps, 60 with a sodium working fluid, that operated in EBR-II, FFTF, and test loops, including those operated by Westinghouse and the Energy Technology Engineering Center. This paper presents sodium pump failure probabilities for the various conditions supported by the U.S. facility CREDO data recovered in NaSCoRD. The current sodium pump reliability estimates are compared to estimates from historical studies. The impacts on these reliability estimates of the corrections suggested in an EG&G Idaho report and of various prior distributions are also presented.
Radiation transport in stochastic media is a problem found in a multitude of applications, and tools capable of thoroughly modeling this type of problem are still needed. A collection of approximate methods has been developed to produce accurate mean results, but methods that can quantify the spread of results caused by the randomness of material mixing remain in demand. In this work, the new stochastic media transport algorithm Conditional Point Sampling is extended using Embedded Variance Deconvolution so that it can compute the variance caused by material mixing. The accuracy of this approach is assessed for 1D, binary, Markovian-mixed media by comparing results to published benchmark values, and the behavior of the method is numerically studied as a function of user parameters. We demonstrate that this extension of Conditional Point Sampling can compute the variance caused by material mixing with an accuracy that depends on the accuracy of the conditional probability function used.
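The deconvolution idea itself is compact: the observed variance of per-realization tallies mixes the true material-mixing variance with Monte Carlo statistical noise, and subtracting an estimate of the latter recovers the former. The toy sketch below illustrates this with synthetic numbers and is not the Conditional Point Sampling algorithm itself.

```python
# Toy variance deconvolution: separate realization-to-realization (mixing)
# variance from Monte Carlo noise. Synthetic stand-in data only.

import numpy as np

rng = np.random.default_rng(1)
n_realizations, n_histories = 200, 1000

# Hypothetical tally: realization-dependent mean plus per-history MC noise.
true_means = rng.normal(0.5, 0.05, n_realizations)     # mixing variability
samples = rng.normal(true_means[:, None], 0.2, (n_realizations, n_histories))

means = samples.mean(axis=1)                          # per-realization estimate
mc_var = samples.var(axis=1, ddof=1) / n_histories    # its statistical variance

total_var = means.var(ddof=1)
mixing_var = total_var - mc_var.mean()   # deconvolved material-mixing variance
print(mixing_var, 0.05**2)               # estimate vs. true value 2.5e-3
```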
Appropriate waste forms for radioactive materials must isolate the radionuclides from the environment for long time periods. Accomplishing this typically requires low waste-form solubility, to minimize radionuclide release to the environment. However, radiation eventually damages most waste forms, leading to expansion, crumbling, increased exposed surface area, and faster dissolution. We have evaluated the use of a novel class of materials (ZrW2O8, Zr2P2WO12, and related compounds) that contract upon amorphization. The proposed ceramic waste forms would consist of zoned grains, or sintered ceramics, with center-loaded radionuclides and barren shells. Radiation-induced amorphization would shrink the cores but would not fracture the shells or overgrowths, maintaining isolation of the radionuclides. We have synthesized these phases and evaluated their leach rates. Tungsten forms stable aqueous species under neutral to basic conditions, making it a reliable indicator of phase dissolution. ZrW2O8 leaches rapidly, releasing tungstate while Zr is retained as a solid oxide or hydroxide. Tungsten release rates remain elevated over time and are highly sensitive to contact times, suggesting that this material will not be an effective waste form. Conversely, tungsten release rates from Zr2P2WO12 drop rapidly and track P release rates; we speculate that a low-solubility protective Zr-phosphate leach layer forms, slowing further dissolution.
Post-closure performance assessment (PA) calculations suggest that deep borehole disposal of cesium (Cs)/strontium (Sr) capsules, a U.S. Department of Energy (DOE) waste form (WF), is safe, resulting in no releases to the biosphere over 10,000,000 years when the waste is placed in a 3-5 km deep waste disposal zone. The same is true when a hypothetical breach of a stuck waste package (WP) is assumed to occur at much shallower depths penetrated by through-going fractures. Cs and Sr retardation in the host rock is a key control over movement. Calculated borehole performance would be even stronger if credit was taken for the presence of the WP.
Porphyrins are vital pigments involved in biological energy transduction processes. Their ability to absorb light and convert it to energy has raised interest in using porphyrin nanoparticles as photosensitizers in photodynamic therapy. A recent study showed that self-assembled porphyrin-silica composite nanoparticles can selectively destroy tumor cells, but detection of the cellular uptake of porphyrin-silica composite nanoparticles was limited to imaging microscopy. Here we developed a novel method to rapidly identify porphyrin-silica composite nanoparticles using Atmospheric Solids Analysis Probe-Mass Spectrometry (ASAP-MS). ASAP-MS can directly analyze complex mixtures without the need for sample preparation. Porphyrin-silica composite nanoparticles were vaporized using heated nitrogen desolvation gas, and their thermo-profiles were examined to identify distinct mass-to-charge (m/z) signatures. HeLa cells were incubated in growth media containing the nanoparticles and, after sufficient washing to remove residual nanoparticles, the cell suspension was loaded onto the end of the ASAP glass capillary probe. Upon heating, the HeLa cells degraded and the porphyrin-silica composite nanoparticles were released. The vaporized nanoparticles were ionized and detected by MS. The cellular uptake of porphyrin-silica composite nanoparticles was thus identified using this ASAP-MS method.
Energy- and cost-efficient synthesis pathways are important for the production, processing, and recycling of the rare earth metals necessary for a range of advanced energy and environmental applications. In this work, we present results of successful in situ liquid cell transmission electron microscopy synthesis and imaging of rare earth element nanostructures, formed from aqueous salt solutions via radiolysis under exposure to a 200 keV electron beam. Nucleation, growth, and crystallization processes for nanostructures formed in yttrium(III) nitrate hydrate (Y(NO3)3·4H2O), europium(III) chloride hydrate (EuCl3·6H2O), and lanthanum(III) chloride hydrate (LaCl3·7H2O) solutions are discussed. In situ electron diffraction analysis in a closed microfluidic configuration indicated that rare earth metal, salt, and metal oxide structures were synthesized. Real-time imaging of nanostructure formation was compared in closed cell and flow cell configurations. Notably, this work also includes the first known collection of automated crystal orientation mapping data through liquid using a microfluidic transmission electron microscope stage, which permits the deconvolution of amorphous and crystalline features (orientation and interfaces) inside the resulting nanostructures.
The Nuclear Energy Systems Laboratory (NESL) Brayton Laboratory at Sandia National Laboratories has been at the forefront of supercritical carbon dioxide (sCO2) power cycle development since 2007, when internal R&D funds were first used to investigate the stability of sCO2 as a working fluid for power cycles. Since then, Sandia has been a leader in research and development of sCO2 power cycles through government-funded research and by partnering with industry to design and test components necessary for commercialization of sCO2 Brayton cycles. Peregrine Turbine Technologies (PTT) is a small business working to commercialize sCO2 power cycles with its proprietary thermodynamic cycles, heat exchangers, and turbomachinery designs. Under a Small Business Innovation Research (SBIR) program with the United States Air Force Research Laboratory, PTT has designed a novel motorless turbocompressor for sCO2 power cycles. In 2017, Sandia purchased the first sCO2 turbocompressor from PTT and installed it into the 1-MW-thermal turbomachinery development platform at Sandia. PTT and Sandia have worked together to experimentally test the turbocompressor to the limits of the development platform (932 °F at 2500 psi). This report details the design of the turbomachinery development platform, the novel process used to start the turbomachinery, and the experimental results to date. The report also covers lessons learned throughout the process of constructing and operating an experimental sCO2 loop.
The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is challenging given the need to protect sensitive electronics from the space radiation environment by means of radiation shielding. This is further complicated by the need to account for uncertainties, e.g., in manufacturing. There is growing interest in automated design optimization and uncertainty quantification (UQ) techniques to help achieve that objective. Traditional optimization and UQ approaches that rely exclusively on response functions (e.g., dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron and/or proton shields in one- or two-dimensional Cartesian geometries. In this paper we extend that work to UQ and to robust design (i.e., optimization that accounts for uncertainties) in 2D. This consists primarily of using the sensitivities to geometric changes, originally derived for optimization, within relevant algorithms for UQ and robust design. We perform UQ analyses on previously optimized designs given assumed manufacturing uncertainties. We also conduct a new optimization exercise that accounts for the same uncertainties. Our results show much improved computational efficiency over previous approaches.
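A toy one-dimensional analogue conveys how geometric sensitivities drive robust design: with an exponential-attenuation dose model, the analytic derivative (playing the role of the adjoint-based sensitivity) lets a Newton iteration place the shield thickness so that the worst-case manufacturing realization still meets the dose limit. All numbers below are illustrative and this is not the paper's SN transport machinery.

```python
# Toy robust shield design: size a slab so the thinnest manufacturing
# realization still meets the dose limit, using the analytic sensitivity
# of an exponential-attenuation dose model. Illustrative numbers only.

import numpy as np

mu = 5.0                 # attenuation coefficient, 1/cm
D0, limit = 1.0e3, 1.0   # unshielded dose and dose limit (arbitrary units)
tol = 0.05               # +/- manufacturing tolerance on thickness, cm

dose = lambda t: D0 * np.exp(-mu * t)

t = 0.5                              # initial thickness
for _ in range(50):
    worst = dose(t - tol)            # worst case: thinnest realization
    grad = -mu * worst               # sensitivity d(worst)/dt (adjoint-like)
    t -= (worst - limit) / grad      # Newton step: drive worst case to limit

print(t, dose(t - tol))   # converges to mu*t = ln(D0/limit) + mu*tol
```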
Optical remote sensing has become a valuable tool in many application spaces because it can be unobtrusive, search large areas efficiently, and is increasingly accessible through commercially available products and systems. In the application space of chemical, biological, radiological, nuclear, and explosives (CBRNE) sensing, optical remote sensing can be an especially valuable tool because it enables data to be collected from a safe standoff distance. Data products and results from remote sensing collections can be combined with results from other methods to offer an integrated understanding of the nature of activities in an area of interest and may be used to inform in-situ verification techniques. This work will overview several independent research efforts focused on developing and leveraging spectral and polarimetric sensing techniques for CBRNE applications, including system development efforts, field deployment campaigns, and data exploitation and analysis results. While this body of work has primarily focused on the application spaces of chemical and underground nuclear explosion detection and characterization, the developed tools and techniques may have applicability to the broader CBRNE domain.
Near-wall turbulence models in Large-Eddy Simulation (LES) typically approximate near-wall behavior using a solution to the mean flow equations. This approach inevitably leads to errors when the modeled flow does not satisfy the assumptions surrounding the use of a mean flow approximation for an unsteady boundary condition. Herein, modern machine learning (ML) techniques are used to implement a coordinate-frame-invariant model of the wall shear stress, derived specifically for complex flows for which mean near-wall models are known to fail. The model operates on a set of scalar and vector invariants based on data taken from the first LES grid point off the wall. Neural networks were trained and validated on spatially filtered direct numerical simulation (DNS) data. The trained networks were then tested on data to which they were never previously exposed, and the accuracy of the networks' wall shear stress predictions was compared both to a standard mean wall model and to the true stress values taken from the DNS data. The ML approach showed considerable improvement in the accuracy of individual shear stress predictions and produced a more accurate distribution of wall shear stress values than the standard mean wall model. This result held both in regions where the standard mean approach typically performs satisfactorily and in regions where it is known to fail, whether the networks were trained and tested on data from the same flow type/region or on data from different flow topologies.
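A minimal sketch of the regression task is shown below, with synthetic data standing in for the filtered DNS and a hypothetical log-law-like ground truth; the paper's actual invariant set and network details are not reproduced here.

```python
# Sketch of an ML wall model: regress wall shear stress on inputs from the
# first off-wall grid point. Synthetic stand-in data; illustrative only.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
u_mag = rng.uniform(1.0, 10.0, n)       # velocity magnitude at first point
y_wall = rng.uniform(0.001, 0.01, n)    # wall distance of first grid point
nu = 1e-5                               # kinematic viscosity

# Hypothetical ground truth: noisy log-law-like relation as a stand-in.
tau_w = (0.4 * u_mag / np.log(y_wall * u_mag / nu))**2 \
        * (1 + 0.05 * rng.normal(size=n))

X = np.column_stack([u_mag, y_wall])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X[:4000], tau_w[:4000])
print("R^2 on held-out data:", model.score(X[4000:], tau_w[4000:]))
```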
Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods estimate the probability of a response lying beyond a specified threshold with limited data. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, because this is reflective of most experimental and some expensive computational situations. A tail probability magnitude of 10−4 was examined for four different distribution shapes, in order to be relevant to quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which the performance deteriorated as samples were added. Using this observation, a generalized Jackknife resampling technique was developed to average over many smaller subsamples. This improved the performance of the SD and TI-EN methods, specifically when a larger-than-optimal number of samples was available. A Complete Jackknifing technique, which considers all possible subsample combinations, was shown to perform better in most cases than an alternative Bootstrap resampling technique.
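The generalized jackknife idea can be sketched compactly: average a tail-probability estimator over all size-m subsamples of the N available samples. The plug-in normal estimator below is a simple stand-in for the TI-EN and SD methods, and all numbers are illustrative.

```python
# Complete-jackknife sketch: average a tail-probability estimator over all
# size-m subsamples. The normal plug-in estimator is a stand-in only.

from itertools import combinations
import numpy as np
from scipy.stats import norm

def tail_prob_estimate(x, threshold):
    # Stand-in estimator: fit a normal, report P(X > threshold).
    return norm.sf(threshold, loc=np.mean(x), scale=np.std(x, ddof=1))

def complete_jackknife(x, m, threshold):
    subs = combinations(x, m)   # all size-m subsamples
    return np.mean([tail_prob_estimate(np.array(s), threshold) for s in subs])

rng = np.random.default_rng(0)
x = rng.normal(size=12)                            # N = 12 sparse sample
print(complete_jackknife(x, m=6, threshold=3.7))   # ~1e-4 tail for N(0,1)
```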
PFLOTRAN is well established in single-phase reactive transport problems, and current research is expanding its visibility and capability in two-phase subsurface problems. A critical part of the development of simulation software is quality assurance (QA). The purpose of the present work is QA testing to verify the correct implementation and accuracy of the two-phase flow models in PFLOTRAN. An important early step in QA is to verify the code against exact solutions from the literature. In this work, a series of QA tests on models that have known analytical solutions is conducted using PFLOTRAN. In each case the simulated saturation profile is rigorously shown to converge to the exact analytical solution. These results verify the accuracy of PFLOTRAN for use in a wide variety of two-phase modeling problems with a high degree of nonlinearity in the interaction between phase behavior and fluid flow.
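The convergence check at the heart of such QA testing reduces to computing an observed order of accuracy from errors against the exact solution on successively refined grids; the sketch below uses illustrative error values, not PFLOTRAN output.

```python
# Observed order of accuracy from errors on refined grids. The error
# values below are illustrative placeholders, not PFLOTRAN results.

import numpy as np

h = np.array([0.04, 0.02, 0.01])          # grid spacings
err = np.array([3.2e-3, 8.1e-4, 2.0e-4])  # error norms vs. exact solution

p_observed = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(p_observed)   # ~2.0 for a formally second-order scheme
```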
Houchens, Brent C.; Scott, Sarah N.; Brunini, Victor E.; Jones, E.M.C.; Montoya, Michael M.; Flores-Brito, Wendy; Hoffmeister, Kathryn N.G.
It is experimentally observed that multilayer fibre–resin composites can soften and swell significantly when heated above their designed operating temperatures. This swelling is expected to further accelerate the pyrolysis, releasing volatile components which can ignite in an oxygenated environment if exposed to a spark, flame or sufficiently elevated temperature. Here the intumescent behaviour of resin-infused carbon-fibre is investigated. Preliminary experiments and simulations are compared for a carbon-fibre sample radiatively heated on the top side and insulated on the bottom. Simulations consider coupled thermal and porous media flow.
Streamline-based quad meshing algorithms use smooth cross fields to partition surfaces into quadrilateral regions by tracing cross field separatrices. In practice, re-entrant corners and misalignment of singularities lead to small regions and limit cycles, negating some of the benefits a quad layout can provide in quad meshing. We introduce three novel methods to improve on a pipeline for coarse quad partitioning. First, we formulate an efficient method to compute high-quality cross fields on curved surfaces by extending the diffusion generated method from Viertel and Osting (SISC, 2019). Next, we introduce a method for accurately computing the trajectory of streamlines through singular triangles that prevents tangential crossings. Finally, we introduce a robust method to produce coarse quad layouts by simplifying the partitions obtained via naive separatrix tracing. Our methods are tested on a database of 100 objects and the results are analyzed. The algorithm performs well both in terms of efficiency and visual results on the database when compared to state-of-the-art methods.
Previous efforts determined a set of calibrated model parameters for Reynolds-Averaged Navier-Stokes (RANS) simulations of a compressible jet in crossflow (JIC) using a k-ɛ turbulence model. These coefficients were derived from Particle Image Velocimetry (PIV) data of a complementary experiment using a limited set of flow conditions. Here, k-ɛ models using conventional (nominal) and calibrated parameters are rigorously validated against PIV data acquired under a much wider variety of JIC cases, including a flight configuration. The simulations using the calibrated model parameters showed considerable improvements over those using the nominal values, even for cases that were not used in defining the calibrated parameters. This improvement is demonstrated using quality metrics defined specifically to test the spatial alignment of the jet core as well as the magnitudes of flow variables on the PIV planes. These results suggest that the calibrated parameters are applicable well outside the specific flow case used to define them and that, with the right model parameters, RANS results can be improved significantly over the nominal ones.
Exposure to chemicals in everyday life is now more prevalent than ever. Air and water pollution can be delivery mechanisms for toxins, carcinogens, and other chemicals of interest (COI). A compact, multiplexed chemical sensor with high responsivity and selectivity is therefore needed. We demonstrate the integration of unique Zr-based metal organic frameworks (MOFs) with a plasmonic transducer to produce a nanoscale optical sensor that is both highly sensitive and selective to the presence of COI. MOFs are a product of coordination chemistry in which a central ion is surrounded by a group of ligands, resulting in a thin film with nano- to micro-porosity, ultra-high surface area, and precise structural tunability. These properties make MOFs an ideal candidate for gaseous chemical sensing; however, transduction of a signal that probes changes in MOF films has been difficult. Plasmonic sensors have performed well in many sensing environments but have had limited success detecting gaseous chemical analytes at low levels. This is due, in part, to the volume of molecules required to interact with the functionalized surface and produce a detectable shift in the plasmonic resonance frequency. The fusion of a highly porous thin-film layer with an efficient plasmonic transduction platform is investigated and summarized. We discuss the integration and characterization of the MOF/plasmonic sensor and summarize our results, which show that, upon exposure to COI, small changes in the optical characteristics of the MOF layer are effectively transduced as shifts in the plasmonic resonance.
Wilson, David G.; Darani, Shadi; Abdelkhalik, Ossama; Robinett, Rush D.
The dynamic model of Wave Energy Converters (WECs) may have nonlinearities arising from several sources, such as a nonuniform buoy shape and/or nonlinear power take-off units. This paper presents the Hamiltonian Surface-Shaping (HSS) approach as a tool for the analysis and design of nonlinear control of WECs. The Hamiltonian represents the stored energy in the system and can be constructed as a function of the WEC's system states, its position and velocity. The Hamiltonian surface is defined by the energy storage, while the system trajectories are constrained to this surface and determined by the power flows of the applied non-conservative forces. The HSS approach presented in this paper can be used as a tool for the design of nonlinear control systems that are guaranteed to be stable. The optimality of the obtained solutions is not addressed in this paper. The case studies presented here cover regular and irregular waves and demonstrate that a nonlinear control system can result in a multiple-fold increase in the harvested energy.
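As a minimal illustration of the energy bookkeeping the HSS approach exploits, consider a generic single-degree-of-freedom heave model (generic symbols, not the paper's notation): the Hamiltonian is the stored energy, and its rate of change equals the power injected by the non-conservative excitation, power take-off, and damping forces,

```latex
H(z,\dot z) = \tfrac{1}{2} m \dot z^{2} + \tfrac{1}{2} k z^{2}, \qquad
\dot H = \dot z \left( F_{\mathrm{exc}} + F_{\mathrm{PTO}} - c\,\dot z \right).
```

Shaping the control force so that trajectories on the Hamiltonian surface keep this power flow bounded is what yields the stability guarantee described above.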
Understanding the viscosity and friction of a fluid under nanoconfinement is key to nanofluidics research. Existing work on nanochannel flow enhancement has focused on simple systems with only one or two fluids, such as water flow in carbon nanotubes, where large slip lengths have been found to be the main factor behind the massive flow enhancement. In this study, we use molecular dynamics simulations to study the flow of a ternary octane-carbon dioxide-water mixture confined between two muscovite or kerogen surfaces. The results indicate that, in a muscovite slit, supercritical CO2 (scCO2) and H2O both enhance the flow of octane due to (i) a decrease in the friction of octane against the muscovite wall because of the formation of thin layers of H2O and scCO2 near the surfaces, and (ii) a reduction in the viscosity of octane in nanoconfinement. Water reduces octane viscosity by weakening the interaction of octane with the muscovite surface, while scCO2 reduces octane viscosity by weakening both octane-octane and octane-surface interactions. In a kerogen slit, water does not play any significant role in changing the friction or viscosity of octane. In contrast, scCO2 reduces both the friction and the viscosity of octane, and the enhancement of octane flow is mainly caused by the reduction in viscosity. Our results highlight the importance of multicomponent interactions in nanoscale fluid transport and have direct implications for enhanced oil recovery in unconventional reservoirs.
Recently, a Cambrian explosion of novel non-volatile memory (NVM) devices known as memristive devices has inspired efforts to build hardware neural networks that learn like the brain. Early experimental prototypes built simple perceptrons from nanosynapses, and recently, fully connected multi-layer perceptron (MLP) learning systems have been realized. However, while backpropagating learning systems pair well with high-precision computer memories and achieve state-of-the-art performance, this typically comes with a massive energy budget. For future Internet of Things/peripheral use cases, system energy footprint will be a major constraint, and emerging NVM devices may fill the gap by sacrificing high bit precision for lower energy. In this paper, we contrast the well-known MLP approach with the extreme learning machine (ELM) or NoProp approach, which uses a large layer of random weights to improve the separability of high-dimensional tasks and is usually considered inferior in a software context. However, we find that when device non-linearity is taken into account, NoProp matches the hardware MLP system in accuracy. While also using a sign-based adaptation of the delta rule for energy savings, we find that NoProp can learn effectively with four to six 'bits' of device analog capacity, while MLP requires eight-bit capacity with the same rule. This may allow the requirements for memristive devices to be relaxed in the context of online learning. By comparing the energy footprint of these systems for several candidate nanosynapses and realistic peripherals, we confirm that memristive NoProp systems save energy compared with MLP systems. Lastly, we show that ELM/NoProp systems can achieve better generalization than nanosynaptic MLP systems when paired with pre-processing layers (which do not require backpropagated error). Collectively, these advantages make such systems worthy of consideration in future accelerators or embedded hardware.
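A minimal ELM/NoProp sketch is shown below: a fixed random hidden layer expands the input, and only the output weights adapt via a sign-based delta rule, mimicking a device whose conductance moves in fixed steps. The data set, layer size, and step size are illustrative stand-ins for the systems studied in the paper.

```python
# ELM/NoProp sketch: fixed random projection + sign-based delta rule on
# the output weights only (quantized, device-like updates). Illustrative.

import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X = X / 16.0
T = np.eye(10)[y]                           # one-hot targets

rng = np.random.default_rng(0)
W_in = rng.normal(size=(X.shape[1], 256))   # fixed random hidden weights
H = np.tanh(X @ W_in)                       # nonlinear hidden layer

W_out = np.zeros((256, 10))
step = 0.005                                # one "conductance step"
for epoch in range(50):
    err = T - H @ W_out
    W_out += step * np.sign(H.T @ err)      # sign-based delta rule

acc = (np.argmax(H @ W_out, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```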
Patch antennas incorporating a U-shaped slot are well known to have relatively large (about 30%) impedance bandwidths. This work uses characteristic mode analysis (CMA) to explain the impedance behavior of a classic U-slot patch geometry in terms of coupled mode theory, showing that the relevant modes are in-phase and anti-phase coupled modes whose resonant frequencies are governed by coupled mode theory. Additional analysis shows that one uncoupled resonator is the conventional TM01 patch mode and the other is a lumped LC resonator involving the slot and the probe. An equivalent circuit model for the antenna is given, wherein element values are extracted from CMA data, that explicitly demonstrates the coupling between these two resonators. The circuit model approximately reproduces the impedance locus of the driven simulation. A design methodology based on coupled mode theory and guided by CMA is presented that allows wideband U-slot patch geometries to be designed quickly and efficiently. The methodology is illustrated through an example.
Significant testing is required to design and certify primary aircraft structures subject to High Energy Dynamic Impact (HEDI) events; current work under the NASA Advanced Composites Consortium (ACC) HEDI Project seeks to determine the state-of-the-art of dynamic fracture simulations for composite structures in these events. This paper discusses one of three Progressive Damage Analysis (PDA) methods selected for the second phase of the NASA ACC project: peridynamics, through its implementation in EMU. A brief discussion of peridynamic theory is provided, including the effects of nonlinearity and strain rate dependence of the matrix followed by a blind prediction and test-analysis correlation for ballistic impact testing performed for configured skin-stringer panels.
A decentered Zernike overlay is utilized in the design of a field-biased, off-axis, wide-field-of-view reflective imager, and the optical performance with this surface type is compared to a conic-only solution.
We study an iterative low-rank approximation method for the solution of the steady-state stochastic Navier-Stokes equations with uncertain viscosity. The method is based on linearization schemes using Picard and Newton iterations and stochastic finite element discretizations of the linearized problems. For computing the low-rank approximate solution, we adapt the nonlinear iterations to an inexact and low-rank variant, where the solution of the linear system at each nonlinear step is approximated by a quantity of low rank. This is achieved by using a tensor variant of the GMRES method as a solver for the linear systems. We explore the inexact low-rank nonlinear iteration with a set of benchmark problems, using a model of flow over an obstacle, under various configurations characterizing the statistical features of the uncertain viscosity, and we demonstrate its effectiveness by extensive numerical experiments.
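The compression step at the heart of the inexact low-rank iteration can be sketched in a few lines: after each linearized solve, the matricized iterate is truncated back to rank r via an SVD. Sizes below are illustrative, and the sketch omits the tensor GMRES machinery.

```python
# Rank truncation of a matricized iterate via truncated SVD, the basic
# compression step of a low-rank iteration. Illustrative sizes only.

import numpy as np

def truncate(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]     # best rank-r approximation

rng = np.random.default_rng(0)
# Iterate viewed as (spatial dof) x (stochastic dof), nearly low rank.
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 200)) \
    + 1e-6 * rng.normal(size=(500, 200))
Xr = truncate(X, r=8)
print(np.linalg.norm(X - Xr) / np.linalg.norm(X))   # small relative error
```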
We discuss chemical, structural, and ellipsometry characterization of low temperature epitaxial Si. While low temperature growth is not ideal, we are still able to prepare crystalline Si to cap functional atomic precision devices.
The study of hypersonic flows and their underlying aerothermochemical reactions is particularly important in the design and analysis of vehicles exiting and reentering Earth's atmosphere. Computational physics codes can be employed to simulate these phenomena; however, code verification is necessary to certify their credibility. To date, few approaches have been presented for verifying codes that simulate hypersonic flows, especially flows reacting in thermochemical nonequilibrium. In this paper, we present our code-verification techniques for hypersonic reacting flows in thermochemical nonequilibrium, as well as their deployment in the Sandia Parallel Aerodynamics and Reentry Code (SPARC).
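One widely used ingredient of such verification is the method of manufactured solutions, sketched below for a 1D linear advection-reaction toy problem standing in for the nonequilibrium flow equations: a chosen smooth solution is pushed through the governing operator symbolically, and the residual becomes a source term the code must balance.

```python
# Manufactured-solution sketch: derive the source term symbolically for a
# 1D advection-reaction toy operator. Stand-in for the full flow equations.

import sympy as sp

x, t = sp.symbols("x t")
a, k = sp.symbols("a k", positive=True)      # advection speed, reaction rate

u_m = sp.sin(x - t) + 2                      # chosen manufactured solution
residual = sp.diff(u_m, t) + a * sp.diff(u_m, x) + k * u_m

source = sp.simplify(residual)               # add to the code as a source term
print(source)   # the code's solution should converge to u_m at formal order
```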
We report measurements of a ±5 mm toroidal variation of the outer strike point radial position using an array of three identical Langmuir probes distributed at 90° intervals around the torus (90°, 180°, 270°). The strike point radial location is determined from profiles of the floating potential (Vf) measured by the three 6 mm diameter domed Langmuir probes as the strike point is swept radially across a horizontal tile surface just outside the upper small angle slot (SAS1) divertor. Based on the three probe measurements, the strike point variation is consistent with previous error field measurements by Schaffer [1,2] and estimates by Luxon [3], which indicated that the strike point error could appear as an n = 1 radial variation of 4.5 mm at the outer midplane and thus could be effectively described with a three-point measurement. The results are also consistent with field line tracing calculations using the MAFOT code [4]. The small angle slot (SAS1) divertor performance is particularly sensitive to misalignment with the divertor plasma, since enhanced neutral confinement and recycling in the slot and the distribution of neutrals along the slot surfaces are important for achieving divertor detachment at the lowest possible core plasma separatrix density. These strike point measurements are discussed with regard to slot divertor alignment.
Photovoltaic (PV) power plants and their constituent components, by virtue of their application, are exposed to some of the harshest outdoor terrestrial environments. Most equipment is subject directly to the environment and myriad stresses at both the micro and macro scales. Other aspects, including local site conditions, construction variability and quality, and maintenance practices, also influence the likelihood of such hazards. Many discrete components, including PV modules, wires, connectors, wire management devices, combiner boxes, protection devices, inverters, and transformers, make up the PV generation system. While abundant data illustrate that PV modules and PV inverters are the major contributors to PV system failures, the same data also illustrate the importance of minimizing failures in often-ignored components such as PV connectors, PV wires (both above and below ground), wire splices, fuses, fuse holders, fuse holder enclosures, and wire management devices. With the exception of PV fuses, these components predominantly use polymeric materials. It is therefore crucial to understand the typical materials used in these components, the degradation processes and mechanisms leading to component failure, and their impact on system performance or failure. This paper further provides practical considerations, approaches, and methods for addressing these problems with practical design solutions to assure the performance of the PV plant over its intended design lifetime.
Structural Health Monitoring 2019: Enabling Intelligent Life-Cycle Health Management for Industry Internet of Things (IIOT) - Proceedings of the 12th International Workshop on Structural Health Monitoring
Reliable structural health monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for human intervention. There is a significant need for formal SHM technology validation and quantitative performance assessment processes to uniformly and comprehensively support the evolution and adoption of SHM systems. In recent years, the SHM community has made significant advances in its efforts to evolve statistical methods for analyzing data from in-situ sensors. Several statistical approaches have been demonstrated using real data from multiple SHM technologies to produce Probability of Detection (POD) performance measures. Furthermore, limited comparisons of these methods, utilizing different simplifying assumptions and data types, have shown them to produce similar POD values. Given these encouraging results, it is important to understand the circumstances under which the data were acquired. Thus far, the statistical analyses have assumed the viability of the data outright and focused on the performance quantification process once acceptable data have been compiled. This paper addresses the array of parameters that must be considered when conducting tests to acquire representative SHM data. For some SHM applications, it may not be possible to simulate all environments in one single test; all relevant parameters must be identified and considered by properly merging results from multiple tests. Laboratory tests, for example, may have separate fatigue and environmental response components. Flight tests, which will likely not include statistically relevant damage detection opportunities, will still play an important role in assessing overall SHM system performance under an aircraft operator's control. One statistical method, the One-Sided Tolerance Interval (OSTI) approach, is discussed along with the test methods used to acquire the data. Finally, prospects for streamlining the deployment of SHM solutions are considered by comparing SHM data needs during what is now an introductory phase of SHM usage with future data needs after a substantial database of SHM data and usage history has been compiled.
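For reference, the normal-theory OSTI k-factor comes from a noncentral t quantile; the sketch below computes a 90/95 one-sided upper tolerance bound for illustrative data, not an actual SHM data set.

```python
# One-sided tolerance interval (OSTI) sketch for normal data: the k-factor
# is a scaled noncentral-t quantile. Data values are illustrative only.

import numpy as np
from scipy.stats import nct, norm

def osti_k(n, p=0.90, gamma=0.95):
    # k = t'_{gamma, n-1}(delta) / sqrt(n), with delta = z_p * sqrt(n)
    delta = norm.ppf(p) * np.sqrt(n)
    return nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

rng = np.random.default_rng(0)
x = rng.normal(30.0, 2.0, size=20)    # e.g., measured detection thresholds
upper = x.mean() + osti_k(len(x)) * x.std(ddof=1)
print(f"90/95 one-sided upper tolerance bound: {upper:.1f}")
```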
Femtosecond Laser Electronic Excitation Tagging (FLEET) is used to measure velocity flowfields in the wake of a sharp 7° half-angle cone in nitrogen at Mach 8, over freestream Reynolds numbers from 4.3×10^6/m to 13.8×10^6/m. Flow tagging reveals expected wake features such as the separation shear layer and two-dimensional velocity components. Frequency-tripled FLEET has a longer lifetime and is tenfold more energy efficient than 800 nm FLEET. Additionally, FLEET lines written at 267 nm are three times longer and 25% thinner than those written at 800 nm at a 1 µs delay. Two gated detection systems are compared. While the PIMAX 3 ICCD offers variable gating and fewer imaging artifacts than a LaVision IRO coupled to a Photron SA-Z, its slow readout speed renders it ineffective for capturing hypersonic velocity fluctuations. FLEET can be detected up to 25 µs after excitation within 10 mm downstream of the model base, but delays greater than 4 µs exhibit deteriorated signal-to-noise ratios and line-fit uncertainties greater than 10%. In a hypersonic nitrogen flow, exposures of just several hundred nanoseconds are long enough to produce saturated signals and/or increase the line thickness, adding to measurement uncertainty. Velocities calculated between the first two delays offer the lowest uncertainty (less than 3% of the mean velocity).