In the past decade, multi-axis vibration testing has progressed from its early research stages toward becoming a viable technology for simulating more realistic environmental conditions. The benefits of multi-axis vibration simulation over traditional uniaxial testing methods have been demonstrated by numerous authors. However, many challenges remain in making the best use of this technology. In particular, methods to obtain accurate and reliable multi-axis vibration specifications from field test data are of great interest. Traditional single-axis derivation approaches may be inadequate for multi-axis vibration because they may not constrain profiles to proper cross-axis relationships; they may introduce behavior that is neither controllable nor representative of the field environment. A variety of numerical procedures have been developed and studied by previous authors. The intent of this research is to benchmark the performance of these methods in a well-controlled laboratory setting to provide guidance for their usage in a general context. Through a combination of experimental and analytical work, the primary questions investigated are as follows: (1) In the absence of part-to-part variability and changes to the boundary condition, which specification derivation method performs best? (2) Is it possible to optimize the sensor selection from field data to maximize the quality and accuracy of derived multi-axis vibration specifications? (3) Does the presence of response energy in field data that did not originate from rigid body motion degrade the accuracy of multi-axis vibration specifications obtained via these derivation methods?
Random vibration tests have been conducted for more than five decades using vibration machines that excite a test item in uniaxial motion. With the advent of multi-shaker test systems, excitation in multiple axes and/or at multiple locations is feasible. For random vibration testing, both the auto spectra of the individual controls and the cross spectra, which define the relationships between the controls, define the test environment. This is in striking contrast to uniaxial testing, where only the control auto spectrum is defined. In a vibration test the energy flow proceeds from drive excitation voltages to control acceleration auto and cross spectral densities and, finally, to response auto and cross spectral densities. This paper examines these relationships, which are encoded in the frequency response function. Following the presentation of a complete system diagram, the relationships between the excitation and control spectral density matrices are clarified. It is generally assumed that the control auto spectra are known from field measurements, but the control cross spectra may be unknown or uncertain. Given these constraints, control algorithms often prioritize replication of the field auto spectra; the system dynamics then determine the cross spectra. The Nearly Independent Drive Algorithm, described herein, is one such approach. A further issue in Multi-Input Multi-Response testing is the link between cross spectra at one set of locations and auto spectra at a second set of locations. The effect of excitation cross spectra on control auto spectra is one important case, encountered in every test. The effect of control cross spectra on response auto spectra is important because we may wish to adjust control cross spectra to achieve some desired response auto spectra.
The relationships between cross spectra at one set of locations and auto spectra at another set of locations are examined with the goal of elucidating the advantages and limitations of using control cross spectra to define response auto spectra.
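The central input-output relation in this discussion can be sketched numerically. The snippet below is a minimal illustration, using made-up FRF and drive-spectrum values rather than data from any test, of how the control spectral density matrix follows from the drive spectral density matrix through the frequency response function at a single frequency line.

```python
import numpy as np

# Illustrative 2x2 values at one frequency line (not measured data):
# Gyy = H @ Gxx @ H^H relates the drive (input) spectral density matrix
# Gxx to the control (output) spectral density matrix Gyy via the FRF H.

H = np.array([[1.0 + 0.2j, 0.1 - 0.05j],
              [0.3 + 0.1j, 0.8 - 0.3j]])      # FRF matrix at one frequency

Gxx = np.array([[2.0, 0.5 + 0.1j],
                [0.5 - 0.1j, 1.5]])            # drive spectral density matrix (Hermitian)

Gyy = H @ Gxx @ H.conj().T                     # control spectral density matrix

# Gyy is Hermitian: its diagonal holds the control auto spectra (real),
# while the off-diagonal cross spectra are set by the system dynamics.
assert np.allclose(Gyy, Gyy.conj().T)
print(np.real(np.diag(Gyy)))                   # control auto spectra
```

The same relation applied between control and response locations is what makes adjusting control cross spectra a lever on response auto spectra.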
Spatially resolved, line-of-sight measurements of aluminum monoxide emission spectra in laser ablation plasma are used with Abel inversion techniques to extract radial plasma temperatures. Contour mapping of the radially deconvolved signal intensity shows a ring of AlO formation near the plasma boundary with the ambient atmosphere. Simulations of the molecular spectra were coupled with the line profile fitting routines. Temperature results are presented with simultaneous inferences from lateral, asymmetric radial, and symmetric radial AlO spectral intensity profiles. This analysis indicates that shockwave phenomena, including a temperature drop behind the blast wave created during plasma initiation, were captured in the radial profiles.
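Abel inversion of line-of-sight intensity profiles can be illustrated with a simple onion-peeling scheme. This is a generic sketch of the technique under an assumed shell discretization, not the deconvolution routine used in the study; the uniform-emitter check at the end is purely illustrative.

```python
import numpy as np

def abel_invert_onion(projection, dr=1.0):
    """Onion-peeling Abel inversion: recover a radially symmetric
    emission profile f(r) from its line-of-sight projection F(y) by
    dividing the plasma into concentric shells of thickness dr and
    solving the resulting upper-triangular path-length system."""
    n = len(projection)
    A = np.zeros((n, n))
    for i in range(n):            # lateral chord at y_i = i * dr
        for j in range(i, n):     # shell between r = j*dr and (j+1)*dr
            A[i, j] = 2.0 * (np.sqrt(((j + 1) * dr) ** 2 - (i * dr) ** 2)
                             - np.sqrt((j * dr) ** 2 - (i * dr) ** 2))
    return np.linalg.solve(A, np.asarray(projection, dtype=float))

# Sanity check: a uniform emitter of radius R projects to 2*c*sqrt(R^2 - y^2),
# so the inversion should recover the constant c everywhere.
n, c = 50, 3.0
y = np.arange(n)
proj = 2.0 * c * np.sqrt(n ** 2 - y ** 2)
recovered = abel_invert_onion(proj)
print(np.allclose(recovered, c))  # True
```

Real spectral data would be noisier, and regularized variants are typically preferred; this sketch only shows the geometric heart of the deconvolution.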
Trajectories of unique particles were tracked using spatially and temporally interlaced single-shot images from multiple views. Synthetic data were investigated to verify the ability of the technique to track particles in three dimensions and time. The synthetic data were composed of four images from unique perspectives at four instances in time. The analysis presented verifies that, under certain circumstances, particle trajectories can be mapped in three dimensions from a minimal amount of information, i.e., one image per viewing angle. These results can enable four-dimensional measurements where they may otherwise prove infeasible.
A collaborative testing and analysis effort investigating the effects of threaded fastener size on load-displacement behavior and failure was conducted to inform the modeling of threaded connections. A series of quasistatic tension tests were performed on #00, #02, #04, #06, and #4 (1/4”) A286 stainless steel fasteners (NAS1351N00-4, NAS1352N02-6, NAS1352N04-8, NAS1352N06-10, and NAS1352N4-24, respectively) to provide calibration and validation data for the analysis portion of the study. The data obtained from the testing series reveal that the size of the fastener may influence the characteristic stress-strain response, as the failure strains and ultimate loads varied between the smaller (#00 and #02) and larger (#04, #06, and #4) fasteners. These results motivated the construction of high-fidelity finite element models to investigate the underlying mechanics of these responses. Two threaded fastener models, one with axisymmetric threads and the other with full 3D helical threads, were calibrated to subsets of the data to compare modeling approaches, analyze fastener material properties, and assess how well the calibrated properties extend to fasteners of varying sizes, identifying trends that can inform future modeling best practices. The modeling results are complemented by a microstructural analysis to further investigate the root cause of the size effects observed in the experimentally obtained load-displacement curves. These analyses are intended to inform and guide reduced-order modeling approaches that can be incorporated in system-level analyses of abnormal environments, where modeling fidelity is limited and each component is not always testable, but models must still capture fastener behavior up to and including failure.
This complementary testing and analysis study identifies differences in the characteristic stress-strain response of varying-sized fasteners, provides microstructural evidence to support these variations, evaluates our ability to extrapolate calibrated properties to different-sized fasteners, and ultimately further educates the analysis community on the robustness of fastener modeling.
Mazumdar, Yi C.; Heyborne, Jeffery D.; Guildenbecher, Daniel R.; Smyser, Michael E.; Slipchenko, Mikhail N.
Digital in-line holography techniques for coherent imaging are important for object sizing and tracking applications in multiphase flows and combustion systems. In explosive, supersonic, or hypersonic environments, however, gas-phase shocks impart phase distortions that obscure objects. In this work, we implement phase-conjugate digital in-line holography (PCDIH) with both a picosecond laser and a nanosecond pulse-burst laser for reducing the phase distortions caused by shock-waves. The technique operates by first passing a forward beam of coherent light through the shock-wave phase-distortion. The light then enters a phase-conjugate mirror, created via a degenerate four-wave-mixing process, to produce a return beam with a phase delay opposite to that of the forward beam. By passing the return beam back through the phase-distortion, phase delays are canceled, producing phase-distortion-free images. This technique enables the measurement of the three-dimensional position and velocity of objects through shock-wave distortions at rates up to 500 kHz. This method is demonstrated in a variety of experiments including imaging supersonic shock-waves, sizing objects through laser-spark plasma-generated shock-waves, and tracking explosively-generated hypersonic fragments.
The simulation of various structural systems often requires accounting for the fasteners holding the distinct parts together. When fasteners are not expected to yield, simple reduced representations like linear springs can be used. However, in analyses of abnormal environments where fastener failure must be accounted for, fasteners are often represented with more detail. One common approach is to mesh the head and the shank of the fastener as smooth cylinders, neglecting the threads (referred to as a plug model). The plug can elicit a nonlinear mechanical response by using an elastic-plastic material model, which can be calibrated to experimental load-displacement curves, typically in pure tension. Fasteners rarely fail exclusively in pure tension, so the study presented here considers current plug modeling practice at multiaxial loadings. Comparisons of this plug model are made to experimental data as well as a higher fidelity model that includes the threads of the fastener. For both models, a multilinear elastic-plastic constitutive model is used, and two different failure models are explored to capture the ultimate failure of the fastener. The load-displacement curves of all three sets of data (the plug model, threaded model, and the experiments) are compared. The comparisons between simulations and experiments contribute to understanding the role of multiaxial loading on fastener response, and motivate future work on improving fastener models that can accurately capture multiaxial failure.
Wood, Michael G.; Reines, Isak C.; Luk, Ting S.; Serkland, Darwin K.; Campione, Salvatore
We numerically analyze the role of carrier mobility in transparent conducting oxides in epsilon-near-zero phase modulators. High-mobility materials such as cadmium oxide enable compact photonic phase modulators with a modulation figure of merit >29 °/dB.
In computational structural dynamics problems, the ability to calibrate numerical models to physical test data often depends on determining the correct constraints within a structure with mechanical interfaces. These interfaces are defined as the locations within a built-up assembly where two or more disjoint structures are connected. In reality, the normal and tangential forces arising from contact and friction, respectively, are the only means of transferring loads between structures. In linear structural dynamics, a typical modeling approach is to linearize the interface using springs and dampers to connect the disjoint structures, then tune the coefficients to obtain sufficient agreement between numerically predicted and experimentally measured results. This work explores the use of a numerical inverse method to predict the area of the contact patch located within a bolted interface by defining multi-point constraints. The presented model updating procedure assigns contact definitions (fully stuck, slipping, or no contact) in a finite element model of a jointed structure as a function of contact pressure computed from a nonlinear static analysis. The contact definitions are adjusted until the computed modes agree with experimental test data. The methodology is demonstrated on a C-shape beam system with two bolted interfaces, and the calibrated model predicts modal frequencies with <3% total error summed across the first six elastic modes.
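The pressure-based assignment of contact definitions can be sketched as a simple threshold classifier. The function and threshold values below are hypothetical illustrations, not the calibrated logic of the actual model updating procedure.

```python
def classify_contact(pressures, stick_threshold, slip_threshold):
    """Assign a contact definition to each interface node based on its
    contact pressure from a nonlinear static analysis. Thresholds are
    illustrative placeholders, not values from the study."""
    labels = []
    for p in pressures:
        if p >= stick_threshold:
            labels.append("stuck")      # tied multi-point constraint
        elif p >= slip_threshold:
            labels.append("slipping")   # normal constraint, tangential freedom
        else:
            labels.append("open")       # no constraint
    return labels

# hypothetical pressures (MPa) at three interface nodes
print(classify_contact([10.0, 3.0, 0.1], stick_threshold=5.0, slip_threshold=1.0))
```

In the study's iterative procedure, thresholds of this kind would be adjusted until the computed modes match the experimentally measured ones.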
Reza, Shahed; Klein, Brianna A.; Baca, Albert G.; Armstrong, Andrew M.; Allerman, Andrew A.; Douglas, Erica A.; Kaplar, Robert J.
The emerging Al-rich AlGaN-channel AlxGa1-xN/AlyGa1-yN high-electron-mobility transistors (HEMTs) with 0.7 ≤ y < x ≤ 1.0 have the potential to greatly exceed the power handling capabilities of today's GaN HEMTs, possibly by five times. This projection is based on the expected 4× enhancement of the critical electric field, the 2× enhancement of sheet carrier density, and the parity of the electron saturation velocity for Al-rich AlGaN-channel HEMTs relative to GaN-channel HEMTs. In this paper, the expected increase in RF power density of Al-rich AlGaN-channel HEMTs is calculated by theoretical analysis and computer simulations based on existing data on long-channel AlGaN devices. It is shown that a saturated power density of 18 W/mm, a power-added efficiency of 55%, and an output third-order intercept point over 40 dBm can be achieved for this technology. The method for large-signal RF performance estimation presented in this paper is generic and can be applied to other novel high-power device technologies at the early stages of development.
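A saturated power density of the quoted order can be reached with the classic Class-A load-line estimate. The sketch below uses that textbook relation with illustrative device numbers (not parameters extracted in the paper) chosen to land near 18 W/mm.

```python
def class_a_saturated_power(i_max_a_per_mm, v_br, v_knee):
    """Classic Class-A load-line estimate of saturated output power
    density: P_sat = I_max * (V_br - V_knee) / 8, per mm of gate width.
    All inputs here are illustrative assumptions, not device data."""
    return i_max_a_per_mm * (v_br - v_knee) / 8.0

# e.g. a 1.2 A/mm current swing over a 120 V usable drain-voltage swing
print(class_a_saturated_power(1.2, v_br=130.0, v_knee=10.0))  # ~18 W/mm
```

The value of such a back-of-envelope estimate is in scaling: a 4× critical-field enhancement enters through the voltage swing, so the projected power density grows roughly linearly with breakdown voltage at fixed current density.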
This paper details a joint numerical and experimental investigation of transition-delaying roughness. A numerical simulation was undertaken to design a surface roughness configuration that would suppress Mack's second-mode instability in order to maintain laminar flow over a Mach 8 hypersonic blunt cone. Following the design process, the roughness configuration was implemented on a hypersonic cone test article. Multiple experimental runs were conducted at the Mach 8 condition at different Reynolds numbers, as well as at an off-design Mach 5 condition. The roughness did appear to delay transition in the Mach 8 case as intended, but did not appear to delay transition in the Mach 5 case. Concurrently, simulations of the roughness configuration were computed for both Mach cases using the experimental conditions. Linear stability theory was applied to the simulations in order to determine their boundary layer stability characteristics. This investigation of multiple cases helps to validate the numerical code against experimental results as well as provide physical evidence for the transition-delaying roughness phenomenon.
A numerical study of the response of a conical structure to periodic turbulent spot loading at Mach 6 is conducted and compared with experimental results. First, a deterministic model is derived that describes the generation of turbulent spots at a defined forcing frequency as well as the evolution of the spots. The model is then used to apply turbulent spot loading to a calibrated finite element model of a slender cone structure. The numerical solution yields acceleration response data for the cone structure, which are compared to experimental measurements. Similar damping times and acceleration amplitudes are observed for isolated spots. At higher frequencies of turbulent spot generation, the panel response corresponds to the structural natural mode shape being forced; however, only qualitative agreement is observed. Finally, the convection velocity is varied for two cases. It is shown that marginal deviations in the convection velocity of turbulent spots yield little change in the resulting response of a structure. This result illustrates that the time between spot events is the dominant factor determining which structural modes are excited.
Al-rich AlGaN-channel high electron mobility transistors with 80-nm-long gates and 85% (70%) Al in the barrier (channel) were evaluated for RF performance. The dc characteristics include a maximum current of 160 mA/mm with a transconductance of 24 mS/mm, limited by the source and drain contacts, and an on/off current ratio of 10⁹. An fT of 28.4 GHz and an fMAX of 18.5 GHz were determined from small-signal S-parameter measurements. An output power density of 0.38 W/mm was realized at 3 GHz in a power sweep using on-wafer load-pull techniques.
A controlled between-groups experiment was conducted to demonstrate the value of human factors for process design. Twenty-four Sandia National Laboratories employees completed a simple visual inspection task simulating receipt inspection. The experimental group process was designed to conform to human factors and visual inspection principles, whereas the control group process was designed without consideration of such principles. Results indicated the experimental group exhibited superior performance accuracy, lower workload, and more favorable usability ratings as compared to the control group. The study provides evidence to help human factors experts revitalize the critical message regarding the benefits of human factors involvement for a new generation of systems engineers.
The image classification accuracy of a TaOx ReRAM-based neuromorphic computing accelerator is evaluated after intentionally inducing displacement damage, up to a fluence of 10¹⁴ 2.5-MeV Si ions/cm², on the analog devices that are used to store weights. Results are consistent with a radiation-induced oxygen vacancy production mechanism. When the device is in the high-resistance state during heavy ion irradiation, the device resistance, linearity, and accuracy after training are only affected at high fluence levels. The findings in this paper are in accordance with the results of previous studies on TaOx-based digital resistive random access memory. When the device is in the low-resistance state during irradiation, no resistance change was detected, but devices with a 4-kΩ inline resistor did show a reduction in accuracy after training at 10¹⁴ 2.5-MeV Si ions/cm². This indicates that changes in resistance can only be somewhat correlated with changes to the devices' analog properties. This paper demonstrates that TaOx devices are radiation tolerant not only for high-radiation-environment digital memory applications but also when operated in an analog mode suitable for neuromorphic computation and training on new data sets.
With the growth of light field imaging as an emerging diagnostic tool for the measurement of 3D particle fields, various algorithms for 3D particle measurements have been developed. These methods have exploited both the computational refocusing and perspective-shift capabilities of plenoptic imaging. This work continues the development of a 3D particle location method based on perspective-shifted plenoptic images. Specific focus is placed on adaptations that provide increased robustness to variations in, and measurement of, size and shape characteristics, thus allowing measurements of fragment fields. An experimental data set of non-spherical fragment simulants is studied to examine how the uncertainty of this perspective-shift based processing method depends on particle shape, along with the uncertainty of fragment size measurements. Synthetic data sets are examined to quantify the relationships among the measurement uncertainty achievable with this method, particle density, and processing time requirements.
Privat, A.; Barnaby, H.J.; Adell, P.C.; Tolleson, B.S.; Wang, Y.; Davis, P.; Buchheit, Thomas E.
A multiscale modeling platform that supports the 'virtual' qualification of commercial-off-the-shelf parts is presented. The multiscale approach is divided into two modules. The first module generates information on bipolar junction transistor gain degradation as a function of fabrication process, operational, and environmental inputs. The second uses this information as input for radiation-enabled circuit simulations. The prototype platform described in this paper estimates the total ionizing dose and dose rate responses of linear bipolar integrated circuits for different families of components. The simulation and experimental results show good correlation and suggest that this platform can be a complementary tool within the radiation-hardness assurance flow. The platform may reduce some of the costly reliance on testing for all systems.
Poisson’s ratio of soft, hyperelastic foam materials such as silicone foam is typically assumed to be both a constant and a small number near zero. However, when the silicone foam is subjected to large deformation into densification, the Poisson’s ratio may significantly change, which warrants careful and appropriate consideration in modeling and simulation of impact/shock mitigation scenarios. The evolution of the Poisson’s ratio of foam materials has not yet been characterized. In this study, radial and axial measurements of specimen strain are made simultaneously during quasi-static and dynamic compression tests on a silicone foam. The Poisson’s ratio was found to exhibit a transition from compressible to nearly incompressible based on strain level and reached different values at quasi-static and dynamic rates.
Poisson's ratio is a material constant representing the compressibility of a material's volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may change significantly, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain differed between quasi-static and dynamic rates. The Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.
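The quantity tracked in these tests is simply the negative ratio of radial to axial strain. A minimal sketch with illustrative strain values (not measured data), showing the compressible-to-incompressible trend described above:

```python
import numpy as np

def poissons_ratio(radial_strain, axial_strain):
    """Instantaneous Poisson's ratio from simultaneous radial and axial
    strain measurements: nu = -eps_radial / eps_axial. In compression,
    axial strain is negative and radial strain is positive."""
    return -np.asarray(radial_strain) / np.asarray(axial_strain)

# Hypothetical readings: near zero at small strain, approaching 0.5
# (incompressibility) as the foam densifies.
print(poissons_ratio([0.002, 0.09], [-0.05, -0.20]))  # ≈ [0.04 0.45]
```

In practice the radial strain would come from optical or DIC measurements synchronized with the axial record, at both quasi-static and Kolsky-bar rates.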
Understanding the dynamic behavior of geomaterials is critical for refining modeling and simulation of applications that involve impacts or explosions. Obtaining material properties of geomaterials is challenging, particularly in tension, due to the brittle and low-strength nature of such materials. The dynamic split tension technique (also called the dynamic Brazilian test) has been employed in recent decades to determine the dynamic tensile strength of geomaterials, primarily because the split tension method is relatively straightforward to implement in a Kolsky compression bar. Typically, investigators use the peak load reached by the specimen to calculate the tensile strength of the specimen material, which is valid when the specimen is compressed at a quasi-static strain rate. However, the same assumption cannot be safely made at dynamic strain rates due to wave propagation effects. In this study, the dynamic split tension (or Brazilian) test technique is revisited. High-speed cameras and digital image correlation (DIC) were used to image the failure of the Brazilian-disk specimen and determine when the first crack occurred relative to the measured peak load. Differences in first-crack location and timing on either side of the specimen were compared. The strain rate at first-crack initiation was also compared to the traditional estimate of strain rate from the specimen stress history.
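For reference, the conventional data reduction this study revisits computes the splitting tensile strength from the peak load as sigma_t = 2P/(pi D t). A minimal sketch with a hypothetical specimen:

```python
import math

def brazilian_tensile_strength(peak_load_n, diameter_m, thickness_m):
    """Conventional Brazilian-disk tensile strength from peak load:
    sigma_t = 2 P / (pi * D * t). Valid only when the specimen is in
    stress equilibrium; at dynamic rates the first crack may not
    coincide with the measured peak load."""
    return 2.0 * peak_load_n / (math.pi * diameter_m * thickness_m)

# Hypothetical specimen: 10 kN peak load, 50 mm diameter, 25 mm thickness
print(brazilian_tensile_strength(10e3, 0.050, 0.025) / 1e6)  # ≈ 5.09 MPa
```

The study's point is precisely that, at dynamic rates, the DIC-observed first crack rather than the peak load should define failure, so this formula's inputs must be chosen with care.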
To better understand the factors contributing to electromagnetic (EM) observables in developed field sites, we examine in detail, through finite element analysis, the specific effects of casing completion design. The presence of steel casing has long been exploited for improved subsurface interrogation, and there is growing interest in remote methods for assessing casing integrity across a range of geophysical scenarios related to resource development and sequestration/storage activities. Accurate modeling of the casing response to EM stimulation is recognized as both relevant and a difficult computational challenge because of the casing's high conductivity contrast with geomaterials and its relatively small volume fraction over the field scale. We find that casing completion design can have a significant effect on the observed EM fields, especially at zero frequency. This effect appears to originate in the capacitive coupling between the inner production casing and the outer surface casing. Furthermore, we show that an equivalent “effective conductivity” for the combined surface/production casing system is inadequate for replicating this effect, regardless of whether the casings are grounded to one another. Lastly, we show that in situations where this coupling can be ignored and knowledge of casing currents is not required, simplifying the casing as a perfectly conducting line can be an effective strategy for reducing the computational burden in modeling the field-scale response.
We seek to develop a fundamental understanding of dynamic strain aging through discovery experiments to inform the development of a dislocation-based micromechanical constitutive model that can tie into existing continuum-level plasticity and failure analysis tools. Dynamic strain aging (DSA) occurs when dislocation motion is hindered by the repetitive interaction of solute atoms, most frequently interstitials, with dislocation cores. At temperatures where the interstitials are sufficiently mobile, the solute atmospheres can repeatedly reform, lock, and release dislocations, producing a characteristic serrated flow curve. This phenomenon can produce reversals in the expected mechanical behavior of materials with varying strain rate or temperature; loss of ductility can also occur. Experiments were conducted on various forms of 304L stainless steel over a range of temperatures and strain rates, along with temporally extreme measurements to capture information from the data signals during serrated flow. The experimental approach and observations for some of the test conditions are described herein.
Polymeric foams have been extensively used in shock isolation applications because of their superior shock and impact energy absorption capability. In order to meet shock isolation requirements, polymeric foams need to be experimentally characterized and numerically modeled in terms of material response under shock/impact loading and then evaluated with experimental, analytical, and/or numerical efforts. Measurement of the dynamic compressive stress-strain response of polymeric foams is thus fundamental to shock isolation performance. However, radial inertia is a severe issue when characterizing soft materials, and the radial inertia effect is even more complicated and difficult to address in soft polymeric foams. In this study, we developed an analytical method to calculate the additional stress induced by radial inertia in a polymeric foam specimen. The effect on radial inertia of the changing profile of Poisson’s ratio during deformation was investigated. The analytical results were also compared with experimental results obtained from Kolsky compression bar tests on a silicone foam.
Delaminations are of great concern to any fiber reinforced polymer composite (FRPC) structure. In order to develop the most efficient structure, designers may incorporate hybrid composites to either mitigate the weaknesses of one material or take advantage of the strengths of another. When these hybrid structures are used at service temperatures away from the cure temperature, residual stresses can develop at the dissimilar interfaces. These residual stresses affect the initial stress state at the crack tip of any flaw in the structure and govern whether microcracks, or other defects, grow into large-scale delaminations. Recent experiments have shown that for certain hybrid layups used to determine the strain energy release rate, G, there may be significant temperature dependence in the apparent toughness. While Nairn and Yokozeki attribute this effect solely to the release of stored strain energy in the specimen as the crack grows, others point to a change in the inherent mode mixity of the test, as in the classic solution for an interface crack between two elastic layers given by Suo and Hutchinson. When a crack forms at the interface of two dissimilar materials, the crack-tip stress field produces mixed-mode failure even when the external loading, as in a double cantilever beam (DCB), is pure mode I. A change in apparent toughness with temperature could therefore result from an increase in mode mixity. This study investigates whether the residual stress formed at the bimaterial interface produces a noticeable shift in the strain energy release rate versus mode-mixity curve.
The concept of progressive failure modeling is an ongoing concern within the composite community. A common approach is to employ a building-block strategy in which constitutive material properties lead to lamina-level predictions, which then lead to laminate predictions, and finally up to structural predictions. Such an approach has advantages: developments can be made within each step, and the whole workflow can then be updated. However, advancements made at higher length scales can be hampered by insufficient modeling at lower length scales, which can make industry-wide evaluations of methodologies more complicated. For instance, significant advances have been made in recent years in strain rate independent failure theories at the lamina level. However, since the Northwestern Theory is stress dependent, adequate use in a progressive damage model requires a similarly robust constitutive model to calculate these lamina-level stresses. An improper constitutive model could easily cause a valid failure model to produce incorrect results. Furthermore, any global strain rate applied to a multi-directional laminate will produce a spectrum of local lamina-level strain rates, so it is important for the constitutive law to account for strain rate dependent deformation.
As the complexity of composite laminates rises, with the use of hybrid structures, multi-directional laminates, and large operating temperature ranges, process-induced residual stresses become a significant factor in design. In order to properly model the initial stress state of a structure, the stress-free temperature (the temperature at which the initial crosslinks are formed) as well as the contribution of cure shrinkage must be measured. Many in industry have moved toward complex cure kinetics models with the assistance of commercial software packages such as COMPRO. In this study, however, a simplified residual stress model using the coefficient of thermal expansion (CTE) mismatch and the change in temperature from the stress-free temperature is used. The limits of this simplified model can only be adequately tested with an accurate measure of the stress-free temperature. Various methods were used in this study to determine the stress-free temperature, and their results are compared to cross-validate one another. Two approaches were taken, both involving either cobonded carbon fiber reinforced polymer (CFRP) or glass fiber reinforced polymer (GFRP) bonded to aluminum. The first method used a composite-aluminum plate that was allowed to warp due to the residual stress. The other involved producing a geometrically stable hybrid composite-aluminum cylinder that was then cut open to allow it to spring in. Both methods placed the specimens within an environmental chamber and tracked the residual stress induced deformation as the temperature was ramped beyond the stress-free temperature. Both methods revealed a similar stress-free temperature that can be used in future cure modeling simulations.
A new apparatus, the “Dropkinson Bar,” has been successfully developed for material property characterization at intermediate strain rates. The Dropkinson bar combines a drop table and a Hopkinson bar: the drop table generates a relatively long and stable low-speed impact on the specimen, whereas the Hopkinson bar principle is applied to measure the load history while accounting for inertia effects in the system. A pulse-shaping technique was also applied to facilitate uniform stress and strain, as well as a constant strain rate, in the specimen. The Dropkinson bar was then used to characterize 304L stainless steel and 6061-T6 aluminum at a strain rate of ∼600 s−1. The experimental data obtained from the Dropkinson bar tests were compared with data from conventional Kolsky tensile bar tests of the same materials at similar strain rates. The two sets of results were consistent, showing that the newly developed Dropkinson bar apparatus is reliable and repeatable.
Poisson’s ratio of soft, hyperelastic foam materials such as silicone foam is typically assumed to be both constant and near zero. However, when silicone foam is subjected to large deformation into densification, the Poisson’s ratio may change significantly, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios. The evolution of the Poisson’s ratio of foam materials has not previously been characterized. In this study, radial and axial measurements of specimen strain were made simultaneously during quasi-static and dynamic compression tests on a silicone foam. The Poisson’s ratio was found to transition from compressible to nearly incompressible behavior with increasing strain, reaching different values at quasi-static and dynamic rates.
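The quantity tracked in such a test can be sketched as a true-strain ratio computed from the simultaneous radial and axial measurements; the dimensions below are illustrative, not the study's data:

```python
import math

def poisson_ratio(d, d0, L, L0):
    """Poisson's ratio from true (logarithmic) radial and axial strains,
    given current/initial specimen diameter and length."""
    eps_radial = math.log(d / d0)
    eps_axial = math.log(L / L0)
    return -eps_radial / eps_axial

# Illustrative large-compression state: length down 20%, diameter up 10%
nu = poisson_ratio(d=11.0, d0=10.0, L=8.0, L0=10.0)   # ~0.43
```

Repeating this calculation along the loading history yields the strain-dependent evolution of Poisson's ratio described above.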
Understanding the dynamic behavior of geomaterials is critical for refining modeling and simulation of applications that involve impacts or explosions. Obtaining material properties of geomaterials is challenging, particularly in tension, due to the brittle and low-strength nature of such materials. The dynamic split tension technique (also called the dynamic Brazilian test) has been employed in recent decades to determine the dynamic tensile strength of geomaterials, primarily because the split tension method is relatively straightforward to implement in a Kolsky compression bar. Typically, investigators use the peak load reached by the specimen to calculate the tensile strength of the specimen material, which is valid when the specimen is compressed at quasi-static strain rates. However, the same assumption cannot be safely made at dynamic strain rates due to wave propagation effects. In this study, the dynamic split tension (or Brazilian) test technique is revisited. High-speed cameras and digital image correlation (DIC) were used to image the failure of the Brazilian-disk specimen and determine when the first crack occurred relative to the measured peak load. Differences in first-crack location and timing on either side of the specimen were compared. The strain rate at first-crack initiation was also compared to the traditional estimate of strain rate from the specimen stress history.
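For reference, the quasi-static relation that makes the peak load usable is the standard split-tension formula, sigma_t = 2P/(pi D t); the numbers below are illustrative:

```python
import math

def brazilian_strength(P, D, t):
    """Quasi-static split-tension (Brazilian) strength from peak load P,
    disk diameter D, and thickness t: sigma_t = 2P / (pi * D * t)."""
    return 2.0 * P / (math.pi * D * t)

# Illustrative disk: 10 kN peak load, 50 mm diameter, 25 mm thickness
sigma_t = brazilian_strength(P=10.0e3, D=0.050, t=0.025)   # ~5.1 MPa
```

At dynamic rates, the concern raised above is that the specimen may crack before the measured peak load, so this formula evaluated at peak load can overestimate the true tensile strength.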
The Sandia Fracture Challenges provide the mechanics community a forum for assessing its ability to predict ductile fracture through a blind, round-robin format in which computationalists are asked to predict the deformation and failure of an arbitrary geometry given experimental calibration data. This presentation will cover the three Sandia Fracture Challenges, with emphasis on the third. The third Challenge, issued in 2017, consisted of an additively manufactured 316L stainless steel tensile bar with through holes and internal cavities that could not have been conventionally machined. The volunteer prediction teams were provided extensive materials data, ranging from tensile tests of specimens printed on the same build tray to electron backscatter diffraction microstructural maps and micro-computed tomography scans of the Challenge geometry. The teams were asked a variety of questions, including predictions of variability in the resulting fracture response, as the basis for assessment of their predictive capabilities. This presentation will describe the Challenges and compare the experimental results to the predictions, identifying gaps in capabilities, both experimental and computational, to inform future investments. The Sandia Fracture Challenge has evolved into the Structural Reliability Partnership, in which researchers will create several blind challenges covering a wider variety of topics in structural reliability. This presentation will also describe this new venture.
The Tularosa study was designed to understand how defensive deception, including both cyber and psychological deception, affects cyber attackers. Over 130 red teamers participated in a two-day network penetration task in which we controlled both the presence and the explicit mention of deceptive defensive techniques. To our knowledge, this represents the largest study of its kind conducted on a professional red team population. The study included a battery of questionnaires (e.g., experience, personality) and cognitive tasks (e.g., fluid intelligence, working memory), allowing characterization of a “typical” red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate) to be correlated with cyber events. This paper describes the design, implementation, data, and population characteristics, and begins to examine preliminary results.
This report examines the role of interfaces in electronic packaging applications, with a focus on soldering technology. Materials and processes are described with respect to their roles in the performance and reliability of the associated interfaces. The discussion also includes interface microstructures created by the coatings and finishes frequently used in packaging applications. Numerous examples are cited to illustrate the importance of interfaces in physical and mechanical metallurgy as well as in the engineering function of interconnections. Regardless of the specific application, interfaces are non-equilibrium structures, which has important ramifications for the long-term reliability of electronic packaging.
A procedure for determining the joint uncertainty of Arrhenius parameters across multiple combustion reactions of interest is demonstrated. This approach is capable of constructing the joint distribution of the Arrhenius parameters arising from the uncertain measurements performed in specific target experiments without having direct access to the underlying experimental data. The method involves constructing an ensemble of hypothetical data sets with summary statistics consistent with the available information reported by the experimentalists, followed by a fitting procedure that learns the structure of the joint parameter density across reactions using this consistent hypothetical data as evidence. The procedure is formalized in a Bayesian statistical framework, employing maximum-entropy and approximate Bayesian computation methods and utilizing efficient Markov chain Monte Carlo techniques to explore data and parameter spaces in a nested algorithm. We demonstrate the application of the method in the context of experiments designed to measure the rates of selected chain reactions in the H2-O2 system and highlight the utility of this approach for revealing the critical correlations between the parameters within a single reaction and across reactions, as well as for maximizing consistency when utilizing rate parameter information in predictive combustion modeling of systems of interest.
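The core idea, generating hypothetical data sets consistent with reported summary statistics and learning the joint parameter density from the resulting fits, can be sketched in a few lines. The rate values, temperatures, and noise level below are invented for illustration, and a simple two-parameter Arrhenius fit stands in for the full nested ABC/MCMC machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summary statistics: rate-constant "measurements" k(T) with
# ~10% relative uncertainty (all values invented for illustration).
T = np.array([900.0, 1100.0, 1300.0, 1500.0])   # K
k_mean = 1.0e12 * np.exp(-15000.0 / T)          # assumed Arrhenius central values
rel_sigma = 0.10

samples = []
for _ in range(2000):
    # Draw one hypothetical data set consistent with the summary statistics
    k_hyp = k_mean * (1.0 + rel_sigma * rng.standard_normal(T.size))
    # Fit ln k = ln A - (Ea/R)(1/T) by linear least squares
    X = np.vstack([np.ones_like(T), -1.0 / T]).T
    lnA, Ea_over_R = np.linalg.lstsq(X, np.log(k_hyp), rcond=None)[0]
    samples.append((lnA, Ea_over_R))

samples = np.array(samples)
# The ensemble of fits exposes the strong lnA-Ea correlation within a reaction
corr = np.corrcoef(samples.T)[0, 1]
```

The joint scatter of the fitted parameters across the ensemble plays the role of the joint parameter density described above; in the full method this is formalized with maximum-entropy and approximate Bayesian computation machinery across multiple reactions.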
A general formulation of silicon damage metrics and associated energy-dependent response functions relevant to the radiation effects community is provided. Using this formulation, a rigorous quantitative treatment of the energy-dependent uncertainty contributors is performed. This resulted in the generation of a covariance matrix for the displacement kerma, the Norgett-Robinson-Torrens-based damage energy, and the 1-MeV(Si)-equivalent damage function. When a careful methodology is used to apply a reference 1-MeV damage value, the systematic uncertainty in the fast fission region is seen to be removed, and the uncertainty for integral metrics in broad-based fission-based neutron fields is demonstrated to be significantly reduced.
Silicon-on-insulator latch designs and layouts that are robust to multiple-node charge collection are introduced. A general Monte Carlo radiative energy deposition (MRED) approach is used to identify potential single-event susceptibilities associated with different layouts prior to fabrication. MRED is also applied to bound single-event testing responses of standard and dual interlocked cell latch designs. Heavy ion single-event testing results validate new latch designs and demonstrate bounds for standard latch layouts.
With growing interest in exploring Jupiter's moons, technologies with +10 Mrad(Si) tolerance are now needed to survive the Jovian environment. Conductive-bridging random access memory (CBRAM) is a nonvolatile memory that has shown high tolerance to total ionizing dose (TID). However, it is not well understood how CBRAM behaves in an energetic ion environment where displacement damage (DD) effects may also be an issue. In this paper, the response of CBRAM to 100-keV Li, 1-MeV Ta, and 200-keV Si ion irradiations is examined. Ion bombardment was performed in increasing fluence steps until the CBRAM devices failed to hold their programmed state. The TID and DD dose (DDD) at the fluence of failure were calculated and compared across the tested ion species. Results indicate that failures correlate more strongly with TID than with DDD. DC cycling tests performed during the 100-keV Li irradiations provided evidence that the mobile Ag ion supply diminished with increasing fluence. The cycling results, together with prior 14-MeV neutron work, suggest that DD may play a role in the eventual failure of a CBRAM device in a combined radiation environment.
Sampling of drinking water distribution systems is performed to ensure good water quality and protect public health. Sampling also satisfies regulatory requirements and is done to respond to customer complaints or emergency situations. Water distribution system modeling techniques can be used to plan and inform sampling strategies. However, a high degree of accuracy and confidence in the hydraulic and water quality models is required to support real-time response. One source of error in these models is related to uncertainty in model input parameters. Effective characterization of these uncertainties and their effect on contaminant transport during a contamination incident is critical for providing confidence estimates in model-based design and evaluation of different sampling strategies. In this paper, the effects of uncertainty in customer demand, isolation valve status, bulk reaction rate coefficient, contaminant injection location, start time, duration, and rate on the size and location of the contaminant plume are quantified for two example water distribution systems. Results show that the most important parameter was the injection location. The size of the plume was also affected by the reaction rate coefficient, injection rate, and injection duration, whereas the exact location of the plume was additionally affected by the isolation valve status. Uncertainty quantification provides a more complete picture of how contaminants move within a water distribution system and more information when using modeling results to select sampling locations.
We present an approach to the development and evaluation of environmental stress screening (ESS) for a dormant-storage, multi-shot component. The ESS is developed to precipitate and detect latent manufacturing defects without significantly degrading the component's probability of successful function under normal operating environments. The evaluation of the ESS is achieved by using an additional strength-of-screen (SOS) operation to test for escapes from the screen. The resulting data are pass/fail data only, because the characteristics of this type of component do not allow a standard 'time to failure' analysis. The calculated SOS efficiency f is then used to estimate initial field 'reliability.' We illustrate the use of the methodology with a case study involving an electrical component manufactured within the Nuclear Security Enterprise (NSE). In development and qualification, twelve failures were detected by the ESS, and the SOS operations detected one escape. The resulting analysis showed the SOS efficiency to be approximately 92%, adequate for the component reliability goal. The resulting initial field reliability was estimated to be 99.3%, acceptable for this electrical component. Failure investigations were conducted to determine the root cause of each of these failures. Information from these investigations resulted in changes to the manufacturing process to eliminate or minimize the recurrence of these failures. The number of ESS failures has been reduced, and no additional failures have been observed at the SOS operation.
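One plausible reading of the case-study arithmetic (not necessarily the exact estimator used in the study) treats the screen efficiency as the fraction of precipitated defects that the ESS caught:

```python
# Point estimate of screen efficiency from pass/fail counts in the case study.
# The estimator form is our assumption; only the counts come from the text.
ess_failures = 12   # latent defects caught by the ESS
sos_escapes = 1     # escapes subsequently caught by the strength-of-screen test

efficiency = ess_failures / (ess_failures + sos_escapes)   # ~0.923, i.e. ~92%
```

This simple ratio reproduces the approximately 92% efficiency quoted above; the initial field reliability estimate additionally depends on the underlying defect rate.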
Apparent char kinetic rates are commonly used to predict pulverized coal char burning rates. These kinetic rates quantify the char burning rate based on the temperature of the particle and the oxygen concentration at the external particle surface, inherently neglecting variations in the internal diffusion rate and penetration of oxygen. To investigate the impact of bulk gas diffusivity on these phenomena under Zone II burning conditions, experimental measurements of char particle combustion temperature and burnout were performed for a subbituminous coal burning in an optical entrained flow reactor with helium and nitrogen diluents. The combination of much higher thermal conductivity and mass diffusivity in the helium environments resulted in cooler char combustion temperatures than in equivalent N2 environments. Measured char burnout was similar in the two environments for a given bulk oxygen concentration but was approximately 60% higher in helium for a given char combustion temperature. To augment the experimental measurements, detailed particle simulations of the experimental conditions were conducted with the SKIPPY code. These simulations also showed a 60% higher burning rate in the helium environments for a given char particle combustion temperature. To differentiate the effect of enhanced diffusion through the external boundary layer from that of enhanced diffusion through the particle, additional SKIPPY simulations were conducted under selected conditions in N2 and He environments for which the temperature and concentrations of reactants (oxygen and steam) were identical at the external char surface. Under these conditions, which yield matching apparent char burning rates, the computed char burning rate for He was 50% larger, demonstrating the potential for significant errors with the apparent kinetics approach. However, for the specific application to oxy-fuel combustion in CO2 environments, these results suggest the error may be as low as 3% when applying apparent char burning rates derived in nitrogen environments.
Over the last 13 years at Sandia National Laboratories, we have applied the belief/plausibility measure from evidence theory to estimate uncertainty for numerous safety and security issues for nuclear weapons. For such issues we have significant epistemic uncertainty and are unable to assign probability distributions. We have developed and applied custom software to implement the belief/plausibility measure of uncertainty. For safety issues we perform a quantitative evaluation, and for security issues (e.g., terrorist acts) we use linguistic variables (fuzzy sets) combined with approximate reasoning. We perform the following steps: (1) train Subject Matter Experts (SMEs) on the assignment of evidence; (2) work with the SMEs to identify the concern(s), i.e., the top-level variable(s); and (3) work with the SMEs to identify lower-level variables and their functional relationship(s) to the top-level variable(s). The SMEs then gather their State of Knowledge (SOK) and assign evidence to the lower-level variables. Using this information, we evaluate the variables using the custom software and produce an estimate for the top-level variable(s), including uncertainty. We have also extended the Kaplan-Garrick risk triplet approach to use the belief/plausibility measure of uncertainty.
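For readers unfamiliar with the belief/plausibility measure, the bounds it produces can be illustrated with a toy basic probability assignment over intervals; the evidence values below are hypothetical, and the custom software referenced above is far more general:

```python
def belief_plausibility(bpa, a_lo, a_hi):
    """Belief and plausibility of the interval A = [a_lo, a_hi] given a basic
    probability assignment: a list of ((lo, hi), mass) focal elements."""
    bel = sum(m for (lo, hi), m in bpa if a_lo <= lo and hi <= a_hi)  # B subset of A
    pl = sum(m for (lo, hi), m in bpa if hi >= a_lo and lo <= a_hi)   # B intersects A
    return bel, pl

# Hypothetical evidence elicited from SMEs on a normalized top-level variable
bpa = [((0.0, 0.2), 0.5), ((0.1, 0.4), 0.3), ((0.3, 1.0), 0.2)]
bel, pl = belief_plausibility(bpa, 0.0, 0.3)
# Belief bounds the probability from below and plausibility from above:
# Bel(A) <= P(A) <= Pl(A)
```

The gap between Bel and Pl expresses the epistemic uncertainty that a single probability distribution cannot capture.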
We present an overview of optimization under uncertainty efforts under the DARPA Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) ScramjetUQ project. We introduce the mathematical frameworks and computational tools employed for this task, in particular the optimization and multilevel uncertainty quantification algorithms available through the SNOWPAC and DAKOTA software packages. The overall workflow is first demonstrated on a simplified model design problem with non-reacting inviscid supersonic flows. Preliminary results and updates are then reported for an in-progress scramjet design optimization case using large-eddy simulations of supersonic reactive flows inside the HIFiRE Direct Connect Rig.
In the context of the DARPA-funded SEQUOIA project, we are interested in the design under uncertainty of a jet engine nozzle subject to the performance requirements of a reconnaissance mission for a small unmanned military aircraft. This design task involves complex and expensive aero-thermo-structural computational analyses, where it is of paramount importance to include the effect of the uncertain variables to obtain reliable predictions of the device’s performance. In this work we focus on the forward propagation analysis, a key part of the design under uncertainty workflow. This task cannot be tackled directly by single-fidelity approaches due to the prohibitive computational cost associated with each realization. We report a summary of our latest advancements regarding several multilevel and multifidelity strategies designed to alleviate these challenges. The overall goal of these techniques is to reduce the computational cost of analyzing a high-fidelity model by resorting to less accurate, but less computationally demanding, lower-fidelity models. The features of these multifidelity UQ approaches are first illustrated and demonstrated on several model problems and afterward applied to the aero-thermo-structural analysis of the jet engine nozzle.
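The essence of such multifidelity strategies can be shown with a two-fidelity control-variate mean estimate on toy models standing in for the expensive aero-thermo-structural analyses (everything below is an illustrative stand-in, not the project's estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: an "expensive" high-fidelity model and a cheap, correlated
# low-fidelity surrogate of the same quantity of interest.
f_hi = lambda x: np.sin(x) + 0.05 * x**2
f_lo = lambda x: np.sin(x)

N_hi, N_lo = 50, 5000                     # few expensive runs, many cheap ones
x_hi = rng.uniform(0.0, 1.0, N_hi)
x_lo = rng.uniform(0.0, 1.0, N_lo)

# Control-variate correction: use the cheap model, evaluated on both sample
# sets, to reduce the variance of the small high-fidelity sample average.
alpha = np.cov(f_hi(x_hi), f_lo(x_hi))[0, 1] / np.var(f_lo(x_hi), ddof=1)
est = f_hi(x_hi).mean() + alpha * (f_lo(x_lo).mean() - f_lo(x_hi).mean())
# exact mean of f_hi over [0, 1] is (1 - cos 1) + 0.05/3 ~ 0.4764
```

The accuracy of a large low-fidelity sample is obtained at the cost of only a few high-fidelity evaluations, which is the economy these methods exploit.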
The generation of optimal trajectories for test flights of hypersonic vehicles with highly nonlinear dynamics and complicated physical and path constraints is often time consuming and sometimes intractable for high-fidelity, software-in-the-loop vehicle models. Practical use of hypersonic vehicles requires the ability to rapidly generate a feasible and robust optimal trajectory. We propose a solution that involves interaction between an optimizer using a low fidelity 3-DOF vehicle model and feedback from vehicle simulations of varying fidelities, with the goal of rapidly converging to a solution trajectory for a hypersonic vehicle mission. Further computational efficiency is sought using aerodynamic surrogate models in place of aerodynamic coefficient look-up tables. We address the need for rapidly converging optimization by analyzing how model fidelity choice impacts the quality and speed of the resulting guidance solution.
This paper describes and demonstrates the Real Space (RS) model validation approach and the Predictor-Corrector (PC) approach to extrapolative prediction given model bias information from RS validation assessments against experimental data. The RS validation method quantifies model prediction bias of selected output scalar quantities of engineering interest (QOIs) in terms of directional bias error and any uncertainty thereof. Information in this form facilitates potential bias correction of predicted QOIs. The PC extrapolation approach maps a QOI-specific bias correction and related uncertainty into perturbation of one or more model parameters selected for most robust extrapolation of that QOI’s bias correction to prediction conditions away from the validation conditions. Such corrections are QOI dependent and not legitimate corrections or fixes to the physics model itself, so extrapolation of the bias correction to the prediction conditions is not expected to be perfect. Therefore, PC extrapolation employs both the perturbed and unperturbed models to estimate upper and lower bounds to the QOI correction that are scaled with extrapolation distance as measured by magnitude of change of the predicted QOI. An optional factor of safety on the uncertainty estimate for the predicted QOI also scales with the extrapolation. The RS-PC methodology is illustrated on a cantilever beam end-to-end uncertainty quantification (UQ) problem. Complementary “Discrete-Direct” model calibration and simple and effective sparse-data UQ methods feed into the RS and PC methods and round out a pragmatic and versatile systems approach to end-to-end UQ.
Joining technologies such as welds, adhesives, and bolts are nearly ubiquitous and often lead to concentrated stresses, making them key in analyzing failure of a structure. While high-fidelity models for fasteners have been developed, they are impractical for full system or component analyses, which may involve hundreds of fasteners under mixed loading. Other fastener failure models that use specialized boundary conditions, e.g., spot welds, replicate the load-displacement response of a fastener in a mesh-independent manner, but are limited in their ability to transmit a bending moment and require constitutive assumptions when experimental data are lacking. A reduced-order finite element model using cohesive surface elements to model fastener failure is developed. A cohesive zone represents the fracture of the fastener more explicitly, rather than simply specifying a load-displacement relationship between two surfaces as in the spot weld approach. The fastener model is assessed and calibrated against tensile and shear loading data and compared to a traditional spot weld approach. The cohesive zone model can reproduce the experimental data, demonstrating its viability as a reduced-order model of fastener behavior.
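To make the cohesive-zone idea concrete, a common (and here purely illustrative) traction-separation choice is a bilinear law: a linear ramp to a peak traction, then linear softening to complete failure. The parameters below are arbitrary, not the paper's calibration:

```python
def bilinear_traction(delta, t_max, delta0, delta_f):
    """Bilinear cohesive traction-separation law.
    Ramps linearly to peak traction t_max at opening delta0, then softens
    linearly to zero traction (full failure) at delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                          # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # damage branch
    return 0.0                                                 # fully failed

# Illustrative parameters: peak traction 100 at opening 0.01, failure at 0.05
t_peak = bilinear_traction(0.01, 100.0, 0.01, 0.05)   # = 100.0
t_soft = bilinear_traction(0.03, 100.0, 0.01, 0.05)   # halfway down softening
```

Unlike a prescribed load-displacement spring, a surface of such cohesive points carries tractions that naturally resist relative rotation, which is how the cohesive model transmits a bending moment.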
The flow rates and aerosol transmission properties were evaluated for an engineered microchannel with characteristic dimensions similar to those of stress corrosion cracks (SCCs) capable of forming in dry cask storage systems (DCSS) for spent nuclear fuel. Pressure differentials covering the upper limit of commercially available DCSS were also examined. These preliminary data sets are intended to demonstrate a new capability to characterize SCCs under well-controlled boundary conditions.
The DOE and industry collaborators have initiated the high burn-up demonstration project to evaluate the effects of drying and long-term dry storage on high burn-up fuel. Fuel was transferred to a dry storage cask, which was then dried using standard industry vacuum-drying techniques and placed on a storage pad, to be opened and the fuel examined in 10 years. Helium fill gas samples were collected 5 hours, 5 days, and 12 days after closure. The samples were analyzed for fission gases (85Kr) as an indicator of damaged or leaking rods, and then analyzed to determine water content and concentrations of other trace gases. Gamma-ray spectroscopy found no detectable 85Kr. Sample water contents proved difficult to measure, requiring heating to desorb water from the inner surface of the sampling bottles. Final results indicated that water in the cask gas phase built up over 12 days to 17,400 ppmv ±10%, equivalent to ∼100 ml of water within the cask gas phase. Trace gases were measured by direct gas mass spectrometry. Carbon dioxide built up over two weeks to 930 ppmv, likely due to breakdown of hydrocarbon contaminants (possibly vacuum pump oil) in the cask. Hydrogen built up to nearly 500 ppmv and may be attributable to water radiolysis and/or to metal corrosion in the cask.
A new free-piston-driven shock tube is being constructed at Sandia National Laboratories for generating extreme aerodynamic environments relevant to the study of reacting particle dispersal. The high-temperature shock tube (HST) is designed to reach post-incident-shock temperatures of more than 2000 K, starting from a driven section initially at ambient temperature and pressure. A design study of different driver methods is presented, leading to the selection of a free-piston driver. The tuning and performance of this driver are analyzed using the Hornung one-dimensional model and the L1d quasi-one-dimensional flow solver. The final mechanical design is shown and compared to the X2 free-piston facility. Construction was completed in mid-2018, and an initial analysis of facility performance from the first shots is presented.
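As a rough consistency check on the 2000 K target, the ideal normal-shock relations for a calorically perfect gas (an assumption that degrades at these temperatures, where vibrational excitation and dissociation matter) give the static temperature rise for an assumed incident-shock Mach number:

```python
def normal_shock_temp_ratio(M, gamma=1.4):
    """Static temperature ratio T2/T1 across a normal shock at Mach M,
    calorically perfect gas (illustrative only at these temperatures)."""
    return ((2.0 * gamma * M**2 - (gamma - 1.0))
            * ((gamma - 1.0) * M**2 + 2.0)) / ((gamma + 1.0)**2 * M**2)

T1 = 295.0    # ambient driven-gas temperature, K
M = 5.5       # assumed incident-shock Mach number, for illustration
T2 = T1 * normal_shock_temp_ratio(M)   # ~2010 K, consistent with the target
```

An incident shock near Mach 5.5 in ambient air is thus roughly what the free-piston driver must produce to reach the stated temperature regime.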
PFLOTRAN is well established in single-phase reactive transport problems, and current research is expanding its visibility and capability in two-phase subsurface problems. A critical part of the development of simulation software is quality assurance (QA). The purpose of the present work is QA testing to verify the correct implementation and accuracy of two-phase flow models in PFLOTRAN. An important early step in QA is to verify the code against exact solutions from the literature. In this work, a series of QA tests on models with known analytical solutions is conducted using PFLOTRAN. In each case the simulated saturation profile is rigorously shown to converge to the exact analytical solution. These results verify the accuracy of PFLOTRAN for use in a wide variety of two-phase modeling problems with a high degree of nonlinearity in the interaction between phase behavior and fluid flow.
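A standard way to report such convergence is the observed order of accuracy from error norms on successively refined grids. The error values below are invented for illustration (chosen consistent with second-order convergence), not PFLOTRAN output:

```python
import numpy as np

# Illustrative L2 errors of a simulated profile vs. an exact analytical
# solution on grids refined by factors of two (values invented).
h = np.array([0.100, 0.050, 0.025])          # grid spacings
err = np.array([2.0e-2, 5.1e-3, 1.3e-3])     # L2 error norms

# Fit err ~ C * h**p in log space; the slope p is the observed order.
p = np.polyfit(np.log(h), np.log(err), 1)[0]   # ~2 for a second-order scheme
```

An observed order matching the scheme's formal order is the quantitative evidence that the simulated profiles converge to the exact solution.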
This paper highlights how, against the backdrop of prevalent tensions between India and Pakistan, deterrence stability is endangered even when overall strategic stability prevails. Unresolved legacy issues, including an outstanding boundary dispute, conflict over Kashmir, and terrorism, mar India-Pakistan relations. Variables undermining the prospects for long-term peace include, among others, growing mutual mistrust, the proxy war that India believes is state-perpetrated, increasing conventional asymmetry, rapid advancement in weapon technologies, growing nuclear arsenals, doctrinal mismatch and, above all, bilateral gridlock in confidence-building and arms control measures.
This is the second of three related conference papers focused on verifying and validating a CFD model for laminar hypersonic flows. The first paper deals with the code verification and solution verification activities. In this paper, we investigate whether the model can accurately simulate laminar, hypersonic experiments of flows over double cones, conducted in CUBRC’s LENS-I and LENS-XX wind tunnels. The approach is to use uncertainty quantification and sensitivity analysis, along with a careful examination of experimental uncertainties, to perform validation assessments. The validation assessments use metrics that probabilistically incorporate both parametric (i.e., freestream input) uncertainty and experimental uncertainty. Further assessments compare these uncertainties to the iterative and convergence uncertainties described in the first paper. As other researchers have found, the LENS-XX simulations under-predict experimental heat flux measurements in the laminar, attached region of the fore-cone. This is observed for a deterministic simulation as well as for a probabilistic ensemble of simulations derived from CUBRC-provided estimates of uncertainty in the freestream conditions. This paper concludes with possible reasons that the simulations cannot bracket the experimental observations, motivating the third paper in the series, which will further examine these explanations. The results emphasize the importance of careful measurement of experimental conditions and uncertainty quantification of validation experiments. This study, along with its sister papers, also demonstrates a process of verification, uncertainty quantification, and quantitative validation activities for building and assessing credibility of computational simulations.
Discharge Permit (DP)-1845 was issued by the New Mexico Environment Department (NMED) Ground Water Quality Bureau (GWQB) for discharges via up to three injection wells in a phased Treatability Study of in-situ bioremediation of groundwater at the Sandia National Laboratories, New Mexico, Technical Area-V Groundwater Area of Concern. This report fulfills the quarterly reporting requirements set forth in DP-1845, Section IV.B, Monitoring and Reporting. This reporting period is July 1 through September 30, 2018. The report is due to NMED GWQB by February 1, 2019.
Chhantyal-Pun, Rabi; Shannon, Robin J.; Tew, David P.; Caravan, Rebecca L.; Duchi, Marta; Wong, Callum; Ingham, Aidan; Feldman, Charlotte; Mcgillen, Max R.; Khan, M.U.; Antonov, Ivan O.; Rotavera, Brandon; Ramasesha, Krupa; Osborn, David L.; Taatjes, Craig A.; Percival, Carl J.; Shallcross, Dudley E.; Orr-Ewing, Andrew J.
Ammonia and amines are emitted into the troposphere by various natural and anthropogenic sources, where they have a significant role in aerosol formation. Here, we explore the significance of their removal by reaction with Criegee intermediates, which are produced in the troposphere by ozonolysis of alkenes. Rate coefficients for the reactions of two representative Criegee intermediates, formaldehyde oxide (CH2OO) and acetone oxide ((CH3)2COO), with NH3 and CH3NH2 were measured using cavity ring-down spectroscopy. Temperature-dependent rate coefficients, k(CH2OO + NH3) = (3.1 ± 0.5) × 10^-20 T^2 exp[(1011 ± 48)/T] cm^3 s^-1 and k(CH2OO + CH3NH2) = (5 ± 2) × 10^-19 T^2 exp[(1384 ± 96)/T] cm^3 s^-1, were obtained in the 240 to 320 K range. Both reactions of CH2OO were found to be independent of pressure in the 10 to 100 Torr (N2) range, and average rate coefficients k(CH2OO + NH3) = (8.4 ± 1.2) × 10^-14 cm^3 s^-1 and k(CH2OO + CH3NH2) = (5.6 ± 0.4) × 10^-12 cm^3 s^-1 were deduced at 293 K. An upper limit of ≤2.7 × 10^-15 cm^3 s^-1 was estimated for the rate coefficient of the (CH3)2COO + NH3 reaction. Complementary measurements were performed with mass spectrometry using synchrotron radiation photoionization, giving k(CH2OO + CH3NH2) = (4.3 ± 0.5) × 10^-12 cm^3 s^-1 at 298 K and 4 Torr (He). Photoionization mass spectra indicated production of NH2CH2OOH and CH3N(H)CH2OOH functionalized organic hydroperoxide adducts from the CH2OO + NH3 and CH2OO + CH3NH2 reactions, respectively. Ab initio calculations performed at the CCSD(T)(F12*)/cc-pVQZ-F12//CCSD(T)(F12*)/cc-pVDZ-F12 level of theory predicted pre-reactive complex formation, consistent with previous studies. Master equation simulations of the experimental data using the ab initio computed structures identified submerged barrier heights of -2.1 ± 0.1 kJ mol^-1 and -22.4 ± 0.2 kJ mol^-1 for the CH2OO + NH3 and CH2OO + CH3NH2 reactions, respectively.
The reactions of NH3 and CH3NH2 with CH2OO are not expected to compete with its removal by reaction with (H2O)2 in the troposphere. Similarly, losses of NH3 and CH3NH2 by reaction with Criegee intermediates will be insignificant compared with reactions with OH radicals.
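The 293 K averages quoted in the abstract above are consistent with the fitted modified-Arrhenius form k(T) = A T² exp(B/T). A minimal sketch (function name illustrative) evaluating this form with the central fitted values for CH2OO + NH3:

```python
import math

# Modified-Arrhenius rate coefficient k(T) = A * T^2 * exp(B/T), with the
# central fitted values for CH2OO + NH3 from the abstract:
# A = 3.1e-20 cm^3 s^-1 K^-2, B = 1011 K.

def rate_coefficient(T, A=3.1e-20, B=1011.0):
    """Rate coefficient in cm^3 s^-1 at temperature T (K)."""
    return A * T**2 * math.exp(B / T)

k_293 = rate_coefficient(293.0)
# k_293 recovers the quoted pressure-independent 293 K average of
# ~8.4e-14 cm^3 s^-1 to within the stated uncertainties.
```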
This Sandia National Laboratories, New Mexico Environmental Restoration Operations (ER) Consolidated Quarterly Report (ER Quarterly Report) fulfills all quarterly reporting requirements set forth in the Compliance Order on Consent. Table I-1 lists the six sites remaining in the corrective action process. This edition of the ER Quarterly Report does not include Section II "Perchlorate Screening Quarterly Groundwater Monitoring Report" because no groundwater samples were analyzed for perchlorate during this reporting period. Additionally, Section III is not included in this edition of the ER Quarterly Report because there is no detailed Technical Area-V Groundwater information to present.
The ability to measure full-field strains is desirable for analytical model validation or characterization of test articles for which there is no model. Of further interest is the ability to determine whether a given environmental test’s boundary conditions are suitable to replicate the strain fields the test article undergoes in service. In this work, full-field strain shapes are estimated using a 3D scanning laser Doppler vibrometer and several post-processing methods. The processing methods fall into two groups: direct and transformation. Direct methods compute strain fields with only spatial filtering applied to the measurements. Transformation methods utilize SEREP shape expansion/smoothing of the measurements in conjunction with a finite element model. Both groups of methods are used with mode shapes as well as operational deflection shapes (ODSs). A comparison of each method is presented. It was found that performing a SEREP expansion of the mode shapes and post-processing to estimate strain fields was very effective, while directly measuring strains from ODSs or modes was highly subject to noise and filtering effects.
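The SEREP expansion underlying the transformation methods can be viewed as a least-squares projection of the measured shape onto the FEM mode-shape subspace. A minimal sketch under that interpretation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

# SEREP-style expansion/smoothing (illustrative sketch): solve for modal
# coordinates by least squares at the measured DOFs, then reconstruct the
# shape at all FEM DOFs.

def serep_expand(Phi_full, measured_dofs, x_meas):
    """Expand a shape measured at a subset of DOFs to all FEM DOFs.

    Phi_full      : (n, m) FEM mode-shape matrix, n DOFs, m retained modes
    measured_dofs : indices of the p measured DOFs (p >= m)
    x_meas        : (p,) measured deflection or mode shape
    """
    Phi_meas = Phi_full[measured_dofs, :]
    q = np.linalg.pinv(Phi_meas) @ x_meas  # least-squares modal coordinates
    return Phi_full @ q                    # smoothed full-field shape
```

Because the reconstruction lives in the span of the retained FEM modes, measurement noise outside that subspace is filtered out, which is consistent with the smoothing role the abstract describes.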
Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook's bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-the-art multi-physics-based computer codes. This enables us to better understand the fundamental physics and phenomena associated with the types of accidents in the handbook. In this fourth year, we improved existing computational capabilities to better model fragmentation situations to capture small fragments during an impact accident. In addition, we have provided additional new information for various sections of Chapters 4 and 5 of the Handbook on free-fall powders and impacts of solids, and have provided the damage ratio simulations for containers (7A drum and standard waste box) for various drops and impact scenarios. Thus, this work provides a low-cost method to establish physics-justified safety bounds by considering specific geometries and conditions that may not have been previously measured and/or are too costly to perform during an experiment.
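The ARF, RF, and damage ratio quantities discussed above enter source-term calculations through the standard five-factor formula of DOE-HDBK-3010, ST = MAR × DR × ARF × RF × LPF. A minimal sketch (the numerical inputs below are arbitrary placeholders, not results of this work):

```python
# Five-factor source-term formula from DOE-HDBK-3010 (sketch).

def respirable_source_term(MAR, DR, ARF, RF, LPF=1.0):
    """MAR: material at risk; DR: damage ratio; ARF: airborne release
    fraction; RF: respirable fraction; LPF: leak path factor."""
    return MAR * DR * ARF * RF * LPF

# Placeholder values for illustration only:
st = respirable_source_term(MAR=100.0, DR=0.25, ARF=1e-3, RF=0.5)
```

Refining DR via the container drop/impact simulations described in the abstract, rather than defaulting to bounding handbook values, directly reduces the conservatism in ST.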
The excitation of Mg3F2GeO4:Mn thermographic phosphors using a UV LED centered at 365 nm is explored. Two different LED drivers, one available commercially and one built at Sandia National Laboratories (SNL), were used and assessed for their viability for phosphor thermometry utilizing LED excitation and intensified, high-speed CMOS camera data collection. The SNL-driven LED was then utilized as an excitation source for Mg3F2GeO4:Mn-phosphor calibration and demonstration experiments measuring the temperature of a silicon carbide heating rod using the time-decay method. The results presented here serve as a step toward determining the application space wherein SNL-driven LED excitation would be preferable over the use of laser systems for thermographic phosphor measurements.
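The time-decay method referenced above infers temperature from the phosphor's luminescence decay constant, which is mapped to temperature through a separately measured calibration curve. A minimal sketch of the decay-time extraction step on synthetic, noiseless data (names illustrative):

```python
import numpy as np

# Time-decay phosphor thermometry (sketch): fit a single-exponential
# decay I(t) = I0 * exp(-t / tau); tau is then converted to temperature
# via a calibration curve measured beforehand.

def fit_decay_time(t, intensity):
    """Log-linear least-squares fit of a single-exponential decay; returns tau."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# Synthetic check: a 50 us decay sampled over 200 us.
t = np.linspace(0.0, 200e-6, 100)
signal = 1000.0 * np.exp(-t / 50e-6)
tau_est = fit_decay_time(t, signal)
```

Real phosphor signals include noise and possibly multi-exponential behavior, so practical implementations typically window the fit region; this sketch shows only the core idea.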
We generally desire to relate radar data values to the Radar Cross Section (RCS) of a radar target echo. This is essential to selecting proper gain values in a radar receiver, maintaining dynamic range, and properly interpreting the resulting data and data products. Ultimately, this impacts proper radar design. We offer herein a basic analysis of relevant concepts and calculations to properly calibrate a monostatic radar's echoes with respect to RCS, and to select appropriate receiver gain values.
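The calibration of echo power against RCS discussed above rests on the monostatic radar range equation, Pr = Pt G² λ² σ / ((4π)³ R⁴). A minimal sketch evaluating it (parameter values are placeholders, not from the report):

```python
import math

# Monostatic radar range equation (sketch): received echo power as a
# function of RCS sigma, assuming the same antenna for transmit and receive.

def received_power(Pt, G, wavelength, sigma, R):
    """Pt: transmit power [W]; G: antenna gain (linear); wavelength [m];
    sigma: RCS [m^2]; R: one-way range [m]. Returns received power [W]."""
    return Pt * G**2 * wavelength**2 * sigma / ((4.0 * math.pi)**3 * R**4)

# Echo power scales linearly with sigma, so calibrating against a target
# of known RCS fixes the overall receive-chain gain constant.
```

This linear dependence on sigma (and the steep R⁻⁴ range dependence) is what drives the receiver gain and dynamic-range choices the abstract mentions.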
For long-term storage, spent nuclear fuel (SNF) is placed in dry storage systems, commonly consisting of welded stainless steel canisters enclosed in ventilated overpacks. Chloride-induced stress corrosion cracking (CISCC) of these canisters may occur due to the deliquescence of sea-salt aerosols as the canisters cool. Current experimental and modeling efforts to evaluate canister CISCC assume that the deliquescent brines, once formed, persist on the metal surface without changing chemical or physical properties. Here we present data showing that magnesium chloride-rich brines, which form first as the canisters cool and sea-salts deliquesce, are not stable at elevated temperatures, degassing HCl and converting to solid carbonates and hydroxychloride phases, thus limiting conditions for corrosion. Moreover, once pitting corrosion begins on the metal surface, oxygen reduction in the cathode region surrounding the pits produces hydroxide ions, increasing the pH, which under some experimental conditions leads to precipitation of magnesium hydroxychloride hydrates. Because magnesium carbonates and hydroxychloride hydrates are less deliquescent than magnesium chloride, precipitation of these compounds causes a reduction in the brine volume on the metal surface, potentially limiting the extent of corrosion. If taken to completion, such reactions may lead to brine dry-out and cessation of corrosion.