Active brazes have been used for many years to produce bonds between metal and ceramic objects. By including a relatively small amount of a reactive additive in the braze, one seeks to improve the wetting and spreading behavior of the braze. The additive modifies the substrate, either by a chemical surface reaction or possibly by alloying. By its nature, the joining process with active brazes is a complex nonequilibrium, non-steady-state process that couples chemical reaction and reactant and product diffusion to the rheology and wetting behavior of the braze. Most of these subprocesses take place in the interfacial region, and most are difficult to access by experiment. To improve control over the brazing process, one requires a better understanding of the melting of the active braze, the rate of the chemical reaction, reactant and product diffusion rates, the nonequilibrium composition-dependent surface tension, and the viscosity. This report identifies ways in which modeling and theory can assist in improving our understanding.
Visible spectroscopy is a powerful diagnostic, allowing plasma parameters ranging from temperature and density to electric and magnetic fields to be measured. Spectroscopic dopants are commonly introduced to make these measurements. On Z, dopants are introduced passively (e.g., a salt deposited on a current-carrying surface); however, in some cases, passive doping can limit the times and locations at which measurements can be made. Active doping utilizes an auxiliary energy source to disperse the dopant independently from the rest of the experiment. The objective of this LDRD project was to explore laser ablation as a method of actively introducing spectroscopic dopants. Ideally, the laser energy would be delivered to the dopant via fiber optic, which would eliminate the need for time-intensive laser alignments in the Z chamber. Experiments conducted in a light lab to assess the feasibility of fiber-coupled and open-beam laser-ablated doping are discussed.
We develop a novel calibration approach to address the problem of predictive k-ε RANS simulations of jet-in-crossflow. Our approach is based on the hypothesis that predictive k-ε parameters can be obtained by estimating them from a strongly vortical flow, specifically, flow over a square cylinder. In this study, we estimate three k-ε parameters, Cμ, Cε2, and Cε1, by fitting 2D RANS simulations to experimental data. We use polynomial surrogates of 2D RANS for this purpose. We conduct an ensemble of 2D RANS runs using samples of (Cμ, Cε2, Cε1) and regress Reynolds stresses to the samples using a simple polynomial. We then use this surrogate of the 2D RANS model to infer a joint distribution for the k-ε parameters by solving a Bayesian inverse problem conditioned on the experimental data. The calibrated (Cμ, Cε2, Cε1) distribution is used to seed an ensemble of 3D jet-in-crossflow simulations. We compare the ensemble's predictions of the flowfield, at two planes, to PIV measurements and estimate the predictive skill of the calibrated 3D RANS model. We also compare it against 3D RANS predictions using the nominal (uncalibrated) values of (Cμ, Cε2, Cε1), and find that calibration delivers a significant improvement to the predictive skill of the 3D RANS model. We repeat the calibration using surrogate models based on kriging and find that the calibration based on these more accurate models is not much better than that obtained with simple polynomial surrogates. We discuss the reasons for this rather surprising outcome.
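As a rough illustration of the surrogate-plus-inference workflow described above, the sketch below fits a quadratic polynomial surrogate to an ensemble of parameter samples and then runs a Metropolis-Hastings chain conditioned on a single data value. The quadratic basis, uniform priors, Gaussian likelihood, and all numerical values are placeholders, not the report's actual surrogate, RANS outputs, or PIV data.

```python
# Minimal sketch of surrogate-based Bayesian calibration (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def poly_features(theta):
    """Quadratic polynomial basis in (C_mu, C_eps2, C_eps1)."""
    c1, c2, c3 = theta
    return np.array([1.0, c1, c2, c3, c1*c2, c1*c3, c2*c3, c1**2, c2**2, c3**2])

# --- Step 1: build the surrogate from an ensemble of (parameter, QoI) pairs ---
samples = rng.uniform([0.06, 1.7, 1.2], [0.12, 2.1, 1.7], size=(200, 3))
qoi = np.array([np.sin(3*t[0]) + 0.1*t[1] - 0.05*t[2] for t in samples])  # stand-in for a Reynolds stress
X = np.vstack([poly_features(t) for t in samples])
coeffs, *_ = np.linalg.lstsq(X, qoi, rcond=None)
surrogate = lambda theta: poly_features(theta) @ coeffs

# --- Step 2: Metropolis-Hastings on the surrogate, conditioned on "data" ---
data, sigma = 0.32, 0.02                 # synthetic experimental value and noise level
def log_post(theta):
    lo, hi = np.array([0.06, 1.7, 1.2]), np.array([0.12, 2.1, 1.7])
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                   # uniform prior support
    return -0.5 * ((surrogate(theta) - data) / sigma) ** 2

chain, theta = [], np.array([0.09, 1.9, 1.44])
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.005, 0.02, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
posterior = np.array(chain[5000:])       # post-burn-in samples, analogous to the seed distribution
```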
As alternative energy generating devices (e.g., solar, wind) are added onto the electrical energy grid (AC grid), irregularities in the available electricity due to natural occurrences (e.g., clouds reducing solar input or wind bursts increasing the output of wind-powered turbines) will increase dramatically. Due to their almost instantaneous response, modern flywheel-based energy storage devices can act as a mechanical mechanism to regulate the AC grid; however, improved spin speeds will be required to meet the energy levels necessary to balance these green energy variances. Focusing on composite flywheels, we have investigated methods for improving the spin speeds based on materials needs. The so-called composite flywheels are composed of carbon fiber (C-fiber), glass fiber, and a glue (resin) to hold them together. For this effort, we have focused on the addition of fillers to the resin in order to improve its properties. Based on the high loads required for standard meso-sized fillers, this project investigated the utility of ceramic nanofillers, since they can be added at very low load levels due to their high surface area. The impact that TiO2 nanowires had on the final strength of the flywheel material was determined by a three-point-bend test. The introduction of nanomaterials increased the strength of the flywheel's C-fiber-resin composite, with an upper limit of a 30% increase being reported. An analysis of the economic impact of utilizing the nanowires was undertaken; after accounting for new-technology and additional production costs, the return on improved-nanocomposite investment was approximately 4-6% per year over the 20-year expected service life. Further, it was determined that, based on the 30% improvement in strength, this change may enable a 20-30% reduction in flywheel energy storage cost ($/kW-h).
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
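For the normal-distribution case, the one-sided tolerance-limit factor has a standard closed form via the noncentral t distribution, which makes the sample-size search easy to sketch. The coverage, confidence, margin, and sigma values below are illustrative choices, not recommendations from the paper.

```python
# Minimal sketch: choose n so the tolerance-bound uncertainty k*s is below the margin.
import numpy as np
from scipy.stats import norm, nct

def k_factor(n, coverage=0.95, confidence=0.90):
    """One-sided normal tolerance-limit factor from the noncentral-t quantile."""
    ncp = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=ncp) / np.sqrt(n)

def smallest_n(margin_est, sigma_est, n_max=200, **kw):
    """Smallest n for which the estimated uncertainty k*s is smaller than the margin."""
    for n in range(3, n_max + 1):
        if k_factor(n, **kw) * sigma_est < margin_est:
            return n
    return None

# Example: estimated margin of 3 units and estimated standard deviation of 1 unit
print(smallest_n(margin_est=3.0, sigma_est=1.0))   # at this n the tolerance ratio exceeds one
```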
Bioweapons and emerging infectious diseases pose growing threats to our national security. Both natural disease outbreaks and outbreaks due to a bioterrorist attack are challenging to detect, often taking days to identify, since most outbreaks are recognized only through reportable-disease surveillance by health departments and reports of unusual diseases by clinicians. In recent decades, arthropod-borne viruses (arboviruses) have emerged as some of the most significant threats to human health. They emerge, often unexpectedly, from cryptic transmission foci, causing localized outbreaks that can rapidly spread to multiple continents due to increased human travel and trade. Currently, diagnosis of acute infections requires amplification of viral nucleic acids, which can be costly, highly specific, technically challenging, and time consuming. No diagnostic devices suitable for use at the bedside or in an outbreak setting currently exist. The original goals of this project were to 1) develop two highly sensitive and specific diagnostic assays for detecting RNA from a wide range of arboviruses, one based on an electrochemical approach and the other on a fluorescence-based assay, and 2) develop prototype microfluidic diagnostic platforms for preclinical and field testing that utilize the assays developed in goal 1. We generated and characterized suitable primers for West Nile Virus RNA detection. Both optical and electrochemical transduction technologies were developed for DNA-RNA hybridization detection and were implemented in the microfluidic diagnostic sensing platforms developed in this project.
We performed optical electric field measurements on nanosecond time scales using the electro-optic crystal beta barium borate (BBO). Tests were based on a preliminary benchtop design intended to be a proof-of-principle stepping stone toward a modular-design optical E-field diagnostic that has no metal in the interrogated environment. The long-term goal is to field a modular version of the diagnostic in experiments on large-scale x-ray source facilities, or in similarly harsh environments.
In the supercritical CO2-water-mineral systems relevant to subsurface CO2 sequestration, interfacial processes at the supercritical fluid-mineral interface will strongly affect core- and reservoir-scale hydrologic properties. Experimental and theoretical studies have shown that water films will form on mineral surfaces in supercritical CO2, but will be thinner than those that form in vadose zone environments at any given matric potential. The theoretical model presented here allows assessment of water saturation as a function of matric potential, a critical step for evaluating relative permeabilities in the CO2 sequestration environment. The experimental water adsorption studies, using quartz crystal microbalance and Fourier transform infrared (FTIR) spectroscopy methods, confirm the major conclusions of the adsorption/condensation model. The FTIR study additionally showed that CO2 intercalation into clays, if it occurs, does not involve carbonate or bicarbonate formation or significant restriction of CO2 mobility. We have shown that the water film that forms in supercritical CO2 is reactive with common rock-forming minerals, including albite, orthoclase, labradorite, and muscovite. The experimental data indicate that reactivity is a function of water film thickness; at an activity of water of 0.9, the greatest extent of reaction in scCO2 occurred in areas (step edges, surface pits) where capillary condensation thickened the water films. This suggests that dissolution/precipitation reactions may occur preferentially in small pores and pore throats, where they may have a disproportionately large effect on rock hydrologic properties. Finally, a theoretical model is presented here that describes the formation and movement of CO2 ganglia in porous media, allowing assessment of the effect of pore size and structural heterogeneity on capillary trapping efficiency. The model results also suggest possible engineering approaches for optimizing trapping capacity and for monitoring ganglion formation in the subsurface.
As with other large healthcare organizations, medical adverse events at Department of Veterans Affairs (VA) facilities can expose patients to unforeseen negative risks. VHA leadership recognizes that properly handled disclosure of adverse events can minimize potential harm to patients and negative consequences for the effective functioning of the organization. The work documented here seeks to help improve the disclosure process by situating it within the broader theoretical framework of issues management, and to identify opportunities for process improvement by modeling disclosure and reactions to disclosure. The computational model will allow a variety of disclosure actions to be tested across a range of incident scenarios. Our conceptual model will be refined in collaboration with domain experts, especially by continuing to draw on insights from researchers on the VA Study of the Communication of Adverse Large-Scale Events (SCALE) project.
Corrosion tests at 400, 500, and 680°C were performed using four high-temperature alloys: 347SS, 321SS, In625, and HA230. Molten salt chemistry was monitored over time through analysis of nitrite, carbonate, and dissolved metals. Metallography was performed on alloys exposed at 500 and 680°C, due to the relatively thin oxide scale observed at 400°C. At 500°C, corrosion of iron-based alloys took the form of chromium depletion and iron oxides, while nickel-based alloys also had chromium depletion and formation of NiO. Chromium was detected in relatively low concentrations at this temperature. At 680°C, significant surface corrosion occurred, with metal losses greater than 450 microns/year after 1025 hours of exposure. Iron-based alloys formed complex iron, sodium, and chromium oxides. Some data suggest grain boundary chromium depletion of 321SS. Nickel alloys formed NiO and metallic nickel corrosion morphologies, with HA230 displaying significant internal oxidation in the form of chromia. Both nickel alloys exhibited worse corrosion than the iron-based alloys, likely due to preferential dissolution of chromium, molybdenum, and tungsten.
The performance, reproducibility, and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting of a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic (flow) mass transport inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity, and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.
This report documents work conducted in FY13 on electrical discharge experiments performed to develop predictive computational models of the fundamental processes of surface breakdown in the vicinity of high-permittivity material interfaces. Further, experiments were conducted to determine whether free carrier electrons could be excited into the conduction band, thus lowering the effective breakdown voltage, when UV photons (4.66 eV) from a high-energy pulsed laser were incident on the rutile sample. This report documents the numerical approach and the experimental setup, and summarizes the data and simulations. Lastly, it describes the path forward and the challenges that must be overcome to improve future experiments for characterizing the breakdown behavior of rutile.
Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the U.S. Department of Veterans Affairs (VA), motivated by a varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of the comparative effectiveness of interventions.
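A minimal sketch of the kind of network-based spread model described above is given below. The random contact network, transmission probability, and infectious period are placeholders rather than the VA models' calibrated inputs.

```python
# Minimal sketch of stochastic SIR spread on a contact network (illustrative only).
import random

def simulate_outbreak(contacts, p_transmit=0.05, days_infectious=5,
                      initial_infected=(0,), days=60, seed=1):
    """SIR dynamics on an adjacency dict {person: set(contacts)}."""
    prng = random.Random(seed)
    state = {p: "S" for p in contacts}            # S, I, or R
    clock = {}                                    # days remaining infectious
    for p in initial_infected:
        state[p], clock[p] = "I", days_infectious
    history = []
    for _ in range(days):
        newly = []
        for p, s in state.items():
            if s != "I":
                continue
            for q in contacts[p]:                 # each daily contact may transmit
                if state[q] == "S" and prng.random() < p_transmit:
                    newly.append(q)
            clock[p] -= 1
            if clock[p] == 0:
                state[p] = "R"                    # recovery after the infectious period
        for q in newly:
            if state[q] == "S":
                state[q], clock[q] = "I", days_infectious
        history.append(sum(s != "S" for s in state.values()))
    return history                                # cumulative infections per day

# Toy symmetric contact network: 50 people, a few random contacts each
people = range(50)
net = {p: set() for p in people}
builder = random.Random(0)
for p in people:
    for q in builder.sample([x for x in people if x != p], 2):
        net[p].add(q); net[q].add(p)
print(simulate_outbreak(net)[-1])                 # cumulative infections at day 60
```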
An ideal 3He detector replacement for the near- to medium-term future will use materials that are easy to produce and well understood, while maintaining thermal neutron detection efficiency and gamma rejection close to the 3He standard. Toward this end, we investigated the use of standard alkali halide scintillators interfaced with 6Li and read out with photomultiplier tubes (PMTs). Thermal neutrons are captured on 6Li with high efficiency, emitting high-energy alpha (4He) and triton (3H) reaction products. These particles deposit energy in the scintillator, providing a thermal neutron signal; discrimination against gamma interactions is possible via pulse shape discrimination (PSD), since heavy particles produce faster pulses in alkali halide crystals. We constructed and tested two classes of detectors based on this concept. In one case 6Li is used as a dopant in polycrystalline NaI; in the other case a thin Li foil is used as a conversion layer. In the configurations studied here, these systems are sensitive to both gamma and neutron radiation, with discrimination between the two and good energy resolution for gamma spectroscopy. We present results from our investigations, including measurements of the neutron efficiency and gamma rejection for the two detector types. We also show a comparison with Cs2LiYCl6:Ce (CLYC), which is emerging as the standard scintillator for simultaneous gamma and thermal neutron detection and also allows PSD. We conclude that 6Li foil with CsI scintillating crystals has near-term promise as a thermal neutron detector in applications previously dominated by 3He detectors. The other approach, 6Li-doped alkali halides, has some potential but requires more work to understand material properties and improve fabrication processes.
Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend toward heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum, and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos, supporting memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms that transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).
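As a reminder of what a reliability method does, the sketch below applies a first-order reliability method (FORM)-style calculation to a toy limit state: the failure probability follows from minimizing the distance to the failure surface in standard-normal space. The limit-state function, distributions, and numbers are invented for illustration and are unrelated to the RAVEN/RELAP-7 station blackout model or to Dakota's implementation.

```python
# Minimal FORM-style sketch: reliability as an optimization problem (illustrative only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Failure occurs when demand exceeds capacity; g(u) <= 0 defines the failure region.
# Standard normals u are mapped to assumed lognormal/normal physical inputs.
def g(u):
    capacity = np.exp(1.0 + 0.1 * u[0])      # assumed lognormal capacity
    demand = 2.0 + 0.3 * u[1]                 # assumed normal demand
    return capacity - demand                  # negative means failure

# FORM: minimize ||u||^2 subject to g(u) = 0 (the most probable failure point)
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)                       # reliability index
pf = norm.cdf(-beta)                          # first-order failure probability
print(f"beta = {beta:.2f}, Pf ~ {pf:.2e}")
```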
Most microstructural evolution in materials involves multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, radiation-induced defect formation, and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements of Potts Monte Carlo, phase-field, and other methods have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for mesoscale simulation of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
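For orientation, the sketch below shows a plain Metropolis-style Potts Monte Carlo sweep for grain growth, the simplest relative of the rejection-free kinetic Monte Carlo applications built within SPPARKS. The lattice size, number of spins, temperature, and sweep count are arbitrary illustrative choices.

```python
# Minimal Potts Monte Carlo grain-growth sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
L, Q, kT = 64, 32, 0.5
spins = rng.integers(0, Q, size=(L, L))          # random initial grain IDs

def site_energy(s, i, j, lattice):
    """Number of unlike nearest neighbors (periodic boundaries)."""
    nbrs = [lattice[(i + 1) % L, j], lattice[(i - 1) % L, j],
            lattice[i, (j + 1) % L], lattice[i, (j - 1) % L]]
    return sum(s != n for n in nbrs)

def monte_carlo_sweep(lattice):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        old, new = lattice[i, j], rng.integers(0, Q)      # propose a new grain ID
        dE = site_energy(new, i, j, lattice) - site_energy(old, i, j, lattice)
        if dE <= 0 or rng.random() < np.exp(-dE / kT):    # Metropolis acceptance
            lattice[i, j] = new
    return lattice

for sweep in range(10):
    monte_carlo_sweep(spins)
print("distinct grains after 10 sweeps:", len(np.unique(spins)))
```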
We created interactive demonstration activities for Take Our Daughters & Sons to Work Day (TODSTWD) 2013 in order to promote general interest in chemistry and also generate awareness of the type of work our laboratories can perform. "Curious about Mars Rover Curiosity?" performed an elemental analysis on rocks brought to our lab using the same technique utilized on the planet Mars by the NASA robotic explorer Curiosity. "Food is Chemistry?" utilized a mass spectrometer to measure, in seconds, each participant's breath in order to identify the food item consumed for the activity. A total of over 130 children participated in these activities over a 3-hour block, and feedback was positive. This document reports the materials (including handouts), experimental procedures, and lessons learned so that future demonstrations can benefit from the baseline work performed. We also present example results used to prepare the Food activity and example results collected during the Curiosity demo.
Radiation transport calculations were performed to compute the angular tallies for scattered gamma rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons among the scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.
Resistive random access memory (ReRAM) has become a promising candidate for next-generation high-performance non-volatile memory; it operates by electrically tuning resistance states via modulation of vacancy concentrations. Here, we demonstrate a wafer-scale process for resistive switching in tantalum oxide that is completely CMOS compatible. The resulting devices are forming-free and exhibit greater than 1x10^5 cycle endurance.
We design a space-efficient algorithm that approximates the transitivity (global clustering coefficient) and total triangle count with only a single pass through a graph given as a stream of edges. Our procedure is based on the classic probabilistic result, the birthday paradox. When the transitivity is constant and there are more edges than wedges (common properties for social networks), we can prove that our algorithm requires O(√n) space (n is the number of vertices) to provide accurate estimates. We run a detailed set of experiments on a variety of real graphs and demonstrate that the memory requirement of the algorithm is a tiny fraction of the graph. For example, even for a graph with 200 million edges, our algorithm stores just 60,000 edges to give accurate results. Being a single-pass streaming algorithm, our procedure also maintains a real-time estimate of the transitivity/number of triangles of a graph, by storing a minuscule fraction of edges.
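The statistical idea underlying the streaming algorithm can be illustrated with plain (non-streaming) wedge sampling: transitivity equals the fraction of closed wedges, so uniformly sampled wedges give an unbiased estimate. The sketch below shows only that in-memory building block, not the single-pass edge/wedge reservoir scheme itself, and the example graph is invented.

```python
# Minimal wedge-sampling estimate of transitivity and triangle count (illustrative only).
import random
from collections import defaultdict

def estimate_transitivity(edges, num_samples=10000, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    nodes = list(adj)
    # wedges centered at v: deg(v) choose 2; sample centers proportionally
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    total_wedges = sum(weights)
    closed = 0
    for _ in range(num_samples):
        v = rng.choices(nodes, weights=weights)[0]       # pick a wedge center
        a, b = rng.sample(sorted(adj[v]), 2)             # pick two distinct neighbors
        closed += b in adj[a]                            # is the wedge closed?
    transitivity = closed / num_samples
    triangles = transitivity * total_wedges / 3          # each triangle closes three wedges
    return transitivity, triangles

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 0)]
print(estimate_transitivity(edges))                      # exact answer: 9/14 transitivity, 3 triangles
```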
Nations using borosilicate glass as an immobilization material for radioactive waste have reinforced the importance of scientific collaboration to obtain a consensus on the mechanisms controlling the long-term dissolution rate of glass. This goal is deemed to be crucial for the development of reliable performance assessment models for geological disposal. The collaborating laboratories all conduct fundamental and/or applied research using modern materials science techniques. This paper briefly reviews the radioactive waste vitrification programs of the six participant nations and summarizes the current state of glass corrosion science, emphasizing the common scientific needs and justifications for on-going initiatives.
Everyday problem solving requires the ability to go beyond experience by efficiently encoding and manipulating new information, i.e., fluid intelligence (Gf) [1]. Performance in tasks involving Gf, such as logical and abstract reasoning, has been shown to rely on distributed neural networks, with a crucial role played by prefrontal regions [2]. Synchronization of neuronal activity in the gamma band is a ubiquitous phenomenon within the brain; however, no evidence of its causal involvement in cognition exists to date [3]. Here, we show an enhancement of Gf ability in a cognitive task induced by exogenous rhythmic stimulation within the gamma band. Imperceptible alternating current [4] delivered through the scalp over the left middle frontal gyrus resulted in a frequency-specific shortening of the time required to find the correct solution in a visuospatial abstract reasoning task classically employed to measure Gf abilities (i.e., Raven’s matrices) [5]. Crucially, gamma-band stimulation (γ-tACS) selectively enhanced performance only on more complex trials involving conditional/logical reasoning. The finding presented here supports a direct involvement of gamma oscillatory activity in the mechanisms underlying higher-order human cognition.
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
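For readers unfamiliar with the quantity involved, the snippet below computes a lag-l sample autocorrelation with plain NumPy; it is a stand-in illustration on a synthetic signal, not the VTK engine's C++ implementation or its parallel update formulas.

```python
# Minimal autocorrelation sketch (illustrative only).
import numpy as np

def autocorrelation(x, lag):
    """Sample autocorrelation of a 1-D series at a positive lag."""
    x = np.asarray(x, dtype=float)
    x0, xl = x[:-lag], x[lag:]                    # paired samples `lag` steps apart
    x0c, xlc = x0 - x0.mean(), xl - xl.mean()
    return float((x0c * xlc).sum() / np.sqrt((x0c**2).sum() * (xlc**2).sum()))

t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
print(autocorrelation(signal, lag=100))           # strongly negative: lag is about half a period
```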
We present the results of a two-year early-career LDRD project, which has focused on the development of ultrafast diagnostics to measure temperature, pressure, and chemical change during the shock initiation of energetic materials. We compare two single-shot versions of femtosecond rotational CARS for measuring nitrogen temperature: chirped-probe-pulse and ps/fs hybrid CARS thermometry. The applicability of the measurements to the combustion of energetic materials will be discussed. We have also demonstrated laser shock and particle velocity measurements in thin-film explosives using stretched femtosecond laser pulses. We will discuss preliminary results from Al and PETN thin films and their agreement with previous work.
The U.S. Strategic Petroleum Reserve implemented the first stage of a leach plan in 2011-2012 to expand storage volume in the existing Bryan Mound 113 cavern from a starting volume of 7.4 million barrels (MMB) to its design volume of 11.2 MMB. The first stage was terminated several months earlier than expected, in August 2012, because the upper section of the leach zone expanded outward more quickly than designed. The oil-brine interface was then re-positioned with the intent to resume leaching in the second-stage configuration. This report evaluates the as-built configuration of the cavern at the end of the first stage and recommends changes to the second-stage plan to accommodate the variance between the first-stage plan and the as-built cavern. SANSMIC leach code simulations are presented and compared with sonar surveys in order to aid in the analysis and offer projections of likely outcomes from the revised plan for the second-stage leach.
Quantitative radiological analysis attempts to determine the quantity of activity or concentration of specific radionuclide(s) in a sample. Based upon the certified standards that are used to calibrate gamma spectral detectors, geometric similarities between the sample shape and the calibration standards determine whether the analysis results are qualitative or quantitative. A sample that does not mimic a calibrated sample geometry must be reported as a non-standard geometry, and thus the results are considered qualitative rather than quantitative. MicroShield® or ISOCS® calibration software can be used to model non-standard geometric sample shapes in an effort to obtain a quantitative analytical result. MicroShield® and Canberra's ISOCS® software contain several geometry templates that can provide accurate quantitative modeling for a variety of sample configurations. Included in the software are computational algorithms that are used to develop and calculate energy efficiency values for the modeled sample geometry, which can then be used with conventional analysis methodology to calculate the result. The response of the analytical method and the sensitivity of the mechanical and electronic equipment to the radionuclide of interest must be calibrated, or standardized, using a calibrated radiological source that contains a known and certified amount of activity.
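Once a modeled efficiency is in hand, the conventional activity calculation is straightforward arithmetic; the sketch below shows it with illustrative numbers that are not drawn from MicroShield or ISOCS output.

```python
# Minimal sketch of the conventional activity calculation (illustrative numbers only).
def activity_bq(net_counts, live_time_s, efficiency, gamma_yield):
    """Activity (Bq) from a net full-energy peak area, a modeled efficiency,
    and the gamma emission probability of the line."""
    return net_counts / (live_time_s * efficiency * gamma_yield)

# Example: 5000 net counts in 600 s, modeled efficiency 2.5e-2, 85% emission probability
print(f"{activity_bq(5000, 600, 2.5e-2, 0.85):.1f} Bq")
```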
Thermal desorption spectroscopy was used to monitor the decomposition of the foam and epoxy as a function of temperature in the range of 60°C to 170°C. Samples were studied with one-day holds at each of the studied temperatures. Both new (FoamN and EpoxyN) and aged (FoamP and EpoxyP) samples were studied. During these ~10-day experiments, the foam samples lost 11 to 13% of their weight and EpoxyN lost 10% of its weight. The amount of weight lost was difficult to quantify for EpoxyP because of its inert filler. The onset of the appearance of organic degradation products from FoamP began at 110°C. Similar products did not appear until 120°C for FoamN, suggesting some effect of the previous decades of storage on FoamP. In the case of the epoxies, the corresponding temperatures were 120°C for EpoxyP and 110°C for EpoxyN. Suggested reasons why the aged epoxy seems more stable than the newer sample include the possibility of incomplete curing or differences in composition. We recommend limiting the use temperature to 90-100°C for both the epoxy and the foam.