The accuracy of digital in-line holography in detecting particle position and size within a 3D domain is evaluated, with particular focus on the detection of nonspherical particles. Dimensionless models are proposed for simulating holograms from single particles, and these models are used to evaluate the uncertainty of existing particle detection methods. From the lessons learned, a new hybrid method is proposed. This method features automatic determination of optimum thresholds, and simulations indicate improved accuracy compared to alternative methods. To validate this, experiments are performed using quasi-stationary, 3D particle fields with imposed translations. For the spherical particles considered in experiments, the proposed hybrid method resolves mean particle concentration and size to within 4% of the actual values, while the standard deviation of particle depth is less than two particle diameters. Initial experimental results for nonspherical particles reveal similar performance.
Tests are ongoing to conduct ~20 MA z-pinch implosions on the Z accelerator at Sandia National Laboratories using Ar, Kr, and D2 gas puffs as the imploding loads. The relatively high cost of operations on a machine of this scale imposes stringent requirements on the functionality, reliability, and safety of gas puff hardware. Here we describe the development of a prototype gas puff system including the multiple-shell nozzles, electromagnetic drivers for each nozzle's valve, a UV pre-ionizer, and an inductive isolator to isolate the ~2.4 MV machine voltage pulse present at the gas load from the necessary electrical and fluid connections made to the puff system from outside the Z vacuum chamber. This paper shows how the assembly couples to the overall Z system and presents data taken to validate the functionality of the overall system.
Proc. of 2nd Int. Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications, BigMine 2013 - Held in Conj. with SIGKDD 2013 Conf.
We present an algorithm to maintain the connected components of a graph that arrives as an infinite stream of edges. We formalize the algorithm on X-stream, a new parallel theoretical computational model for infinite streams. Connectivity-related queries, including component spanning trees, are supported with some latency, returning the state of the graph at the time of the query. Because an infinite stream may eventually exceed the storage limits of any number of finite-memory processors, we assume an aging command or daemon where "uninteresting" edges are removed when the system nears capacity. Following an aging command the system will block queries until its data structures are repaired, but edges will continue to be accepted from the stream, never dropped. The algorithm will not fail unless a model-specific constant fraction of the aggregate memory across all processors is full. In normal operation, it will not fail unless aggregate memory is completely full. Unlike previous theoretical streaming models designed for finite graphs that assume a single shared memory machine or require arbitrary-size intermediate files, X-stream distributes a graph over a ring network of finite-memory processors. Though the model is synchronous and reminiscent of systolic algorithms, our implementation uses an asynchronous message-passing system. We argue the correctness of our X-stream connected components algorithm, and give preliminary experimental results on synthetic and real graph streams.
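The incremental core of maintaining connected components under an arriving edge stream can be illustrated with a standard union-find (disjoint-set) structure. This is a simplified, single-process sketch under assumed names, not the paper's distributed X-stream algorithm, which spreads the structure over a ring of finite-memory processors and handles aging:

```python
class UnionFind:
    """Maintain connected components of a graph as edges stream in."""

    def __init__(self):
        self.parent = {}  # vertex -> parent; roots point to themselves

    def find(self, v):
        # Unseen vertices start as their own singleton component.
        self.parent.setdefault(v, v)
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        # Path compression: repoint every node on the path at the root.
        while self.parent[v] != root:
            self.parent[v], v = root, self.parent[v]
        return root

    def union(self, u, v):
        # Process one streamed edge (u, v): merge the two components.
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv

    def connected(self, u, v):
        # Query: are u and v in the same component right now?
        return self.find(u) == self.find(v)


uf = UnionFind()
for edge in [(1, 2), (2, 3), (4, 5)]:  # stands in for the edge stream
    uf.union(*edge)
```

A connectivity query then reflects the state of the graph at the time it is asked, as in the model above; the distributed version adds latency and blocking during repair after aging.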
Low-temperature combustion (LTC) achieved by using exhaust-gas recirculation (EGR) is an operating strategy of current interest for heavy-duty and other compression-ignition (diesel) engines because it offers low nitrogen oxides (NOx) and soot emissions compared to conventional diesel combustion. While the long ignition-delay of EGR-LTC helps increase pre-combustion mixing to reduce soot formation, other emissions, including unburned hydrocarbons (UHC), can be problematic. Particularly an issue at low-load conditions, a considerable portion of UHC emissions in large-bore diesels is often due to overly-lean fuel/air mixtures formed near the injector during the long ignition delay. In this study, we explore the use of multiple post-injection strategies, which have a large main injection and one or two smaller post injections, to help reduce engine-out UHC emissions. The short post-injections closely timed after the end of the main injection help to enrich the overly-lean region near the injector, allowing for more complete combustion of a greater portion of the fuel/air mixture. Optical results from formaldehyde and OH planar laser-induced fluorescence provide evidence of the in-cylinder spatial and temporal progression toward complete combustion.
In this study, two flames of iso-pentanol were stabilized on a 60-mm flat flame burner at a low pressure of 15 Torr and analyzed by a flame-sampling molecular-beam setup coupled to a mass spectrometer (MBMS). Single-photon ionization by synchrotron-generated vacuum-UV radiation with high energy resolution (E/ΔE ∼0.04 eV) and/or electron ionization was combined with a custom-built reflectron time-of-flight spectrometer providing high mass resolution (m/Δm = 3000). Mole fraction profiles for more than 40 flame species and the temperature profile were determined experimentally. The flame temperatures were measured using OH laser-induced fluorescence and used as input parameters for the model calculations. The experimental dataset was used to guide the development of a combustion chemistry model for the high-temperature oxidation chemistry of iso-pentanol. The chemical kinetic model is herein validated for the first time against detailed speciation profiles of combustion intermediates and product species including C5 branched aldehydes, enols, and alkenes. In a separate study, the model was validated against a number of different datasets including low- and high-temperature ignition delay in rapid compression machines and shock tubes, jet stirred reactor speciation data, premixed laminar flame speed, and opposed-flow diffusion flame strained extinction.
Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
Imagery from GOES satellites is analyzed to determine how solar variability is related to the NOAA classification of cloud type. Without using a model to convert satellite imagery to average insolation on the ground, this paper investigates using cloud categories to directly model the expected statistical variability of ground irradiance. Hourly cloud classified satellite images are compared to multiple years of ground measured irradiance at two locations to determine if measured irradiance, ramp rates, and variability index are correlated with cloud category. Novel results are presented for ramp rates grouped by the cloud category during the time period. This correlation between satellite cloud classification and solar variability could be used to model the solar variability for a given location and time and could be used to determine the variability of a location based on the prevalence of each cloud category.
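The ramp-rate and variability-index statistics named above can be sketched for an hourly irradiance series. The definitions below follow common usage in the solar variability literature (the variability index as the ratio of the "path length" of the measured irradiance trace to that of the clear-sky trace, after Stein et al.); the function names and the sample numbers are illustrative, not taken from the paper:

```python
import math

def ramp_rates(ghi):
    """Ramp rate: change in irradiance (W/m^2) from one interval to the next."""
    return [b - a for a, b in zip(ghi, ghi[1:])]

def variability_index(ghi, clear_sky, dt_minutes=60.0):
    """Variability index: path length of the measured irradiance trace
    divided by the path length of the clear-sky trace over the same period.
    A steady (clear or uniformly overcast) period gives VI near 1;
    broken-cloud periods give VI well above 1."""
    def path_length(series):
        return sum(math.sqrt((b - a) ** 2 + dt_minutes ** 2)
                   for a, b in zip(series, series[1:]))
    return path_length(ghi) / path_length(clear_sky)

# Illustrative hourly values (W/m^2): a variable measured trace vs. clear sky.
measured = [500.0, 300.0, 600.0, 200.0]
clear = [600.0, 620.0, 610.0, 590.0]
vi = variability_index(measured, clear)
```

Grouping such per-period statistics by the concurrent satellite cloud category is the correlation the paper examines.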
This paper describes the MELCOR Accident Consequence Code System, Version 2 (MACCS2) dose-truncation sensitivity of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses unmitigated long-term station blackout severe accident scenario at the Peach Bottom Atomic Power Station. Latent-cancer-fatality (LCF) risk results for this sensitivity study are presented for three dose-response models. LCF risks are reported for circular areas ranging from a 10- to a 50-mile radius centered on the plant. For the linear, no-threshold, sensitivity analysis, all regression methods consistently rank the MACCS2 dry deposition velocity and the MELCOR safety relief valve (SRV) stochastic failure probability, respectively, as the most important input parameters. For the alternative dose-truncation models (i.e., USBGR (0.62 rem/yr) and HPS (5 rem/yr with a lifetime limit of 10 rem)) sensitivity analyses, the regression methods consistently rank the MACCS2 inhalation protection factor for normal activity, the MACCS2 lung lifetime risk factor for cancer death, and the MELCOR SRV stochastic failure probability as the most important input variables. The important MELCOR input parameters are relatively independent of the dose-response model used in MACCS2. However, the MACCS2 input variables depend strongly on the dose-response model. The use of either the USBGR or the HPS dose-response model emphasizes MACCS2 input variables associated with doses received in the first year and deemphasizes MACCS2 input parameters associated with long-term phase doses beyond the first year.
This paper describes the MELCOR Accident Consequence Code System, Version 2 (MACCS2), parameters and probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. Consequence results are presented as conditional risk (i.e., assuming the accident occurs) to individuals of the public as a result of the accident - latent-cancer-fatality (LCF) risk per event or prompt-fatality risk per event. For the mean, individual, LCF risk, all regression methods at each of the circular areas around the plant that are analyzed (10-mile to 50-mile radii are considered) consistently rank the MACCS2 dry deposition velocity, the MELCOR safety relief valve (SRV) stochastic failure probability, and the MACCS2 residual cancer risk factor, respectively, as the most important input parameters. For the mean, individual, prompt-fatality risk (which is zero in over 85% of the Monte Carlo realizations) within circular areas with less than a 2-mile radius, the non-rank regression methods consistently rank the MACCS2 wet deposition parameter, the MELCOR SRV stochastic failure probability, the MELCOR SRV open area fraction, the MACCS2 early health effects threshold for red bone marrow, and the MACCS2 crosswind dispersion coefficient, respectively, as the most important input parameters. For the mean, individual prompt-fatality risk within the circular areas with radii between 2.5 and 3.5 miles, the regression methods consistently rank the MACCS2 crosswind dispersion coefficient, the MACCS2 early health effects threshold for red bone marrow, the MELCOR SRV stochastic failure probability, and the MELCOR SRV open area fraction, respectively, as the most important input parameters.
One of the characteristics of CO2 that influences the oxy-fuel combustion of pulverized coal char is its low diffusivity, in comparison to N2. To further explore how the gas diffusivity influences the apparent rate of pulverized char combustion, experiments were conducted in a laminar, optical flow reactor that has been extensively used to quantify char particle combustion rates. Helium, nitrogen, and CO2 were employed as diluent gases. The diffusivity of oxygen through helium is 3.5 times higher than through nitrogen, tending to supply more oxygen to the particle and accelerating the particle combustion rate and heat release. However, the thermal conductivity of helium is 5 times larger than that of nitrogen, tending to keep the burning char particle temperature close to that of the surrounding gas. The combination of these two factors makes char combustion in helium atmospheres significantly more kinetically controlled than combustion of char particles in nitrogen atmospheres. The char particle combustion temperatures were highest for combustion in N2 environments, with combustion in CO2 and He environments producing nearly identical char combustion temperatures, despite much more rapid particle burnout in helium. Preliminary analysis of the apparent char kinetic burning rate in He yields a rate that is approximately three times greater than the rate in N2, likely reflecting the greater internal penetration of oxygen into char particles burning in helium. Analysis with intrinsic kinetic models is being applied to better understand the data and therefore the role of gas diffusivity on apparent kinetic rates of char combustion.
Soot emissions from internal combustion engines and aviation gas turbine engines face increasingly stringent regulation, but available experimental datasets for sooting turbulent combustion model development and validation are largely lacking, in part due to the difficulty of making quantitative space- and time-resolved measurements in this type of flame. To address this deficiency, we have performed a number of different laser and optical diagnostic measurements in sooting, nonpremixed jet flames fueled by ethylene or a prevaporized JP-8 surrogate. Most laser diagnostic techniques inherently lose their quantitative rigor when significant laser beam and signal attenuation occur in sooting flames. However, the '3-line' approach to simultaneous measurement of soot concentration (on the basis of laser extinction) and soot temperature (on the basis of 2-color pyrometry) actually relies on the presence of significant laser attenuation to yield accurate measurements. In addition, the 3-line approach yields complete time-resolved information. In the work reported here, we have implemented the 3-line diagnostic in well-controlled non-premixed ethylene and JP-8 jet flames with a fuel exit Reynolds number of 20,000 using tapered, uncooled alumina refractory probes with a 10 mm probe end separation. Bandpass filters with center wavelengths of 850 nm and 1000 nm were used for the pyrometry measurement, with calibration provided by a high-temperature blackbody source. Extinction of a 635 nm red diode laser beam was used to determine soot volume fraction. Data were collected along the flame centerline at many different heights and radial traverses were performed at selected heights. A data sampling rate of 5 kHz was used to resolve the turbulent motion of the soot. The results for the ethylene flame show a mean soot volume fraction of 0.4 ppm at mid-height of the flame, with a mean temperature of 1450 K.
At any given instant, the soot volume fraction typically falls between 0.2 and 0.6 ppm with a temperature between 1300 and 1650 K. At greater heights in the flame, the soot intermittency increases and its mean concentration decreases while its mean temperature increases. In the JP-8 surrogate flame, the soot concentration reaches a mean value of 1.3 ppm at mid-height of the flame, but the mean soot temperature is only 1270 K. Elevated soot concentrations persist for a range of heights in the JP-8 flame, with a rise in mean temperature to 1360 K, before both soot volume fraction and temperature tail off at the top of this smoking flame.
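The two reductions underlying the 3-line diagnostic can be sketched directly: soot volume fraction follows from Beer-Lambert extinction of the laser beam, and temperature from the Wien-limit ratio of the two pyrometry signals with soot emissivity taken proportional to 1/λ. This is a generic sketch, not the paper's calibration procedure; the dimensionless extinction coefficient `KE` is an assumed representative value, and the signal ratio is assumed already corrected by the blackbody calibration:

```python
import math

C2 = 1.4388e-2   # second radiation constant, m*K
KE = 8.6         # assumed dimensionless soot extinction coefficient

def soot_volume_fraction(transmission, path_m, wavelength_m=635e-9):
    """Beer-Lambert extinction: f_v = -lambda * ln(I/I0) / (K_e * L)."""
    return -wavelength_m * math.log(transmission) / (KE * path_m)

def two_color_temperature(s1, s2, lam1=850e-9, lam2=1000e-9):
    """Wien-limit two-color pyrometry with soot emissivity ~ 1/lambda:
    S1/S2 = (lam2/lam1)^6 * exp((C2/T) * (1/lam2 - 1/lam1)).
    Solving for T from the calibrated signal ratio s1/s2."""
    ratio = s1 / s2
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(ratio * (lam1 / lam2) ** 6)
```

For the 10 mm probe separation and 635 nm laser above, a mean soot volume fraction near 0.4 ppm corresponds to a transmission only a few percent below unity, which is why the method needs measurable attenuation to stay accurate.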
The increasing global appetite for energy within the transportation sector will inevitably result in the combustion of more fossil fuel. A renewable-derived approach to carbon-neutral synthetic fuels is therefore needed to offset the negative impacts of this trend, which include climate change. In this communication we report the use of nonstoichiometric perovskite oxides in two-step, solar-thermochemical water or carbon dioxide splitting cycles. We find that LaAlO3 doped with Mn and Sr will efficiently split both gases. Moreover the H2 yields are 9× greater, and the CO yields 6× greater, than those produced by the current state-of-the-art material, ceria, when reduced at 1350 °C and re-oxidized at 1000 °C. The temperature at which O2 begins to evolve from the perovskite is fully 300 °C below that of ceria. The materials are also very robust, maintaining their redox activity over at least 80 CO2 splitting cycles. This discovery has profound implications for the development of concentrated solar fuel technologies.
The thermal processing of a proposed durable waste form for 129I was investigated. The waste form is a composite with a matrix of low-temperature sintering glass that encapsulates particles of AgI-mordenite. Ag-mordenite, an ion-exchanged zeolite, is being considered as a capture medium for gaseous 129I2 as part of a spent nuclear fuel reprocessing scheme under development by the US Department of Energy/Nuclear Energy (NE). The thermal processing of the waste form is necessary to densify the glass matrix by viscous sintering so that the final waste form does not have any open porosity. Other processes that can also occur during the thermal treatment include desorption of chemisorbed I2, volatilization of AgI and crystallization of the glass matrix. We have optimized the thermal processing to achieve the desired high density with higher AgI-mordenite loading levels and with minimal loss of iodine. Using these conditions, 625°C for 20 minutes, the matrix crystallizes to form a eulytite phase. Results of durability tests indicate that the matrix crystallization does not significantly decrease the durability in aqueous environments.
In this paper, fusing of a metallic conductor is studied by judiciously using the solution of the one-dimensional heat equation, resulting in an approximate method for determining the threshold fusing current. The action is defined as an integration of the square of the wire current over time. The burst action (the action required to completely vaporize the material) for an exploding wire is then used to estimate the typical wire gapping action (involving wire fusing), from which gapping time can be estimated for a gapping current greater than a factor of two over the fusing current. The test data are used to determine the gapped length as a function of gapping current and to show, for a limited range, that the gapped length is inversely proportional to gapping time. The gapping length can be used as a signature of the fault current level in microelectronic circuits.
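The action described above can be written explicitly. In generic notation (a sketch; the symbols are illustrative, not taken from the paper), with a threshold action estimated from the burst action:

```latex
% Action: time integral of the squared current through the conductor
S(t) = \int_0^{t} i^2(\tau)\,d\tau
% Gapping occurs when S reaches a threshold action S_g;
% for an approximately constant gapping current I,
t_g \approx \frac{S_g}{I^2}
```

Under this constant-current approximation, doubling the current over the fusing level shortens the gapping time by roughly a factor of four, which is consistent with estimating gapping time only for currents well above the fusing threshold.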
Physical security analyses for nuclear reactors have historically sought to ensure that there is an acceptably low probability of success for a "design basis" adversary to accomplish a theft or sabotage objective, even for the adversary's most advantageous path. While some have used probabilistic risk assessment to characterize these risks, the lack of a validated attack frequency, among other things, has made this difficult. Recent work at Sandia National Laboratories (SNL) characterizes a facility's security risk for a scenario in terms of the level of difficulty an adversary would encounter in order to be reasonably sure of success (the Risk Informed Management of Enterprise Security (RIMES) methodology). Scenarios with lower levels of difficulty can then be addressed through design changes or improvements to the physical protection system. This work evaluates the level of difficulty of a number of attack scenarios for Small Modular Reactors (SMRs), and provides insight to help designers optimize the protection of their facilities. The methodology and general insights are described here.
The existence of a critical dissipation rate, above which a nonpremixed flame is extinguished, has been known for decades. Recent advances in modeling have allowed the simulation of turbulent nonpremixed flames that include local extinction as a consequence of the stochastic variation in mixing rates. In this paper we present the critical dissipation impulse magnitude that will lead to extinction even if the mean dissipation rate is well below the critical value for a steady flame. This critical impulse magnitude depends on the time-integrated excess dissipation rate, stoichiometric factors, and the form of the S-curve describing the steady-state flame. This criterion is evaluated in a diverse set of flames including n-heptane, diluted n-heptane, and CO/H2/N2 mixtures.
Carbon fiber composite materials are increasingly being used in the design and fabrication of transportation vehicles. In particular, the aviation industry is increasingly transitioning from metals to this class of composites due to the high strength and low weight of the materials. Most aviation structural composites are thermoset, meaning they require thermal processing to harden the epoxy. In the event of a fire, they will behave significantly differently than the metals they replace. Because they are not homogeneous, they also differ significantly from homogeneous solid combustibles. Sandia National Laboratories is motivated to study burning composites because we maintain experimental and modeling capabilities for assessing transportation safety. Understanding the thermal environment created by transportation fires is therefore paramount. This type of focus is not typical of the general literature on these materials in the fire environment. A serious issue with the majority of fire performance data found in the open literature is that the length and mass scales are generally orders of magnitude below those used in vehicle design. With a non-traditional perspective on composite fires, Sandia has performed several test series. Together with a review of the work from other institutions as found in the literature, this report presents a phenomenological overview of the relevant work on the behavior of composite materials in a fire environment.
Optimization of new transportation fuels and engine technologies requires the characterization of the combustion chemistry of a wide range of fuel classes. Theoretical studies of elementary reactions - the building blocks of complex reaction mechanisms - are essential to accurately predict important combustion processes such as autoignition of biofuels. The current bottleneck for these calculations is a user-intensive exploration of the underlying potential energy surface (PES), which relies on the "chemical intuition" of the scientist to propose initial guesses for the relevant chemical configurations. For newly emerging fuels, this approach cripples the rate of progress because of the system size and complexity. The KinBot program package aims to accelerate the detailed chemical kinetic description of combustion, and enables large-scale systematic studies on the sub-mechanism level.
Density Functional Theory points to a key role of K+ solvation in the low-energy two-dimensional arrangement of water molecules on the basal surface of muscovite. At a coverage of 9 water molecules per 2 surface potassium ions, there is room to accommodate the ions into wetting layers wherein half of them are hydrated by 3 and the other half by 4 water molecules, with no broken H-bonds, or wherein all are hydrated by 4. Relative to the “fully connected network of H-bonded water molecules” that Odelius et al. found to form “a cage around the potassium ions,” the hydrating arrangements are several tens of meV/H2O better bound. Thus, low-temperature wetting on muscovite is not driven towards “ice-like” hexagonal coordination. Instead, solvation forces dominate.
The Quantum-Kinetic (Q-K) chemical reaction model is implemented in a Navier-Stokes solver, US3D, and tested on the Bow Shock UltraViolet flight experiments. The chemical reaction rates predicted by the Q-K model are compared to a commonly used Park model for flows in thermal non-equilibrium. The results show that in thermal equilibrium the reaction rates between these two models are comparable. The Q-K model predicts greater rates for some chemical reactions and lesser rates for others in a five-species air chemistry model. In thermal non-equilibrium, the Q-K model maintains comparable rates near thermal equilibrium, while avoiding issues of strong thermal non-equilibrium seen in the Park model. The application of the Q-K model to the Bow Shock UltraViolet flight experiments shows that the model remains consistent with previous Navier-Stokes and DSMC computations over altitudes ranging from 53.5 km up to 87.5 km despite the enforcement of translational-rotational equilibrium. The commonly used Park model was unable to match this performance.
High-frequency pressure sensors were used in conjunction with a high-speed schlieren system to study the growth and breakdown of boundary-layer disturbances into turbulent spots on a 7° cone in the Sandia Hypersonic Wind Tunnel. At Mach 5, intermittent low-frequency disturbances were observed in the schlieren videos. High-frequency second-mode wave packets would develop within these low-frequency disturbances and break down into isolated turbulent spots surrounded by an otherwise smooth, laminar boundary layer. Spanwise pressure measurements showed that these packets have a narrow spanwise extent before they break down. The resulting turbulent fluctuations still had a streaky structure reminiscent of the wave packets. At Mach 8, the boundary layer was dominated by second-mode instabilities that extended much further in the spanwise direction before breaking down into regions of turbulence. The amplitude of the turbulent pressure fluctuations was much lower than those within the second-mode waves. These turbulent patches were surrounded by waves as opposed to the smooth laminar flow seen at Mach 5. At Mach 14, second-mode instability wave packets were also observed. These waves had a much lower frequency and larger spanwise extent compared to lower Mach numbers. Only low freestream Reynolds numbers could be obtained, so these waves did not break down into turbulence.
Liquid injection in systems such as liquid rockets where the working fluid exceeds the thermodynamic critical condition of the liquid phase is not well understood. Under some conditions when operating pressures exceed the liquid phase critical pressure, surface tension forces become diminished when the classical low-pressure gas-liquid interface is replaced by a diffusion-dominated mixing layer. Modern theory, however, still lacks a physically-based model to explain the conditions under which this transition occurs. In this paper, we derive a coupled model to obtain a theoretical analysis that quantifies these conditions for general multicomponent liquid injection processes. Our model applies a modified 32-term Benedict-Webb-Rubin equation of state along with corresponding combining and mixing rules that account for the relevant thermodynamic non-ideal multicomponent mixture states in the system. This framework is combined with Linear Gradient Theory, which facilitates the calculation of the vapor-liquid molecular structure. Depending on oxygen and hydrogen injection temperatures, our model shows interfaces with substantially increased thicknesses in comparison to interfaces resulting from lower injection temperatures. Contrary to conventional wisdom, our analysis reveals that LOX-H2 molecular interfaces break down not necessarily because of vanishing surface tension forces, but because of the combination of broadened interfaces and a reduction in mean free molecular path at high pressures. These interfaces then enter the continuum length scale regime where, instead of inter-molecular forces, transport processes dominate. Based on this theory, a regime diagram for LOX-H2 mixtures is introduced that quantifies the conditions under which classical sprays transition to dense-fluid jets.