Multi-scale binary permeability field estimation from static and dynamic data is performed using Markov chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high-permeability inclusions within a lower-permeability matrix. Static data are obtained as measurements of permeability with support consistent with the coarse-scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high-permeability phase and the inclusion length and aspect ratio of the high-permeability inclusions. From the non-parametric posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000) binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multi-Gaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multi-Gaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods are proposed for performing posterior predictive tests, and different mechanisms for combining multi-Gaussian random fields with kernels defined from the MCMC sampling are examined. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data.
The skill of the inference and of the method for generating fine-scale binary permeability fields is evaluated through flow calculations on the resulting fine-scale realizations, with results compared against those obtained with the ground-truth fine-scale and coarse-scale permeability fields.
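The kernel-convolution step described above lends itself to a compact sketch. The following is an illustrative Python reconstruction, not the authors' code: white noise is convolved with an anisotropic Gaussian kernel built from an assumed inclusion length and aspect ratio, then thresholded so the high-permeability phase matches a target proportion. Parameter names, the kernel support, and the periodic boundary treatment are all assumptions.

```python
import numpy as np

def binary_field(shape, length, aspect, proportion, seed=0):
    """Binary inclusion field: convolve white noise with an anisotropic
    Gaussian kernel, then threshold at the target phase proportion.
    Parameter names and kernel form are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    ny, nx = 25, 25                        # kernel support (assumed)
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    ker = np.exp(-(y[:, None] / length) ** 2
                 - (x[None, :] / (length * aspect)) ** 2)
    ker /= ker.sum()
    # Periodic 2-D convolution via FFT.
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(ker, s=shape)))
    # Threshold so the high-permeability phase occupies `proportion` of cells.
    cut = np.quantile(field, 1.0 - proportion)
    return (field > cut).astype(int)

field = binary_field((200, 300), length=6.0, aspect=2.0, proportion=0.15)
```

Repeating the draw with new seeds, and with kernel parameters resampled from the MCMC posterior, would yield the stochastic fine-scale ensemble the abstract describes.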
We present advances with a 32-element scalable, segmented dual-mode imager. Scaling up the number of cells results in a 1.4-fold increase in efficiency over a system we deployed last year. Variable plane separation has been incorporated, which further improves the efficiency of the detector. By using 20 cm diameter cells, we demonstrate that we could increase sensitivity by a factor of 6. We further demonstrate gamma-ray imaging via Compton scattering, a feature that allows for powerful dual-mode imaging. Selected results are presented that demonstrate these new capabilities.
The neutron scatter camera was originally developed for a range of SNM detection applications. We are now exploring the feasibility of applications in treaty verification and warhead monitoring using experimentation, maximum likelihood estimation method (MLEM), detector optimization, and MCNP-PoliMi simulations.
A rational approach was used to design polymeric materials for thin-film electronics applications, whereby theoretical modeling was used to determine synthetic targets. Time-dependent density functional theory calculations were used as a tool to predict the electrical properties of conjugated polymer systems. From these results, polymers with desirable energy levels and band-gaps were designed and synthesized. Measurements of optoelectronic properties were performed on the synthesized polymers and the results were compared to those of the theoretical model. From this work, the efficacy of the model was evaluated and new target polymers were identified.
Monge first posed his (L^1) optimal mass transfer problem in 1781: to find a mapping of one distribution into another that minimizes the total distance of transporting mass. The problem remained unsolved in R^n until the late 1990s. This result has since been extended to Riemannian manifolds. In both cases, optimal mass transfer relies upon a key lemma providing a Lipschitz control on the directions of geodesics. We will discuss the Lipschitz control of geodesics in the (sub-Riemannian) Heisenberg group. This provides an important step towards a potential theoretic proof of Monge's problem in the Heisenberg group.
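For context, Monge's (L^1) problem can be stated in standard modern notation (a textbook formulation, not taken from this abstract): given probability measures \mu and \nu on \mathbb{R}^n, one seeks a transport map T pushing \mu forward to \nu that attains

```latex
\inf_{T \,:\, T_{\#}\mu = \nu} \; \int_{\mathbb{R}^n} \lvert x - T(x) \rvert \, d\mu(x),
```

where T_{\#}\mu denotes the pushforward of \mu by T. In the sub-Riemannian setting discussed here, the Euclidean distance |x - T(x)| is replaced by the Carnot-Caratheodory distance on the Heisenberg group.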
A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: how many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and downside risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting that more research is required in this area in order to achieve rigorous solutions for decision makers.
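The scenario-count question can be illustrated with a toy problem. The sketch below applies a Mak-Morton-Wood-style batch estimator to a newsvendor stand-in (all problem data are invented for illustration, not the grid expansion model): the per-batch sample-average-approximation (SAA) optimum over-performs any fixed candidate on its own batch, so averaging the difference across batches yields a statistical estimate of the candidate's optimality gap.

```python
import numpy as np

def mmw_gap(cost=1.0, price=2.5, batches=20, n=200, seed=1):
    """Mak-Morton-Wood-style optimality-gap estimate for a toy newsvendor
    (profit = price*E[min(x, D)] - cost*x), standing in for a stochastic
    program.  Returns the gap estimate and its standard error."""
    rng = np.random.default_rng(seed)
    q = 1.0 - cost / price                 # critical ratio: optimum is the
                                           # q-quantile of demand
    profit = lambda x, d: price * np.minimum(x, d).mean() - cost * x
    # Exact SAA maximizer: an order statistic of the sampled demands.
    saa_opt = lambda d: np.sort(d)[int(np.ceil(len(d) * q)) - 1]
    x_hat = saa_opt(rng.exponential(10.0, n))   # candidate from one sample
    gaps = []
    for _ in range(batches):
        d = rng.exponential(10.0, n)       # a fresh batch of n scenarios
        # The batch SAA optimum dominates any fixed x on its own batch,
        # so each term below is nonnegative and their mean bounds the gap.
        gaps.append(profit(saa_opt(d), d) - profit(x_hat, d))
    gaps = np.array(gaps)
    return gaps.mean(), gaps.std(ddof=1) / np.sqrt(batches)

gap, se = mmw_gap()
```

Increasing the per-batch scenario count n shrinks both the gap estimate and its standard error, which is exactly the trade-off the abstract studies.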
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. 
Progress on preconditioning power grid systems using multi-grid techniques will be presented as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator, both prominent world-class software tools.
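The flavor of projection-based MOR can be sketched briefly. Below is a generic one-sided Krylov reduction with moment matching at s = 0 for a linear time-invariant system; this is a textbook illustration of the general technique, not any specific method developed under the LDRD.

```python
import numpy as np

def krylov_reduce(A, B, C, k):
    """Project x' = A x + B u, y = C x onto span{A^-1 B, A^-2 B, ...}.
    The order-k reduced model matches the leading moments of the transfer
    function H(s) = C (sI - A)^-1 B at s = 0 (generic textbook scheme)."""
    n = A.shape[0]
    V = np.zeros((n, k))
    v = np.linalg.solve(A, B)
    for j in range(k):
        v = v - V[:, :j] @ (V[:, :j].T @ v)   # Gram-Schmidt vs. prior columns
        V[:, j] = v / np.linalg.norm(v)
        v = np.linalg.solve(A, V[:, j])
    return V.T @ A @ V, V.T @ B, C @ V        # reduced Ar, Br, Cr

# Random stable test system: the reduced model reproduces H(0) exactly
# because A^-1 B lies in the projection subspace.
rng = np.random.default_rng(0)
n = 60
A = -(np.eye(n) * 5.0 + 0.1 * rng.standard_normal((n, n)))
B = rng.standard_normal(n)
C = rng.standard_normal(n)
Ar, Br, Cr = krylov_reduce(A, B, C, 5)
H0_full = -C @ np.linalg.solve(A, B)
H0_red = -Cr @ np.linalg.solve(Ar, Br)
```

The reduced (k x k) system can then be simulated, or its solves used as a preconditioner, far more cheaply than the original (n x n) system, which is the speedup mechanism the report exploits.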
Decisions made to address climate change must start with an understanding of the risk of an uncertain future to human systems, which in turn means understanding both the consequence and the probability of a climate-induced impact occurring. In other words, addressing climate change is an exercise in risk-informed policy making, which implies that there is no single correct answer, or even a way to be certain about a single answer; the uncertainty in future climate conditions will always be present and must be taken as a working condition for decision making. In order to better understand the implications of uncertainty for risk and to provide a near-term rationale for policy interventions, this study estimates the impacts of responses to climate change on U.S. state- and national-level economic activity by employing a risk-assessment methodology for evaluating uncertain future climatic conditions. Using the results of the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for climate uncertainty, changes in hydrology over the next 40 years were mapped and then modeled to determine the physical consequences for economic activity and to perform a detailed 70-industry analysis of the economic impacts among the interacting lower-48 states. The analysis determines industry-level effects, employment impacts at the state level, interstate population migration, consequences to personal income, and ramifications for the U.S. trade balance. The conclusions show that the average risk of damage to the U.S. economy from climate change is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs. Further analysis shows that an increase in uncertainty raises this risk. This paper presents the methodology behind the approach, a summary of the underlying models, and the path forward for improving the approach.
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing (i) the score for a single pair of nodes and (ii) the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that accesses only a small portion of the graph and is related to techniques used in personalized PageRank computation. To test the scalability and accuracy of our algorithms, we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
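As background for the pairwise problem, the Katz score itself is a damped sum over walk counts. The short sketch below computes it by direct series truncation, a naive baseline rather than the paper's Lanczos/quadrature algorithm; the damping value is an arbitrary demo choice.

```python
import numpy as np

def katz_pair(A, i, j, alpha=0.05, tol=1e-12, max_len=1000):
    """Katz score between nodes i and j: sum over l >= 1 of
    alpha^l * (number of length-l walks from i to j).  Converges when
    alpha < 1/lambda_max(A)."""
    n = A.shape[0]
    walk = np.zeros(n)
    walk[j] = 1.0
    score = 0.0
    for _ in range(max_len):
        walk = alpha * (A @ walk)          # accumulates alpha^l A^l e_j
        score += walk[i]
        if np.abs(walk).max() < tol:       # remaining tail is negligible
            break
    return score

# 3-node path graph 0 - 1 - 2; compare against the closed form
# K = (I - alpha*A)^-1 - I.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
K = np.linalg.inv(np.eye(3) - 0.05 * A) - np.eye(3)
s = katz_pair(A, 0, 2)
```

Each iteration touches only the neighborhoods reached so far, which hints at why single-pair and top-k queries can avoid computing all pairwise scores at once.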
We describe an integrated information management system for an independent spent fuel dry-storage installation (ISFSI) that can provide for (1) secure and authenticated data collection, (2) data analysis, (3) dissemination of information to appropriate stakeholders via a secure network, and (4) increased public confidence and support of the facility licensing and operation through increased transparency. This information management system is part of a collaborative project between Sandia National Laboratories, Taiwan Power Co., and the Fuel Cycle Materials Administration of Taiwan's Atomic Energy Council, which is investigating how to implement this concept.
A novel multiphase shock tube has been constructed to test the interaction of a planar shock wave with a dense gas-solid field of particles. The particle field is generated by a gravity-fed method that results in a spanwise curtain of 100-micron particles with a volume fraction of about 15%. Interactions with incident shock Mach numbers of 1.67 and 1.95 are reported. High-speed schlieren imaging is used to reveal the complex wave structure associated with the interaction. After the impingement of the incident shock, transmitted and reflected shocks are observed, which lead to differences in flow properties across the streamwise dimension of the curtain. Tens of microseconds after the onset of the interaction, the particle field begins to propagate downstream and disperse. The spread of the particle field, as a function of its position, is seen to be nearly identical for both Mach numbers. Immediately downstream of the curtain, the peak pressures associated with the Mach 1.67 and 1.95 interactions are about 35% and 45% greater, respectively, than in tests without particles. For both Mach numbers tested, the energy and momentum fluxes in the induced flow far downstream are reduced by about 30-40% by the presence of the particle field.
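For reference, the pressure rise the particle-free flow experiences can be computed from the ideal normal-shock (Rankine-Hugoniot) relation for air. This single-phase textbook formula is provided only for context and is not part of the reported measurements.

```python
def shock_pressure_ratio(M, gamma=1.4):
    """Static pressure ratio p2/p1 across an ideal normal shock at
    incident Mach number M; gamma is the ratio of specific heats
    (1.4 for air)."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M * M - 1.0)

for M in (1.67, 1.95):
    # M = 1.67 gives p2/p1 = 3.09; M = 1.95 gives p2/p1 = 4.27
    print(f"M = {M}: p2/p1 = {shock_pressure_ratio(M):.2f}")
```

The 35% and 45% peak-pressure increases quoted in the abstract are relative to these particle-free baselines at the same Mach numbers.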
The very high temperature reactor (VHTR) concept is being developed by the US Department of Energy (DOE) and other groups around the world for the future generation of electricity at high thermal efficiency (> 48%) and co-generation of hydrogen and process heat. This Generation-IV reactor would operate at elevated exit temperatures of 1,000-1,273 K, and the fueled core would be cooled by forced convection of helium gas. For the prismatic-core VHTR, which is the focus of this analysis, the velocity of the hot helium flow exiting the core into the lower plenum (LP) could be 35-70 m/s. The impingement of the resulting gas jets onto the adiabatic plate at the bottom of the LP could produce hot spots, thermal stratification, and inadequate mixing of the gas exiting the vessel to the turbo-machinery for energy conversion. The complex flow field in the LP is further complicated by the presence of large cylindrical graphite posts that support the massive core and the inner and outer graphite reflectors. Because there are approximately 276 channels in the VHTR core from which helium exits into the LP and a total of 155 support posts, the flow field in the LP includes cross flow, multiple jet flow interaction, flow stagnation zones, vortex interaction, vortex shedding, entrainment, large variation in Reynolds number (Re), recirculation, and mixing enhancement and suppression regions. For such a complex flow field, experimental results at operating conditions are not currently available. Instead, the objective of this paper is to numerically simulate the flow field in the LP of a prismatic-core VHTR using Sandia National Laboratories' Fuego, a 3D, massively parallel, generalized computational fluid dynamics (CFD) code with numerous turbulence and buoyancy models and simulation capabilities for complex gas flow fields, with and without thermal effects.
The code predictions for simpler flow fields of single and swirling gas jets, with and without a cross flow, are validated using reported experimental data and theory. The key processes in the LP are identified using a phenomena identification and ranking table (PIRT). It may be argued that a CFD code that accurately simulates simplified, single-effect flow fields of increasing complexity is likely to adequately model the complex flow field in the VHTR LP, subject to future experimental validation. The PIRT process and the spatial and temporal discretizations implemented in the present analysis using Fuego established confidence in the validation and verification (V&V) calculations and in the conclusions reached based on the simulation results. The performed calculations included the helicoid vortex swirl model, the dynamic Smagorinsky large eddy simulation (LES) turbulence model, participating media radiation (PMR), and 1D conjugate heat transfer (CHT). The full-scale, half-symmetry LP mesh used in the LP simulation comprised unstructured hexahedral elements and accounted for the graphite posts, the helium jets, the exterior walls, and the bottom plate with an adiabatic outer surface. Results indicated significant enhancements in heat transfer, flow mixing, and entrainment in the VHTR LP when using swirling inserts at the exit of the helium flow channels into the LP. The impact of using various swirl angles on the flow mixing and heat transfer in the LP is quantified, including the formation of the central recirculation zone (CRZ) and the effect of LP height. Results also showed that, in addition to enhancing mixing, the swirling inserts introduce negligible additional pressure losses and are likely to eliminate the formation of hot spots.
The development of turbulent spots in a hypersonic boundary layer was studied on the nozzle wall of the Boeing/AFOSR Mach-6 Quiet Tunnel. Under quiet flow conditions, the nozzle-wall boundary layer remains laminar and grows very thick over the long nozzle length. This allows the development of large turbulent spots that can be readily measured with pressure transducers. Measurements of naturally occurring wave packets and developing turbulent spots were made. The peak frequencies of these natural wave packets were in agreement with second-mode computations. For a controlled study, the breakdown of disturbances created by spark and glow perturbations was studied at similar freestream conditions. The spark perturbations were the most effective at creating large wave packets that broke down into turbulent spots. The flow disturbances created by the controlled perturbations were analyzed to obtain amplitude criteria for nonlinearity and breakdown as well as the convection velocities of the turbulent spots. Disturbances first grew into linear instability waves and then quickly became nonlinear. Throughout the nonlinear growth of the wave packets, large harmonics were visible in the power spectra. As breakdown began, the peak amplitudes of the instability waves and harmonics decreased into the rising broadband frequencies. Instability waves were still visible on either side of the growing turbulent spots during this breakdown process.
In recent years there has been an unstable supply of the critical diagnostic medical isotope 99mTc. Several concepts and designs have been proposed to produce 99Mo, the parent nuclide of 99mTc, at a commercial scale sufficient to stabilize the world supply. This work lays out a testing and experiment plan for a proposed 2 MW open-pool reactor fueled by low-enriched uranium (LEU) 99Mo production targets. The experiments and tests necessary to support licensing of the reactor design are described, and how these experiments and tests will help establish the safe operating envelope for a medical isotope production reactor is discussed. The experiments and tests will facilitate a focused and efficient licensing process in order to bring online a needed production reactor dedicated to supplying medical isotopes. The Target Fuel Isotope Reactor (TFIR) design calls for an active core region that is approximately 40 cm in diameter and 40 cm in fuel height. It contains up to 150 cylindrical, 1-cm-diameter, LEU oxide fuel pins clad with Zircaloy (zirconium alloy), in an annular hexagonal array on an approximately 2.0 cm pitch surrounded, radially, by a graphite or Be reflector. The reactor is similar to U.S. university reactors in power, hardware, and safety/control systems. Fuel/target pin fabrication is based on existing light water reactor fuel fabrication processes. However, as part of the licensing process, experiments must be conducted to confirm analytical predictions of steady-state power and accident conditions. The experiment and test plan will be conducted in phases and will utilize existing facilities at the U.S. Department of Energy's Sandia National Laboratories. The first phase is to validate the predicted reactor core neutronics at delayed critical, zero power, and very low power. This will be accomplished by using the Sandia Critical Experiment (CX) platform. A full-scale TFIR core will be built in the CX and delayed critical measurements will be taken.
For low-power experiments, fuel pins can be removed after the experiment, and relative power profiles (radial and axial) can be determined using Sandia's metrology lab. In addition to validating the neutronic analyses, experiments confirming the heat transfer properties of the target/fuel pins and core will be conducted. Fuel/target pin power limits can be verified with out-of-pile (electrical heating) thermal-hydraulic experiments. These will yield data on the heat flux across the Zircaloy clad and establish safety margins and operating limits. Using Sandia's Annular Core Research Reactor (ACRR), a 4 MW TRIGA-type research reactor, target/fuel pins can be driven to desired fission power levels for long durations. Post-experiment inspection of the pins can be conducted in the Auxiliary Hot Cell Facility to observe changes in the mechanical properties of the LEU matrix and burn-up effects. Transient tests can also be conducted at the ACRR to observe target/fuel pin performance during accident conditions. Target/fuel pins will be placed in double experiment containment and driven by pulsing the ACRR until target/fuel failure is observed. This will allow for extrapolation of analytical work to confirm safety margins.
Although planar heterostructures dominate current solid-state lighting (SSL) architectures, 1D nanowires have distinct and advantageous properties that may eventually enable higher-efficiency, longer-wavelength, and cheaper devices. However, in order to fully realize the potential of nanowire-based SSL, several challenges must be addressed in the areas of controlled nanowire synthesis, nanowire device integration, and understanding and controlling the nanowire electrical, optical, and thermal properties. Here, recent results are reported on the aligned growth of GaN and III-nitride core-shell nanowires, along with extensive results providing insights into the nanowire properties obtained using cutting-edge structural, electrical, thermal, and optical nanocharacterization techniques. A new top-down method for fabricating periodic arrays of GaN nanorods and subsequent nanorod LED fabrication is also presented.
Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.
A conceptual structure for performance assessments (PAs) for radioactive waste disposal facilities and other complex engineered facilities based on the following three basic conceptual entities is described: EN1, a probability space that characterizes aleatory uncertainty; EN2, a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and EN3, a probability space that characterizes epistemic uncertainty. The implementation of this structure is illustrated with results from PAs for the Waste Isolation Pilot Plant and the proposed Yucca Mountain repository for high-level radioactive waste.
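The three-entity structure maps naturally onto a nested sampling loop. The placeholder sketch below (the distributions and consequence model are invented for illustration, not taken from the cited PAs) samples an epistemic parameter (EN3), propagates many aleatory futures (EN1) through a consequence function (EN2), and records the resulting expected consequence for each epistemic draw.

```python
import numpy as np

def pa_nested_sketch(n_epistemic=100, n_aleatory=500, seed=0):
    """Nested EN1/EN2/EN3 loop.  Outer loop: epistemic uncertainty (EN3).
    Inner loop: aleatory futures (EN1) fed to a consequence function (EN2).
    Output: one expected consequence per epistemic draw; the spread of
    these values expresses epistemic uncertainty about the expectation."""
    rng = np.random.default_rng(seed)
    expected = np.empty(n_epistemic)
    for e in range(n_epistemic):
        rate = rng.uniform(0.5, 2.0)                     # EN3: fixed-but-unknown
        times = rng.exponential(1.0 / rate, n_aleatory)  # EN1: random futures
        consequence = np.exp(-times)                     # EN2: placeholder model
        expected[e] = consequence.mean()
    return expected

results = pa_nested_sketch()
```

In an actual PA the outer distribution encodes expert-assessed parameter uncertainty and the inner expectation is compared against a regulatory limit; the loop structure is the same.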
In this work, we describe a novel design for an H2SO4 decomposer. The decomposition of H2SO4 to produce SO2 is a common processing operation in the sulfur-based thermochemical cycles for hydrogen production, where acid decomposition takes place at 850°C in the presence of a catalyst. The combination of high temperature and sulfuric acid creates a very corrosive environment that presents significant design challenges. The new decomposer design is based on a bayonet-type heat exchanger tube with the annular space packed with a catalyst. The unit is constructed of silicon carbide and other highly corrosion-resistant materials. The new design integrates acid boiling, superheating, decomposition, and heat recuperation into a single process and eliminates the problems of corrosion and failure of high-temperature seals encountered in previous testing with metallic construction materials. The unit was tested by varying the acid feed rate and the decomposition temperature and pressure.
This report summarizes a series of three-dimensional simulations for the Bayou Choctaw Strategic Petroleum Reserve. The U.S. Department of Energy plans to leach two new caverns and convert one of the existing caverns within the Bayou Choctaw salt dome to expand its petroleum reserve storage capacity. An existing finite element mesh from previous analyses is modified by changing the locations of two caverns. The structural integrity of the three expansion caverns and the interaction between all the caverns in the dome are investigated. The impacts of the expansion on underground creep closure, surface subsidence, infrastructure, and well integrity are quantified. Two scenarios were used for the duration and timing of workover conditions where wellhead pressures are temporarily reduced to atmospheric pressure. The three expansion caverns are predicted to be structurally stable against tensile failure for both scenarios. Dilatant failure is not expected within the vicinity of the expansion caverns. Damage to surface structures is not predicted and there is not a marked increase in surface strains due to the presence of the three expansion caverns. The wells into the caverns should not undergo yield. The results show that from a structural viewpoint, the locations of the two newly proposed expansion caverns are acceptable, and all three expansion caverns can be safely constructed and operated.
Magnesium batteries are alternatives to lithium-ion and nickel-metal hydride secondary batteries owing to magnesium's abundance, safety of operation, and lower toxicity of disposal. The divalency of the magnesium ion and its chemistry pose some difficulties for its general and industrial use. This work developed a continuous and fibrous nanoscale network of the cathode material through the use of electrospinning, with the goal of enhancing the performance and reactivity of the battery. The system was characterized and preliminary tests were performed on the constructed battery cells. We were successful in building and testing a series of electrochemical systems that demonstrated good cyclability, maintaining 60-70% of discharge capacity after more than 50 charge-discharge cycles.
This report summarizes the findings of a five-month LDRD project funded through Sandia's NTM Investment Area. The project was aimed at providing the foundation for the development of advanced functional materials through the application of ultrathin coatings of microporous or mesoporous materials onto the surface of substrates such as silicon wafers. Prior art teaches that layers of microporous materials such as zeolites may be applied as, e.g., sensor platforms or gas separation membranes. These layers, however, are typically several microns to several hundred microns thick. For many potential applications, vast improvements in the response of a device could be realized if the thickness of the porous layer were reduced to tens of nanometers. However, a basic understanding of how to synthesize or fabricate such ultra-thin layers is lacking. This report describes traditional and novel approaches to the growth of layers of microporous materials on silicon wafers. The novel approaches include reduction of the quantity of nutrients available to grow the zeolite layer through minimization of solution volume, and reaction of organic base (template) with thermally-oxidized silicon wafers under a steam atmosphere to generate ultra-thin layers of zeolite MFI.
Naturally occurring clay minerals provide a distinctive material for carbon capture and carbon dioxide sequestration. Swelling clay minerals, such as the smectite variety, possess an aluminosilicate structure that is controlled by low-charge layers that readily expand to accommodate water molecules and, potentially, carbon dioxide. Recent experimental studies have demonstrated the efficacy of intercalating carbon dioxide in the interlayer of layered clays, but little is known about the molecular mechanisms of the process or the extent of carbon capture as a function of clay charge and structure. A series of molecular dynamics simulations and vibrational analyses has been completed to assess the molecular interactions associated with incorporation of CO2 in the interlayer of montmorillonite clay and to help validate the models against experimental observation.
Many U.S. nuclear power plants are approaching 40 years of age, and there is a desire to extend their life for up to 100 total years. Safety-related cables were originally qualified for nuclear power plant applications based on IEEE Standards published in 1974. The qualifications involved procedures to simulate 40 years of life under ambient power plant aging conditions followed by a simulated loss-of-coolant accident (LOCA). Over the past 35 years or so, substantial efforts were devoted to determining whether the aging assumptions allowed by the original IEEE Standards could be improved upon. These studies led to better accelerated aging methods, so that more confident 40-year lifetime predictions became available. Since there is now a desire to potentially extend the life of nuclear power plants well beyond the original 40-year life, there is an interest in reviewing and critiquing the current state of the art in simulating cable aging. These are two of the goals of this report, whose discussion concentrates on the progress made over the past 15 years or so and highlights the most thorough and careful published studies. An additional goal of the report is to suggest work that might prove helpful in answering some of the questions and dealing with some of the issues that still remain with respect to simulating the aging and predicting the lifetimes of safety-related cable materials.
In attempting to detect and map out underground facilities, whether they be large-scale hardened deeply buried targets (HDBTs) or small-scale tunnels for clandestine border or perimeter crossing, seismic imaging using reflections from the tunnel interface has been seen as one of the better ways to both detect and delineate tunnels from the surface. The large seismic impedance contrast at the tunnel/rock boundary should provide a strong, distinguishable seismic response, but in practice such strong indicators are often lacking. One explanation for the lack of a good seismic reflection at such a strong-contrast boundary is that the damage caused by the tunneling itself creates a zone of altered seismic properties that significantly changes the nature of this boundary. This report examines existing geomechanical data that define the extent of the excavation damage zone around underground tunnels and its potential impact on rock properties such as P-wave and S-wave velocities. The data presented in this report are associated with sites used for the development of underground repositories for the disposal of radioactive waste; these sites were excavated in volcanic tuff (Yucca Mountain) and granite (the HRL in Sweden and the URL in Canada). Using the data from Yucca Mountain, a numerical simulation effort was undertaken to evaluate the effects of the damage zone on seismic responses. Calculations were performed using the parallelized version of the time-domain finite-difference seismic wave propagation code developed in the Geophysics Department at Sandia National Laboratories. These numerical simulations indicate that the damage zone does not have a significant effect on the tunnel response, for either a purely elastic or an anelastic case. However, they also reveal that the largest responses are not true reflections but rather reradiated Stoneley waves generated at the air/earth interface of the tunnel.
Because of this, data processed in the usual way may not correctly image the tunnel. This report represents a preliminary step in the development of a methodology to convert numerical predictions of rock properties to an estimation of the extent of rock damage around an underground facility and its corresponding seismic velocity, and the corresponding application to design a testing methodology for tunnel detection.
We have developed a mature laboratory at Sandia to measure interfacial rheology, using a combination of home-built, commercially available, and customized commercial tools. An Interfacial Shear Rheometer (KSV ISR-400) was modified and the software improved to increase sensitivity and reliability. Another shear rheometer, a TA Instruments AR-G2, was equipped with a du Noüy ring, bicone geometry, and a double wall ring. These interfacial attachments were compared to each other and to the ISR. The best results with the AR-G2 were obtained with the du Noüy ring. A Micro-Interfacial Rheometer (MIR) was developed in house to obtain the much higher sensitivity given by a smaller probe. However, it was found to be difficult to apply this technique for highly elastic surfaces. Interfaces also exhibit dilatational rheology when the interface changes area, such as occurs when bubbles grow or shrink. To measure this rheological response we developed a Surface Dilatational Rheometer (SDR), in which changes in surface tension with surface area are measured during the oscillation of the volume of a pendant drop or bubble. All instruments were tested with various surfactant solutions to determine the limitations of each. In addition, foaming capability and foam stability were tested and compared with the rheology data. It was found that there was no clear correlation of surface rheology with foaming/defoaming with different types of surfactants, but, within a family of surfactants, rheology could predict the foam stability. Diffusion of surfactants to the interface and the behavior of polyelectrolytes were two subjects studied with the new equipment. Finally, surface rheological terms were added to a finite element Navier-Stokes solver and preliminary testing of the code was completed. Recommendations for improved implementation were given.
When completed we plan to use the computations to better interpret the experimental data and account for the effects of the underlying bulk fluid.
Because of their penetrating power, energetic neutrons and gamma rays ({approx}1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.
The corrosion behavior of A516 carbon steel was evaluated to determine the effect of the dissolved chloride content in molten binary Solar Salt. Corrosion tests were conducted in a molten salt consisting of a 60-40 weight ratio of NaNO{sub 3} and KNO{sub 3} at 400{sup o}C and 450{sup o}C for up to 800 hours. Chloride concentrations of 0, 0.5 and 1.0 wt.% were investigated to determine the effect on corrosion of this impurity, which can be present in comparable amounts in commercial grades of the constituent salts. Corrosion rates were determined by descaled weight losses, corrosion morphology was examined by metallographic sectioning, and the types of corrosion products were determined by x-ray diffraction. Corrosion proceeded by uniform surface scaling and no pitting or intergranular corrosion was observed. Corrosion rates increased significantly as the concentration of dissolved chloride in the molten salt increased. The adherence of surface scales, and thus their protective properties, was degraded by dissolved chloride, fostering more rapid corrosion. Magnetite was the only corrosion product formed on the carbon steel specimens, regardless of chloride content or temperature.
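Descaled weight loss converts to a thickness-loss corrosion rate through the specimen area, alloy density, and exposure time. A minimal sketch of this standard conversion (illustrative only; the function name and unit choices are ours, not taken from the study):

```python
def corrosion_rate_um_per_year(mass_loss_mg, area_cm2, density_g_cm3, hours):
    """Convert a descaled weight loss to an annualized thickness-loss rate.

    Assumes uniform surface scaling, as observed for the carbon steel here.
    """
    # uniform thickness loss (cm) = mass loss (g) / (area (cm^2) * density (g/cm^3))
    thickness_cm = (mass_loss_mg / 1000.0) / (area_cm2 * density_g_cm3)
    # convert cm -> micrometers and annualize (8760 h per year)
    return thickness_cm * 1.0e4 * (8760.0 / hours)
```

For example, a hypothetical 78.7 mg loss over 10 cm2 of steel (density 7.87 g/cm3) in an 800 h exposure corresponds to a 10 um thickness loss, or roughly 110 um/yr.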
Technology Readiness Levels (TRLs) have been used extensively since the 1970s, especially in the National Aeronautics and Space Administration (NASA). Their application was recommended by the General Accounting Office in 1999 to be used for major Department of Defense acquisition projects. Manufacturing Readiness Levels (MRLs) have been proposed for improving the way manufacturing risks and readiness are identified; they were introduced to the defense community in 2005, but have not been used as broadly as TRLs. Originally TRLs were used to assess the readiness of a single technology. With the emergence of more complex systems and systems of systems, it has been increasingly recognized that TRLs have limitations, especially when considering integration of complex systems. Therefore, it is important to use TRLs in the correct context. Details on TRLs and MRLs are reported in this paper. More recent indices to establish a better understanding of the integrated readiness state of systems are presented. Newer readiness indices, System Readiness Levels (SRLs) and Integration Readiness Levels (IRLs), are discussed and their limitations and advantages are presented, along with an example of computing SRLs. It is proposed that a modified SRL be considered that explicitly includes the MRLs and a modification of the TRLs to include the Integrated Technology Index (ITI) and/or the Advancement Degree of Difficulty index proposed by NASA. Finally, the use of indices to perform technology assessments is placed into the overall context of technology management, recognizing that factors to transition and manage technology include cost, schedule, manufacturability, integration readiness, and technology maturity.
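One published SRL formulation (due to Sauser and colleagues) forms each technology's SRL from normalized TRL and IRL values. A hedged sketch of that formulation, with our own variable names and a simplification of the full matrix treatment (this is not necessarily the exact computation used in the paper's example):

```python
def system_readiness_levels(trl, irl):
    """Sauser-style SRL: for each technology i, average the products of the
    normalized IRL between i and j with the normalized TRL of j, over the
    integrations that are actually defined (irl[i][j] > 0).

    TRLs and IRLs are on the usual 1-9 scales; SRL values land in (0, 1].
    Returns the per-technology SRL vector and the composite (mean) SRL.
    """
    n = len(trl)
    srl = []
    for i in range(n):
        defined = [j for j in range(n) if irl[i][j] > 0]
        total = sum((irl[i][j] / 9.0) * (trl[j] / 9.0) for j in defined)
        srl.append(total / len(defined))
    return srl, sum(srl) / n
```

A fully mature pair of technologies (all TRLs and IRLs at 9) yields SRLs of 1.0, the top of the normalized scale.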
Sandia National Laboratories (SNL) has conducted radiation effects testing for the Department of Energy (DOE) and other contractors supporting the DOE since the 1960's. Over this period, the research reactor facilities at Sandia have had a primary mission to provide appropriate nuclear radiation environments for radiation testing and qualification of electronic components and other devices. The current generation of reactors includes the Annular Core Research Reactor (ACRR), a water-moderated pool-type reactor, fueled by elements constructed from UO2-BeO ceramic fuel pellets, and the Sandia Pulse Reactor III (SPR-III), a bare metal fast burst reactor utilizing a uranium-molybdenum alloy fuel. The SPR-III is currently defueled. The SPR Facility (SPRF) has hosted a series of critical experiments. A purpose-built critical experiment was first operated at the SPRF in the late 1980's. This experiment, called the Space Nuclear Thermal Propulsion Critical Experiment (CX), was designed to explore the reactor physics of a nuclear thermal rocket motor. This experiment was fueled with highly-enriched uranium carbide fuel in annular water-moderated fuel elements. The experiment program was completed and the fuel for the experiment was moved off-site. A second critical experiment, the Burnup Credit Critical Experiment (BUCCX) was operated at Sandia in 2002. The critical assembly for this experiment was based on the assembly used in the CX modified to accommodate low-enriched pin-type fuel in water moderator. This experiment was designed as a platform in which the reactivity effects of specific fission product poisons could be measured. Experiments were carried out on rhodium, an important fission product poison. The fuel and assembly hardware for the BUCCX remains at Sandia and is available for future experimentation. The critical experiment currently in operation at the SPRF is the Seven Percent Critical Experiment (7uPCX). 
This experiment is designed to provide benchmark reactor physics data to support validation of the reactor physics codes used to design commercial reactor fuel elements in an enrichment range above the current 5% enrichment cap. A first set of critical experiments in the 7uPCX has been completed. More experiments are planned in the 7uPCX series. The critical experiments at Sandia National Laboratories are currently funded by the US Department of Energy Nuclear Criticality Safety Program (NCSP). The NCSP has committed to maintain the critical experiment capability at Sandia and to support the development of a critical experiments training course at the facility. The training course is intended to provide hands-on experiment experience for the training of new and re-training of practicing Nuclear Criticality Safety Engineers. The current plans are for the development of the course to continue through the first part of fiscal year 2011, with the development culminating in the delivery of a prototype of the course in the latter part of the fiscal year. The course will be available in fiscal year 2012.
When DC or AC electric fields are applied to a thin liquid film, the interface may become unstable and form a series of pillars. We examine how the presence of a second liquid interface influences pillar dynamics and morphologies. For perfect dielectric films, linear stability analysis of a lubrication-approximation-based model shows that the root mean square voltage governs the pillar behavior. For leaky dielectric films, Floquet theory is applied to carry out the linear stability analysis, and reveals that the accumulation of free charge at each interface depends on the conductivities in the adjoining phases and that high frequencies of the AC electric field may be used to control this accumulation at each interface independently. The results presented here may be of interest for the controlled creation of surface topographical features in applications such as patterned coatings and microelectronics.
Contaminant sensor placement is often cast as an optimization problem to minimize objectives such as the probability of failed detection or public health impact. In the case of an actual incident, the sensor network data will also be utilized for event characterization to estimate source location, size and hazard areas. We present a sensor placement methodology to optimize for event characterization performance, and also compare the results to traditional placement objectives.
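Casting placement as an optimization can be illustrated with a generic greedy coverage heuristic under hypothetical inputs; this is a common baseline for such problems, not the specific objectives or solver used in this work:

```python
def greedy_placement(scenarios, candidates, budget):
    """Pick up to `budget` sensor locations to maximize the number of
    detected contamination scenarios; each scenario is given as the set
    of candidate nodes at which that event would be detectable.

    Maximizing detections is a proxy for minimizing failed detection.
    """
    chosen = []
    undetected = set(range(len(scenarios)))
    for _ in range(budget):
        best, best_gain = None, 0
        for node in candidates:
            if node in chosen:
                continue
            # marginal gain: newly detected scenarios if we add this node
            gain = sum(1 for s in undetected if node in scenarios[s])
            if gain > best_gain:
                best, best_gain = node, gain
        if best is None:  # no remaining sensor detects anything new
            break
        chosen.append(best)
        undetected -= {s for s in undetected if best in scenarios[s]}
    return chosen
```

With three hypothetical scenarios detectable at nodes {0,1}, {1,2}, and {3}, a budget of two sensors picks node 1 first (covering two scenarios) and node 3 second.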
Geochemical Engineering Design (GED) is based on applications of the principles and various computer models that describe the biogeochemistry and physics of removal of contaminants from water by adsorption, precipitation and filtration. It can be used to optimize or evaluate the efficiency of all phases of in situ recovery (ISR). The primary tools of GED are reactive transport models; this talk describes the potential application of the HYDROGEOCHEM family of codes to ISR. The codes can describe a complete suite of equilibrium or kinetic aqueous complexation, adsorption-desorption, precipitation-dissolution, redox, and acid-base reactions in variably saturated media with density-dependent fluid flow. Applications to ISR are illustrated with simulations of (1) the effectiveness of a reactive barrier to prevent off-site uranium migration and (2) evaluation of the effect of sorption hysteresis on natural attenuation. In the first example, it can be seen that the apparent effectiveness of the barrier depends on monitoring location and that it changes over time. This is due to changes in pH, saturation of sorption sites, as well as the geometry of the flow field. The second simulation shows how sorption hysteresis leads to observable attenuation of a uranium contamination plume. Different sorption mechanisms including fast (or reversible), slow, and irreversible sorption were simulated. The migration of the dissolved and total uranium plumes for the different cases is compared, and the simulations show that when 50-100% of the sites have slow desorption rates, the center of mass of the dissolved uranium plume begins to move upstream. This would correspond to the case in which the plume boundaries begin to shrink as required for demonstration of natural attenuation.
There is a long literature studying the criticality of space reactors immersed in water/sand after a launch accident; however most of these studies evaluate nominal or uniformly compacted system configurations. There is less research on the reactivity consequences of impact, which can cause large structural deformation of reactor components that can result in changes in the reactivity of the system. Predicting these changes is an important component of launch safety analysis. This paper describes new features added to the DAG-MCNP5 neutronics code that allow the criticality analysis of deformed geometries. A CAD-based solid model of the reactor geometry is used to generate an initial mesh for a structural mechanics impact calculation using the PRONTO3D/PRESTO continuum mechanics codes. Boundary conditions and material specifications for the reactivity analysis are attached to the solid model that is then associated with the initial mesh representation. This geometry is then updated with the deformed finite element mesh to perturb node coordinates. DAG-MCNP5 was extended to accommodate two consequences of the large structural deformations: dead elements representing fracture, and small overlaps between adjacent volumes. The dead elements are removed during geometry initialization and adjustments are made to conserve mass. More challenging are small overlaps where adjacent mesh elements contact, which cause the geometric queries to become unreliable. A new point membership test was developed that is tolerant of self-intersecting volumes, and the particle tracking algorithm was adjusted to enable transport through small overlaps. These new features enable DAG-MCNP5 to perform particle transport and criticality eigenvalue calculations on both deformed mesh geometry and CAD geometry with small geometric defects. Detailed impact simulations were performed on an 85-pin space reactor model.
In the most realistic model, which included NaK coolant and water in the impact simulation, the eigenvalue was determined to increase 2.7% due to impact.
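The overlap-tolerant point-membership idea can be illustrated with a toy 2D sketch: a standard even-odd ray-crossing test made more robust by a majority vote over slightly jittered query points, so that a query landing exactly on a tiny overlap or self-intersection does not flip the answer. This is only an analogy to the actual DAG-MCNP5 test, which operates on 3D CAD volumes and differs in detail:

```python
import random

def point_in_polygon(pt, poly):
    """Standard even-odd crossing test along the +x ray."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def robust_membership(pt, poly, tol=1e-6, trials=7, seed=0):
    """Majority vote over jittered copies of the query point, tolerating
    degenerate geometry near the query location."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(trials):
        q = (pt[0] + rng.uniform(-tol, tol), pt[1] + rng.uniform(-tol, tol))
        votes += point_in_polygon(q, poly)
    return 2 * votes > trials
```

For a unit square, interior points vote inside and distant points vote outside regardless of small perturbations.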
High-energy-density dielectrics with low temperature coefficients of capacitance that can be integrated into systems are needed for extreme-environment, defense, and automotive applications. The synthesis of high-purity chemically prepared Ca(Zr,Ti)O3 powders is described and has resulted in the lowering of conventional firing temperatures by over 100 C. Direct-write aerosol spray deposition techniques have been used to fabricate high-quality single-layer and multilayer capacitors from these powders. The dielectric constants of the direct-write capacitors are equivalent to those of fired bulk ceramics. Our presentation emphasizes the synthesis, phase evolution and microstructure development that have resulted in dielectrics with energy densities in excess of 3 J/cm3 with less than 1% change in dielectric constant over a 200 C temperature range.
Geological carbon sequestration relies on the principle that CO{sub 2} injected deep into the subsurface is unable to leak to the atmosphere. Structural trapping by a relatively impermeable caprock (often mudstone such as a shale) is the main trapping mechanism that is currently relied on for the first hundreds of years. Many of the pores of the caprock are of micrometer to nanometer scale. However, the distribution, geometry and volume of porosity at these scales are poorly characterized. Differences in pore shape and size can cause variation in capillary properties and fluid transport resulting in fluid pathways with different capillary entry pressures in the same sample. Prediction of pore network properties for distinct geologic environments would result in significant advancement in our ability to model subsurface fluid flow. Specifically, prediction of fluid flow through caprocks of geologic CO{sub 2} sequestration reservoirs is a critical step in evaluating the risk of leakage to overlying aquifers. The micro- and nanoporosity was analyzed in four mudstones using small angle neutron scattering (SANS). These mudstones are caprocks of formations that are currently under study or being used for carbon sequestration projects and include the Marine Tuscaloosa Group, the Lower Tuscaloosa Group, the upper and lower shale members of the Kirtland Formation, and the Pennsylvanian Gothic shale. Total organic carbon varies from <0.3% to 4% by weight. Expandable clay contents range from 10% to {approx}40% in the Gothic shale and Kirtland Formation, respectively. Neutrons effectively scatter from interfaces between materials with differing scattering length density (i.e. minerals and pores). The intensity of scattered neutrons, I(Q), where Q is the scattering vector, gives information about the volume of pores and their arrangement in the sample. The slope of the scattering data when plotted as log I(Q) vs. 
log Q provides information about the fractality or geometry of the pore network. Results from this study, combined with high-resolution TEM imaging, provide insight into the differences in volume and geometry of porosity between these various mudstones.
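The slope of log I(Q) vs. log Q can be extracted with an ordinary least-squares fit in log space; for surface fractals the Porod exponent m (with I proportional to Q^-m) relates to the surface fractal dimension via Ds = 6 - m, a standard result of scattering analysis. A minimal, generic sketch on synthetic data (not the fitting code used in the study):

```python
import math

def loglog_slope(q, intensity):
    """Least-squares slope of log10(I) vs log10(Q) over the supplied range."""
    lx = [math.log10(v) for v in q]
    ly = [math.log10(v) for v in intensity]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# synthetic Porod-like data: I ~ Q^-3.5 corresponds to Ds = 6 - 3.5 = 2.5
q = [1e-3, 3e-3, 1e-2, 3e-2, 1e-1]
slope = loglog_slope(q, [v ** -3.5 for v in q])
```

In practice the fit would be restricted to the Q range where power-law scattering holds.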
We present a photonic integrated circuit (PIC) composed of two strongly coupled lasers. This PIC utilizes the dynamics of mutual injection locking to increase the relaxation resonance frequency from 3 GHz to beyond 30 GHz.
Interfacial delamination is often the critical failure mode limiting the performance of polymer/metal interfaces. Consequently, methods that measure the toughness of such interfaces are of considerable interest. One approach for measuring the toughness of a polymer/metal interface is to use the stressed-overlayer test. In this test a metal substrate is coated with a sub-micron thick polymer film to create the interface of interest. An overlayer, typically a few tenths of a micron of sputtered tungsten, is then deposited on top of the polymer in such a way as to generate a very high residual compressive stress within the sputtered layer ({approx} 1-2 GPa). This highly stressed overlayer induces delamination and blister formation. The measured buckle heights and widths are then used in conjunction with a fracture mechanics analysis to infer interfacial toughness. Here we use a finite element, cohesive-zone-based, fracture analysis to perform the required interfacial crack growth simulation. This analysis shows that calculated crack growth is sensitive to the polymer layer thickness even when the layer is only tens of nanometers thick. The inward displacement of the overlayer at the buckle edge, which is enabled by the compliance of the polymer layer, is the primary cause of differences from a rigid substrate idealization.
Low Temperature Cofired Ceramic has proven itself in microelectronics, microsystems (including microfluidic systems), sensors, RF features, and various non-electronic applications. We will discuss selected applications and the processing associated with those applications. We will then focus on our recent work in the area of EMI shielding using full tape thickness features (FTTF) and sidewall metallization. The FTTF is very effective in applications with -150 dB isolation requirements, but presents obvious processing difficulties in full-scale fabrication. The FTTF forms a single continuous solid wall around the volume to be shielded by using sequential punching and feature-filling. We discuss the material incompatibilities and manufacturing considerations that need to be addressed for such structures and show preliminary implementations.
The safe handling of reprocessed fuel addresses several scientific goals, especially when considering the capture and long-term storage of volatile radionuclides that are necessary during this process. Despite not being a major component of the off-gas, radioiodine (I{sub 2}) is particularly challenging, because it is a highly mobile gas and {sup 129}I is a long-lived radionuclide (1.57 x 10{sup 7} years). Therefore, its capture and sequestration is of great interest on a societal level. Herein, we explore novel routes toward the effective capture and storage of iodine. In particular, we report on the novel use of a new class of porous solid-state functional materials (metal-organic frameworks, MOFs), as high-capacity adsorbents of molecular iodine. We further describe the formation of novel glass-composite material (GCM) waste forms from the mixing and sintering of the I{sub 2}-containing MOFs with Bi-Zn-O low-temperature sintering glasses and silver metal flakes. Our findings indicate that, upon sintering, a uniform monolith is formed, with no evidence of iodine loss; iodine is sequestered during the heating process by the in situ formation of AgI. Detailed materials characterization analysis is presented for the GCMs. This includes powder X-ray diffraction, scanning electron microscopy coupled with energy-dispersive spectroscopy (SEM-EDS), thermal analysis (thermogravimetric analysis (TGA)), and chemical durability tests including aqueous leach studies (product consistency test (PCT)), with X-ray fluorescence (XRF) and inductively coupled plasma-mass spectrometry (ICP-MS) of the PCT leachate.
Vapor-deposited, exothermic metal-metal multilayer foils are an ideal class of materials for detailed investigations of pulsed laser-ignited chemical reactions. Created in a pristine vacuum environment by sputter deposition, these high purity materials have well-defined reactant layer thicknesses between 1 and 1000 nm, minimal void density and intimate contact between layers. Provided that layer thicknesses are made small, some reactive metal-metal multilayer foils can be ignited at a point by laser irradiation and exhibit subsequent high-temperature, self-propagating synthesis. With this presentation, we describe the pulsed laser-induced ignition characteristics of a single multilayer system (equiatomic Al/Pt) that exhibits self-propagating synthesis. We show that the thresholds for ignition are dependent on (i) multilayer design and (ii) laser pulse duration. With regard to multilayer design effects on ignition, there is a large range of multilayer periodicity over which ignition threshold decreases as layer thicknesses are made small. We attribute this trend of decreased ignition threshold to reduced mass transport diffusion lengths required for rapid exothermic mixing. With regard to pulse duration effects, we have determined how ignition threshold of a single Al/Pt multilayer varies with pulse duration from 10{sup -2} to {approx} 10{sup -13} sec (wavelength and spot size are held constant). A higher laser fluence is required for ignition when using a single laser pulse {approx} 100 fs or 1 ps compared with nanosecond or microsecond exposure, and we attribute this, in part, to the effects of reactive material being ablated when using the shorter pulse durations. To further understand these trends and other pulsed laser-based processes, our discussion concludes with an analysis of the heat-affected depths in multilayers as a function of pulse duration.
Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability {rho}{infinity}; and (ii) the communication volume of a parallel PageRank solution (link-following {alpha} = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
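The storage overhead and communication cost implied by an overlapping partition can be sketched directly. A small illustration with hypothetical part and edge lists (the paper's actual experiments measure communication volume for PageRank under an additive Schwarz method, which this does not reproduce):

```python
def volume_ratio(parts, n_vertices):
    """Total stored vertices across (possibly overlapping) parts divided by
    the number of distinct vertices; a ratio of 2 means the graph is
    effectively stored twice."""
    return sum(len(p) for p in parts) / n_vertices

def cut_edges(edges, parts):
    """Edges interior to no single part; these require communication."""
    return [e for e in edges if not any(e[0] in p and e[1] in p for p in parts)]
```

Two parts {0,1,2} and {2,3} on a four-vertex graph share vertex 2, giving a volume ratio of 1.25, and only edges spanning the parts without a common host must be communicated.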
Our ability to field useful, nano-enabled microsystems that capitalize on recent advances in sensor technology is severely limited by the energy density of available power sources. The catalytic nanodiode (reported by Somorjai's group at Berkeley in 2005) was potentially an alternative revolutionary source of micropower. Their first reports claimed that a sizable fraction of the chemical energy may be harvested via hot electrons (a 'chemicurrent') that are created by the catalytic chemical reaction. We fabricated and tested Pt/GaN nanodiodes, which eventually produced currents up to several microamps. Our best reaction yields (electrons/CO{sub 2}) were on the order of 10{sup -3}; well below the 75% values first reported by Somorjai (we note they have also been unable to reproduce their early results). Over the course of this Project we have determined that the whole concept of 'chemicurrent', in fact, may be an illusion. Our results conclusively demonstrate that the current measured from our nanodiodes is derived from a thermoelectric voltage; we have found no credible evidence for true chemicurrent. Unfortunately this means that the catalytic nanodiode has no future as a micropower source.
Performing coupled thermomechanical simulations is becoming an increasingly important aspect of nuclear weapon (NW) safety assessments in abnormal thermal environments. While such capabilities exist in SIERRA, they have thus far been used only in a limited sense to investigate NW safety themes. An important limiting factor is the difficulty associated with developing geometries and meshes appropriate for both thermal and mechanical finite element models, which has limited thermomechanical analysis to simplified configurations. This work addresses the issue of how to perform coupled analyses on models where the underlying geometries and associated meshes are different and tailored to their relevant physics. Such an approach will reduce the model building effort and enable previously developed single-physics models to be leveraged in future coupled simulations. A combined-environment approach is presented in this report using SIERRA tools, with quantitative comparisons made between different options in SIERRA. This report summarizes efforts on running a coupled thermomechanical analysis using the SIERRA Arpeggio code.
Arc-jet wind tunnels produce conditions simulating high-altitude hypersonic flight such as occurs upon entry of space craft into planetary atmospheres. They have traditionally been used to study flight in Earth's atmosphere, which consists mostly of nitrogen and oxygen. NASA is presently using arc jets to study entry into Mars' atmosphere, which consists of carbon dioxide and nitrogen. In both cases, a wide variety of chemical reactions take place among the gas constituents and with test articles placed in the flow. In support of those studies, we made measurements using a residual gas analyzer (RGA) that sampled the exhaust stream of a NASA arc jet. The experiments were conducted at the HYMETS arc jet (Hypersonic Materials Environmental Test System) located at the NASA Langley Research Center, Hampton, VA. This report describes our RGA measurements, which are intended to be used for model validation in combination with similar measurements on other systems.
Conjugated polymers such as poly(p-phenylenevinylene) (PPV) have attracted a great deal of attention due to their optoelectronic properties. The ability to control the lateral spatial resolution of conjugated polymers will allow for improved integration into electronic devices. Here, we present a method for photo-patterning xanthate precursor polymers leading to micron scale spatial control of conjugated poly(p-phenylenevinylene). Our photolithographic process is simple and direct, and should be amenable to a range of other xanthate or dithiocarbamate precursor PPV polymers.
The objectives are: (1) To increase the adoption of Trilinos throughout DOE research communities that principally write Fortran, e.g. climate & combustion researchers; and (2) To maintain the OOP philosophy of the Trilinos project while using idioms that feel natural to Fortran programmers.
When estimating parameters for a material model from experimental data collected during a separate-effects physics experiment, the quality of fit is only part of the required information. Also necessary is the uncertainty in the estimated parameters, so that uncertainty quantification and model validation can be performed at the full system level. The uncertainty and quality of fit of the data are often not available and should be considered when fitting the data to a specified model. There are many techniques available to fit data to a material model, and a few of them are presented in this work using a simple acoustic emission dataset. The parameters and their affiliated uncertainties are estimated using a variety of techniques and compared.
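The general idea of returning not just best-fit parameters but their uncertainties can be shown with ordinary least squares on a straight-line model; this is illustrative only, as the techniques compared in the paper apply to more general material models:

```python
def linear_fit_with_uncertainty(x, y):
    """Fit y = a + b*x and return (a, b, sigma_a, sigma_b), with 1-sigma
    parameter uncertainties derived from the residual variance (assumes
    independent, identically distributed noise)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sxx
    a = my - b * mx
    residuals = [v - (a + b * u) for u, v in zip(x, y)]
    s2 = sum(r * r for r in residuals) / (n - 2)  # residual variance
    return a, b, (s2 * (1.0 / n + mx * mx / sxx)) ** 0.5, (s2 / sxx) ** 0.5
```

Noise-free data recovers the exact parameters with vanishing uncertainty; noisy data yields nonzero sigma values that propagate into system-level uncertainty quantification.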
Techniques appear promising for constructing and integrating an automated detect-and-characterize capability for epidemics. Working from biosurveillance data, the approach provides information on the particular, ongoing outbreak. Potential uses include crisis management, planning, and resource allocation; the parameter estimation capability is well suited to providing the input parameters for an agent-based model, namely index cases, time of infection, and infection rate. Non-communicable diseases are easier to characterize than communicable ones: a small anthrax attack can be characterized well with 7-10 days of post-detection data, plague takes longer, and large attacks are very easy.
Sandia National Laboratories has developed a vehicle-scale demonstration hydrogen storage system as part of a Work for Others project funded by General Motors. This Demonstration System was developed based on the properties and characteristics of sodium alanates, which are complex metal hydrides. The technology resulting from this program was developed to enable heat and mass management during refueling and hydrogen delivery to an automotive system. During this program the Demonstration System was subjected to repeated hydriding and dehydriding cycles to enable comparison of the vehicle-scale system performance to small-scale sample data. This paper describes the experimental results of life-cycle studies of the Demonstration System. Two of the four hydrogen storage modules of the Demonstration System were used for this study. A well-controlled and repeatable sorption cycle was defined for the repeated cycling, which began after the system had already been cycled forty-one times. After the first nine repeated cycles, a significant hydrogen storage capacity loss was observed. It was suspected that the sodium alanates had been affected either morphologically or by contamination. The mechanisms leading to this initial degradation were investigated and results indicated that water and/or air contamination of the hydrogen supply may have led to oxidation of the hydride and possibly kinetic deactivation. Subsequent cycles showed continued capacity loss indicating that the mechanism of degradation was gradual and transport or kinetically limited. A materials analysis was then conducted using established methods including treatment with carbon dioxide to react with sodium oxides that may have formed. The module tubes were sectioned to examine chemical composition and morphology as a function of axial position. The results will be discussed.
Many ballistic fibers have been developed and utilized in soft body armors for military and law enforcement personnel. However, evaluating the ballistic resistance of these fibers is complex and challenging. In applications, the fibers are subjected to high-speed transverse impact by external objects, so it is desirable to understand the dynamic response of the fibers under transverse impact. Transverse wave speed has been recognized as a critical parameter for ballistic-resistant performance because a faster transverse wave dissipates the external impact energy more quickly. In this study, we employed a split Hopkinson pressure bar (SHPB) and a gas gun to conduct high-speed impacts on a Kevlar fiber bundle in the transverse direction at different velocities. The deformation of the fiber bundle was photographed with high-speed digital cameras. Additional sensitive transducers were employed to provide more quantitative information on the fiber response during such a transverse impact. The experimental results were used for quantitative verification of current analytical models.
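As a back-of-the-envelope sketch of the wave speeds at issue, the classical Smith analysis of transverse impact on a fiber gives a lab-frame transverse (kink) wave speed in terms of the longitudinal wave speed and the impact-induced strain. The modulus, density, and strain values below are nominal, assumed Kevlar figures, not measurements from this study.

```python
import math

def longitudinal_wave_speed(E, rho):
    # Elastic longitudinal (strain) wave speed: c = sqrt(E / rho)
    return math.sqrt(E / rho)

def transverse_wave_speed(c, strain):
    # Lab-frame transverse wave speed from the classical Smith analysis:
    # U = c * (sqrt(eps * (1 + eps)) - eps)
    return c * (math.sqrt(strain * (1.0 + strain)) - strain)

E = 112e9      # Pa, nominal Kevlar tensile modulus (assumed)
rho = 1440.0   # kg/m^3, nominal Kevlar density (assumed)
c = longitudinal_wave_speed(E, rho)            # roughly 8.8 km/s
U = transverse_wave_speed(c, strain=0.01)      # at an assumed 1% strain
```

The transverse wave is much slower than the longitudinal wave at small strain, which is why maximizing it is a design goal for ballistic fibers.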
We evaluate the stability of electron current flow in high-power magnetically insulated transmission lines (MITLs). A detailed model of electron flow in cross-field gaps yields a dispersion relation for electromagnetic (EM) transverse magnetic waves [R. C. Davidson et al., Phys. Fluids 27, 2332 (1984)] which is solved numerically to obtain growth rates for unstable modes in various sheath profiles. These results are compared with two-dimensional (2D) EM particle-in-cell (PIC) simulations of electron flow in high-power MITLs. We find that the macroscopic properties (charge and current densities and self-fields) of the equilibrium profiles observed in the simulations are well represented by the laminar-flow model of Davidson et al. Idealized simulations of sheared flow in electron sheaths yield growth rates for both long (diocotron) and short (magnetron) wavelength instabilities that are in good agreement with the dispersion analysis. We conclude that electron sheaths that evolve self-consistently from space-charge-limited emission of electrons from the cathode in well-resolved 2D EM PIC simulations form stable profiles.
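Comparing simulated growth rates with a dispersion analysis typically involves fitting the exponential growth phase of a mode amplitude from the PIC history. A minimal sketch of that post-processing step, using synthetic data with a known growth rate (not output from the simulations described here):

```python
import math

def fit_growth_rate(times, amplitudes):
    # Least-squares slope of ln|A| versus t gives the linear growth rate gamma
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Synthetic mode-amplitude history with known growth rate gamma = 2.0
# (arbitrary units; a real analysis would window to the linear-growth phase)
times = [0.1 * i for i in range(50)]
amps = [1e-6 * math.exp(2.0 * t) for t in times]
gamma = fit_growth_rate(times, amps)
```

In practice the fit window must exclude the initial noise-seeded transient and the late nonlinear saturation.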
Numerical simulations using the One-Dimensional-Turbulence model are compared to water-tank measurements [B. J. Sayler and R. E. Breidenthal, J. Geophys. Res. 103 (D8), 8827 (1998)] emulating convection and entrainment in stratiform clouds driven by cloud-top cooling. Measured dependences of the entrainment rate on Richardson number, molecular transport coefficients, and other experimental parameters are reproduced. Additional parameter variations suggest more complicated dependences of the entrainment rate than previously anticipated. A simple algebraic model indicates the ways in which laboratory and cloud entrainment behaviors might be similar and different.
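Simple algebraic entrainment models are commonly written as a power law in the Richardson number, E = A·Ri^(-n); a minimal sketch of recovering such parameters from data by log-log regression. The data values below are synthetic illustrations, not the water-tank measurements.

```python
import math

def fit_power_law(ri, e):
    # Fit E = A * Ri^(-n) by linear least squares in log-log space
    x = [math.log(r) for r in ri]
    y = [math.log(v) for v in e]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
            sum((a - xbar) ** 2 for a in x)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope  # returns (A, n)

# Synthetic entrainment-rate data generated with A = 0.2, n = 1.3
ri = [5.0, 10.0, 20.0, 40.0, 80.0]
e = [0.2 * r ** -1.3 for r in ri]
A, n_exp = fit_power_law(ri, e)
```

A fitted exponent that drifts with molecular transport coefficients or other experimental parameters would signal the more complicated dependences the simulations suggest.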
Quantifying the flux and energy of charge-exchange neutrals to the walls of fusion experiments is important to understanding wall erosion and energy balance. Quantification of these fluxes is made much more difficult by their very strong poloidal and toroidal variations. To facilitate such measurements, we have been developing compact palladium metal oxide semiconductor (Pd-MOS) detectors. These devices are dosimetric detectors, which can evaluate differences between plasma discharges. To become widely used, however, such detectors must be made resistant to UV- and x-ray-induced damage, as well as high-energy particle bombardment. We report here on the fabrication of Schottky-diode Pd-MOS devices in which we have minimized the oxide thickness (to reduce the production of charges from UV and x-rays) and increased the Pd overlayer thickness (to reduce charge production from high-energy particles). The fabrication has been facilitated through the use of an array of metallic posts to improve the Pd film adhesion. The efficacy of the film adhesion and a comparison with standard detectors will be examined. Testing and calibration of the detectors are reported as a function of hydrogen flux and energy.
This paper describes mitigation technologies that are intended to enable the deployment of advanced hydrogen storage technologies for early market and automotive fuel cell applications. Solid-state hydrogen storage materials provide an opportunity for a dramatic increase in gravimetric and volumetric energy storage density. Systems and technologies based on the advanced materials have been developed and demonstrated within the laboratory [1,2], and in some cases, integrated with fuel cell systems. The R&D community will continue to develop these technologies for an ever increasing market of fuel cell technologies, including forklift, light-cart, APU, and automotive systems. Solid-state hydrogen storage materials are designed and developed to readily release, and in some cases, react with diatomic hydrogen. This favorable behavior is often accomplished with morphology design (high surface area), catalytic additives (titanium, for example), and high-purity metals (such as aluminum, lanthanum, or alkali metals). These favorable hydrogen reaction characteristics often have a related, yet less desirable effect: sensitivity and reactivity during exposure to ambient contamination and out-of-design environmental conditions. Accident scenarios resulting in this less favorable reaction behavior must also be managed by the system developer to enable technology deployment and market acceptance. Two important accident scenarios are identified through hazards and risk analysis methods. The first involves a breach in plumbing or tank resulting from a collision. The possible consequence of this scenario is analyzed through experimentally based chemical kinetic and transport modeling of metal hydride beds. An advancing reaction front between the metal hydride and ambient air is observed to proceed throughout the bed. This exothermic reaction front can result in loss of structural integrity of the containing vessel and lead to unfavorable overheating events.
The second important accident scenario considered is a pool fire or impinging fire resulting from a collision between a hydrocarbon or hydrogen fueled vehicle. The possible consequence of this scenario is analyzed with experimentally-based numerical simulation of a metal hydride system. During a fire scenario, the hydrogen storage material will rapidly decompose and release hydrogen at high pressure. Accident scenarios initiated by a vehicular collision leading a pipe break or catastrophic failure of the hydride vessel and by external pool fire with flame engulfing the storage vessel are developed using probabilistic modeling. The chronology of events occurring subsequent to each accident initiator is detailed in the probabilistic models. Technology developed to manage these scenarios includes: (1) the use of polymer supports to reduce the extent and rate of reaction with air and water, (2) thermal radiation shielding. The polymer supported materials are demonstrated to provide mitigation of unwanted reaction while not impacting the hydrogen storage performance of the material. To mitigate the consequence of fire engulfment or impingement, thermal radiation shielding is considered to slow the rate of decomposition and delay the potential for loss-of-containment. In this paper we explore the use of these important mitigation technologies for a variety of accident scenarios.
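Probabilistic models of the kind described chain an initiating-event frequency through conditional branch probabilities to an end-state frequency. A minimal event-tree sketch with purely hypothetical numbers (not values from the paper's analysis):

```python
def event_tree_frequency(initiator_freq, branch_probs):
    # End-state frequency = initiator frequency times the product of the
    # conditional branch probabilities along one path of the event tree
    f = initiator_freq
    for p in branch_probs:
        f *= p
    return f

# Hypothetical values for illustration only
collision_freq = 1e-3          # collisions per vehicle-year (assumed)
breach_given_collision = 0.1   # P(tank/plumbing breach | collision) (assumed)
reaction_given_breach = 0.05   # P(air-ingress reaction | breach) (assumed)

f = event_tree_frequency(collision_freq,
                         [breach_given_collision, reaction_given_breach])
# f is about 5e-6 per vehicle-year for this branch
```

Each mitigation technology then acts by reducing one or more conditional probabilities (e.g. polymer supports lowering the probability of a runaway air-ingress reaction given a breach).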
Experimental results from nested cylindrical wire arrays (NCWAs) consisting of brass (70% Cu, 30% Zn) wires in one array and Al (5056, 5% Mg) wires in the other, fielded on the UNR Zebra generator at 1.0 MA, are compared and analyzed. Specifically, radiative properties of K-shell Al and Mg ions and L-shell Cu and Zn ions are compared as functions of the placement of the brass and Al wires on the inner and outer arrays. A full diagnostic set, which included more than ten different beam-lines, was implemented. Identical loads were fielded to allow the timing of time-gated pinhole and x-ray spectrometers to be shifted to obtain a more complete understanding of the evolution of plasma parameters over the x-ray pulse. The importance of studying NCWAs with different wire materials is discussed.
The analysis of implosions of Cu and Ag planar wire array (PWA) loads recently performed on the enhanced 1.7 MA Zebra generator at UNR is presented. Experiments were performed with a Load Current Multiplier and a 1 cm anode-cathode gap (half that of the standard 1 MA mode). A full diagnostic set included more than ten different beam-lines, with the major focus on time-gated and time-integrated x-ray imaging and spectra, total radiation yields, and fast, filtered x-ray detector data. In particular, the experimental results for a double PWA load consisting of twelve 10 µm Cu wires in each row (total mass M ≈ 175 µg) and a much heavier single PWA load consisting of ten 30 µm Ag wires (M ≈ 750 µg) were analyzed using a set of theoretical codes. The effects of both the decreased anode-cathode gap and the increased current on the radiative properties of these loads are discussed.
A series of experiments at the Z Accelerator was performed with 40 mm and 50 mm diameter nested wire arrays to investigate the interaction of the arrays and assess their radiative characteristics. These arrays were fielded with one array as Al:Mg (either the inner or the outer array) and the other as Ni-clad Ti (the outer or inner array, with respect to the location of the Al:Mg). In all the arrays, the outer:inner mass and radius ratio was 2:1. The wire number ratio was also 2:1 in some cases, but the Al:Mg wire number was increased in some loads. This presentation will focus on analysis of the emitted radiation (in multiple photon energy bins) and measured plasma conditions (as inferred from x-ray spectra). A discussion of what these results indicate about nested-array dynamics will also be presented.
Ionomers--polymers containing a small fraction of covalently bound ionic groups--have potential application as solid electrolytes in batteries. Understanding ion transport in ionomers is essential for such applications. Due to strong electrostatic interactions in these materials, the ions form aggregates, tending to slow counterion diffusion. A key question is how ionomer properties affect ionic aggregation and counterion dynamics on a molecular level. Recent experimental advances have allowed synthesis and extensive characterization of ionomers with a precise, constant spacing of charged groups, making them ideal for controlled measurement and more direct comparison with molecular simulation. We have used coarse-grained molecular dynamics to simulate such ionomers with regularly spaced charged beads. The charged beads are placed either in the polymer backbone or as pendants on the backbone. The polymers, along with the counterions, are simulated at melt densities. The ionic aggregate structure was determined as a function of the dielectric constant, spacing of the charged beads on the polymer, and the sizes of the charged beads and counterions. The pendant ion architecture can yield qualitatively different aggregate structures from those of the linear polymers. For small pendant ions, roughly spherical aggregates have been found above the glass transition temperature. The implications of these aggregates for ion diffusion will be discussed.
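Counterion diffusion coefficients in such simulations are typically extracted from the mean-squared displacement via the Einstein relation; a minimal sketch with illustrative numbers in reduced units (assumed values, not output from the simulations described):

```python
def diffusion_coefficient(msd, t, dim=3):
    # Einstein relation in the long-time (diffusive) limit:
    # D = MSD / (2 * dim * t)
    return msd / (2 * dim * t)

# Illustrative values in reduced Lennard-Jones units (assumed)
t = 1000.0   # lag time in the diffusive regime
msd = 60.0   # counterion mean-squared displacement at that lag
D = diffusion_coefficient(msd, t)  # 60 / 6000 = 0.01
```

In practice one fits the slope of MSD versus time over a window where the motion is verifiably diffusive, since ions trapped in aggregates show sub-diffusive behavior at intermediate times.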
The level of energy deposition on future inertial fusion energy (IFE) reactor first walls, particularly in direct-drive scenarios, makes the ultimate survivability of such wall materials a challenge. We investigate the survivability of three-dimensional (3-D) dendritic materials fabricated by chemical vapor deposition (CVD) and exposed to repeated intense helium beam pulses on the RHEPP-1 facility at Sandia National Laboratories. Prior exposures of flat materials have led to what appears to be unacceptable mass loss on timescales insufficient for economical reactor operation. Two potential advantages of such dendritic materials are (a) increased effective surface area, resulting in lowered fluences to most of the wall material surface, and (b) improvement in materials properties for such micro-engineered metals compared to bulk processing. Several dendritic fabrications made with either tungsten or tungsten with rhenium show little or no morphology change after up to 800 pulses of 1 MeV helium at reactor-level thermal wall loading. Since the rhenium is added in a thin surface layer, its use does not appear to raise environmental concerns for fusion designs.
Mitigating and overcoming environmental problems brought about by the current worldwide fossil fuel-based energy infrastructure requires the creation of innovative alternatives. In particular, such alternatives must actively contribute to the reduction of carbon emissions via carbon recycling and a shift to the use of renewable sources of energy. Carbon-neutral transformation of biomass to liquid fuels is one such alternative, but it is limited by the inherently low energy efficiency of photosynthesis with regard to the net production of biomass. Researchers have thus been looking for alternative, energy-efficient chemical routes inspired by the biological transformation of solar power, CO2, and H2O into useful chemicals, specifically liquid fuels. Methanol has been the focus of a fair number of publications for its versatility as a fuel and its use as an intermediate in the synthesis of many compounds. In some of these studies (e.g. Joo et al. (2004), Mignard and Pritchard (2006), Galindo and Badr (2007)), CO2 and renewable H2 (e.g. electrolytic H2) are considered as the raw materials for the production of methanol and other liquid fuels, and several basic process flow diagrams (PFDs) have been proposed. One of the most promising is the so-called CAMERE process (Joo et al., 1999). In this process, carbon dioxide and renewable hydrogen are fed to a first reactor and transformed according to the reverse water-gas shift (RWGS) reaction:

H2 + CO2 <=> H2O + CO (RWGS)

After eliminating the produced water, the resulting H2/CO2/CO mixture is fed to a second reactor, where it is converted to methanol according to:

CO2 + 3 H2 <=> CH3OH + H2O (methanol synthesis, MS)
CO + H2O <=> CO2 + H2 (water-gas shift, WGS)

The approach is to produce enough CO to eliminate, via the WGS reaction, the water produced by MS. This is beneficial because water has been shown to block active sites in the MS catalyst.
In this work a different process alternative is presented: one that combines the CO2 recycling of the CAMERE process with the solar energy input implicit in biomass-based processes, but with the potentially high energy efficiency of thermo-chemical transformations.
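The stoichiometry of the CAMERE reactions (RWGS, methanol synthesis, and WGS) can be verified mechanically with an element-balance check; a minimal sketch:

```python
from collections import Counter

# Element counts per molecule for the species in the CAMERE reactions
SPECIES = {
    "H2": {"H": 2},
    "CO2": {"C": 1, "O": 2},
    "H2O": {"H": 2, "O": 1},
    "CO": {"C": 1, "O": 1},
    "CH3OH": {"C": 1, "H": 4, "O": 1},
}

def atoms(side):
    # side: list of (stoichiometric coefficient, species name) pairs
    total = Counter()
    for coeff, sp in side:
        for element, n in SPECIES[sp].items():
            total[element] += coeff * n
    return total

def balanced(lhs, rhs):
    # A reaction is balanced when both sides carry identical element counts
    return atoms(lhs) == atoms(rhs)

rwgs = balanced([(1, "H2"), (1, "CO2")], [(1, "H2O"), (1, "CO")])
ms   = balanced([(1, "CO2"), (3, "H2")], [(1, "CH3OH"), (1, "H2O")])
wgs  = balanced([(1, "CO"), (1, "H2O")], [(1, "CO2"), (1, "H2")])
```

All three reactions balance, consistent with the net scheme in which WGS consumes the water produced by methanol synthesis.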
Finite elements for shell structures have been investigated extensively, with numerous formulations offered in the literature. These elements are vital in modern computational solid mechanics due to their computational efficiency and accuracy for thin and moderately thick shell structures, allowing larger and more comprehensive (e.g. multi-scale and multi-physics) simulations. Problems now of interest in the research and development community are routinely pushing our computational capabilities, and thus shell finite elements are being used to deliver efficient yet high quality computations. Much work in the literature is devoted to the formulation of shell elements and their numerical accuracy, but there is little published work on the computational characterization and comparison of shell elements for modern solid mechanics problems. The present study is a comparison of three disparate shell element formulations in the Sandia National Laboratories massively parallel Sierra Solid Mechanics code. A constant membrane and bending stress shell element (Key and Hoff, 1995), a thick shell hex element (Key et al., 2004) and a 7-parameter shell element (Buechter et al., 1994) are available in Sierra Solid Mechanics for explicit transient dynamic, implicit transient dynamic and quasistatic calculations. Herein these three elements are applied to a set of canonical dynamic and quasistatic problems, and their numerical accuracy, computational efficiency and scalability are investigated. The results show the trade-off between the relative inefficiency and improved accuracy of the latter two high quality element types when compared with the highly optimized and more widely used constant membrane and bending stress shell element.
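The scalability comparison described above is typically reported through standard strong-scaling metrics; a minimal sketch with hypothetical wall-clock timings (not measurements from this study):

```python
def speedup_and_efficiency(t1, tn, n):
    # Strong-scaling speedup S = t1 / tn and parallel efficiency E = S / n,
    # where t1 and tn are wall-clock times on 1 and n ranks for a fixed problem
    s = t1 / tn
    return s, s / n

# Hypothetical timings (seconds) for one shell element type on 1 vs 64 ranks
s, eff = speedup_and_efficiency(t1=640.0, tn=12.5, n=64)
```

Comparing efficiency curves across the three element formulations at matched accuracy makes the cost/quality trade-off between the simple and high-quality elements explicit.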