Dilution refrigerator specifications for formal Request for Quotes from cryogenics companies
Sandia National Laboratories (SNL) participated in a pilot study to examine the process and requirements for creating a software system to assess the extremely low probability of pipe rupture (xLPR) in nuclear power plants. The project was tasked with developing a prototype xLPR model that leverages existing fracture mechanics models and codes, coupled with a commercial software framework, to determine the framework, model, and architecture requirements appropriate for building a modular-based code. The xLPR pilot study was conducted to demonstrate the feasibility of the proposed development process and framework for a probabilistic code to address degradation mechanisms in piping system safety assessments. The pilot study included a demonstration problem assessing the probability of rupture of dissimilar metal (DM) pressurizer surge nozzle welds degraded by primary water stress-corrosion cracking (PWSCC). The pilot study was designed to define and develop the framework and model, then construct a prototype software system based on the proposed model. The second phase of the project will be a longer-term program and code development effort focusing on generic, primary piping integrity issues (the xLPR code). The results and recommendations presented in this report will be used to help the U.S. Nuclear Regulatory Commission (NRC) define the requirements for the longer-term program.
The risk assessment approach has been applied to support numerous radioactive waste management activities over the last 30 years. A risk assessment methodology provides a solid and readily adaptable framework for evaluating the risks of CO2 sequestration in geologic formations in order to prioritize research, data collection, and monitoring schemes. This paper reviews the tasks of a risk assessment and provides a few examples related to each task. The paper then describes an application of sensitivity analysis to identify parameters important for reducing uncertainty in the performance of a geologic repository for radioactive waste, which, because of the importance of the geologic barrier, is similar to CO2 sequestration. The paper ends with a simple stochastic analysis of an idealized CO2 sequestration site with a leaking abandoned well and a set of monitoring wells in an aquifer above the sequestration unit, in order to evaluate the efficacy of the monitoring wells at detecting adverse leakage.
Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search never seems to find anything useful or relevant. The challenge is to provide results that are ranked according to the users' definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences and re-rank results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the users' search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
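The session abstract does not detail the preference model itself; the following is a minimal sketch of one click-popularity re-ranking scheme (all names and the 50/50 blend weight are hypothetical illustrations, not Sandia's implementation):

```python
from collections import defaultdict

# Historical model: click counts per (query, document), refreshed daily.
click_counts = defaultdict(lambda: defaultdict(int))

def record_click(query, doc_id):
    """Fold one observed (query, clicked result) pair into the model."""
    click_counts[query.lower()][doc_id] += 1

def rerank(query, engine_results, weight=0.5):
    """Blend normalized engine scores with historical click popularity.

    engine_results: list of (doc_id, engine_score) with scores in [0, 1].
    """
    clicks = click_counts[query.lower()]
    total = sum(clicks.values()) or 1  # avoid divide-by-zero for unseen queries
    blended = [(doc_id, (1 - weight) * score + weight * clicks[doc_id] / total)
               for doc_id, score in engine_results]
    return sorted(blended, key=lambda pair: pair[1], reverse=True)
```

Because record_click feeds the model continuously, rankings in a scheme like this improve over time as a function of user actions, mirroring the behavior described above.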
Complex metal hydrides continue to be investigated as solid-state materials for hydrogen storage. Traditional interstitial metal hydrides offer favorable thermodynamics and kinetics for hydrogen release but do not meet energy density requirements. Anionic metal hydrides and complex metal hydrides such as magnesium borohydride have higher energy densities than interstitial metal hydrides, but poor kinetics and/or thermodynamically unfavorable side products limit their deployment as hydrogen storage materials in transportation applications. Main-group anionic materials such as the bis(borane)hypophosphite salt [PH2(BH3)2] have been known for decades, but only recently have we begun to explore their ability to release hydrogen. We have developed a new procedure for synthesizing the lithium and sodium hypophosphite salts. Routes for accessing other metal bis(borane)hypophosphite salts will be discussed. A significant advantage of this class of material is the air and water stability of the anion. Compared to metal borohydrides, which react violently with water, these phosphorus-based salts can be dissolved in protic solvents, including water, with little to no decomposition over the course of multiple days. The ability of these salts to release hydrogen upon heating has been assessed. While preliminary results indicate that phosphine and boron-containing species are released, hydrogen is also a major component of the volatile species observed during thermal decomposition. Additives such as NaH or KH mixed with the sodium salt Na[PH2(BH3)2] significantly perturb the decomposition reaction and greatly increase the mass loss as determined by thermal gravimetric analysis (TGA). This symbiotic behavior has the potential to affect the hydrogen storage ability of bis(borane)hypophosphite salts.
A design concept, device layout, and monolithic microfabrication processing sequence have been developed for a dual-metal layer atom chip for next-generation positional control of ultracold ensembles of trapped atoms. Atom chips are intriguing systems for precision metrology and quantum information that use ultracold atoms on microfabricated chips. Using magnetic fields generated by current carrying wires, atoms are confined via the Zeeman effect and controllably positioned near optical resonators. Current state-of-the-art atom chips are single-layer or hybrid-integrated multilayer devices with limited flexibility and repeatability. An attractive feature of multi-level metallization is the ability to construct more complicated conductor patterns and thereby realize the complex magnetic potentials necessary for the more precise spatial and temporal control of atoms that is required. Here, we have designed a true, monolithically integrated, planarized, multi-metal-layer atom chip for demonstrating crossed-wire conductor patterns that trap and controllably transport atoms across the chip surface to targets of interest.
Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi-Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere of radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance, d, to the ensemble's center (0.5553 is optimal); their average radius, {mu}(r) (1 is optimal); and their radius standard deviation, {sigma}(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
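A sketch of how the four comparison metrics could be computed for a candidate Pareto set, assuming the points are stored as a NumPy array (illustrative, not the authors' code):

```python
import numpy as np

def pareto_metrics(points):
    """Quality metrics for a candidate Pareto set of the hypersphere test problem.

    points: (N, 6) array of nondominated points in the [0, 1] octant.
    Ideals from the text: N large, d = 0.5553, mu(r) = 1, sigma(r) = 0.
    """
    center = points.mean(axis=0)                                  # ensemble center
    d = np.sqrt(np.mean(np.sum((points - center) ** 2, axis=1)))  # RMS distance to center
    r = np.linalg.norm(points, axis=1)                            # radii from the origin
    return len(points), d, r.mean(), r.std()
```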
Disposal of high-level radioactive waste, including spent nuclear fuel, in deep (3 to 5 km) boreholes is a potential option for safely isolating these wastes from the surface and near-surface environment. Existing drilling technology permits reliable and cost-effective construction of such deep boreholes. Conditions favorable for deep borehole disposal in crystalline basement rocks, including low permeability, high salinity, and geochemically reducing conditions, exist at depth in many locations, particularly in geologically stable continental regions. Isolation of waste depends, in part, on the effectiveness of borehole seals and potential alteration of permeability in the disturbed host rock surrounding the borehole. Coupled thermal-mechanical-hydrologic processes induced by heat from the radioactive waste may impact the disturbed zone near the borehole and borehole wall stability. Numerical simulations of the coupled thermal-mechanical response in the host rock surrounding the borehole were conducted with three software codes or combinations of software codes. Software codes used in the simulations were FEHM, JAS3D, Aria, and Adagio. Simulations were conducted for disposal of spent nuclear fuel assemblies and for the higher heat output of vitrified waste from the reprocessing of fuel. Simulations were also conducted for both isotropic and anisotropic ambient horizontal stress in the host rock. Physical, thermal, and mechanical properties representative of granite host rock at a depth of 4 km were used in the models. Simulation results indicate peak temperature increases at the borehole wall of about 30 C and 180 C for disposal of fuel assemblies and vitrified waste, respectively. Peak temperatures near the borehole occur within about 10 years and decline rapidly within a few hundred years and with distance. The host rock near the borehole is placed under additional compression. Peak mechanical stress is increased by about 15 MPa (above the assumed ambient isotropic stress of 100 MPa) at the borehole wall for the disposal of fuel assemblies and by about 90 MPa for vitrified waste. Simulated peak volumetric strain at the borehole wall is about 420 and 2600 microstrain for the disposal of fuel assemblies and vitrified waste, respectively. Stress and volumetric strain decline rapidly with distance from the borehole and with time. Simulated peak stress at and parallel to the borehole wall for the disposal of vitrified waste with anisotropic ambient horizontal stress is about 440 MPa, which likely exceeds the compressive strength of granite if unconfined by fluid pressure within the borehole. The relatively small simulated displacements and volumetric strain near the borehole suggest that software codes using a nondeforming grid provide an adequate approximation of mechanical deformation in the coupled thermal-mechanical model. Additional modeling is planned to incorporate the effects of hydrologic processes coupled to thermal transport and mechanical deformation in the host rock near the heated borehole.
The controlled self-assembly of polymer thin films into ordered domains has attracted significant academic and industrial interest. Most work has focused on controlling domain size and morphology through modification of the polymer block lengths, n, and the Flory-Huggins interaction parameter, {chi}. Models such as Self-Consistent Field Theory (SCFT) have been successful in describing the experimentally observed morphology of phase-separated polymers. We have developed a computational method that uses SCFT calculations as a predictive tool to guide our polymer synthesis. Armed with this capability, we can select {chi} and then search for an ideal value of n such that a desired morphology is the most thermodynamically favorable. This approach enables us to synthesize new block polymers with exactly the segment lengths that will self-assemble into the desired morphology. As a proof of principle, we have used our model to predict the gyroidal domain for various block lengths at a fixed {chi} value. To validate our computational model, we have synthesized a series of block copolymers in which only the total molecular length changes. All of these materials have a predicted thermodynamically favorable gyroidal morphology based on the results of our SCFT calculations. Thin films of these polymers are cast and annealed in order to equilibrate the structure. Final characterization of the polymer thin-film morphology has been performed. The accuracy of our calculations compared to experimental results is discussed. Extension of this predictive ability to triblock polymer systems and the implications for making functionalizable nanoporous membranes will be discussed.
This presentation discusses the following questions: (1) What are the Global Problems that require Systems Engineering; (2) Where is Systems Engineering going; (3) What are the boundaries of Systems Engineering; (4) What is the distinction between Systems Thinking and Systems Engineering; (5) Can we use Systems Engineering on Complex Systems; and (6) Can we use Systems Engineering on Wicked Problems?
The area of wind turbine component manufacturing represents a business opportunity in the wind energy industry. Modern wind turbines can provide large amounts of electricity, cleanly and reliably, at prices competitive with any other new electricity source. Over the next twenty years, the US market for wind power is expected to continue to grow, as is the domestic content of installed turbines, driving demand for American-made components. Between 2005 and 2009, components manufactured domestically grew eight-fold to reach 50 percent of the value of new wind turbines installed in the U.S. in 2009. While that growth is impressive, the industry expects domestic content to continue to grow, creating new opportunities for suppliers. In addition, ever-growing wind power markets around the world provide opportunities for new export markets.
The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rate for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills, 21 and 81 m in diameter, were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
X-ray momentum coupling coefficients, C{sub M}, were determined by measuring stress waveforms in planetary materials subjected to impulsive radiation loading from the Sandia National Laboratories Z-machine. Results from the velocity interferometry (VISAR) diagnostic provided limited equation-of-state data as well. Targets were iron and stone meteorites, magnesium-rich olivine (dunite) in solid and powder ({approx}5--300 {mu}m) form, and Si, Al, and Fe calibration targets. All samples were {approx}1 mm thick and, except for Si, backed by LiF single-crystal windows. The x-ray spectrum included a combination of thermal radiation (blackbody 170--237 eV) and line emissions from the pinch material (Cu, Ni, Al, or stainless steel). Target fluences of 0.4--1.7 kJ/cm{sup 2} at intensities of 43--260 GW/cm{sup 2} produced front-surface plasma pressures of 2.6--12.4 GPa. Stress waves driven into the samples were attenuating due to the short ({approx}5 ns) duration of the drive pulse. The impulse of an attenuating wave is conserved, allowing accurate C{sub M} measurements provided the mechanical impedance mismatch between the sample and the window is known. Impedance-corrected C{sub M} determined from rear-surface motion was 1.9--3.1 x 10{sup -5} s/m for stony meteorites, 2.7 and 0.5 x 10{sup -5} s/m for solid and powdered dunite, and 0.8--1.4 x 10{sup -5} s/m.
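For orientation, C{sub M} is the imparted impulse per unit area divided by the incident fluence; a minimal sketch with hypothetical numbers follows (the real analysis additionally applies the sample/window impedance correction noted above):

```python
def momentum_coupling(rho, thickness, u_avg, fluence):
    """C_M = (impulse per unit area) / (incident fluence), in s/m.

    rho       -- sample density [kg/m^3]
    thickness -- sample thickness [m]
    u_avg     -- impulse-averaged material velocity [m/s], impedance-corrected
    fluence   -- incident x-ray fluence [J/m^2]
    """
    return rho * thickness * u_avg / fluence

# Hypothetical example: a 1 mm sample with rho = 3300 kg/m^3 moving at 80 m/s
# after a 1 kJ/cm^2 (1e7 J/m^2) exposure gives C_M of about 2.6e-5 s/m.
```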
We are developing a low-emissivity thermal management coating system to minimize radiative heat losses under a high-vacuum environment. Good adhesion, low outgassing, and good thermal stability of the coating material are essential elements for a long-life, reliable thermal management device. The system of electroplated Au coating on the adhesion-enhancing Wood's Ni strike and 304L substrate was selected due to its low emissivity and low surface chemical reactivity. The physical and chemical properties, interface bonding, thermal aging, and compatibility of the above Au/Ni/304L system were examined extensively. The study shows that the as-plated Au and Ni samples contain submicron columnar grains, stringers of nanopores, and/or H{sub 2} gas bubbles, as expected. The grain structures of Au and Ni are thermally stable up to 250 C for 63 days. The interface bonding is strong, which can be attributed to good mechanical locking among the Au, the 304L, and the porous Ni strike. However, thermal instability of the nanopore structure (i.e., pore coalescence and coarsening due to vacancy and/or entrapped gaseous phase diffusion) and Ni diffusion were observed. In addition, the study found that prebaking 304L in the furnace at {ge} 1 x 10{sup -4} Torr promotes the formation of Cr-oxides on the 304L surface, which reduces the effectiveness of the intended H-removal. The extent of the pore coalescence and coarsening and their effect on long-term system integrity and outgassing are yet to be understood. Mitigating system outgassing and improving Au adhesion require a further understanding of the process-structure-system performance relationships within the electroplated Au/Ni/304L system.
Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of its cost, and compare it to other proposed methods for fault resilience.
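For context on the checkpoint/restart claim, a back-of-the-envelope sketch using the standard first-order Young/Daly model (the MTBF and checkpoint-time parameters are hypothetical, not the paper's simulation inputs):

```python
import math

def useful_fraction(nodes, node_mtbf_h=43800.0, ckpt_h=0.25):
    """Approximate fraction of wall time spent on useful work under
    checkpoint/restart, using the first-order Young/Daly model."""
    system_mtbf_h = node_mtbf_h / nodes                    # MTBF shrinks with node count
    interval_h = math.sqrt(2.0 * ckpt_h * system_mtbf_h)   # optimal checkpoint interval
    # Overhead: writing checkpoints plus expected rework after failures.
    overhead = ckpt_h / interval_h + (interval_h + ckpt_h) / (2.0 * system_mtbf_h)
    return max(0.0, 1.0 - overhead)

# With a 5-year node MTBF and 15-minute checkpoints, useful_fraction(50000)
# falls well below 0.5, consistent with the claim above that such systems
# spend more than half their time on checkpoint/restart overhead.
```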
A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD operators and positioned at the level of the compressional stress components, leads to superior numerical dispersion performance. Phase speeds measured from fixed-frequency synthetic seismograms agree very well with the numerical predictions.
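For reference, the frequency-independent continuum phase speed against which the numerical dispersion is measured follows from the classical Rayleigh cubic mentioned above; a small sketch of extracting the appropriate root:

```python
import numpy as np

def rayleigh_speed(nu, beta=1.0):
    """Rayleigh phase speed for Poisson ratio nu and shear speed beta,
    via the classical Rayleigh cubic in x = (c/beta)**2."""
    g2 = (1.0 - 2.0 * nu) / (2.0 * (1.0 - nu))   # (beta/alpha)**2
    roots = np.roots([1.0, -8.0, 24.0 - 16.0 * g2, -16.0 * (1.0 - g2)])
    # The physical root is real with 0 < x < 1 (slower than the shear speed).
    x = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0][0]
    return beta * np.sqrt(x)

# rayleigh_speed(0.25) gives ~0.9194: the familiar ~92% of the shear speed.
```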
Shales and other mudstones are the most abundant rock types in sedimentary basins, yet have received comparatively little attention. Common as hydrocarbon seals, these rocks are increasingly being targeted as unconventional gas reservoirs, caprocks for CO2 sequestration, and storage repositories for waste. The small pore and grain size, large specific surface areas, and clay mineral structures lend themselves to rapid reaction rates, high capillary pressures, and semi-permeable membrane behavior accompanying changes in stress, pressure, temperature, and chemical conditions. Under far-from-equilibrium conditions, mudrocks display a variety of spatio-temporal self-organized phenomena arising from nonlinear thermo-mechano-chemo-hydro coupling. Beginning with a detailed examination of nano-scale pore network structures in mudstones, we discuss the dynamics behind such self-organized phenomena as pressure solitons in unconsolidated muds, chemically induced flow self-focusing and permeability transients, localized compaction, time-dependent wellbore failure, and oscillatory osmotic fluxes as they occur in clay-bearing sediments. Examples are drawn from experiments, numerical simulation, and the field. These phenomena bear on the ability of these rocks to serve as containment barriers.
In 1781, Monge posed his (L{sup 1}) optimal mass transfer problem: to find a mapping of one distribution into another that minimizes the total distance of transporting mass. The problem remained unsolved in R{sup n} until the late 1990s. That result has since been extended to Riemannian manifolds. In both cases, optimal mass transfer relies upon a key lemma providing Lipschitz control on the directions of geodesics. We will discuss the Lipschitz control of geodesics in the (subRiemannian) Heisenberg group. This provides an important step towards a potential theoretic proof of Monge's problem in the Heisenberg group.
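For readers new to the topic, the standard statement of the Monge problem (general background, not material specific to this talk) is:

```latex
% Monge's (L^1) optimal mass transfer problem: given probability measures
% \mu and \nu on a metric space (X, d), minimize the total transport
% distance over all measurable maps T pushing \mu forward to \nu.
\inf_{T \colon T_{\#}\mu = \nu} \int_X d\bigl(x, T(x)\bigr)\, d\mu(x)
```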
A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue for such stochastic programs is: how many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations that independently minimize expected cost and down-side risk. Our results indicate that a surprisingly small number of scenarios yields tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area to achieve rigorous solutions for decision makers.
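A schematic of the Mak, Morton, and Wood bounding procedure referenced above; solve, evaluate, and sample_scenarios are problem-specific stubs, not code from the study:

```python
import numpy as np

def mmw_gap(solve, evaluate, sample_scenarios, x_candidate, n_scen, n_batches=30):
    """Estimate the optimality gap of a candidate solution by the
    Mak-Morton-Wood batch procedure.

    solve(scens)       -> optimal objective of the sampled approximation
    evaluate(x, scens) -> sample-average cost of the fixed candidate x
    """
    gaps = []
    for _ in range(n_batches):
        scens = sample_scenarios(n_scen)
        lower = solve(scens)                  # biased low: E[sampled optimum] <= z*
        upper = evaluate(x_candidate, scens)  # unbiased for the candidate's true cost
        gaps.append(upper - lower)
    gaps = np.asarray(gaps)
    half = 1.96 * gaps.std(ddof=1) / np.sqrt(n_batches)   # ~95% CI half-width
    return gaps.mean(), half
```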
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. Progress on preconditioning power grid systems using multi-grid techniques will be presented as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator, both prominent world-class software tools.
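As a flavor of projection-based MOR, here is a generic Arnoldi/Krylov reduction of a linear time-invariant system (an illustrative textbook variant, not one of the report's specific techniques):

```python
import numpy as np

def krylov_reduce(A, b, c, k):
    """Project a SISO LTI system x' = A x + b u, y = c x onto the order-k
    Krylov subspace span{b, A b, ..., A^(k-1) b} (Arnoldi, Galerkin projection)."""
    n = A.shape[0]
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # orthogonalize against prior basis vectors
        V[:, j] = w / np.linalg.norm(w)
    # Reduced system matrices: k x k instead of n x n.
    return V.T @ A @ V, V.T @ b, c @ V
```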
2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, WHISPERS 2010 - Workshop Program
The purpose of this study was to assess the utility of a Long Wave Infrared (LWIR) snapshot imager for remote sensing applications. The snapshot imager is made possible by a color filter array that selectively allows different wavelengths of light to be collected on separate pixels of the focal plane, in the same fashion as a typical Bayer array in the visible portion of the spectrum [1]. Recent technology developments have made this possible in the LWIR [2]. The primary focus of the study is to develop a band selection technique capable of identifying both the optimal number and width of the spectral channels. Once the bands are selected, the theoretical sensor performance is used to evaluate usefulness in a typical remote sensing application. ©2010 IEEE.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Efficient collective operations are a major component of application scalability. Offload of collective operations onto the network interface reduces many of the latencies that are inherent in network communications and, consequently, reduces the time to perform the collective operation. To support offload, it is desirable to expose semantic building blocks that are simple to offload and yet powerful enough to implement a variety of collective algorithms. This paper presents the implementation of barrier and broadcast leveraging triggered operations - a semantic building block for collective offload. Triggered operations are shown to be both semantically powerful and capable of improving performance. © 2010 Springer-Verlag.
Molecular Physics
The dominant method of small-molecule quantum chemistry over the last twenty years has been CCSD(T). Despite this success, RHF-based CCSD(T) fails for systems away from equilibrium. Work over the last ten years has led to modifications of CCSD(T) that improve the description of bond breaking. These new methods include {Lambda}CCSD(T), CCSD(2){sub T}, CCSD(2) and CR-CC(2,3), which are new perturbative corrections to single-reference CCSD. We present a unified derivation of these methods and compare them at the level of formal theory and computational accuracy. None of the methods is clearly superior, although formal considerations favour {Lambda}CCSD(T) and computational accuracy for the systems considered favours CR-CC(2,3). © 2010 Taylor & Francis.
Society for Experimental Mechanics - SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2010
The "DaMaGe-Initiated-Reaction" (DMGIR) computational model has been developed to predict the response of ideal high explosives to impulsive loading from non-shock mechanical insults. The distinguishing feature of this model is the introduction of a damage variable, which relates the evolution of damage to the initiation of a reaction in the explosive, and its growth to detonation. This model development effort treats the non-shock initiation behavior of explosives by families; rigid plastic bonded, cast, and moldable plastic explosives. Specifically designed experiments were used to study the initiation process of each explosive family with embedded shock sensors and optical diagnostics. The experimental portion of this model development began with a study of PBXN-5 to develop DMGIR model coefficients for the rigid plastic bonded family, followed by studies of the cast, and bulk-moldable explosive families, including the thermal effects on initiation for the cast explosive family. The experimental results show an initiation mechanism that is related to impulsive energy input and material damage, with well defined initiation thresholds for each explosive family. These initiation details will be used to extend the predictive capability of the DMGIR model from the rigid family into the cast and bulk-moldable families. © 2010 Society for Experimental Mechanics Inc.
Society for Experimental Mechanics - SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2010
Uncertainty quantification (UQ) equations have been derived for predicting matching uncertainty in two-dimensional image correlation a priori. These equations include terms that represent the image noise and image contrast. Researchers at the University of South Carolina have extended previous 1D work to calculate matching errors in 2D. These 2D equations have been coded into a Sandia National Laboratories UQ software package to predict the uncertainty for DIC images. This paper presents those equations and the resulting error surfaces for trial speckle images. Comparison of the UQ results with experimentally subpixel-shifted images is also discussed. © 2010 Society for Experimental Mechanics Inc.
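The 2D equations themselves are not reproduced in this abstract; for flavor, a sketch of the widely used 1D noise-over-gradient estimate that this line of work extends (illustrative only, not the paper's 2D equations):

```python
import numpy as np

def matching_std_1d(intensity, sigma_noise):
    """A priori standard deviation of a 1-D subset-matching displacement,
    from the familiar estimate var(u) ~ 2*sigma_noise**2 / sum(I_x**2).
    The 2-D equations referenced above add contrast and cross-gradient terms."""
    grad = np.gradient(intensity.astype(float))   # intensity gradient I_x over the subset
    return np.sqrt(2.0 * sigma_noise ** 2 / np.sum(grad ** 2))
```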
Society for Experimental Mechanics - SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2010
The use of uniaxial strain ramp loading experiments to measure strength at extremely high strain rates is discussed. The technique is outlined and issues associated with it are examined. Results for 6061-T6 aluminum are presented that differ from the conventional view of strain rate sensitivity in aluminum alloys. © 2010 Society for Experimental Mechanics Inc.
Proceedings of SPIE - The International Society for Optical Engineering
Micro-optical 5 mm lenses in 50 mm sub-arrays illuminate arrays of photovoltaic cells at 49X concentration. Fine tracking over a ±10° field of view within each sub-array allows coarse tracking by meter-sized solar panels. A plastic prototype was demonstrated for 400 nm < λ < 1600 nm. © 2010 Copyright SPIE - The International Society for Optical Engineering.
Proceedings of SPIE - The International Society for Optical Engineering
A discrete Fourier transform (DFT) or the closely related discrete cosine transform (DCT) is often employed as part of a data compression scheme. This paper presents a fast partial Fourier transform (FPFT) algorithm that is useful for calculating a subset of M Fourier transform coefficients for a data set comprised of N points (M < N). This algorithm reduces to the standard DFT when M = 1 and it reduces to the radix-2, decimation-in-time FFT when M = N and N is a power of 2. The DFT requires on the order of MN complex floating point multiplications to calculate M coefficients for N data points, a complete FFT requires on the order of (N/2)log2N multiplications independent of M, and the new FPFT algorithm requires on the order of (N/2)log2M + N multiplications. The FPFT algorithm introduced in this paper could be readily adapted to parallel processing. In addition to data compression, the FPFT algorithm described in this paper might be useful for very narrow band filter operations that pass only a small number of non-zero frequency coefficients such that M ≪ N. © 2010 Copyright SPIE - The International Society for Optical Engineering.
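The quoted operation counts can be compared directly; a small sketch using the asymptotic estimates from the text (approximate counts, not exact tallies):

```python
import math

def mult_counts(n, m):
    """Approximate complex-multiply counts for m coefficients of an n-point set."""
    dft = m * n                         # direct evaluation of m coefficients
    fft = (n / 2) * math.log2(n)        # full radix-2 FFT, independent of m
    fpft = (n / 2) * math.log2(m) + n   # partial transform from the paper
    return dft, fft, fpft

# Example: n = 4096, m = 16 gives roughly 65,536 (DFT), 24,576 (FFT), and
# 12,288 (FPFT) multiplies; for m = 1 the FPFT count reduces to n, the DFT's.
```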
Optics Express
Most demonstrations in silicon photonics are done with single devices that are targeted for use in future systems. One of the costs of operating multiple devices concurrently on a chip in a system application is the power needed to properly space resonant device frequencies on a system's frequency grid. We assess this power requirement by quantifying the source and impact of process-induced resonant frequency variation for microdisk resonators across individual dice, entire wafers, and wafer lots from separate process runs. Additionally, we introduce a new technique, utilizing the Transverse Electric (TE) and Transverse Magnetic (TM) modes in microdisks, to extract thickness and width variations across wafers and dice. Through our analysis we find that a standard six-inch Silicon on Insulator (SOI) 0.35 μm process controls microdisk resonant frequencies for the fundamental TE resonances to within 1 THz across a wafer and 105 GHz within a single die. Based on demonstrated thermal tuner technology, a stable manufacturing process exhibiting this level of variation can limit the resonance trimming power to 231 μW per resonant device. Taken in conjunction with the power needed to compensate for thermal environmental variations, the expected power requirement to compensate for fabrication-induced non-uniformities is 17% of that total. This leads to the prediction that thermal tuning efficiency is likely to have the most dominant impact on the overall power budget of silicon photonics resonator technology. © 2010 Optical Society of America.
Computing in Science and Engineering
What do the architectures of a future exascale computing system and a future battery-operated embedded system have in common? At first glance, their requirements and challenges seem unrelated. However, discussions and collaboration on the projects revealed not only similar requirements, but many common power and packaging issues as well. © 2006 IEEE.
Metallurgical and Materials Transactions A: Physical Metallurgy and Materials Science
Grain junction angles control microstructural morphology and evolution but, because they are difficult to measure, are rarely reported. We have developed a method, based on optimization of the Pearson correlation coefficient, to measure grain junction angles in planar discretized microstructures without converting or remeshing the original data. We find that the grain junction angle distribution of equiaxed, relatively isotropic, three-dimensional (3D) microstructures is a Gaussian distribution centered about 120 deg, with a larger width than predicted, primarily because of boundary energy anisotropy. Short boundary segments, which occur primarily in sections of 3D microstructures, cause anomalous peaks in the grain junction angle distribution that provide a marker for sample dimensionality. The grain junction angle distribution is a characterization metric for digitized microstructures, revealing the effects of grain boundary energy anisotropy, simulation parameters, and dimensionality. © 2010 The Minerals, Metals & Materials Society and ASM International.
Nonproliferation Review
In his 2009 Prague speech and the 2010 Nuclear Posture Review, President Barack Obama committed the United States to take concrete steps toward nuclear disarmament while maintaining a safe, secure, and effective nuclear deterrent. There is an inherent tension between these two goals that is best addressed through improved integration of nuclear weapons objectives with nuclear arms control objectives. This article reviews historical examples of the interaction between the two sets of objectives, develops a framework for analyzing opportunities for future integration, and suggests specific ideas that could benefit the nuclear weapons enterprise as it undergoes transformation and that could make the future enterprise compatible with a variety of arms control futures. © 2010 Monterey Institute of International Studies, James Martin Center for Nonproliferation Studies.
This report summarizes a series of three-dimensional simulations for the Bayou Choctaw Strategic Petroleum Reserve. The U.S. Department of Energy plans to leach two new caverns and convert one of the existing caverns within the Bayou Choctaw salt dome to expand its petroleum reserve storage capacity. An existing finite element mesh from previous analyses is modified by changing the locations of two caverns. The structural integrity of the three expansion caverns and the interaction between all the caverns in the dome are investigated. The impacts of the expansion on underground creep closure, surface subsidence, infrastructure, and well integrity are quantified. Two scenarios were used for the duration and timing of workover conditions where wellhead pressures are temporarily reduced to atmospheric pressure. The three expansion caverns are predicted to be structurally stable against tensile failure for both scenarios. Dilatant failure is not expected within the vicinity of the expansion caverns. Damage to surface structures is not predicted and there is not a marked increase in surface strains due to the presence of the three expansion caverns. The wells into the caverns should not undergo yield. The results show that from a structural viewpoint, the locations of the two newly proposed expansion caverns are acceptable, and all three expansion caverns can be safely constructed and operated.
Computing in Science and Engineering
Creating the next generation of power-efficient parallel computers requires a rethink of the mechanisms and methodology for building parallel applications. Energy constraints have pushed us into a regime where parallelism will be ubiquitous rather than limited to highly specialized high-end supercomputers. New execution models are required to span all scales, from desktop to supercomputer. © 2006 IEEE.
Acta Materialia
The mobility of dislocations is shown to be a size-dependent phenomenon. When dislocations intersect free surfaces, the mobility decreases as the dislocation length decreases, suggesting that dislocation motion in small structures may be more difficult. This increased drag may be related to surface forces acting where the dislocation intersects the free surface or from altered dislocation-phonon interactions. Mobility, however, is not as dependent on the film thickness and converges rapidly to bulk values. © 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Magnesium batteries are alternatives to lithium-ion and nickel metal hydride secondary batteries due to magnesium's abundance, safety of operation, and lower toxicity of disposal. The divalency of the magnesium ion and its chemistry pose some difficulties for its general and industrial use. This work developed a continuous, fibrous nanoscale network of the cathode material through electrospinning, with the goal of enhancing the performance and reactivity of the battery. The system was characterized and preliminary tests were performed on the constructed battery cells. We were successful in building and testing a series of electrochemical systems that demonstrated good cyclability, maintaining 60-70% of discharge capacity after more than 50 charge-discharge cycles.