MOS devices are susceptible to damage by ionizing radiation due to charge buildup in gate, field, and SOI buried oxides. Under positive bias, holes created in the gate oxide transport to the Si/SiO2 interface, creating oxide-trapped charge. As a result of hole transport and trapping, hydrogen is liberated in the oxide, which can create interface-trapped charge. The trapped charge affects the threshold voltage and degrades the channel mobility. Neutralization of oxide-trapped charge by electron tunneling from the silicon and by thermal emission can take place over long periods of time. Neutralization of interface-trapped charge is not observed at room temperature. Analytical models are developed that account for the principal effects of total dose in MOS devices under different gate biases. The intent is to obtain closed-form solutions that can be used in circuit simulation. Expressions are derived for the aging effects of very low dose rate radiation over long time periods.
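As an illustrative sketch only (not the report's closed-form model), the first-order threshold-voltage shift from oxide-trapped charge can be estimated from the trapped areal charge density and the oxide capacitance; all numeric values below are hypothetical example inputs.

```python
EPS_OX = 3.45e-13   # permittivity of SiO2, F/cm (3.9 * 8.854e-14)
Q_E = 1.602e-19     # elementary charge, C

def delta_vth_oxide_trapped(n_ot_per_cm2, t_ox_cm):
    """Threshold-voltage shift (V) from a sheet of trapped holes assumed to
    sit at the Si/SiO2 interface: dV = -q * N_ot / C_ox."""
    c_ox = EPS_OX / t_ox_cm          # oxide capacitance per unit area, F/cm^2
    return -Q_E * n_ot_per_cm2 / c_ox

# Example: 1e12 trapped holes/cm^2 in a 50 nm (5e-6 cm) oxide
shift = delta_vth_oxide_trapped(1e12, 5e-6)   # roughly -2.3 V
```

Positive trapped (hole) charge shifts the threshold voltage negative, consistent with the degradation mechanism described above.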
This report describes activities conducted in FY07 to mature the MEMS passive shock sensor. The first chapter of the report provides motivation and background on activities that are described in detail in later chapters. The second chapter discusses concepts that are important for integrating the MEMS passive shock sensor into a system. Following these two introductory chapters, the report details modeling and design efforts, packaging, failure analysis and testing and validation. At the end of FY07, the MEMS passive shock sensor was at TRL 4.
The LinguisticBelief© software tool developed by Sandia National Laboratories was applied to provide a qualitative evaluation of the accuracy of various maps that provide information on releases of hazardous material, especially radionuclides. The methodology, "Uncertainty for Qualitative Assessments," includes uncertainty in the evaluation. The software tool uses the mathematics of fuzzy sets, approximate reasoning, and the belief/plausibility measure of uncertainty. SNL worked cooperatively with the Remote Sensing Laboratory (RSL) and the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL) to develop models for three types of maps for use in this study. SNL and RSL developed the maps for "Accuracy Plot for Area" and "Aerial Monitoring System (AMS) Product Confidence." SNL and LLNL developed the "LLNL Model." For each of the three maps, experts from RSL and LLNL created a model in the LinguisticBelief software. This report documents the three models and provides evaluations of maps associated with the models, using example data. Future applications will involve applying the models to actual graphs to provide a qualitative evaluation of the accuracy of the maps, including uncertainty, for use by decision makers. A "Quality Thermometer" technique was developed to rank-order the quality of a set of maps of a given type. A technique for pooling expert opinion from different experts was provided using the PoolEvidence© software.
One of the most rapidly-growing areas in nanoscience is the ability to artificially manipulate optical and electrical properties at the nanoscale. In particular, nanomaterials such as single-wall carbon nanotubes offer enhanced methods for converting infrared light to electrical energy due to their unique one-dimensional electronic properties. However, in order for this energy conversion to occur, a realistic nanotube device would require high-intensity light to be confined on a nanometer scale. This arises from the fact that the diameter of a single nanotube is on the order of a nanometer, and infrared light from an external source must be tightly focused on the narrow nanotube for efficient energy conversion. To address this problem, I calculate the theoretical photocurrent of a nanotube p-n junction illuminated by a highly-efficient photonic structure. These results demonstrate the utility of using a photonic structure to couple large-scale infrared sources with carbon nanotubes while still retaining all the unique optoelectronic properties found at the nanoscale.
Sandia National Laboratories (SNL) has embarked on a program to develop a methodology to use damage equivalence techniques (alternative experimental facilities, modeling, and simulation) to understand the time-dependent effects in transistors (and integrated circuits) caused by neutron irradiations in the Sandia Pulse Reactor-III (SPR-III) facility. The development of these damage equivalence techniques is necessary since SPR-III was shut down in late 2006. As part of this effort, the late-time γ-ray sensitivity of a single diffusion lot of 2N2222A transistors has been characterized using one of the 60Co irradiation cells at the SNL Gamma Irradiation Facility (GIF). This report summarizes the results of the experiments performed at the GIF.
This is the final report on a field evaluation by the Department of the Navy of twenty 5-kW PEM fuel cells carried out during 2004 and 2005 at five Navy sites located in New York, California, and Hawaii. The key objective of the effort was to obtain an engineering assessment of their military applications. Particular issues of interest were fuel cell cost, performance, reliability, and the readiness of commercial fuel cells for use as a standalone (grid-independent) power option. Two corollary objectives of the demonstration were to promote technological advances and to improve fuel cell performance and reliability. From a cost perspective, the capital cost of PEM fuel cells at this stage of their development is high compared to other power generation technologies. Sandia National Laboratories' technical recommendation to the Navy is to remain involved in evaluating successive generations of this technology, particularly in locations with greater environmental extremes, and it encourages increased use of the technology by the Navy.
Low-temperature diffusion bonding of beryllium to CuCrZr was investigated for fusion reactor applications. Hot isostatic pressing was accomplished using various metallic interlayers. Diffusion profiles suggest that titanium is effective at preventing Be-Cu intermetallics. Shear strength measurements suggest that acceptable results were obtained at temperatures as low as 540 °C.
- Mixing from some thermal process steps is thought to drive H, D, T loss; this does not appear to be a problem with the Mo/Er occluder stacks.
- Diffusion barriers were investigated to prevent mixing.
Laser tweezers optical trapping provides a unique noninvasive capability to trap and manipulate particles in solution at the focal point of a laser beam passed through a microscope objective. Additionally, combined with image analysis, interaction forces between colloidal particles can be quantitatively measured. By looking at the displacement of particles within the laser trap due to the presence of a neighboring particle or looking at the relative diffusion of two particles held near each other by optical traps, interparticle interaction forces ranging from pico- to femto-Newtons can be measured. Understanding interaction forces is critical for predicting the behavior of particle dispersions including dispersion stability and flow rheology. Using a new analysis method proposed by Sainis, Germain, and Dufresne, we can simultaneously calculate the interparticle velocity and particle diffusivity which allows direct calculation of the interparticle potential for the particles. By applying this versatile tool, we measure difference in interactions between various phospholipid bilayers that have been coated onto silica spheres as a new type of solid supported liposome. We measure bilayer interactions of several cell membrane lipids under various environmental conditions such as pH and ionic strength and compare the results with those obtained for empty liposomes. These results provide insight into the role of bilayer fluctuations in liposome fusion, which is of fundamental interest to liposome based drug delivery schemes.
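The drift/diffusion analysis attributed above to Sainis, Germain, and Dufresne relates the interparticle force to the measured relative drift velocity and diffusivity via the Einstein relation, F = kB·T·v/D. A minimal sketch of that calculation, with hypothetical measurement values:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def interparticle_force(v_drift_m_per_s, diffusivity_m2_per_s, temperature_k=298.0):
    """Interparticle force (N) from the measured relative drift velocity and
    relative diffusivity of a particle pair: F = kB * T * v / D."""
    return KB * temperature_k * v_drift_m_per_s / diffusivity_m2_per_s

# Hypothetical example: drift of 100 nm/s with D = 0.5 um^2/s
f = interparticle_force(100e-9, 0.5e-12)   # ~0.8 fN, in the femto-Newton range
```

The example lands in the femto-Newton range quoted in the abstract; integrating F(r) over separation would then yield the interparticle potential.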
A mathematical program is an optimization problem expressed as an objective function of multiple variables subject to set of constraints. When the optimization problem has specific structure, the problem class usually has a special name. A linear program is the optimization of a linear objective function subject to linear constraints. An integer program is a linear program where some of the variables must take only integer values. A semidefinite program is a linear program where the variables are arranged in a matrix and for all feasible solutions, this matrix must be positive semidefinite. There are general-purpose solvers for each of these classes of mathematical program. There are usually many ways to express a problem as a correct, say, linear program. However, equivalent formulations can have significantly different practical tractability. In this poster, we present new formulations for two classic discrete optimization problems, maximum cut (max cut) and the graphical traveling salesman problem (GTSP), that are significantly stronger, and hence more computationally tractable, than any previous formulations of their class. Both partially answer longstanding open theoretical questions in polyhedral combinatorics.
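To make the max cut problem concrete: given a graph with edge weights, one seeks the bipartition of the vertices maximizing the total weight of edges crossing the partition. A brute-force toy version (feasible only for tiny graphs; real instances are why strong formulations and general-purpose solvers matter) might look like:

```python
from itertools import product

def max_cut(n_vertices, edges):
    """Brute-force maximum cut: try all 2^n vertex bipartitions and return
    the largest total weight of edges crossing the partition."""
    best = 0
    for assignment in product([0, 1], repeat=n_vertices):
        cut = sum(w for (u, v, w) in edges if assignment[u] != assignment[v])
        best = max(best, cut)
    return best

# Unit-weight 4-cycle: the graph is bipartite, so every edge can be cut
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
best = max_cut(4, edges)  # -> 4
```

Since brute force is exponential, practical solution relies on the kind of strong linear or semidefinite formulations this work develops.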
The liquid-liquid interface between semifluorinated alkane diblock copolymers of the form F3C(CF2)n-1-(CH2)m-1CH3 and water, protonated alkanes, and perfluorinated alkanes are studied by fully atomistic molecular dynamics simulations. A modified version of the OPLS-AA (Optimized Parameter for Liquid Simulation All-Atom) force field of Jorgensen et al. has been used to study the interfacial behavior of semifluorinated diblocks. Aqueous interfaces are found to be sharp, with correspondingly large values of the interfacial tension. Due to the reduced hydrophobicity of the protonated block compared to the fluorinated block, hydrogen enhancement is observed at the interface. Water dipoles in the interfacial region are found to be oriented nearly parallel to the liquid-liquid interface. A number of protonated alkanes and perfluorinated alkanes are found to be mutually miscible with the semifluorinated diblocks. For these liquids, interdiffusion follows the expected Fickian behavior, and concentration-dependent diffusivities are determined.
This report is based upon a workshop, called 'CyberFest', held at Sandia National Laboratories on May 27-30, 2008. Participants in the workshop came from organizations both outside and inside Sandia. The premise of the workshop was that thinking about cyber security from a metaphorical perspective could lead to a deeper understanding of current approaches to cyber defense and perhaps to some creative new approaches. A wide range of metaphors was considered, including those relating to: military and other types of conflict, biological, health care, markets, three-dimensional space, and physical asset protection. These in turn led to consideration of a variety of possible approaches for improving cyber security in the future. From the proposed approaches, three were formulated for further discussion. These approaches were labeled 'Heterogeneity' (drawing primarily on the metaphor of biological diversity), 'Motivating Secure Behavior' (taking a market perspective on the adoption of cyber security measures) and 'Cyber Wellness' (exploring analogies with efforts to improve individual and public health).
We present the forensic analysis repository for malware (FARM), a system for automating malware analysis. FARM leverages existing dynamic and static analysis tools and is designed in a modular fashion to provide future extensibility. We present our motivations for designing the system and give an overview of the system architecture. We also present several common scenarios that detail uses for FARM as well as illustrate how automated malware analysis saves time. Finally, we discuss future development of this tool.
Ruby, Douglas S.; Murphy, Brian; Meakin, David; Dominguez, Jason; Hacke, Peter
Back-contact crystalline-silicon photovoltaic solar cells and modules offer a number of advantages, including the elimination of grid shadowing losses, reduced cost through use of thinner silicon substrates, simpler module assembly, and improved aesthetics. While the existing edge tab method for interconnecting and stringing edge-connected back-contact cells is acceptably straightforward and reliable, there are further gains to be exploited when both contact polarities are on one side of the cell. In this work, we produce 'busbarless' emitter wrap-through solar cells that use 41% of the gridline silver (Ag) metallization mass compared to the edge tab design. Further, series resistance power losses are reduced by extraction of current from more places on the cell rear, leading to a fill factor improvement of about 6% (relative) at the module level. Series resistance and current-generation losses associated with large rear bondpads and busbars are eliminated. Use of thin silicon (Si) wafers is enabled because of the reduced Ag metallization mass and by interconnection with conductive adhesives, leading to reduced bow. The busbarless cell design interconnected with conductive adhesives passes typical International Electrotechnical Commission damp heat and thermal cycling tests.
Water quality often limits the potential uses of scarce water resources in semiarid and arid regions. To best manage water quality one must understand the sources and sinks of both solutes and water to the river system. Nutrient concentration patterns can identify source and sink locations, but cannot always determine biotic processes that affect nutrient concentrations. Modeling tools can provide insight into these large-scale processes. To address questions about large-scale nitrogen removal in the Middle Rio Grande, NM, we created a system dynamics nitrate model using an existing integrated surface water--groundwater model of the region to evaluate our conceptual models of uptake and denitrification as potential nitrate removal mechanisms. We modeled denitrification in groundwater as a first-order process dependent only on concentration and used a 5% denitrification rate. Uptake was assumed to be proportional to transpiration and was modeled as a percentage of the evapotranspiration calculated within the model multiplied by the nitrate concentration in the water being transpired. We modeled riparian uptake as 90% and agricultural uptake as 50% of the respective evapotranspiration rates. Using these removal rates, our model results suggest that riparian uptake, agricultural uptake and denitrification in groundwater are all needed to produce the observed nitrate concentrations in the groundwater, conveyance channels, and river as well as the seasonal concentration patterns. The model results indicate that a total of 497 metric tons of nitrate-N are removed from the Middle Rio Grande annually. Where river nitrate concentrations are low and there are no large nitrate sources, nitrate behaves nearly conservatively and riparian and agricultural uptake are the most important removal mechanisms. Downstream of a large wastewater nitrate source, denitrification and agricultural uptake were responsible for approximately 90% of the nitrogen removal.
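A loose sketch of the two removal mechanisms described above (not the report's system dynamics model): first-order denitrification in groundwater and uptake proportional to evapotranspiration. The 5% denitrification rate and the 90% riparian / 50% agricultural uptake fractions follow the abstract; the time step, units, and example values are hypothetical.

```python
def denitrification(no3_mass_kg, rate=0.05):
    """First-order denitrification in groundwater: removal depends only on
    the nitrate mass present and a fixed rate per time step."""
    return rate * no3_mass_kg

def uptake(et_volume_m3, no3_conc_kg_per_m3, fraction):
    """Uptake proportional to transpiration: a fraction of the ET volume
    times the nitrate concentration of the transpired water."""
    return fraction * et_volume_m3 * no3_conc_kg_per_m3

riparian = uptake(1000.0, 0.002, 0.90)      # 1.8 kg removed this step
agricultural = uptake(1000.0, 0.002, 0.50)  # 1.0 kg removed this step
```

In the full model these terms are embedded in the integrated surface water--groundwater balance; here they only illustrate the functional forms.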
Uncertainty in site characterization arises from a lack of data and knowledge about a site and includes uncertainty in the boundary conditions; uncertainty in the characteristics, location, and behavior of major features within an investigation area (e.g., major faults as barriers or conduits); uncertainty in the geologic structure; and differences in numerical implementation (e.g., 2-D versus 3-D, finite difference versus finite element, grid resolution, deterministic versus stochastic). Since the true condition at a site can never be known, selection of the best conceptual model is very difficult. In addition, limiting the understanding to a single conceptualization too early in the process, or before data can support that conceptualization, may lead to unwarranted confidence in a characterization as well as to data collection efforts and field investigations that are misdirected and/or redundant. Using a series of numerical modeling experiments, this project examined the application and use of information criteria within the site characterization process. The numerical experiments are based on models of varying complexity that were developed to represent one of two synthetically developed groundwater sites: (1) a fully hypothetical site that represented a complex, multi-layer, multi-faulted site, and (2) a site that was based on the Horonobe site in northern Japan. Each of the synthetic sites was modeled in detail to provide increasingly informative 'field' data over successive iterations to the representing numerical models. The representing numerical models were calibrated to the synthetic site data and then ranked and compared using several different information criteria approaches. Results show that, for the early phases of site characterization, low-parameterized models ranked highest while more complex models generally ranked lowest. In addition, predictive capabilities were also better with the low-parameterized models.
For the later iterations, when more data were available, the information criteria rankings tended to converge on the more highly parameterized models. Analysis of the numerical experiments suggests that information criteria rankings can be extremely useful for site characterization, but only when the rankings are placed in context and the contribution of each bias term is understood.
Highly collimated outflows or jets are produced by a number of astrophysical objects including protostars. The morphology and collimation of these jets is thought to be strongly influenced by the effects of radiative cooling, angular momentum and the interstellar medium surrounding the jet. Astrophysically relevant experiments are performed with conical wire array z-pinches investigating each of these effects. It is possible in each case to enter the appropriate parameter regime, leading the way towards future experiments where these different techniques can be more fully combined.
Sandia National Laboratories has tested, evaluated, and reported on the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication in SAND2008-. One test, Input Terminated Noise, allows us to characterize the self-noise of the Smart24 system. By computing the power spectral density (PSD) of the input-terminated noise time series and correcting for the instrument response of different seismometers, the resulting spectrum can be compared to the USGS new low noise model (NLNM) of Peterson (1996) to determine whether the matched system of seismometer and Smart24 is quiet enough for any general deployment location. Four seismometer models were evaluated: the Streckeisen STS2 (low and high gain), Guralp CMG3T, and Geotech GS13. Each has a unique pass-band, defined as the frequency band over which the instrument-corrected noise spectrum falls below the new low noise model.
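The PSD computation described above can be sketched with a minimal Welch-style estimator (averaged periodograms of windowed, half-overlapping segments). This is a generic stand-in, not the analysis code used in the report, and it omits the instrument-response correction step:

```python
import numpy as np

def psd_welch_like(x, fs, nperseg=1024):
    """One-sided PSD estimate: average periodograms of Hann-windowed,
    half-overlapping segments. Units: (input units)^2 / Hz."""
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)          # window power normalization
    starts = range(0, len(x) - nperseg + 1, nperseg // 2)
    periodograms = [np.abs(np.fft.rfft(win * x[i:i + nperseg])) ** 2 / scale
                    for i in starts]
    p = np.mean(periodograms, axis=0)
    p[1:-1] *= 2                           # fold negative frequencies (one-sided)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, p
```

For white noise of variance σ², the PSD integrates back to σ², which is a convenient sanity check before converting to dB relative to the NLNM.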
Surface scabbling of concrete by laser processing has been demonstrated in the literature for large-area problems (~50 mm wide × 10 mm deep) using physically large, high-power-consumption, multi-kW CW laser systems. With large spot diameters (~50 mm) and low power densities (~300 W/cm²), large-volume thermal stresses are induced which promote concrete cracking. This process is highly dependent on power density and heat input (J/m). Excessively high power densities cause melting and generate potentially toxic fumes by vaporizing the cement matrix material. New applications require concrete removal with more portable, lower-power equipment and low particulate and fume generation. Recent results investigating the process for small-area (~2 × 2 mm) removal are examined and discussed. Tests performed were limited to <700 W output power. Ablation via thermal cracking was observed at larger spot sizes, but as the spot size approached 10 mm (with constant power density), ablation ceased and melting predominated. Scaling effects involving temperature gradients through the ITZ (interfacial transition zone), the probability of including an ITZ in the beam path at decreasing spot sizes, and the gradient effects on bulk properties between rock and sand zones will be presented and discussed.
Singlet oxygen generators are multiphase flow chemical reactors used to generate energetic oxygen as fuel for chemical oxygen iodine lasers. In this paper, a theoretical model of the generator is presented along with its solutions over ranges of parameter space and oxygen-maximizing optimizations. The singlet oxygen generator (SOG) is a low-pressure, multiphase flow chemical reactor that is used to produce molecular oxygen in an electronically excited state, i.e., singlet delta oxygen. The primary product of the reactor, the energetic oxygen, is used in a stage immediately succeeding the SOG to dissociate and energize iodine. The gas mixture including the iodine is accelerated to a supersonic speed and lased. Thus the SOG is the fuel generator for the chemical oxygen iodine laser (COIL). The COIL has important applications both for military purposes--it was developed by the US Air Force in the 1970s--and, as the infrared beam is readily absorbed by metals, for industrial cutting and drilling. The SOG appears in various configurations, but the one in focus here is a crossflow droplet generator SOG. A gas consisting of molecular chlorine and a diluent, usually helium, is pumped through a roughly rectangular channel. An aqueous solution of hydrogen peroxide and potassium hydroxide is pumped through small holes into the channel, perpendicular to the direction of the gas flow. Doing so causes the solution to become aerosolized. Dissociation of the potassium hydroxide draws a proton from the hydrogen peroxide, generating an HO2 radical in the liquid. Chlorine diffuses into the liquid and reacts with the HO2 ion, producing the singlet delta oxygen; some of the oxygen diffuses back into the gas phase. The focus of this work is to generate a predictive multiphase flow model of the SOG in order to optimize its design.
The equations solved are the so-called Eulerian-Eulerian form of the multiphase flow Navier-Stokes equations, wherein one set of equations represents the gas phase and another equation set of size m represents the liquid phase. In this case, m is representative of the division of the liquid phase into distinct representations of the various droplet sizes distributed in the reactor. A stabilized Galerkin formulation is used to solve the equation set on a computer. The set of equations is large. There are five equations representing the gas phase: continuity, vector momentum, and heat. There are 5m representing the liquid phase: number density, vector momentum, and heat. Four mass transfer equations represent the gas phase constituents, and there are m advection-diffusion equations representing the HO2 ion concentration in the liquid phase. Thus we are taking advantage of and developing algorithms to harness the power of large parallel computing architectures to solve the steady-state form of these equations numerous times, so as to explore the large parameter space of the equations via continuation methods and to maximize the generation of singlet delta oxygen via optimization methods. Presented here are the set of equations that are solved and the methods we are using to solve them. Solutions of the equations will be presented along with solution paths representing varying aerosol loading--the ratio of liquid to gas mass flow rates--and simple optimizations centered around maximizing the oxygen production and minimizing the amount of entrained liquid in the gas exit stream. Gas-entrained liquid is important to minimize as it can destroy the lenses and mirrors present in the lasing cavity.
We describe the development of a novel silicon quantum bit (qubit) device architecture that uses materials compatible with a Sandia National Laboratories (SNL) 0.35 μm complementary metal oxide semiconductor (CMOS) process and is intended to operate at 100 mK. We describe how the qubit structure can be integrated with CMOS electronics, which is believed to have advantages for critical functions like fast single-electron electrometry for readout compared to current approaches using radio frequency techniques. Critical materials properties are reviewed, and preliminary characterization of the SNL CMOS devices at 4.2 K is presented.
We consider how to distribute sparse matrices among processes to reduce communication costs in parallel sparse matrix computations, specifically sparse matrix-vector multiplication. Our main contributions are: (i) an exact graph model for communication with general (two-dimensional) matrix distribution, and (ii) a recursive partitioning algorithm based on nested dissection (substructuring). We show that the communication volume is closely linked to vertex separators. We have implemented our algorithm using hypergraph partitioning software to enable a fair comparison with existing methods. We present numerical results for sparse matrices from several application areas, with up to 9 million nonzeros. The results show that our new approach is superior to traditional 1-D partitioning and comparable to a current leading partitioning method, the fine-grain hypergraph method, in terms of communication volume. Our nested dissection method has two advantages over the fine-grain method: it is faster to compute, and the resulting distribution requires fewer communication messages.
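To illustrate what "communication volume" means here, consider the simplest case, a 1-D row partitioning of sparse matrix-vector multiply: a process must receive x[j] once for every column j that appears in its rows but is owned by another process. This toy calculation is for intuition only; the paper's contribution is a 2-D nested-dissection distribution, not shown here.

```python
def comm_volume_1d(rows_of_part, nonzero_cols, owner_of_col):
    """Communication volume (words received) for SpMV y = A*x under a 1-D row
    partition. rows_of_part: {part: set of row ids}; nonzero_cols:
    {row: set of column ids with nonzeros}; owner_of_col: {col: owning part}."""
    volume = 0
    for part, rows in rows_of_part.items():
        needed = set()
        for r in rows:
            needed |= nonzero_cols[r]        # x entries this part must read
        volume += sum(1 for c in needed if owner_of_col[c] != part)
    return volume

# Hypothetical 4x4 matrix split between two processes
rows_of_part = {0: {0, 1}, 1: {2, 3}}
owner_of_col = {0: 0, 1: 0, 2: 1, 3: 1}
nonzero_cols = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3}}
vol = comm_volume_1d(rows_of_part, nonzero_cols, owner_of_col)  # -> 2
```

Good partitionings (and good vertex separators) minimize exactly this count of remote x-entries.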
Optical beam failure analysis methods provide unique capabilities to identify and localize defect types that would be difficult or impossible to find by other methods. By understanding the physics of signal generation, the user gains the insight necessary to optimize technique performance.
Production of renewable biofuels to displace fossil fuels currently consumed in the transportation sector is a pressing multi-agency national priority. Currently, nearly all fuel ethanol is produced from corn-derived starch. Dedicated 'energy crops' and agricultural waste are preferred long-term solutions for renewable, cheap, and globally available biofuels, as they avoid some of the market pressures and secondary greenhouse gas emission challenges currently facing corn ethanol. These sources of lignocellulosic biomass are converted to fermentable sugars using a variety of chemical and thermochemical pretreatments, which disrupt cellulose and lignin cross-links, allowing exogenously added recombinant microbial enzymes to more efficiently hydrolyze the cellulose for 'deconstruction' into glucose. This process is plagued with inefficiencies, primarily due to the recalcitrance of cellulosic biomass, mass transfer issues during deconstruction, and low activity of recombinant deconstruction enzymes. Costs are also high due to the requirement for enzymes and reagents, and energy-intensive and cumbersome pretreatment steps. One potential solution to these problems is found in synthetic biology: the authors propose to engineer plants that self-produce a suite of cellulase enzymes targeted to the apoplast for cleaving the linkages between lignin and cellulosic fibers. The genes encoding the degradation enzymes, also known as cellulases, are obtained from extremophilic organisms that grow at high temperatures (60-100 °C) and acidic pH levels (<5). These enzymes will remain inactive during the life cycle of the plant but become active during hydrothermal pretreatment, i.e., at elevated temperatures. Deconstruction can thus be integrated into a one-step process, thereby increasing efficiency (cellulose-cellulase mass-transfer rates) and reducing costs.
The proposed disruptive technologies address biomass deconstruction processes by developing transgenic plants encoding a suite of enzymes used in cellulosic deconstruction. The unique aspects of this technology are the rationally engineered, highly productive extremophilic enzymes, targeted to specific cellular locations (the apoplast), and their dormancy during normal plant proliferation; the enzymes become Trojan horses under pretreatment conditions. The authors have been leveraging Sandia's established enzyme-engineering and imaging capabilities. Their technical approach not only targets the recalcitrance and mass-transfer problems during biomass degradation but also eliminates the costs associated with industrial-scale production of microbial enzymes added during processing.
Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).
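As an order-of-magnitude check (not part of the report) on the quoted wind load: the bare dynamic pressure q = ½ρv² of a 100 mph wind is about 0.18 psi, consistent in magnitude with the 0.2 psi static load applied in the simulations (design loads may also include pressure coefficients not modeled here).

```python
RHO_AIR = 1.225           # air density at sea level, kg/m^3
MPH_TO_MPS = 0.44704      # miles per hour -> meters per second
PA_PER_PSI = 6894.76      # pascals per psi

def dynamic_pressure_psi(speed_mph):
    """Dynamic pressure q = 0.5 * rho * v^2 of wind at the given speed, in psi."""
    v = speed_mph * MPH_TO_MPS
    return 0.5 * RHO_AIR * v * v / PA_PER_PSI

q = dynamic_pressure_psi(100.0)   # ~0.18 psi
```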
The K Area Complex (KAC) at the Savannah River Site (SRS) has been utilizing HiTop hydrogen getter material in 9975 Shipping Containers to prevent the development of flammable environments during storage of moisture-containing plutonium oxides. Previous testing and subsequent reports have been performed and produced by Sandia National Laboratories (SNL) to demonstrate the suitability and longevity of the getter during storage at bounding thermal conditions. To date, results have shown that after 18 months of continuous storage at 70 °C, the getter is able to both recombine gaseous hydrogen and oxygen into water when oxygen is available, and irreversibly getter (i.e., scavenge) hydrogen from the vapor space when oxygen is not available, both under a CO2 environment. [Refs. 1-5] Both of these reactions are catalytically enhanced and thermodynamically favorable. The purpose of this paper is to establish the justification that maintaining the current efforts of biannual testing is no longer necessary, due to the robust performance of the getter material, the very unlikely potential that the recombination reaction will fail during storage conditions in KAC, and the insignificant aging effects that have been seen in the testing to date.
Many weapons components (e.g., firing sets) are encapsulated with blown foams. Foam is a strong, lightweight material--a good compromise between the conflicting needs of structural stability and electronic function. Current foaming processes can lead to unacceptable voids, property variations, cracking, and slipped schedules, which is a long-standing issue. Predicting the process is not currently possible because the material is polymerizing and multiphase, with a changing microstructure. The goals of this project are: (1) produce uniform encapsulant consistently and improve processability; (2) eliminate metering issues/voids; (3) lower residual stresses and exotherm to protect electronics; and (4) maintain desired properties--lightweight, strong, no delamination/cracking, and ease of removal. The achievements in the first year are: (1) developed patentable chemical foaming chemistry - TA; (2) developed a persistent non-curing foam for systematic evaluation of the fundamental physics of foams--initial testing of the non-curing foam shows that surfactants are very important; (3) identified a foam stability strategy using a stacked reaction scheme; (4) developed foam rheology methodologies and shear apparatuses--began testing candidates for shear stability; (5) began development of a computational model; and (6) began development of methodology and collection of property measurements/boundary conditions for input to the computational model.
While models of combustion processes have been successful in developing engines with improved fuel economy, more costly simulations are required to accurately model pollution chemistry. These simulations will also involve significant parametric uncertainties. Computational singular perturbation (CSP) and polynomial chaos-uncertainty quantification (PC-UQ) can be used to mitigate the additional computational cost of modeling combustion with uncertain parameters. PC-UQ was used to interrogate and analyze the Davis-Skodje model, where the deterministic parameter in the model was replaced with an uncertain parameter. In addition, PC-UQ was combined with CSP to explore how model reduction could be combined with uncertainty quantification to understand how reduced models are affected by parametric uncertainty.
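As an illustrative sketch of the non-intrusive approach described in the abstract above, the snippet below propagates an uncertain parameter through the Davis-Skodje model (dy1/dt = -y1; dy2/dt = -gamma*y2 + ((gamma-1)*y1 + gamma*y1^2)/(1+y1)^2) using a Legendre polynomial chaos expansion built by Gauss-Legendre quadrature. The parameter range, initial conditions, and quadrature order are illustrative assumptions, not values from the study.

```python
import math

def davis_skodje_rhs(y, gamma):
    """Right-hand side of the Davis-Skodje model; gamma sets the timescale separation."""
    y1, y2 = y
    return (-y1,
            -gamma * y2 + ((gamma - 1.0) * y1 + gamma * y1 * y1) / (1.0 + y1) ** 2)

def integrate(y0, gamma, t_end, n=400):
    """Classical RK4 integration from t=0 to t_end."""
    h = t_end / n
    y = y0
    for _ in range(n):
        k1 = davis_skodje_rhs(y, gamma)
        k2 = davis_skodje_rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)), gamma)
        k3 = davis_skodje_rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)), gamma)
        k4 = davis_skodje_rhs(tuple(yi + h * ki for yi, ki in zip(y, k3)), gamma)
        y = tuple(yi + h / 6.0 * (a + 2 * b + 2 * c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
    return y

def pce_of_y2(y0, t_end, gamma0, delta):
    """Second-order Legendre PCE of y2(t_end) for gamma = gamma0 + delta*xi, xi ~ U[-1,1].

    Coefficients are computed by 3-point Gauss-Legendre quadrature projection."""
    nodes = (-math.sqrt(0.6), 0.0, math.sqrt(0.6))
    weights = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)
    legendre = (lambda x: 1.0, lambda x: x, lambda x: 0.5 * (3.0 * x * x - 1.0))
    samples = [integrate(y0, gamma0 + delta * xi, t_end)[1] for xi in nodes]
    coeffs = []
    for k, P in enumerate(legendre):
        # c_k = (2k+1) * E[u * P_k] under the uniform density 1/2 on [-1, 1]
        coeffs.append((2 * k + 1) * sum(0.5 * w * u * P(xi)
                                        for xi, w, u in zip(nodes, weights, samples)))
    return coeffs

coeffs = pce_of_y2(y0=(1.0, 1.0), t_end=1.0, gamma0=3.0, delta=0.5)
mean = coeffs[0]
variance = sum(c * c / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
```

The mean of the output is the zeroth coefficient and the variance follows from the Legendre norms, so the uncertain-parameter statistics come from only three deterministic ODE solves rather than a large Monte Carlo sample.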
Bright, intense x-ray sources with extreme plasma parameters (micropinch plasmas) have previously been characterized at 0.1-0.4 MA, but the scaling of such sources at higher current is poorly understood. The x-ray source size and radiation power of 1 MA X pinches were studied as a function of wire material (Al, Ti, Mo, and W) and number (1-, 2-, 8-, 32-, and 64-wire configurations). The smallest bright spots observed were from 32-wire tungsten X pinches, which produced {le} 11-16 {micro}m, {approx}2 J, 1-10 GW sources of 3-5 keV radiation.
Planar wire arrays are studied at 3-6 MA on the Saturn pulsed power generator as potential drivers of compact hohlraums for inertial confinement fusion studies. Comparison with zero-dimensional modeling suggests that there is significant trailing mass. The modeled energy coupled from the generator cannot generally explain the energy in the main x-ray pulse. Preliminary comparison at 1-6 MA indicates sub-quadratic scaling of x-ray power in a manner similar to compact cylindrical wire arrays. Time-resolved pinhole images are used to study the implosion dynamics.
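The zero-dimensional modeling mentioned in the abstract above treats the array as a thin shell accelerated inward by its own magnetic pressure. A minimal sketch for the cylindrical case is below; the current waveform, mass per unit length, and initial radius are illustrative assumptions, and a planar array would require a different geometric force term.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def implode(i_peak, tau, m_lin, r0, r_stop_frac=0.1, dt=1e-10):
    """Zero-dimensional thin-shell implosion of a cylindrical wire array.

    Equation of motion per unit length:
        m_lin * dv/dt = -mu0 * I(t)^2 / (4*pi*r)
    with an idealized drive waveform I(t) = i_peak * sin^2(pi*t / (2*tau)).

    Returns (stagnation time, implosion velocity, kinetic energy per unit
    length) when the shell reaches r_stop_frac * r0."""
    t, r, v = 0.0, r0, 0.0
    while r > r_stop_frac * r0:
        current = i_peak * math.sin(math.pi * t / (2.0 * tau)) ** 2
        a = -MU0 * current * current / (4.0 * math.pi * r * m_lin)
        v += a * dt          # semi-implicit Euler step
        r += v * dt
        t += dt
    return t, v, 0.5 * m_lin * v * v

# Illustrative Saturn-class numbers: 6 MA drive, 0.2 mg/cm/cm array, 1 cm radius
t_stag, v_imp, ke = implode(i_peak=6.0e6, tau=50e-9, m_lin=2.0e-4, r0=0.01)
```

The kinetic energy delivered at stagnation from such a model is what is compared against the measured main x-ray pulse; a shortfall of the modeled coupled energy relative to the radiated energy is one signature of the trailing-mass behavior noted above.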
A fully three-dimensional electromagnetic model of the major pulsed power components of the 26-MA ZR accelerator is presented. This large-scale simulation model tracks the evolution of electromagnetic waves through the intermediate storage capacitors, laser-triggered gas switches, pulse-forming lines, water switches, tri-plate transmission lines, and water convolute to the vacuum insulator stack. The plates at the insulator stack are coupled to a transmission line circuit model of the four-level magnetically-insulated transmission line section and post-hole convolutes. The vacuum section circuit model is terminated by either a short-circuit load or dynamic models of imploding z-pinch loads. The simulation results are compared with electrical measurements made throughout the ZR accelerator, and good agreement is found, especially for times before and up to peak load power. This modeling effort represents new opportunities for modeling existing and future large-scale pulsed power systems used in a variety of high energy density physics and radiographic applications.
This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.
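As an illustrative sketch of the kind of interface law such elements can encode (not the specific formulation used in this work), the snippet below implements a Lennard-Jones 3-9 traction-separation relation whose parameters are the energy of adhesion w and the equilibrium separation h0; integrating the traction from h0 to infinity recovers w, which is why the energy of adhesion is a natural optimization variable.

```python
def vdw_traction(h, w, h0):
    """Lennard-Jones 3-9 traction-separation law for van der Waals adhesion.

    Derived from the surface potential phi(h) = w*((h0/h)**8/3 - 4*(h0/h)**2/3),
    which has its minimum -w at h = h0. Negative traction is attractive."""
    return (8.0 * w / (3.0 * h0)) * ((h0 / h) ** 9 - (h0 / h) ** 3)

def work_of_separation(w, h0, h_max_factor=200.0, n=200000):
    """Numerically integrate -T(h) from h0 to ~infinity; should recover w."""
    h_max = h_max_factor * h0
    dh = (h_max - h0) / n
    total = 0.0
    for i in range(n):
        h = h0 + (i + 0.5) * dh  # midpoint rule
        total += -vdw_traction(h, w, h0) * dh
    return total

# Illustrative values: w = 0.05 J/m^2, h0 = 1 nm
work = work_of_separation(0.05, 1.0e-9)
```

Because the traction vanishes at h0, is attractive for larger separations, and repulsive in compression, an interface element built on this law reproduces both the pull-off force and the adhesion energy in a force-displacement match.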
A model of the repair operations of the voice telecommunications network is used to study labor management strategies under a disaster scenario where the workforce is overwhelmed. The model incorporates overtime and fatigue functions and optimizes the deployment of the workforce based on the cost of the recovery and the time it takes to recover. The analysis shows that the current practices employed in workforce management in a disaster scenario are not optimal and more strategic deployment of that workforce is beneficial.
We describe the fabrication of three-dimensional silicon photonic crystals using polymer templates defined by a single-step, two-photon exposure through a layer of photopolymer with relief molded on its surface. The resulting crystals exhibit high structural quality over large areas, displaying geometries consistent with calculation. Spectroscopic measurements of transmission and reflection through the silicon and polymer structures reveal excellent optical properties, approaching those predicted by simulations that assume ideal layouts.
BiFeO3 thin films have been deposited on (101) DyScO3, (0001) AlGaN/GaN, and (0001) SiC single crystal substrates by reactive molecular-beam epitaxy in an adsorption-controlled growth regime. This is achieved by supplying a bismuth over-pressure and utilizing the differential vapor pressures between bismuth oxides and BiFeO3 to control stoichiometry. Four-circle x-ray diffraction reveals phase-pure, epitaxial films with rocking curve full width at half maximum values as narrow as 7.2 arc seconds. Epitaxial growth of (0001)-oriented BiFeO3 thin films on (0001) GaN, including AlGaN HEMT structures, and (0001) SiC has been realized utilizing intervening epitaxial (111) SrTiO3/(100) TiO2 buffer layers. The epitaxial BiFeO3 thin films have two in-plane orientations: [1120] BiFeO3 parallel to [1120] GaN (SiC), plus a twin variant related by a 180{sup o} in-plane rotation. This epitaxial integration of the ferroelectric with the highest known polarization, BiFeO3, with wide band gap semiconductors is an important step toward novel field-effect devices.
The advent of the nuclear renaissance gives rise to a concern for the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating, and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facilities by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). We propose to develop a framework that monitors process information continuously, measuring and reducing relevant SOSS risks to ensure the safe and legitimate use of a nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The automation of new nuclear facilities, which require minimal manual operation, provides an opportunity to utilize the abundance of process information for monitoring SOSS risk, and continuous monitoring can also lead to greater transparency of nuclear fuel cycle activities. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is demonstrating the ability to monitor operational signals in real time through a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations, and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.
This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
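A standard way to combine independent fixes in a minimum-variance sense is inverse-covariance (information) weighting. The sketch below fuses 2-D horizontal fixes and is an illustration of the general idea under a Gaussian-error assumption, not the specific algorithm derived in the paper; the function and variable names are hypothetical.

```python
def inv2(m):
    """Invert a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def matvec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

def matadd(m, n):
    return tuple(tuple(mi + ni for mi, ni in zip(mr, nr)) for mr, nr in zip(m, n))

def fuse_fixes(fixes):
    """Minimum-variance fusion of independent position fixes.

    fixes: list of (position, covariance) pairs, position = (x, y) and
    covariance a 2x2 tuple. Returns (fused position, fused covariance):

        P = (sum_i Pi^-1)^-1,   x = P * sum_i (Pi^-1 * xi)
    """
    info = ((0.0, 0.0), (0.0, 0.0))
    info_vec = (0.0, 0.0)
    for pos, cov in fixes:
        w = inv2(cov)                 # information matrix of this fix
        info = matadd(info, w)
        wv = matvec(w, pos)
        info_vec = (info_vec[0] + wv[0], info_vec[1] + wv[1])
    fused_cov = inv2(info)
    return matvec(fused_cov, info_vec), fused_cov
```

With equal covariances this reduces to the simple average and the fused covariance shrinks with the number of fixes, which is the sense in which using all available data is optimal; a tighter fix pulls the fused estimate toward itself in proportion to its information content.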