Publications

Results 77201–77400 of 99,299

Diffusion bonding of beryllium to CuCrZr for ITER applications

Goods, Steven H.

Low-temperature diffusion bonding of beryllium to CuCrZr was investigated for fusion reactor applications. Hot isostatic pressing was accomplished using various metallic interlayers. Diffusion profiles suggest that titanium is effective at preventing the formation of Be-Cu intermetallics. Shear strength measurements suggest that acceptable results were obtained at temperatures as low as 540 °C.

More Details

D loss as a function of temperature in ErD2 films on kovar with and without an intermediate Mo diffusion barrier

Kammler, Daniel; Wampler, William R.; Van Deusen, Stuart B.; King, Saskia H.; Tissot, Ralph G.; Brewer, Luke N.; Espada, Loren I.; Goeke, Ronald S.

- Mixing from some thermal process steps is thought to drive H, D, and T loss; this does not appear to be a problem with the Mo/Er occluder stacks.
- Diffusion barriers were investigated to prevent mixing.

More Details

Particle interaction measurements using laser tweezers optical trapping

Grillet, Anne M.; Brotherton, Christopher M.; Brinker, C.J.

Laser tweezers optical trapping provides a unique noninvasive capability to trap and manipulate particles in solution at the focal point of a laser beam passed through a microscope objective. Additionally, combined with image analysis, interaction forces between colloidal particles can be quantitatively measured. By looking at the displacement of particles within the laser trap due to the presence of a neighboring particle, or at the relative diffusion of two particles held near each other by optical traps, interparticle interaction forces ranging from pico- to femto-Newtons can be measured. Understanding interaction forces is critical for predicting the behavior of particle dispersions, including dispersion stability and flow rheology. Using a new analysis method proposed by Sainis, Germain, and Dufresne, we can simultaneously calculate the interparticle velocity and particle diffusivity, which allows direct calculation of the interparticle potential. By applying this versatile tool, we measure differences in interactions between various phospholipid bilayers that have been coated onto silica spheres as a new type of solid supported liposome. We measure bilayer interactions of several cell membrane lipids under various environmental conditions such as pH and ionic strength and compare the results with those obtained for empty liposomes. These results provide insight into the role of bilayer fluctuations in liposome fusion, which is of fundamental interest to liposome-based drug delivery schemes.

More Details

Better relaxations of classical discrete optimization problems

Carr, Robert D.

A mathematical program is an optimization problem expressed as an objective function of multiple variables subject to a set of constraints. When the optimization problem has specific structure, the problem class usually has a special name. A linear program is the optimization of a linear objective function subject to linear constraints. An integer program is a linear program in which some of the variables must take only integer values. A semidefinite program is a linear program where the variables are arranged in a matrix and, for all feasible solutions, this matrix must be positive semidefinite. There are general-purpose solvers for each of these classes of mathematical program. There are usually many ways to express a problem as a correct, say, linear program. However, equivalent formulations can have significantly different practical tractability. In this poster, we present new formulations for two classic discrete optimization problems, maximum cut (max cut) and the graphical traveling salesman problem (GTSP), that are significantly stronger, and hence more computationally tractable, than any previous formulations of their class. Both partially answer longstanding open theoretical questions in polyhedral combinatorics.

More Details

Liquid-liquid interfaces of semifluorinated alkane diblock copolymers with water, alkanes, and perfluorinated alkanes

Proposed for publication in the Journal of Physical Chemistry B.

Grest, Gary S.

The liquid-liquid interfaces between semifluorinated alkane diblock copolymers of the form F3C(CF2)n-1-(CH2)m-1CH3 and water, protonated alkanes, and perfluorinated alkanes are studied by fully atomistic molecular dynamics simulations. A modified version of the OPLS-AA (Optimized Potentials for Liquid Simulations All-Atom) force field of Jorgensen et al. has been used to study the interfacial behavior of semifluorinated diblocks. Aqueous interfaces are found to be sharp, with correspondingly large values of the interfacial tension. Due to the reduced hydrophobicity of the protonated block compared to the fluorinated block, hydrogen enhancement is observed at the interface. Water dipoles in the interfacial region are found to be oriented nearly parallel to the liquid-liquid interface. A number of protonated alkanes and perfluorinated alkanes are found to be mutually miscible with the semifluorinated diblocks. For these liquids, interdiffusion follows the expected Fickian behavior, and concentration-dependent diffusivities are determined.

More Details

Metaphors for cyber security

Karas, Thomas H.; Parrott, Lori K.

This report is based upon a workshop, called 'CyberFest', held at Sandia National Laboratories on May 27-30, 2008. Participants in the workshop came from organizations both outside and inside Sandia. The premise of the workshop was that thinking about cyber security from a metaphorical perspective could lead to a deeper understanding of current approaches to cyber defense and perhaps to some creative new approaches. A wide range of metaphors was considered, including those relating to military and other types of conflict, biological systems, health care, markets, three-dimensional space, and physical asset protection. These in turn led to consideration of a variety of possible approaches for improving cyber security in the future. From the proposed approaches, three were formulated for further discussion. These approaches were labeled 'Heterogeneity' (drawing primarily on the metaphor of biological diversity), 'Motivating Secure Behavior' (taking a market perspective on the adoption of cyber security measures) and 'Cyber Wellness' (exploring analogies with efforts to improve individual and public health).

More Details

FARM : an automated malware analysis environment

Chiang, Ken C.; Lloyd, Levi; Vanderveen, Keith

We present the forensic analysis repository for malware (FARM), a system for automating malware analysis. FARM leverages existing dynamic and static analysis tools and is designed in a modular fashion to provide future extensibility. We present our motivations for designing the system and give an overview of the system architecture. We also present several common scenarios that detail uses for FARM as well as illustrate how automated malware analysis saves time. Finally, we discuss future development of this tool.

More Details

Enhancing multilingual latent semantic analysis with term alignment information

Chew, Peter A.; Bader, Brett W.

Latent Semantic Analysis (LSA) is based on the Singular Value Decomposition (SVD) of a term-by-document matrix for identifying relationships among terms and documents from co-occurrence patterns. Among the multiple ways of computing the SVD of a rectangular matrix X, one approach is to compute the eigenvalue decomposition (EVD) of a square 2 × 2 composite matrix consisting of four blocks with X and X^T in the off-diagonal blocks and zero matrices in the diagonal blocks. We point out that significant value can be added to LSA by filling in some of the values in the diagonal blocks (corresponding to explicit term-to-term or document-to-document associations) and computing a term-by-concept matrix from the EVD. For the case of multilingual LSA, we incorporate information on cross-language term alignments of the same sort used in Statistical Machine Translation (SMT). Since all elements of the proposed EVD-based approach can rely entirely on lexical statistics, hardly any price is paid for the improved empirical results. In particular, the approach, like LSA or SMT, can still be generalized to virtually any language(s); computation of the EVD takes similar resources to that of the SVD since all the blocks are sparse; and the results of EVD are just as economical as those of SVD.
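
The block-matrix construction described in this abstract is straightforward to prototype. The sketch below is an illustration only, not the authors' code: the matrices are random toy data, and an identity term-term block stands in for real cross-language alignment weights. It builds the composite matrix with X and X^T in the off-diagonal blocks and computes its eigenvalue decomposition to extract a term-by-concept matrix.

```python
# Minimal sketch of the composite-matrix EVD idea (illustrative only).
# X is a toy term-by-document matrix; A holds term-to-term associations
# (e.g., cross-language alignment weights); D holds document-to-document
# associations. Here A is an identity placeholder and D is left empty.
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_docs, k = 6, 4, 3            # toy sizes; k = concepts retained

X = rng.random((n_terms, n_docs))        # term-by-document co-occurrences
A = np.eye(n_terms)                      # placeholder term-term block
D = np.zeros((n_docs, n_docs))           # document-document block

# Symmetric composite matrix  [[A, X], [X^T, D]]
B = np.block([[A, X], [X.T, D]])

# Eigenvalue decomposition of the symmetric composite matrix
evals, evecs = np.linalg.eigh(B)

# Keep the k eigenvectors with largest |eigenvalue|; their first n_terms
# rows form a term-by-concept matrix analogous to the SVD-based LSA space.
order = np.argsort(-np.abs(evals))[:k]
term_by_concept = evecs[:n_terms, order]
print(term_by_concept.shape)             # (6, 3)
```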

More Details

Public release of optimization of metallization scheme for thin emitter wrap-through solar cells for higher efficiency, reduced precious metal costs, and reduced stress

Ruby, Douglas S.; Murphy, Brian; Meakin, David; Dominguez, Jason; Hacke, Peter

Back-contact crystalline-silicon photovoltaic solar cells and modules offer a number of advantages, including the elimination of grid shadowing losses, reduced cost through use of thinner silicon substrates, simpler module assembly, and improved aesthetics. While the existing edge tab method for interconnecting and stringing edge-connected back contact cells is acceptably straightforward and reliable, there are further gains to be exploited when both contact polarities are available on one side of the cell. In this work, we produce 'busbarless' emitter wrap-through solar cells that use only 41% of the gridline silver (Ag) metallization mass of the edge tab design. Further, series resistance power losses are reduced by extraction of current from more places on the cell rear, leading to a fill factor improvement of about 6% (relative) at the module level. Series resistance and current-generation losses associated with large rear bondpads and busbars are eliminated. Use of thin silicon (Si) wafers is enabled by the reduced Ag metallization mass and by interconnection with conductive adhesives, leading to reduced bow. The busbarless cell design interconnected with conductive adhesives passes typical International Electrotechnical Commission damp heat and thermal cycling tests.

More Details

Use of a dynamic simulation model to understand nitrogen cycling in the middle Rio Grande, NM

Tidwell, Vincent C.

Water quality often limits the potential uses of scarce water resources in semiarid and arid regions. To best manage water quality one must understand the sources and sinks of both solutes and water to the river system. Nutrient concentration patterns can identify source and sink locations, but cannot always determine biotic processes that affect nutrient concentrations. Modeling tools can provide insight into these large-scale processes. To address questions about large-scale nitrogen removal in the Middle Rio Grande, NM, we created a system dynamics nitrate model using an existing integrated surface water--groundwater model of the region to evaluate our conceptual models of uptake and denitrification as potential nitrate removal mechanisms. We modeled denitrification in groundwater as a first-order process dependent only on concentration and used a 5% denitrification rate. Uptake was assumed to be proportional to transpiration and was modeled as a percentage of the evapotranspiration calculated within the model multiplied by the nitrate concentration in the water being transpired. We modeled riparian uptake as 90% and agricultural uptake as 50% of the respective evapotranspiration rates. Using these removal rates, our model results suggest that riparian uptake, agricultural uptake and denitrification in groundwater are all needed to produce the observed nitrate concentrations in the groundwater, conveyance channels, and river as well as the seasonal concentration patterns. The model results indicate that a total of 497 metric tons of nitrate-N are removed from the Middle Rio Grande annually. Where river nitrate concentrations are low and there are no large nitrate sources, nitrate behaves nearly conservatively and riparian and agricultural uptake are the most important removal mechanisms. Downstream of a large wastewater nitrate source, denitrification and agricultural uptake were responsible for approximately 90% of the nitrogen removal.
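
As a rough illustration of the removal terms described above, the sketch below applies a 5% first-order denitrification rate and treats uptake as a fixed fraction of evapotranspired water multiplied by the nitrate concentration; all volumes and concentrations are invented placeholders rather than values from the Middle Rio Grande model.

```python
# Toy nitrate mass balance illustrating the removal terms described above;
# all volumes and concentrations are invented placeholders, not calibrated
# Middle Rio Grande values.

def nitrate_removed(conc_mg_per_L, gw_volume_L, riparian_et_L, ag_et_L,
                    denit_rate=0.05, riparian_frac=0.90, ag_frac=0.50):
    """Return (denitrified, riparian_uptake, agricultural_uptake) in mg
    for a single time step."""
    # First-order denitrification in groundwater, dependent only on concentration
    denitrified = denit_rate * conc_mg_per_L * gw_volume_L
    # Uptake modeled as a fraction of the transpired water volume times the
    # nitrate concentration of the water being transpired
    riparian = riparian_frac * riparian_et_L * conc_mg_per_L
    agricultural = ag_frac * ag_et_L * conc_mg_per_L
    return denitrified, riparian, agricultural

# Hypothetical single-step example
print(nitrate_removed(conc_mg_per_L=1.5, gw_volume_L=5.0e8,
                      riparian_et_L=2.0e8, ag_et_L=3.0e8))
```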

More Details

SNL-NUMO collaborative : development of a deterministic site characterization tool using multi-model ranking and inference

Arnold, Bill W.; James, Scott; Gray, Genetha A.; Grace, Matthew D.; Ahlmann, Michael

Uncertainty in site characterization arises from a lack of data and knowledge about a site and includes uncertainty in the boundary conditions, uncertainty in the characteristics, location, and behavior of major features within an investigation area (e.g., major faults as barriers or conduits), uncertainty in the geologic structure, as well as differences in numerical implementation (e.g., 2-D versus 3-D, finite difference versus finite element, grid resolution, deterministic versus stochastic, etc.). Since the true condition at a site can never be known, selection of the best conceptual model is very difficult. In addition, limiting the understanding to a single conceptualization too early in the process, or before data can support that conceptualization, may lead to confidence in a characterization that is unwarranted, as well as to data collection efforts and field investigations that are misdirected and/or redundant. Using a series of numerical modeling experiments, this project examined the application and use of information criteria within the site characterization process. The numerical experiments are based on models of varying complexity that were developed to represent one of two synthetically developed groundwater sites: (1) a fully hypothetical site that represented a complex, multi-layer, multi-faulted site, and (2) a site that was based on the Horonobe site in northern Japan. Each of the synthetic sites was modeled in detail to provide increasingly informative 'field' data over successive iterations to the representing numerical models. The representing numerical models were calibrated to the synthetic site data and then ranked and compared using several different information criteria approaches. Results show that, for the early phases of site characterization, low-parameterized models ranked highest while more complex models generally ranked lowest. In addition, predictive capabilities were also better with the low-parameterized models. For the later iterations, when more data were available, the information criteria rankings tended to converge on the more highly parameterized models. Analysis of the numerical experiments suggests that information criteria rankings can be extremely useful for site characterization, but only when the rankings are placed in context and when the contribution of each bias term is understood.
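
Model ranking of the kind described above is commonly done with an information criterion such as AIC, which trades goodness of fit against parameter count. The sketch below is a generic illustration with hypothetical residuals and parameter counts, not the project's actual ranking procedure.

```python
# Generic AIC-style ranking of calibrated models (illustrative only).
import math

def aic(n_obs, sse, n_params):
    """Akaike information criterion for least-squares fits with Gaussian
    residuals: AIC = n*ln(SSE/n) + 2k (constant terms dropped)."""
    return n_obs * math.log(sse / n_obs) + 2 * n_params

# Hypothetical candidates: (name, sum of squared errors, parameter count)
candidates = [("2-layer, low-parameter", 14.2, 4),
              ("multi-layer", 11.8, 12),
              ("multi-layer + faults", 11.5, 25)]

n_obs = 60  # hypothetical number of calibration observations
ranked = sorted(candidates, key=lambda m: aic(n_obs, m[1], m[2]))
for name, sse, n_params in ranked:
    print(f"{name:28s} AIC = {aic(n_obs, sse, n_params):8.2f}")
```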

More Details

Astrophysical jets with conical wire arrays : radiative cooling, rotation & deflection

Ampleford, David J.; Jennings, Christopher A.

Highly collimated outflows or jets are produced by a number of astrophysical objects including protostars. The morphology and collimation of these jets is thought to be strongly influenced by the effects of radiative cooling, angular momentum and the interstellar medium surrounding the jet. Astrophysically relevant experiments are performed with conical wire array z-pinches investigating each of these effects. It is possible in each case to enter the appropriate parameter regime, leading the way towards future experiments where these different techniques can be more fully combined.

More Details

Geotech Smart24 data acquisition system input terminated noise seismic response adjusted test : Streckeisen STS2-low and high gain, Guralp CMG3T and Geotech GS13 seismometers

Rembold, Randy K.; Harris, James M.

Sandia National Laboratories has tested, evaluated and reported on the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication in SAND2008-. One test, Input Terminated Noise, allows us to characterize the self-noise of the Smart24 system. By computing the power spectral density (PSD) of the input terminated noise time series data set and correcting for the instrument response of different seismometers, the resulting spectrum can be compared to the USGS new low noise model (NLNM) of Peterson (1996) to determine whether the matched system of seismometer and Smart24 is quiet enough for any general deployment location. Four seismometer models were evaluated: the Streckeisen STS2-Low and High Gain, Guralp CMG3T and Geotech GS13 models. Each has a unique pass-band, as defined by the frequency band of the instrument-corrected noise spectrum that falls below the new low-noise model.
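
The evaluation procedure summarized above (PSD of input-terminated data, corrected for instrument response, compared against the NLNM) can be sketched as follows; the sample rate, synthetic record, and flat response are placeholders, and the NLNM curve itself would be supplied from Peterson's published model.

```python
# Sketch of the input-terminated noise evaluation described above; the
# sample rate, synthetic record, and flat instrument response are
# placeholders, and the NLNM comparison curve is left to the user.
import numpy as np
from scipy.signal import welch

fs = 40.0                                      # sample rate in Hz (assumed)
rng = np.random.default_rng(1)
counts = rng.normal(size=int(fs * 3600))       # one hour of digitizer output

# Power spectral density of the input-terminated record (counts^2/Hz)
f, pxx = welch(counts, fs=fs, nperseg=4096)

# Correct by the seismometer response |H(f)| (counts per m/s^2) so the
# digitizer self-noise is expressed as equivalent ground acceleration.
H = np.ones_like(f)                            # flat placeholder response
psd_accel = pxx / np.maximum(np.abs(H) ** 2, 1e-30)

psd_db = 10.0 * np.log10(np.maximum(psd_accel, 1e-30))
# The system is "quiet enough" over the band where psd_db stays below the
# NLNM curve evaluated at the same frequencies (nlnm_db(f), not shown).
print(f[:3], psd_db[:3])
```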

More Details

Laser concrete ablation scaling effects

Norris, Jerome T.

Surface scabbling of concrete by laser processing has been demonstrated in the literature for large-area problems (~50 mm wide × 10 mm deep) using physically large, high-power-consumption, multi-kW CW laser systems. With large spot diameters (~50 mm) and low power densities (~300 W/cm²), large-volume thermal stresses are induced which promote concrete cracking. This process is highly power-density and heat-input (J/m) dependent. Excessively high power densities cause melting and generate potentially toxic fumes by vaporizing the cement matrix material. New applications require concrete removal with more portable, lower-power equipment, and low particulate and fume generation. Recent results investigating the process for small-area (~2 × 2 mm) removal are examined and discussed. Tests performed were limited to <700 W output power. Ablation via thermal cracking was observed at larger spot sizes, but as the spot size approached 10 mm (with constant power density), ablation ceased and melting predominated. Scaling effects involving temperature gradients through the ITZ (Interfacial Transition Zone), the probability of including an ITZ in the beam path at decreasing spot sizes, and the gradient effects on bulk properties between rock and sand zones will be presented and discussed.

More Details

Multiphase reacting flow modeling of singlet oxygen generators for chemical oxygen iodine lasers

Pawlowski, Roger; Salinger, Andrew G.

Singlet oxygen generators are multiphase flow chemical reactors used to generate energetic oxygen to be used as a fuel for chemical oxygen iodine lasers. In this paper, a theoretical model of the generator is presented along with its solutions over ranges of parameter space and oxygen-maximizing optimizations. The singlet oxygen generator (SOG) is a low-pressure, multiphase flow chemical reactor that is used to produce molecular oxygen in an electronically excited state, i.e. singlet delta oxygen. The primary product of the reactor, the energetic oxygen, is used in a stage immediately succeeding the SOG to dissociate and energize iodine. The gas mixture including the iodine is accelerated to a supersonic speed and lased. Thus the SOG is the fuel generator for the chemical oxygen iodine laser (COIL). The COIL has important application for both military purposes--it was developed by the US Air Force in the 1970s--and, as the infrared beam is readily absorbed by metals, industrial cutting and drilling. The SOG appears in various configurations, but the one in focus here is a crossflow droplet generator SOG. A gas consisting of molecular chlorine and a diluent, usually helium, is pumped through a roughly rectangular channel. An aqueous solution of hydrogen peroxide and potassium hydroxide is pumped through small holes into the channel and perpendicular to the direction of the gas flow. So doing causes the solution to become aerosolized. Dissociation of the potassium hydroxide draws a proton from the hydrogen peroxide, generating an HO2 radical in the liquid. Chlorine diffuses into the liquid and reacts with the HO2 ion, producing the singlet delta oxygen; some of the oxygen diffuses back into the gas phase. The focus of this work is to generate a predictive multiphase flow model of the SOG in order to optimize its design. The equations solved are the so-called Eulerian-Eulerian form of the multiphase flow Navier-Stokes equations, wherein one set of the equations represents the gas phase and another equation set of size m represents the liquid phase. In this case, m is representative of the division of the liquid phase into distinct representations of the various droplet sizes distributed in the reactor. A stabilized Galerkin formulation is used to solve the equation set on a computer. The set of equations is large. There are five equations representing the gas phase: continuity, vector momentum, and heat. There are 5m representing the liquid phase: number density, vector momentum, and heat. Four mass transfer equations represent the gas phase constituents, and there are m advection-diffusion equations representing the HO2 ion concentration in the liquid phase. Thus we are taking advantage of and developing algorithms to harness the power of large parallel computing architectures to solve the steady-state form of these equations numerous times so as to explore the large parameter space of the equations via continuation methods and to maximize the generation of singlet delta oxygen via optimization methods. Presented here will be the set of equations that are solved and the methods we are using to solve them. Solutions of the equations will be presented along with solution paths representing varying aerosol loading (the ratio of liquid to gas mass flow rates) and simple optimizations centered around maximizing the oxygen production and minimizing the amount of entrained liquid in the gas exit stream. Gas-entrained liquid is important to minimize as it can destroy the lenses and mirrors present in the lasing cavity.

More Details

Steps toward fabricating cryogenic CMOS compatible single electron devices for future qubits

Ten Eyck, Gregory A.; Tracy, Lisa A.; Wendt, Joel R.; Childs, Kenton D.; Stevens, Jeffrey; Lilly, Michael; Carroll, M.S.; Eng, Kevin E.

We describe the development of a novel silicon quantum bit (qubit) device architecture that involves using materials that are compatible with a Sandia National Laboratories (SNL) 0.35 μm complementary metal oxide semiconductor (CMOS) process intended to operate at 100 mK. We describe how the qubit structure can be integrated with CMOS electronics, which is believed to have advantages for critical functions like fast single electron electrometry for readout compared to current approaches using radio frequency techniques. Critical materials properties are reviewed and preliminary characterization of the SNL CMOS devices at 4.2 K is presented.

More Details

A nested dissection approach to sparse matrix partitioning for parallel computations

Proposed for publication in SIAM Journal on Scientific Computing.

Boman, Erik G.

We consider how to distribute sparse matrices among processes to reduce communication costs in parallel sparse matrix computations, specifically, sparse matrix-vector multiplication. Our main contributions are: (i) an exact graph model for communication with general (two-dimensional) matrix distribution, and (ii) a recursive partitioning algorithm based on nested dissection (substructuring). We show that the communication volume is closely linked to vertex separators. We have implemented our algorithm using hypergraph partitioning software to enable a fair comparison with existing methods. We present numerical results for sparse matrices from several application areas, with up to 9 million nonzeros. The results show that our new approach is superior to traditional one-dimensional partitioning and comparable to a current leading partitioning method, the fine-grain hypergraph method, in terms of communication volume. Our nested dissection method has two advantages over the fine-grain method: it is faster to compute, and the resulting distribution requires fewer communication messages.
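
The recursive substructuring idea can be illustrated on a toy problem. The sketch below performs nested dissection on the graph of a 1-D chain (a tridiagonal matrix), splitting on a middle separator vertex and assigning the two halves to disjoint process groups; production codes would instead compute small vertex separators of a general sparse-matrix graph using a partitioning library.

```python
# Toy nested dissection on the graph of a 1-D chain (tridiagonal matrix),
# illustrating the recursive separator idea; real codes use general graph
# partitioners to find small vertex separators.

def nested_dissection(vertices, procs):
    """Assign each vertex to a process by recursive bisection.
    A middle vertex acts as the separator between the two halves."""
    if len(procs) == 1 or len(vertices) <= 1:
        return {v: procs[0] for v in vertices}

    mid = len(vertices) // 2
    separator = vertices[mid]
    left, right = vertices[:mid], vertices[mid + 1:]
    half = len(procs) // 2

    assignment = nested_dissection(left, procs[:half])
    assignment.update(nested_dissection(right, procs[half:]))
    # Separator vertices couple the two halves; here they go to one side.
    assignment[separator] = procs[0]
    return assignment

# A chain of 15 vertices distributed over 4 processes
print(nested_dissection(list(range(15)), procs=[0, 1, 2, 3]))
```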

More Details

"Trojan Horse" strategy for deconstruction of biomass for biofuels production

Timlin, Jerilyn A.; Tran-Gyamfi, Mary; Sapra, Rajat S.; Sinclair, Michael B.; Simmons, Blake

Production of renewable biofuels to displace fossil fuels currently consumed in the transportation sector is a pressing multi-agency national priority. Currently, nearly all fuel ethanol is produced from corn-derived starch. Dedicated 'energy crops' and agricultural waste are preferred long-term solutions for renewable, cheap, and globally available biofuels, as they avoid some of the market pressures and secondary greenhouse gas emission challenges currently facing corn ethanol. These sources of lignocellulosic biomass are converted to fermentable sugars using a variety of chemical and thermochemical pretreatments, which disrupt cellulose and lignin cross-links, allowing exogenously added recombinant microbial enzymes to more efficiently hydrolyze the cellulose for 'deconstruction' into glucose. This process is plagued with inefficiencies, primarily due to the recalcitrance of cellulosic biomass, mass transfer issues during deconstruction, and low activity of recombinant deconstruction enzymes. Costs are also high due to the requirement for enzymes and reagents, and energy-intensive and cumbersome pretreatment steps. One potential solution to these problems is found in synthetic biology: engineering plants that self-produce a suite of cellulase enzymes targeted to the apoplast for cleaving the linkages between lignin and cellulosic fibers. The genes encoding the degradation enzymes, also known as cellulases, are obtained from extremophilic organisms that grow at high temperatures (60-100 °C) and acidic pH levels (<5). These enzymes will remain inactive during the life cycle of the plant but become active during hydrothermal pretreatment, i.e., at elevated temperatures. Deconstruction can thereby be integrated into a one-step process, increasing efficiency (cellulose-cellulase mass-transfer rates) and reducing costs. The proposed disruptive technologies address biomass deconstruction processes by developing transgenic plants encoding a suite of enzymes used in cellulosic deconstruction. The unique aspects of this technology are the rationally engineered, highly productive extremophilic enzymes, targeted to specific cellular locations (the apoplast), and their dormancy during normal plant proliferation; the enzymes become Trojan horses under pretreatment conditions. The work leverages established Sandia enzyme-engineering and imaging capabilities. The technical approach not only targets the recalcitrance and mass-transfer problems during biomass degradation but also eliminates the costs associated with industrial-scale production of microbial enzymes added during processing.

More Details

Finite element analyses of continuous filament ties for masonry applications : final report for the Arquin Corporation

Ho, Clifford K.

Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).

More Details

Savannah River Site/K Area Complex getter life extension report

Shepodd, Timothy J.

The K Area Complex (KAC) at the Savannah River Site (SRS) has been utilizing HiTop hydrogen getter material in 9975 Shipping Containers to prevent the development of flammable environments during storage of moisture-containing plutonium oxides. Previous testing and subsequent reports have been performed and produced by Sandia National Laboratories (SNL) to demonstrate the suitability and longevity of the getter during storage at bounding thermal conditions. To date, results have shown that after 18 months of continuous storage at 70 °C, the getter is able to both recombine gaseous hydrogen and oxygen into water when oxygen is available, and irreversibly getter (i.e., scavenge) hydrogen from the vapor space when oxygen is not available, both under a CO2 environment. [Refs. 1-5] Both of these reactions are catalytically enhanced and thermodynamically favorable. The purpose of this paper is to establish the justification that maintaining the current efforts of biannual testing is no longer necessary due to the robust performance of the getter material, the very unlikely potential that the recombination reaction will fail during storage conditions in KAC, and the insignificant aging effects that have been seen in the testing to date.

More Details

Pressure-driven and free-rise foam flow

Mondy, Lisa A.; Kropka, Jamie M.; Celina, Mathew C.; Rao, Rekha R.; Brotherton, Christopher M.; Bourdon, Christopher; Noble, David R.; Moffat, Harry K.; Grillet, Anne M.; Kraynik, Andrew M.; Leming, Sarah L.

Many weapons components (e.g., firing sets) are encapsulated with blown foams. Foam is a strong, lightweight material--a good compromise between the conflicting needs of structural stability and electronic function. Current foaming processes can lead to unacceptable voids, property variations, cracking, and slipped schedules, which are long-standing issues. Predicting the process is not currently possible because the material is polymerizing and multiphase with changing microstructure. The goals of this project are: (1) Produce uniform encapsulant consistently and improve processability; (2) Eliminate metering issues/voids; (3) Lower residual stresses and exotherm to protect electronics; and (4) Maintain desired properties--lightweight, strong, no delamination/cracking, and ease of removal. The achievements in the first year are: (1) Developed patentable chemical foaming chemistry - TA; (2) Developed persistent non-curing foam for systematic evaluation of the fundamental physics of foams--initial testing of the non-curing foam shows that surfactants are very important; (3) Identified a foam stability strategy using a stacked reaction scheme; (4) Developed foam rheology methodologies and shear apparatuses and began testing candidates for shear stability; (5) Began development of a computational model; and (6) Developed methodology and collected property measurements/boundary conditions for input to the computational model.

More Details

Analysis and reduction of chemical models under uncertainty

Debusschere, Bert; Najm, Habib N.

While models of combustion processes have been successful in developing engines with improved fuel economy, more costly simulations are required to accurately model pollution chemistry. These simulations will also involve significant parametric uncertainties. Computational singular perturbation (CSP) and polynomial chaos-uncertainty quantification (PC-UQ) can be used to mitigate the additional computational cost of modeling combustion with uncertain parameters. PC-UQ was used to interrogate and analyze the Davis-Skodje model, where the deterministic parameter in the model was replaced with an uncertain parameter. In addition, PC-UQ was combined with CSP to explore how model reduction could be combined with uncertainty quantification to understand how reduced models are affected by parametric uncertainty.
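
The Davis-Skodje problem mentioned above is a standard two-variable stiff test system. The sketch below is a simple Monte Carlo stand-in for the polynomial chaos machinery, with an assumed uniform range for the stiffness parameter γ and the model written in its commonly cited form; it simply propagates the parametric uncertainty to the slow variable.

```python
# Monte Carlo propagation of an uncertain stiffness parameter through the
# Davis-Skodje model (written in its commonly cited two-variable form);
# a simple stand-in for the polynomial chaos treatment described above.
import numpy as np
from scipy.integrate import solve_ivp

def davis_skodje(t, u, gamma):
    y, z = u
    dydt = -y
    dzdt = -gamma * z + ((gamma - 1.0) * y + gamma * y**2) / (1.0 + y) ** 2
    return [dydt, dzdt]

rng = np.random.default_rng(2)
gammas = rng.uniform(5.0, 15.0, size=200)       # assumed uncertainty range

z_final = []
for g in gammas:
    sol = solve_ivp(lambda t, u: davis_skodje(t, u, g),
                    (0.0, 2.0), [1.0, 0.5], rtol=1e-8, atol=1e-10)
    z_final.append(sol.y[1, -1])

print("mean z(t=2):", np.mean(z_final), " std:", np.std(z_final))
```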

More Details

Bright spots in 1 MA X pinches as a function of wire number and material

Proposed for publication in Physics of Plasmas.

Ampleford, David J.; Cuneo, Michael E.; Wenger, D.F.

Bright, intense x-ray sources with extreme plasma parameters (micropinch plasmas) have previously been characterized at 0.1-0.4 MA, but the scaling of such sources at higher current is poorly understood. The x-ray source size and radiation power of 1 MA X pinches were studied as a function of wire material (Al, Ti, Mo, and W) and number (1-, 2-, 8-, 32-, and 64-wire configurations). The smallest bright spots observed were from 32-wire tungsten X pinches, which produced ≤11-16 μm, ~2 J, 1-10 GW sources of 3-5 keV radiation.

More Details

Planar wire array dynamics and radiation scaling at multi-MA levels on the Saturn pulsed power generator

Jones, Brent M.; Cuneo, Michael E.; Ampleford, David J.; Coverdale, Christine A.; Vesey, Roger A.; Jones, Michael

Planar wire arrays are studied at 3-6 MA on the Saturn pulsed power generator as potential drivers of compact hohlraums for inertial confinement fusion studies. Comparison with zero-dimensional modeling suggests that there is significant trailing mass. The modeled energy coupled from the generator cannot generally explain the energy in the main x-ray pulse. Preliminary comparison at 1-6 MA indicates sub-quadratic scaling of x-ray power in a manner similar to compact cylindrical wire arrays. Time-resolved pinhole images are used to study the implosion dynamics.

More Details

Electromagnetic wave propagation through the ZR Z-pinch accelerator

Stygar, William A.; Struve, Kenneth

A fully three-dimensional electromagnetic model of the major pulsed power components of the 26-MA ZR accelerator is presented. This large-scale simulation model tracks the evolution of electromagnetic waves through the intermediate storage capacitors, laser-triggered gas switches, pulse-forming lines, water switches, tri-plate transmission lines, and water convolute to the vacuum insulator stack. The plates at the insulator stack are coupled to a transmission line circuit model of the four-level magnetically-insulated transmission line section and post-hole convolutes. The vacuum section circuit model is terminated by either a short-circuit load or dynamic models of imploding z-pinch loads. The simulation results are compared with electrical measurements made throughout the ZR accelerator, and good agreement is found, especially for times before and up to peak load power. This modeling effort represents new opportunities for modeling existing and future large-scale pulsed power systems used in a variety of high energy density physics and radiographic applications.

More Details

Modeling and design optimization of adhesion between surfaces at the microscale

Sylves, Kevin T.

This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

More Details

Workforce management strategies in a disaster scenario

Kelic, Andjelka; Turk, Adam L.

A model of the repair operations of the voice telecommunications network is used to study labor management strategies under a disaster scenario where the workforce is overwhelmed. The model incorporates overtime and fatigue functions and optimizes the deployment of the workforce based on the cost of the recovery and the time it takes to recover. The analysis shows that the current practices employed in workforce management in a disaster scenario are not optimal and more strategic deployment of that workforce is beneficial.

More Details

Three dimensional silicon photonic crystals fabricated by two photon phase mask lithography

Proposed for publication in Applied Physics Letters.

Bogart, Katherine H.A.

We describe the fabrication of silicon three dimensional photonic crystals using polymer templates defined by a single step, two-photon exposure through a layer of photopolymer with relief molded on its surface. The resulting crystals exhibit high structural quality over large areas, displaying geometries consistent with calculation. Spectroscopic measurements of transmission and reflection through the silicon and polymer structures reveal excellent optical properties, approaching properties predicted by simulations that assume ideal layouts.

More Details

Adsorption-controlled growth of BiFeO3 by MBE and integration with wide band gap semiconductors

Proposed for publication in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control.

Ihlefeld, Jon F.

BiFeO3 thin films have been deposited on (101) DyScO3, (0001) AlGaN/GaN, and (0001) SiC single crystal substrates by reactive molecular-beam epitaxy in an adsorption-controlled growth regime. This is achieved by supplying a bismuth over-pressure and utilizing the differential vapor pressures between bismuth oxides and BiFeO3 to control stoichiometry. Four-circle x-ray diffraction reveals phase-pure, epitaxial films with rocking curve full width at half maximum values as narrow as 7.2 arc seconds. Epitaxial growth of (0001)-oriented BiFeO3 thin films on (0001) GaN, including AlGaN HEMT structures, and (0001) SiC has been realized utilizing intervening epitaxial (111) SrTiO3/(100) TiO2 buffer layers. The epitaxial BiFeO3 thin films have two in-plane orientations: [1120] BiFeO3 || [1120] GaN (SiC), plus a twin variant related by a 180° in-plane rotation. This epitaxial integration of the ferroelectric with the highest known polarization, BiFeO3, with wide band gap semiconductors is an important step toward novel field-effect devices.

More Details

Integration of the advanced transparency framework to advanced nuclear systems : enhancing Safety, Operations, Security and Safeguards (SOSS)

Cleary, Virginia D.; Rochau, Gary E.

The advent of the nuclear renaissance gives rise to a concern for the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facilities by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). We propose to develop a framework that monitors process information continuously and can demonstrate the ability to enhance safety, operations, security, and safeguards by measuring and reducing relevant SOSS risks, thus ensuring the safe and legitimate use of the nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The automation of new nuclear facilities requiring minimal manual operation provides an opportunity to utilize the abundance of process information for monitoring SOSS risk. A framework that monitors process information continuously can lead to greater transparency of nuclear fuel cycle activities and can demonstrate the ability to enhance the safety, operations, security and safeguards associated with the functioning of the nuclear fuel cycle facility. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is in the process of demonstrating the ability to monitor operational signals in real-time through a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.

More Details

SOI-Enabled MEMS Processes Lead to Novel Mechanical, Optical, and Atomic Physics Devices

Herrera, Gilbert V.; Mccormick, Frederick B.; Nielson, Gregory N.; Nordquist, Christopher D.; Okandan, Murat; Olsson, Roy H.; Ortiz, Keith; Platzbecker, Mark R.; Resnick, Paul; Shul, Randy J.; Bauer, Todd M.; Sullivan, Charles T.; Watts, Michael W.; Blain, Matthew G.; Dodd, Paul E.; Dondero, Richard; Garcia, Ernest J.; Galambos, Paul C.; Hetherington, Dale L.; Hudgens, James J.

Abstract not provided.

TOA/FOA geolocation error analysis

Mason, John J.

This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
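
Combining several position fixes "in a mathematically optimal way" is most commonly done by inverse-covariance weighting. The sketch below shows that standard fusion for 2-D horizontal fixes with invented coordinates and covariances; it is a generic illustration, not the specific algorithm derived in the paper.

```python
# Inverse-covariance (weighted least squares) fusion of several position
# fixes; a standard approach, shown here with invented 2-D fixes.
import numpy as np

def fuse_fixes(fixes, covariances):
    """Combine position estimates x_i with covariances P_i:
       P = (sum P_i^-1)^-1,  x = P @ sum(P_i^-1 @ x_i)."""
    info = np.zeros((2, 2))
    info_state = np.zeros(2)
    for x, P in zip(fixes, covariances):
        W = np.linalg.inv(P)   # information (weight) matrix of this fix
        info += W
        info_state += W @ x
    P_fused = np.linalg.inv(info)
    return P_fused @ info_state, P_fused

fixes = [np.array([10.2, -3.9]), np.array([9.7, -4.3]), np.array([10.0, -4.0])]
covs = [np.diag([4.0, 1.0]), np.diag([1.0, 4.0]), np.diag([2.0, 2.0])]
x, P = fuse_fixes(fixes, covs)
print("fused position:", x)
print("fused covariance:\n", P)
```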

More Details

OpenCV and TYZX : video surveillance for tracking

He, Jim H.

As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.

More Details

Parallel computing in enterprise modeling

Heath, Zach; Shneider, Max S.; Vanderveen, Keith; Allan, Benjamin A.; Ray, Jaideep

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

More Details

Three-dimensional visualization of surface defects in core-shell nanowires

Journal of Physical Chemistry C

Arslan, Ilke; Talin, Albert A.; Wang, George T.

The high surface to volume ratio of nanowires makes them attractive for exploiting exotic materials properties and nanoengineering new device structures. To realize these goals, a fundamental understanding of the morphology and growth of the nanowires must be attained in three dimensions, because a two-dimensional projection image of these complex three-dimensional nanomaterials is not sufficient to describe their properties. Scanning transmission electron tomography is used here to obtain three-dimensional tomograms of GaN/AlN core-shell nanowires. This technique reveals the overall morphology and triangular shape of the nanowires, as well as their relation to the catalyst particle, with a resolution of ∼1 nm in all three spatial dimensions. Defects that appear to be in the core of the nanowires in two-dimensional images are shown to be surface defects induced during growth, demonstrating the importance of this three-dimensional technique in analyzing nanomaterials. © 2008 American Chemical Society.

More Details

Limits on the maximum attainable efficiency for solid-state lighting

Proceedings of SPIE - The International Society for Optical Engineering

Coltrin, Michael E.; Tsao, Jeffrey Y.

Artificial lighting for general illumination purposes accounts for over 8% of global primary energy consumption. However, the traditional lighting technologies in use today, i.e., incandescent, fluorescent, and high-intensity discharge lamps, are not very efficient, with less than about 25% of the input power being converted to useful light. Solid-state lighting is a rapidly evolving, emerging technology whose efficiency of conversion of electricity to visible white light is likely to approach 50% within the coming years. This efficiency is significantly higher than that of traditional lighting technologies, with the potential to enable a marked reduction in the rate of world energy consumption. There is no fundamental physical reason why efficiencies well beyond 50% could not be achieved, which could enable even greater world energy savings. The maximum achievable luminous efficacy for a solid-state lighting source depends on many different physical parameters, for example the color rendering quality that is required, the architecture employed to produce the component light colors that are mixed to produce white, and the efficiency of the light sources producing each color component. In this article, we discuss in some detail several approaches to solid-state lighting and the maximum luminous efficacy that could be attained, given various constraints such as those listed above.

More Details

Simulation of the mechanical strength of a single collagen molecule

Biophysical Journal

In 't Veld, Pieter J.; Stevens, Mark J.

We perform atomistic simulations on a single collagen molecule to determine its intrinsic molecular strength. A tensile pull simulation to determine the tensile strength and Young's modulus is performed, and a simulation that separates two of the three helices of collagen examines the internal strength of the molecule. The magnitude of the calculated tensile forces is consistent with the strong forces of bond stretching and angle bending that are involved in the tensile deformation. The triple helix unwinds with increasing tensile force. Pulling apart the triple helix has a smaller, oscillatory force. The oscillations are due to the sequential separation of the hydrogen-bonded helices. The force rises due to reorienting the residues in the direction of the separation force. The force drop occurs once the hydrogen bond between residues on different helices break and the residues separate. © 2008 by the Biophysical Society.

More Details

On sub-linear convergence for linearly degenerate waves in capturing schemes

Journal of Computational Physics

Banks, Jeffrey W.; Aslam, T.; Rider, William J.

A common attribute of capturing schemes used to find approximate solutions to the Euler equations is a sub-linear rate of convergence with respect to mesh resolution. Purely nonlinear jumps, such as shock waves, produce a first-order convergence rate, but linearly degenerate discontinuous waves, where present, produce sub-linear convergence rates which eventually dominate the global rate of convergence. The classical explanation for this phenomenon investigates the behavior of the exact solution to the numerical method in combination with the finite error terms, often referred to as the modified equation. For a first-order method, the modified equation produces the hyperbolic evolution equation with second-order diffusive terms. In the frame of reference of the traveling wave, the solution of a discontinuous wave consists of a diffusive layer that grows with a rate of t^{1/2}, yielding a convergence rate of 1/2. Self-similar heuristics for higher-order discretizations produce a growth rate for the layer thickness of Δt^{1/(p+1)}, which yields an estimate for the convergence rate as p/(p+1), where p is the order of the discretization. In this paper we show that this estimated convergence rate can be derived with greater rigor for both dissipative and dispersive forms of the discrete error. In particular, the form of the analytical solution for linear modified equations can be solved exactly. These estimates and forms for the error are confirmed in a variety of demonstrations ranging from simple linear waves to multidimensional solutions of the Euler equations. © 2008 Elsevier Inc.
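
The p/(p+1) estimate is easy to verify numerically for the first-order case (p = 1, expected rate 1/2). The sketch below advects a step profile, a stand-in for a linearly degenerate wave, with first-order upwind at two resolutions and reports the observed L1 convergence rate; the grid sizes and CFL number are arbitrary choices, not taken from the paper.

```python
# Numerical check of the ~1/2 convergence rate for a captured contact,
# using first-order upwind for u_t + u_x = 0 on a periodic domain.
import numpy as np

def upwind_step_error(n, cfl=0.8, t_final=0.5):
    x = (np.arange(n) + 0.5) / n
    u = np.where(x < 0.25, 1.0, 0.0)               # initial step profile
    dx = 1.0 / n
    dt = cfl * dx
    t = 0.0
    while t < t_final - 1e-12:
        step = min(dt, t_final - t)
        u = u - (step / dx) * (u - np.roll(u, 1))  # periodic upwind update
        t += step
    exact = np.where((x - t_final) % 1.0 < 0.25, 1.0, 0.0)
    return np.sum(np.abs(u - exact)) * dx          # L1 error

e_coarse = upwind_step_error(200)
e_fine = upwind_step_error(400)
print("observed L1 rate:", np.log2(e_coarse / e_fine))  # expect roughly 0.5
```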

More Details

Analysis of micromixers to reduce biofouling on reverse-osmosis membranes

Environmental Progress

Ho, Clifford K.; Altman, Susan J.; Jones, Howland D.T.; Khalsa, Siri S.; Clem, Paul

Features (micromixers) that promote chaotic mixing were fabricated on reverse-osmosis membrane surfaces and evaluated using computational models and laboratory experiments to determine their effectiveness in reducing biofouling. Computational fluid dynamics models of membrane feed channels were developed using different patterns of micromixers on the membrane surface. The shear-stress distribution along the membrane surface was simulated for steady flows along the different micromixer configurations. In addition, the hypothetical mass transfer of a tracer from the membrane surface was used as a metric to compare the amount of scouring and mixing in configurations with and without micromixers. Epoxy micromixers were printed directly onto membrane surfaces, and different patterns were evaluated experimentally. Fluorescence hyperspectral imaging results showed that regions of simulated high shear stress on the membrane corresponded to regions of lower bacterial growth in the experiments, while regions of simulated low shear stress corresponded to regions of higher bacterial growth. In addition, the presence of the micromixers appeared to reduce the overall biofouling concentration in one series of experiments, but the results were inconclusive in another series of experiments. These results indicate that while the enhancement of mixing and shear stress via micromixers may delay or mitigate the onset of localized membrane fouling from biofilms or other contaminants, the impact of micromixers on the overall performance of reverse-osmosis membranes needs further investigation. © 2008 American Institute of Chemical Engineers.

More Details

Metal oxide coating of carbon supports for supercapacitor applications

Boyle, Timothy; Lambert, Timothy N.

The global market for wireless sensor networks in 2010 will be valued close to $10 B, or 200 M units. TPL, Inc. is a small Albuquerque based business that has positioned itself to be a leader in providing uninterruptible power supplies in this growing market with projected revenues expected to exceed $26 M in 5 years. This project focused on improving TPL, Inc.'s patent-pending EnerPak{trademark} device which converts small amounts of energy from the environment (e.g., vibrations, light or temperature differences) into electrical energy that can be used to charge small energy storage devices. A critical component of the EnerPak{trademark} is the supercapacitor that handles high power delivery for wireless communications; however, optimization and miniaturization of this critical component is required. This proposal aimed to produce prototype microsupercapacitors through the integration of novel materials and fabrication processes developed at New Mexico Technology Research Collaborative (NMTRC) member institutions. In particular, we focused on developing novel ruthenium oxide nanomaterials and placed them into carbon supports to significantly increase the energy density of the supercapacitor. These improvements were expected to reduce maintenance costs and expand the utility of the TPL, Inc.'s device, enabling New Mexico to become the leader in the growing global wireless power supply market. By dominating this niche, new customers were expected to be attracted to TPL, Inc. yielding new technical opportunities and increased job opportunities for New Mexico.

More Details

Hypervelocity impact technology and applications: 2007

Reinhart, William D.

The Hypervelocity Impact Society is devoted to the advancement of the science and technology of hypervelocity impact and related technical areas required to facilitate and understand hypervelocity impact phenomena. Topics of interest include experimental methods, theoretical techniques, analytical studies, phenomenological studies, dynamic material response as related to material properties (e.g., equation of state), penetration mechanics, dynamic failure of materials, planetary physics, and other related phenomena. The objectives of the Society are to foster the development and exchange of technical information in the discipline of hypervelocity impact phenomena, promote technical excellence, encourage peer-reviewed publications, and hold technical symposia on a regular basis. It was sometime in 1985, partly in response to the Strategic Defense Initiative (SDI), that a small group of visionaries decided that a conference or symposium on hypervelocity science would be useful and began the necessary planning. A major objective of the first Symposium was to bring the scientists and researchers up to date by reviewing the essential developments of hypervelocity science and technology between 1955 and 1985. This symposium--HVIS 2007--is the tenth since that beginning. The papers presented at all HVIS symposia are peer reviewed and published as a special volume of the archival journal International Journal of Impact Engineering. HVIS 2007 followed the same high standards and its proceedings will add to this body of work.

More Details

The Benefits of Adaptive Partitioning for Parallel AMR Applications

Steensland, Johan

Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, the dynamics inherent in these methods present significant challenges in data partitioning and load balancing. Significant human resources, including time, effort, experience, and knowledge, are required for determining the optimal partitioning technique for each new simulation. In reality, scientists resort to using the on-board partitioner of the computational framework, or to using the partitioning industry standard, ParMetis. Adaptive partitioning refers to repeatedly selecting, configuring and invoking the optimal partitioning technique at run-time, based on the current state of the computer and application. In theory, adaptive partitioning automatically delivers superior performance and eliminates the need for repeatedly spending valuable human resources for determining the optimal static partitioning technique. In practice, however, enabling frameworks are non-existent due to the inherent significant inter-disciplinary research challenges. This paper presents a study of a simple implementation of adaptive partitioning and discusses implied potential benefits from the perspective of common groups of users within computational science. The study is based on a large set of data derived from experiments including six real-life, multi-time-step adaptive applications from various scientific domains, five complementing and fundamentally different partitioning techniques, a large set of parameters corresponding to a wide spectrum of computing environments, and a flexible cost function that considers the relative impact of multiple partitioning metrics and diverse partitioning objectives. The results show that even a simple implementation of adaptive partitioning can automatically generate results statistically equivalent to the best static partitioning. Thus, it is possible to effectively eliminate the problem of determining the best partitioning technique for new simulations. Moreover, the results show that adaptive partitioning can provide a performance gain of about 10 percent on average as compared to routinely using the industry standard, ParMetis.

More Details

Solar Energy Grid Integration Systems -- Energy Storage (SEGIS-ES)

Hanley, Charles; Huff, Georgianne; Boyes, John D.

This paper describes the concept for augmenting the SEGIS Program (an industry-led effort to greatly enhance the utility of distributed PV systems) with energy storage in residential and small commercial applications (SEGIS-ES). The goal of SEGIS-ES is to develop electrical energy storage components and systems specifically designed and optimized for grid-tied PV applications. This report describes the scope of the proposed SEGIS-ES Program and why it will be necessary to integrate energy storage with PV systems as PV-generated energy becomes more prevalent on the nation's utility grid. It also discusses the applications for which energy storage is most suited and for which it will provide the greatest economic and operational benefits to customers and utilities. Included is a detailed summary of the various storage technologies available, comparisons of their relative costs and development status, and a summary of key R&D needs for PV-storage systems. The report concludes with highlights of areas where further PV-specific R&D is needed and offers recommendations about how to proceed with their development.

More Details