Publications


Practical reliability and uncertainty quantification in complex systems: final report

Grace, Matthew G.; Red-Horse, John R.; Pebay, Philippe P.; Ringland, James T.; Zurn, Rena M.; Diegert, Kathleen V.

The purpose of this project was to investigate the use of Bayesian methods for the estimation of the reliability of complex systems. The goals were to find methods for dealing with continuous data, rather than simple pass/fail data; to avoid assumptions of specific probability distributions, especially Gaussian, or normal, distributions; to compute not only an estimate of the reliability of the system, but also a measure of the confidence in that estimate; to develop procedures to address time-dependent or aging aspects in such systems; and to use these models and results to derive optimal testing strategies. The system is assumed to be a system of systems, i.e., a system with discrete components that are themselves systems. Furthermore, the system is 'engineered' in the sense that each node is designed to do something and that we have a mathematical description of that process. In the time-dependent case, the assumption is that we have a general, nonlinear, time-dependent function describing the process. The major results of the project are described in this report. In summary, we developed a sophisticated mathematical framework based on modern probability theory and Bayesian analysis. This framework encompasses all aspects of epistemic uncertainty and easily incorporates steady-state and time-dependent systems. Based on Markov chain Monte Carlo methods, we devised a computational strategy for general probability density estimation in the steady-state case. This enabled us to compute a distribution of the reliability from which many questions, including confidence, could be addressed. We then extended this to the time domain and implemented procedures to estimate the reliability over time, including the use of the method to predict the reliability at a future time.
Finally, we used certain aspects of Bayesian decision analysis to create a novel method for determining an optimal testing strategy, e.g., we can estimate the 'best' location to take the next test to minimize the risk of making a wrong decision about the fitness of a system. We conclude this report by proposing additional fruitful areas of research.
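The core computational idea, sampling a posterior distribution of the reliability rather than a single point estimate, can be sketched in the simple pass/fail setting that the project moved beyond. The Metropolis sampler below (function names, prior, and data are our assumptions, not the report's continuous-data framework) is a minimal illustration:

```python
import math
import random

def reliability_posterior(successes, trials, n_samples=20000, seed=1):
    """Metropolis sampler for p(reliability | pass/fail data), uniform prior.

    A deliberately simple illustration of the abstract's idea: estimate a
    full posterior distribution of the reliability, so that both a point
    estimate and a confidence (credible) statement can be read off.
    """
    rng = random.Random(seed)
    failures = trials - successes

    def log_post(r):  # log of the Beta(successes+1, failures+1) kernel
        if not 0.0 < r < 1.0:
            return float("-inf")
        return successes * math.log(r) + failures * math.log(1.0 - r)

    r, samples = 0.5, []
    for _ in range(n_samples):
        proposal = r + rng.gauss(0.0, 0.1)      # random-walk proposal
        if math.log(rng.random()) < log_post(proposal) - log_post(r):
            r = proposal                         # accept the move
        samples.append(r)
    return samples[n_samples // 4:]              # discard burn-in

samples = sorted(reliability_posterior(successes=9, trials=10))
mean = sum(samples) / len(samples)
# Central ~95% credible interval from the sorted samples:
interval = (samples[len(samples) // 40], samples[-(len(samples) // 40)])
```

Sorting the samples yields both a mean estimate and an approximate 95% credible interval, the kind of confidence statement the abstract describes.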


Vibrational spectra of nanowires measured using laser Doppler vibrometry and STM studies of epitaxial graphene: an LDRD fellowship report

Biedermann, Laura B.

A few of the many applications for nanowires are high-aspect-ratio conductive atomic force microscope (AFM) cantilever tips, force and mass sensors, and high-frequency resonators. Reliable estimates for the elastic modulus of nanowires and the quality factor of their oscillations are of interest to help enable these applications. Furthermore, a real-time, non-destructive technique to measure the vibrational spectra of nanowires will help enable sensor applications based on nanowires and the use of nanowires as AFM cantilevers (rather than as tips for AFM cantilevers). Laser Doppler vibrometry is used to measure the vibration spectra of individual cantilevered nanowires, specifically multiwalled carbon nanotubes (MWNTs) and silver gallium nanoneedles. Since the entire vibration spectrum is measured with high frequency resolution (100 Hz for a 10 MHz frequency scan), the resonant frequencies and quality factors of the nanowires are accurately determined. Using Euler-Bernoulli beam theory, the elastic modulus and spring constant can be calculated from the resonance frequencies of the oscillation spectrum and the dimensions of the nanowires, which are obtained from parallel SEM studies. Because the diameters of the nanowires studied are smaller than the wavelength of the vibrometer's laser, Mie scattering is used to estimate the lower diameter limit for nanowires whose vibration can be measured in this way. The techniques developed in this thesis can be used to measure the vibrational spectra of any suspended nanowire with high frequency resolution. Two different nanowires were measured: MWNTs and Ag₂Ga nanoneedles. Measurements of the thermal vibration spectra of MWNTs under ambient conditions showed that the elastic modulus, E, of plasma-enhanced chemical vapor deposition (PECVD) MWNTs is 37 ± 26 GPa, well within the range of E previously reported for CVD-grown MWNTs.
Since the Ag₂Ga nanoneedles have a greater optical scattering efficiency than MWNTs, their vibration spectra were more extensively studied. The thermal vibration spectra of Ag₂Ga nanoneedles were measured under both ambient and low-vacuum conditions. The operational deflection shapes of the vibrating Ag₂Ga nanoneedles were also measured, allowing confirmation of the eigenmodes of vibration. The modulus of the crystalline nanoneedles was 84.3 ± 1.0 GPa. Gas damping is the dominant mechanism of energy loss for nanowires oscillating under ambient conditions. The measured quality factors, Q, of oscillation are in line with theoretical predictions of air damping in the free molecular gas damping regime. In the free molecular regime, Q_gas is linearly proportional to the density and diameter of the nanowire and inversely proportional to the air pressure. Since the density of the Ag₂Ga nanoneedles is three times that of the MWNTs, the Ag₂Ga nanoneedles have greater Q at atmospheric pressures. Our initial measurements of Q for Ag₂Ga nanoneedles in low vacuum (10 Torr) suggest that the intrinsic Q of these nanoneedles may be on the order of 1000. The epitaxial carbon that grows after heating (0001̄) silicon carbide (SiC) to high temperatures (1450-1600 °C) in vacuum was also studied. At these high temperatures, the surface Si atoms sublime and the remaining C atoms reconstruct to form graphene. X-ray photoelectron spectroscopy (XPS) and scanning tunneling microscopy (STM) were used to characterize the quality of the few-layer graphene (FLG) surface. The XPS studies were useful in confirming the graphitic composition and measuring the thickness of the FLG samples. STM studies revealed a wide variety of nanometer-scale features that include sharp carbon-rich ridges, moiré superlattices, one-dimensional line defects, and grain boundaries.
By imaging these features with atomic scale resolution, considerable insight into the growth mechanisms of FLG on the carbon-face of SiC is obtained.
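The Euler-Bernoulli extraction step described above can be illustrated by inverting the first-mode relation of a solid cylindrical cantilever for E. The dimensions and density below are assumed purely for illustration (a real MWNT is hollow, so the solid-cylinder model is a simplification):

```python
import math

LAMBDA1 = 1.8751  # first-mode eigenvalue for a cantilevered beam

def cantilever_f1(E, rho, L, d):
    """First resonance (Hz) of a solid cylindrical cantilever,
    from Euler-Bernoulli beam theory."""
    I = math.pi * d**4 / 64.0   # area moment of inertia of a circle
    A = math.pi * d**2 / 4.0    # cross-sectional area
    return (LAMBDA1**2 / (2.0 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))

def elastic_modulus_from_f1(f1, rho, L, d):
    """Invert the relation above: estimate E from a measured resonance,
    the nanowire density, and its SEM-measured dimensions."""
    A_over_I = 16.0 / d**2
    return (2.0 * math.pi * f1 * L**2 / LAMBDA1**2) ** 2 * rho * A_over_I

# Round trip with assumed illustrative values (not the report's data):
f = cantilever_f1(E=37e9, rho=1800.0, L=5e-6, d=50e-9)
E_recovered = elastic_modulus_from_f1(f, rho=1800.0, L=5e-6, d=50e-9)
```

The round trip confirms the algebra: a modulus fed into the forward relation is recovered exactly from the resulting frequency.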


A fully implicit method for 3D quasi-steady state magnetic advection-diffusion

Siefert, Christopher S.; Robinson, Allen C.

We describe the implementation of a prototype fully implicit method for solving three-dimensional quasi-steady state magnetic advection-diffusion problems. This method allows us to solve the magnetic advection-diffusion equations in an Eulerian frame with a fixed, user-prescribed velocity field. We have verified the correctness of the method and its implementation on two standard verification problems, the Solberg-White magnetic shear problem and the Perry-Jones-White rotating cylinder problem.


Parallelism of the SANDstorm hash algorithm

Schroeppel, Richard C.; Torgerson, Mark D.

Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms, and the resulting speed limitation constrains their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
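SANDstorm's internals are not reproduced here, but the tree-mode structure that makes such designs parallelizable (independent leaf hashes combined at a root) can be sketched with SHA-256 standing in for the real compression function; this is an illustration of the concept, not the SANDstorm algorithm:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data: bytes, block_size: int = 1024, workers: int = 4) -> str:
    """Two-level tree hash: leaves are hashed independently (and can run
    in parallel, since hashlib releases the GIL), then the concatenated
    leaf digests are hashed once at the root.

    Mirrors the tree-mode idea behind parallelizable hash designs;
    SHA-256 is only a stand-in for the actual compression function.
    """
    blocks = [data[i:i + block_size]
              for i in range(0, len(data), block_size)] or [b""]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaf_digests = list(pool.map(lambda b: hashlib.sha256(b).digest(),
                                     blocks))
    return hashlib.sha256(b"".join(leaf_digests)).hexdigest()
```

The digest is independent of the worker count, which is exactly the property that lets the leaf level scale linearly with cores.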


Nanoengineering for solid-state lighting

Crawford, Mary H.; Fischer, Arthur J.; Koleske, Daniel K.; Lee, Stephen R.; Missert, Nancy A.

This report summarizes results from a 3-year Laboratory Directed Research and Development project performed in collaboration with researchers at Rensselaer Polytechnic Institute. Our collaborative effort was supported by Sandia's National Institute for Nanoengineering and focused on the study and application of nanoscience and nanoengineering concepts to improve the efficiency of semiconductor light-emitting diodes for solid-state lighting applications. The project explored LED efficiency advances with two primary thrusts: (1) the study of nanoscale InGaN materials properties, particularly nanoscale crystalline defects, and their impact on internal quantum efficiency, and (2) nanoscale engineering of dielectric and metal materials and integration with LED heterostructures for enhanced light extraction efficiency.


Low impedance z-pinch drivers without post-hole convolute current adders

Savage, Mark E.

Present-day pulsed-power systems operating in the terawatt regime typically use post-hole convolute current adders to operate at sufficiently low impedance. These adders necessarily involve magnetic nulls that connect the positive and negative electrodes. The resultant loss of magnetic insulation causes electron losses in the vicinity of the nulls that can severely limit the efficiency with which the system's energy is delivered to a load. In this report, we describe an alternative transformer-based approach to obtaining low impedance. The transformer consists of coils whose windings are in parallel rather than in series, and it does not suffer from the presence of magnetic nulls. By varying the pitch of the coil windings, the current multiplication ratio can be varied, leading to a more versatile driver. The coupling efficiency of the transformer, its behavior in the presence of electron flow, and its mechanical strength are issues that need to be addressed to evaluate the potential of transformer-based current multiplication as a viable alternative to conventional current-adder technology.


Low dislocation GaN via defect-filtering, self-assembled SiO2-sphere layers

Wang, George T.; Li, Qiming L.

The III-nitride (AlGaInN) materials system forms the foundation for white solid-state lighting, the adoption of which could significantly reduce U.S. energy needs. While the growth of GaN-based devices relies on heteroepitaxy on foreign substrates, the heteroepitaxial layers possess a high density of dislocations due to poor lattice and thermal expansion match. These high dislocation densities have been correlated with reduced internal quantum efficiency and lifetimes for GaN-based LEDs. Here, we demonstrate an inexpensive method for dislocation reduction in GaN grown on sapphire and silicon substrates. In this technique, which requires no lithographic patterning, GaN is selectively grown through self-assembled layers of silica microspheres that act to filter out dislocations. Using this method, the threading dislocation density for GaN on sapphire was reduced from 3.3 × 10⁹ cm⁻² to 4.0 × 10⁷ cm⁻², and from the 10¹⁰ cm⁻² range to ≈6.0 × 10⁷ cm⁻² for GaN on Si(111). This large reduction in dislocation density is attributed to dislocation blocking and bending by the unique interface between GaN and the silica microspheres.


Nano-engineering by optically directed self-assembly

Grillet, Anne M.; Koehler, Timothy P.; Brotherton, Christopher M.; Bell, Nelson S.; Gorby, Allen D.; Reichert, Matthew D.; Brinker, C.J.; Bogart, Katherine B.

A lack of robust manufacturing capabilities has limited our ability to make tailored materials with useful optical and thermal properties. For example, traditional methods such as spontaneous self-assembly of spheres cannot generate the complex structures required to produce full-bandgap photonic crystals. The goal of this work was to develop and demonstrate novel methods of directed self-assembly of nanomaterials using optical and electric fields. To achieve this aim, our work employed laser tweezers, a technology that enables non-invasive optical manipulation of particles, from glass microspheres to gold nanoparticles. Laser tweezers were used to create ordered materials with complex crystal structures or from aspherical building blocks.


Quantifying uncertainty from material inhomogeneity

Battaile, Corbett C.; Brewer, Luke N.; Emery, John M.; Boyce, Brad B.

Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femtosecond laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments.
In both the experiments and simulations, the deformation behavior was found to depend strongly on the character of the nearby microstructure.


Antibacterial polymer coatings

Hibbs, Michael R.; Allen, Ashley N.; Wilson, Mollye C.; Tucker, Mark D.

A series of poly(sulfone)s with quaternary ammonium groups and another series with aldehyde groups are synthesized and tested for biocidal activity against vegetative bacteria and spores, respectively. The polymers are sprayed onto substrates as coatings, which are then exposed to aqueous suspensions of organisms. The coatings are inherently biocidal and do not release any agents into the environment. The coatings adhere well to both glass and CARC-coated coupons and exhibit significant biotoxicity. The most effective quaternary ammonium polymer kills 99.9% of both gram-negative and gram-positive bacteria, and the best aldehyde coating kills 81% of the spores on its surface.


Density-functional-theory results for Ga and As vacancies in GaAs obtained using the Socorro code

Wright, Alan F.

The Socorro code has been used to obtain density-functional theory results for the Ga vacancy (V_Ga) and the As vacancy (V_As) in GaAs. Calculations were performed in a nominal 216-atom simulation cell using the local-density approximation for exchange and correlation. The results from these calculations include: (1) the charge states and atomic configurations of stable and metastable states, (2) energy levels in the gap, and (3) activation energies for migration. Seven charge states were found for the Ga vacancy (-3, -2, -1, 0, +1, +2, +3). The stable structures of the -3, -2, -1, and 0 charge states consist of an empty Ga site with four As neighbors displaying T_d symmetry. The stable structures of the +1, +2, and +3 charge states consist of an As antisite next to an As vacancy, As_Ga-V_As. Five charge states were found for the As vacancy (-3, -2, -1, 0, +1). The stable structures of the -1, 0, and +1 charge states consist of an empty As site with four Ga neighbors displaying C_2v symmetry. The stable structures of the -3 and -2 charge states consist of a Ga antisite next to a Ga vacancy, Ga_As-V_Ga. The energy levels of V_Ga lie below mid-gap while the energy levels of As_Ga-V_As lie above and below mid-gap. All but one of the V_As energy levels lie above mid-gap while the Ga_As-V_Ga energy level lies below mid-gap. The migration activation energies of the defect states were all found to be larger than 1.35 eV.


Simultaneous electronic and lattice characterization using coupled femtosecond spectroscopic techniques

Serrano, Justin R.; Hopkins, Patrick E.

High-power electronics are central to the development of radar, solid-state lighting, and laser systems. Large powers, however, necessitate improved heat dissipation, as heightened temperatures deleteriously affect both performance and reliability. Heat dissipation, in turn, is determined by the cascade of energy from the electronic to the lattice system. Full characterization of the transport then requires analysis of each. In response, this four-month late-start effort has developed a transient thermoreflectance (TTR) capability that probes the thermal response of electronic carriers with 100 fs resolution. Simultaneous characterization of the lattice carriers alongside this electronic assessment was then investigated by equipping the optical arrangement to acquire a Raman signal from radiation discarded during the TTR experiment. Initial results show only tentative acquisition of a Raman response at these timescales. Using simulations of the response, the challenges responsible for these difficulties are examined; the analysis indicates that, with the outlined refinements, simultaneous acquisition of TTR/Raman signals remains attainable in the near term.


Stress-induced chemical detection using flexible metal-organic frameworks

Allendorf, Mark D.; Houk, Ronald H.

In this work we demonstrate the concept of stress-induced chemical detection using metal-organic frameworks (MOFs) by integrating a thin film of the MOF HKUST-1 with a microcantilever surface. The results show that the energy of molecular adsorption, which causes slight distortions in the MOF crystal structure, can be efficiently converted to mechanical energy to create a highly responsive, reversible, and selective sensor. This sensor responds to water, methanol, and ethanol vapors, but yields no response to either N₂ or O₂. The magnitude of the signal, which is measured by a built-in piezoresistor, is correlated with the concentration and can be fitted to a Langmuir isotherm. Furthermore, we show that the hydration state of the MOF layer can be used to impart selectivity to CO₂. We also report the first use of surface-enhanced Raman spectroscopy to characterize the structure of a MOF film. We conclude that the synthetic versatility of these nanoporous materials holds great promise for creating recognition chemistries to enable selective detection of a wide range of analytes. A force field model is described that successfully predicts changes in MOF properties and the uptake of gases. This model is used to predict adsorption isotherms for a number of representative compounds, including explosives, nerve agents, volatile organic compounds, and polyaromatic hydrocarbons. The results show that, as a result of relatively large heats of adsorption (> 20 kcal mol⁻¹) in most cases, we expect an onset of adsorption by MOFs as low as 10⁻⁶ kPa, suggesting the potential to detect compounds such as RDX at levels as low as 10 ppb at atmospheric pressure.
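The Langmuir fit mentioned above can be sketched with the standard double-reciprocal linearization. The data here are synthetic and the function names are ours; the real signals are piezoresistive cantilever readings:

```python
def langmuir(c, s_max, K):
    """Langmuir isotherm: signal S = s_max * K*c / (1 + K*c)."""
    return s_max * K * c / (1.0 + K * c)

def fit_langmuir(concs, signals):
    """Fit (s_max, K) via the double-reciprocal linearization
    1/S = 1/s_max + (1/(s_max*K)) * (1/c), using ordinary least squares
    on the transformed data."""
    xs = [1.0 / c for c in concs]
    ys = [1.0 / s for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    s_max = 1.0 / intercept          # intercept is 1/s_max
    K = intercept / slope            # slope is 1/(s_max*K)
    return s_max, K

# Synthetic signals generated from assumed parameters (s_max=2, K=5):
concs = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
signals = [langmuir(c, 2.0, 5.0) for c in concs]
s_max_fit, K_fit = fit_langmuir(concs, signals)
```

On noise-free data the linearized fit recovers the generating parameters exactly; with real sensor noise a direct nonlinear fit would be preferable.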


Intelligent front-end sample preparation tool using acoustic streaming

Vreeland, Erika C.; Smith, Gennifer T.; Edwards, Thayne L.; James, Conrad D.; McClain, Jaime L.; Murton, Jaclyn K.; Kotulski, J.D.; Clem, Paul G.

We have successfully developed a nucleic acid extraction system based on a microacoustic lysis array coupled to an integrated nucleic acid extraction system, all on a single cartridge. The microacoustic lysing array is based on 36° Y-cut lithium niobate, which couples bulk acoustic waves (BAW) into the microchannels. The microchannels were fabricated using Mylar laminates and fused silica to form acoustic-fluidic interface cartridges. The transducer array consists of four active elements directed for cell lysis and one optional BAW element for mixing on the cartridge. The lysis system was modeled using one-dimensional (1D) transmission line and two-dimensional (2D) FEM models. For input powers required to lyse cells, the flow rate dictated the temperature change across the lysing region. From the computational models, a flow rate of 10 µL/min produced a temperature rise of 23.2 °C, and only 6.7 °C when flowing at 60 µL/min. The measured temperature changes were 5 °C less than the model. The computational models also permitted optimization of the acoustic coupling to the microchannel region and revealed the potential impact of thermal effects if not controlled. Using E. coli, we achieved a lysing efficacy of 49.9 ± 29.92% based on a cell viability assay, with a 757.2% increase in ATP release within 20 seconds of acoustic exposure. A bench-top lysing system required 15-20 minutes operating at up to 58 watts to achieve the same level of cell lysis. We demonstrate that active mixing on the cartridge was critical to maximize binding and release of nucleic acid to the magnetic beads. Using a sol-gel silica bead matrix filled microchannel, the extraction efficacy was 40%. The cartridge-based magnetic bead system had an extraction efficiency of 19.2%. For an electric-field-based method that used Nafion films, a nucleic acid extraction efficiency of 66.3% was achieved at 6 volts DC.
For the flow rates we tested (10-50 µL/min), the nucleic acid extraction time was 5-10 minutes for a volume of 50 µL. Moreover, a unique feature of this technology is the ability to replace the cartridges for subsequent nucleic acid extractions.
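The inverse dependence of the temperature rise on flow rate follows from a first-order convective energy balance. The sketch below (water-like fluid properties assumed, conduction into the cartridge neglected) shows only the scaling; it is not the report's FEM model, whose predictions the measurements already undercut by about 5 °C:

```python
def steady_flow_delta_t(power_w, flow_ul_per_min, rho=1000.0, cp=4186.0):
    """First-order energy balance dT = P / (rho * Q * cp): all absorbed
    acoustic power heats the flowing stream. Water-like density (kg/m^3)
    and heat capacity (J/kg/K) are assumed; conduction losses into the
    cartridge are ignored, so this only brackets the FEM results."""
    q_m3_s = flow_ul_per_min * 1e-9 / 60.0   # convert µL/min to m^3/s
    return power_w / (rho * q_m3_s * cp)
```

Under this balance the temperature rise is strictly inversely proportional to flow rate; the weaker dependence in the reported FEM numbers reflects the conduction pathways this sketch omits.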


Feasibility of neuro-morphic computing to emulate error-conflict based decision making

James, Conrad D.

A key aspect of decision making is determining when errors or conflicts exist in information and knowing whether to continue or terminate an action. Understanding error-conflict processing is crucial in order to emulate higher brain functions in hardware and software systems. Specific brain regions, most notably the anterior cingulate cortex (ACC), are known to respond to the presence of conflicts in information by assigning a value to an action. Essentially, this conflict signal triggers strategic adjustments in cognitive control, which serve to prevent further conflict. The most probable mechanism is that the ACC reports and discriminates different types of feedback, both positive and negative, that relate to different adaptations. Unique cells called spindle neurons that are found primarily in the ACC (layer Vb) are known to be responsible for cognitive dissonance (disambiguation between alternatives). Thus, the ACC, through a specific set of cells, likely plays a central role in the ability of humans to make difficult decisions and solve challenging problems in the midst of conflicting information. In addition to dealing with cognitive dissonance, decision making in high-consequence scenarios also relies on the integration of multiple sets of information (sensory, reward, emotion, etc.). Thus, a second area of interest for this proposal lies in the corticostriatal networks that serve as an integration region for multiple cognitive inputs. In order to engineer neurological decision-making processes in silicon devices, we will determine the key cells, inputs, and outputs of conflict/error detection in the ACC region. The second goal is to understand in vitro models of corticostriatal networks and the impact of physical deficits on decision making, specifically in stressful scenarios with conflicting streams of data from multiple inputs.
We will elucidate the mechanisms of cognitive data integration in order to implement a future corticostriatal-like network in silicon devices for improved decision processing.


Nanomechanics of hard films on compliant substrates

Moody, Neville R.; Reedy, Earl D.; Corona, Edmundo C.; Adams, David P.; Zhou, Xiaowang Z.

Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterparts. Once debonded, substrate constraint disappears, leading to film failure where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally, adhesion is difficult to measure. It is often studied using tape [12], pull-off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four-point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and shows that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40].
As a result, our understanding of the critical relationship between adhesion, properties, and fracture for hard films on compliant substrates is limited. To address this issue, we integrated nanomechanical testing and mechanics-based modeling in a program to define the critical relationship between deformation and fracture of nanoscale films on compliant substrates. The approach involved designing model film systems and employing nano-scale experimental characterization techniques to isolate the effects of compliance, viscoelasticity, and plasticity on deformation and fracture of thin hard films on substrates that span more than two orders of magnitude in compliance, exhibit different interface structures, have different adhesion strengths, and function differently under stress. The results of this work are described in six chapters. Chapter 1 provides the motivation for this work. Chapter 2 presents experimental results covering film system design, sample preparation, indentation response, and fracture, including discussion of the effects of substrate compliance on fracture energies and buckle formation from existing models. Chapter 3 describes the use of analytical and finite element simulations to define the role of substrate compliance and film geometry in the indentation response of thin hard films on compliant substrates. Chapter 4 describes the development and application of cohesive-zone-model-based finite element simulations to determine how substrate compliance affects debond growth. Chapter 5 describes the use of molecular dynamics simulations to define the effects of substrate compliance on interfacial fracture of thin hard tungsten films on silicon substrates. Chapter 6 describes the workshops sponsored through this program to advance understanding of material and system behavior.


Highly scalable linear solvers on thousands of processors

Siefert, Christopher S.; Tuminaro, Raymond S.; Domino, Stefan P.; Robinson, Allen C.

In this report we summarize research into new parallel algebraic multigrid (AMG) methods. We first provide an introduction to parallel AMG. We then discuss our research in parallel AMG algorithms for very large scale platforms. We detail significant improvements in the AMG setup phase, specifically to a matrix-matrix multiplication kernel. We present a smoothed aggregation AMG algorithm with fewer communication synchronization points, and discuss its links to domain decomposition methods. Finally, we discuss a multigrid smoothing technique that utilizes two message-passing layers for use on multicore processors.
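The smoothing component the report varies can be illustrated on the simplest model problem. A damped-Jacobi sweep for 1D Poisson (a geometric sketch of our own, not the report's algebraic method) shows the property every multigrid smoother must have: high-frequency error is damped quickly while smooth error, which the coarse grid handles, barely changes:

```python
def damped_jacobi(u, rhs, h, omega=2.0 / 3.0, sweeps=3):
    """Damped-Jacobi smoothing for -u'' = rhs with zero Dirichlet ends.

    A smoother's job in any multigrid method, geometric or algebraic,
    is to damp high-frequency error cheaply; the coarse-grid correction
    removes the smooth remainder. This sketch is geometric 1D only.
    """
    n = len(u)
    for _ in range(sweeps):
        new = list(u)
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0      # boundary value 0
            right = u[i + 1] if i < n - 1 else 0.0  # boundary value 0
            # Jacobi update with damping factor omega:
            new[i] = (1 - omega) * u[i] + omega * (rhs[i] * h * h + left + right) / 2.0
        u = new
    return u
```

Feeding the smoother a sawtooth error (highest frequency) and a single-sine error (lowest frequency) with zero right-hand side makes the selective damping visible directly.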


Neural assembly models derived through nano-scale measurements

Fan, Hongyou F.; Forsythe, James C.; Branda, Catherine B.; Warrender, Christina E.; Schiek, Richard S.

This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed, and models were incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion on which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.


Automated Monte Carlo biasing for photon-generated electrons near surfaces

Franke, Brian C.; Kensek, Ronald P.

This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-window biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
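Independent of the adjoint-flux machinery used to choose the bounds, the weight-window mechanic itself (splitting heavy particles, playing Russian roulette with light ones, keeping the estimator unbiased) can be sketched as follows; the function name and survival-weight choice are ours:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Weight-window splitting and Russian roulette for one particle.

    Particles above the window are split into copies inside it; particles
    below it survive roulette with probability weight / w_survive and are
    bumped up to w_survive, so the expected total weight is preserved.
    """
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n              # split into n tracks
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0   # one common convention
        if rng.random() < weight / w_survive:
            return [w_survive]               # survives roulette
        return []                            # killed
    return [weight]                          # already inside the window
```

Splitting conserves weight exactly, and roulette conserves it in expectation, which is what keeps the biased transport calculation's tallies unbiased.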


Building more powerful, less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report

Murphy, Richard C.

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

More Details

Richtmyer-Meshkov instability on a low Atwood number interface after reshock

Weber, Chris

The Richtmyer-Meshkov instability after reshock is investigated in shock tube experiments at the Wisconsin Shock Tube Laboratory using planar laser imaging and a new high speed interface tracking technique. The interface is a 50-50% volume fraction mixture of helium and argon stratified over pure argon. This interface has an Atwood number of 0.29 and a near-single-mode, two-dimensional standing-wave perturbation with an average amplitude of 0.35 cm and a wavelength of 19.4 cm. The incident shock wave of Mach number 1.92 accelerates the interface before it is reshocked by a reflected Mach 1.70 shock wave. The amplitude growth after reshock is reported for variations in this initial amplitude, and several amplitude growth rate models are compared to the experimental growth rate after reshock. A new growth model is introduced, based on a model of circulation deposition calculated from one-dimensional gas dynamics parameters. This model is shown to compare well with the amplitude growth rate after reshock and the circulation over a half-wavelength of the interface after the first shock wave and after reshock.

More Details

Plasmonic enhanced ultrafast switch

Shaner, Eric A.; Passmore, Brandon S.; Barrick, Todd A.; Subramania, Ganapathi S.; Reno, J.L.

Ultrafast electronic switches fabricated from defective material have been used for several decades in order to produce picosecond electrical transients and terahertz radiation. Due to the ultrashort recombination time in the photoconductor materials used, these switches are inefficient and are ultimately limited by the amount of optical power that can be applied to the switch before self-destruction. The goal of this work is to create ultrafast (sub-picosecond response) photoconductive switches on GaAs that are enhanced through plasmonic coupling structures. Here, the plasmonic coupler primarily plays the role of a radiation condenser, causing carriers to be generated adjacent to metallic electrodes where they can be collected more efficiently.

More Details

HOPSPACK 2.0 user manual

Plantenga, Todd P.

HOPSPACK (Hybrid Optimization Parallel Search PACKage) solves derivative-free optimization problems using an open source, C++ software framework. The framework enables parallel operation using MPI or multithreading, and allows multiple solvers to run simultaneously and interact to find solution points. HOPSPACK comes with an asynchronous pattern search solver that handles general optimization problems with linear and nonlinear constraints, and continuous and integer-valued variables. This user manual explains how to install and use HOPSPACK to solve problems, and how to create custom solvers within the framework.
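To illustrate the pattern-search idea behind HOPSPACK's bundled solver, here is a minimal serial compass search for unconstrained minimization. This is a generic sketch, not HOPSPACK's API: HOPSPACK itself evaluates trial points asynchronously in parallel and handles linear/nonlinear constraints and integer variables.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal compass (pattern) search: poll the 2*n axis directions,
    accept the first improving point, and halve the step on failure."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # contract the pattern
        it += 1
    return x, fx
```

Because only function values are compared, no derivatives are needed, which is what makes the method suitable for simulation-based problems.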

More Details

Final Report on LDRD project 130784 : functional brain imaging by tunable multi-spectral Event-Related Optical Signal (EROS)

Hsu, Alan Y.; Speed, Ann S.

Functional brain imaging is of great interest for understanding correlations between specific cognitive processes and underlying neural activity. This understanding can provide the foundation for developing enhanced human-machine interfaces, decision aids, and enhanced cognition at the physiological level. The functional near infrared spectroscopy (fNIRS) based event-related optical signal (EROS) technique can provide direct, high-fidelity measures of temporal and spatial characteristics of neural networks underlying cognitive behavior. However, current EROS systems are hampered by poor signal-to-noise ratio (SNR) and depth of measure, limiting the areas of the brain and associated cognitive processes that can be investigated. We propose to investigate a flexible, tunable, multi-spectral fNIRS EROS system which will provide up to 10x greater SNR as well as improved spatial and temporal resolution through significant improvements in electronics, optoelectronics and optics, as well as contribute to the physiological foundation of higher-order cognitive processes and provide the technical foundation for miniaturized portable neuroimaging systems.

More Details

LDRD final report : massive multithreading applied to national infrastructure and informatics

Barrett, Brian B.; Hendrickson, Bruce A.; Laviolette, Randall A.; Leung, Vitus J.; Mackey, Greg; Murphy, Richard C.; Phillips, Cynthia A.; Pinar, Ali P.

Large relational datasets such as national-scale social networks and power grids present different computational challenges than do physical simulations. Sandia's distributed-memory supercomputers are well suited for solving problems concerning the latter, but not the former. The reason is that problems such as pattern recognition and knowledge discovery on large networks are dominated by memory latency and not by computation. Furthermore, most memory requests in these applications are very small, and when the datasets are large, most requests miss the cache. The result is extremely low utilization. We are unlikely to be able to grow out of this problem with conventional architectures. As the power density of microprocessors has approached that of a nuclear reactor in the past two years, we have seen a leveling of Moore's Law. Building larger and larger microprocessor-based supercomputers is not a solution for informatics and network infrastructure problems since the additional processors are utilized to only a tiny fraction of their capacity. An alternative solution is to use the paradigm of massive multithreading with a large shared memory. There is only one instance of this paradigm today: the Cray MTA-2. The proposal team has unique experience with and access to this machine. The XMT, which is now being delivered, is a Red Storm machine with up to 8192 multithreaded 'Threadstorm' processors and 128 TB of shared memory. For many years, the XMT will be the only way to address very large graph problems efficiently, and future generations of supercomputers will include multithreaded processors. Roughly 10 MTA processors can process a simple short paths problem in the time taken by the Gordon Bell Prize-nominated distributed memory code on 32,000 processors of Blue Gene/Light. 
We have developed algorithms and open-source software for the XMT, and have modified that software to run some of these algorithms on other multithreaded platforms such as the Sun Niagara and Opteron multi-core chips.
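A "simple short paths" computation of the kind benchmarked above can be sketched as level-synchronous breadth-first search. This serial Python sketch (names illustrative, not from the project's software) shows why such workloads are latency-bound: each neighbor lookup is a small, effectively random memory read.

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Unweighted single-source shortest paths by breadth-first search.

    adj maps each vertex to its neighbor list. On a massively multithreaded
    machine like the XMT, each frontier can be expanded by many threads at
    once, hiding the latency of the irregular neighbor-list reads.
    """
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist
```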

More Details

Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing

Pedretti, Kevin T.T.; Levenhagen, Michael J.; Brightwell, Ronald B.

Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a virtual machine monitor (VMM) under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

More Details

A Life Cycle Cost Analysis Framework for Geologic Storage of Hydrogen

Lord, Anna S.; Kobos, Peter H.; Borns, David J.

Large scale geostorage options for fuels including natural gas and petroleum offer substantial buffer capacity to meet or hedge against supply disruptions. This same notion may be applied to large scale hydrogen storage to meet industrial or transportation sector needs. This study develops an assessment tool to calculate the potential ‘gate-to-gate’ life cycle costs for large scale hydrogen geostorage options in salt caverns, and continues to develop modules for depleted oil/gas reservoirs and aquifers. The U.S. Department of Energy has an interest in these types of storage to assess the geological, geomechanical and economic viability for this type of hydrogen storage. Understanding, and looking to quantify, the value of large-scale storage in a larger hydrogen supply and demand infrastructure may prove extremely beneficial for larger infrastructure modeling efforts when looking to identify the most efficient means to fuel a hydrogen demand (e.g., industrial or transportation-centric demand). Drawing from the knowledge gained in the underground large scale storage options for natural gas and petroleum in the U.S., the potential to store relatively large volumes of CO2 in geological formations, the hydrogen storage assessment modeling will continue to build on these strengths while maintaining modeling transparency such that other modeling efforts may draw from this project.

More Details

Land-surface studies with a directional neutron detector

Desilets, Darin M.; Marleau, Peter M.; Brennan, James S.

Direct measurements of cosmic-ray neutron intensity were recorded with a neutron scatter camera developed at SNL. The instrument used in this work is a prototype originally designed for nuclear non-proliferation work, but in this project it was used to characterize the response of ambient neutrons in the 0.5-10 MeV range to water located on or above the land surface. Ambient neutron intensity near the land surface responds strongly to the presence of water, suggesting the possibility of an indirect method for monitoring soil water content, snow water equivalent depth, or canopy-intercepted water. For environmental measurements the major advantage of measuring neutrons with the scatter camera is the limited (60°) field of view that can be obtained, which allows observations to be conducted at previously unattainable spatial scales. This work is intended to provide new measurements of directional fluxes which can be used in the design of new instruments for passively and noninvasively observing land-surface water. Through measurements and neutron transport modeling we have demonstrated that such a technique is feasible.

More Details

Electrostatic microvalves utilizing conductive nanoparticles for improved speed, lower power, and higher force actuation

Ten Eyck, Gregory A.; Branson, Eric D.; Cook, Adam W.; Collord, Andrew D.; Givler, R.C.

We have designed and built electrostatically actuated microvalves compatible with integration into a PDMS-based microfluidic system. The key innovation for electrostatic actuation was the incorporation of carbon nanotubes into the PDMS valve membrane, allowing for electrostatic charging of the PDMS layer and subsequent discharging, while still allowing for significant distention of the valve seat for low-voltage control of the system. Nanoparticles were applied to semi-cured PDMS using a stamp transfer method, and then cured fully to make the valve seats. DC actuation in air of these valves yielded operational voltages as low as 15 V, by using a supporting structure above the valve seat that allowed sufficient restoring forces to be applied while not enhancing actuation forces enough to raise the valve actuation potential. Both actuate-to-open and actuate-to-close valves have been demonstrated and integrated into a microfluidic platform, demonstrating fluidic control using electrostatic valves.

More Details

Radiation microscope for SEE testing using GeV ions

Vizkelethy, Gyorgy V.; Villone, J.; Hattar, Khalid M.; Doyle, Barney L.; Knapp, J.A.

Radiation Effects Microscopy (REM) is an extremely useful technique in failure analysis of electronic parts used in radiation environments. It also provides much-needed support for the development of radiation-hard components used in spacecraft and nuclear weapons. As IC manufacturing technology progresses, more and more overlayers are used; therefore, the sensitive region of the part is getting farther and farther from the surface. The thickness of these overlayers is so large today that the traditional microbeams used for REM are unable to reach the sensitive regions. As a result, higher ion beam energies (>GeV) have to be used, which are available only at cyclotrons. Since it is extremely complicated to focus these GeV ion beams, a new method had to be developed to perform REM at cyclotrons. We developed a new technique, Ion Photon Emission Microscopy, in which, instead of focusing the ion beam, we use secondary photons emitted from a fluorescent layer on top of the devices being tested to determine the position of the ion hit. By recording this position information in coincidence with an SEE signal, we will be able to identify radiation-sensitive regions of modern electronic parts, which will increase the efficiency of radiation-hard circuits.

More Details

Final LDRD report : the physics of 1D and 2D electron gases in III-nitride heterostructure NWs

Wang, George T.; Armstrong, Andrew A.; Li, Qiming L.; Lin, Yong L.

The proposed work seeks to demonstrate and understand new phenomena in novel, freestanding III-nitride core-shell nanowires, including 1D and 2D electron gas formation and properties, and to investigate the role of surfaces and heterointerfaces on the transport and optical properties of nanowires, using a combined experimental and theoretical approach. Obtaining an understanding of these phenomena will be a critical step that will allow development of novel, ultrafast and ultraefficient nanowire-based electronic and photonic devices.

More Details

ALEGRA-HEDP simulations of the dense plasma focus

Flicker, Dawn G.

We have carried out 2D simulations of three dense plasma focus (DPF) devices using the ALEGRA-HEDP code and validated the results against experiments. The three devices included two Mather-type machines described by Bernard et al. and the Tallboy device currently in operation at NSTec in North Las Vegas. We present simulation results and compare to detailed plasma measurements for one Bernard device and to current and neutron yields for all three. We also describe a new ALEGRA capability to import data from particle-in-cell calculations of initial gas breakdown, which will allow the first ever simulations of DPF operation from the beginning of the voltage discharge to the pinch phase for arbitrary operating conditions and without assumptions about the early sheath structure. The next step in understanding DPF pinch physics must be three-dimensional modeling of conditions going into the pinch, and we have just launched our first 3D simulation of the best-diagnosed Bernard device.

More Details

Ultrathin Optics for Low-Profile Innocuous Imager

Boye, Robert B.; Nelson, C.L.; Brady, Gregory R.; Briggs, R.D.; Jared, Bradley H.; Warren, M.E.

This project demonstrates the feasibility of a novel imager with a thickness measured in microns rather than inches. Traditional imaging systems, i.e. cameras, cannot provide both the necessary resolution and the innocuous form factor required in many data acquisition applications. Designing an imaging system with an extremely thin form factor (less than 1 mm) immediately presents several technical challenges. For instance, the thickness of the optical lens must be reduced drastically from currently available lenses. Additionally, the image circle is reduced by a factor equal to the reduction in focal length. This translates to fewer detector pixels across the image. Reducing the optical total track requires the use of specialized micro-optics, and the required resolution necessitates the use of a new imaging modality. While a single thin imager will not produce the desired output, several thin imagers can be multiplexed and their low resolution (LR) outputs used together in post-processing to produce a high resolution (HR) image. The utility of an Iterative Back Projection (IBP) algorithm has been successfully demonstrated for performing the required post-processing. Advanced fabrication of a thin lens was also demonstrated, and experimental results using this lens as well as commercially available lenses are presented.
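The IBP idea can be sketched in one dimension: simulate the LR observation from the current HR estimate, then back-project the residual onto the HR grid. This is a generic single-frame sketch; the function names and the block-average downsampling model are assumptions, not the project's actual multi-aperture imaging model, which combines several shifted LR frames.

```python
import numpy as np

def downsample(hr, factor):
    """Simulate a low-resolution observation by block-averaging."""
    return hr.reshape(-1, factor).mean(axis=1)

def iterative_back_projection(lr, factor, iters=50, step=1.0):
    """1-D Iterative Back Projection: refine an HR estimate until its
    simulated LR image matches the observed one."""
    hr = np.repeat(lr, factor)  # initial guess: nearest-neighbor upsample
    for _ in range(iters):
        err = lr - downsample(hr, factor)        # residual in LR space
        hr = hr + step * np.repeat(err, factor)  # back-project to HR grid
    return hr
```

A multi-frame version, with one residual term per LR imager and a point-spread-function model, is what recovers detail beyond any single imager's resolution.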

More Details

Evaluation of the Geotech SMART24BH 20Vpp/5Vpp data acquisition system with active Fortezza crypto card data signing and authentication

Hart, Darren H.; Rembold, Randy K.

Sandia National Laboratories has tested and evaluated the Geotech SMART24BH borehole data acquisition system with an active Fortezza crypto card performing data signing and authentication. The test results included in this report were in response to static and tonal-dynamic input signals. Most test methodologies used were based on IEEE Standards 1057 for Digitizing Waveform Recorders and 1241 for Analog to Digital Converters; others were designed by Sandia specifically for infrasound application evaluation and for supplementary criteria not addressed in the IEEE standards. The objective of this work was to evaluate the overall technical performance of two Geotech SMART24BH digitizers with a Fortezza PCMCIA crypto card actively signing data packets. The results of this evaluation were compared to relevant specifications provided within the manufacturer's documentation. The tests performed were chosen to demonstrate different performance aspects of the digitizer under test. The performance aspects tested include noise floor, least significant bit (LSB), dynamic range, cross-talk, relative channel-to-channel timing, time-tag accuracy/statistics/drift, and analog bandwidth.
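For reference, the ideal LSB size and dynamic range of a digitizer follow directly from its full-scale range and bit depth. This is a generic sketch using the standard ideal-ADC formulas (6.02N + 1.76 dB), not measured results for this instrument:

```python
def lsb_volts(full_scale_vpp, bits):
    """Ideal least-significant-bit step size for an N-bit ADC spanning the
    given full-scale peak-to-peak voltage (ignores noise and nonlinearity)."""
    return full_scale_vpp / (2 ** bits)

def ideal_dynamic_range_db(bits):
    """Ideal SNR/dynamic range of a perfect N-bit ADC: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76
```

Measured values fall below these ideals; quantifying that gap over the 20 Vpp and 5 Vpp ranges is the purpose of tests like those above.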

More Details

Tuned cavity magnetometer sensitivity

Okandan, Murat O.; Schwindt, Peter S.

We have developed a high-sensitivity (<1 picotesla/√Hz), non-cryogenic magnetometer that utilizes a novel optical (interferometric) detection technique. Further miniaturization and low-power operation are key advantages of this magnetometer when compared to systems using SQUIDs, which require liquid-helium temperatures and the associated overhead to achieve similar sensitivity levels.

More Details

Room temperature synthesis of Ni-based alloy nanoparticles by radiolysis

Leung, Kevin L.; Hanson, Donald J.; Stumpf, Roland R.; Huang, Jian Y.; Robinson, David R.; Lu, Ping L.; Provencio, P.N.; Jacobs, Benjamin J.

Room temperature radiolysis, density functional theory, and various nanoscale characterization methods were used to synthesize and fully describe Ni-based alloy nanoparticles (NPs). These complementary methods provide a strong basis for understanding and describing metastable phase regimes of alloy NPs whose formation is determined by kinetic rather than thermodynamic reaction processes. Four series of NPs (Ag-Ni, Pd-Ni, Co-Ni, and W-Ni) were analyzed and characterized by a variety of methods, including UV-vis, TEM/HRTEM, HAADF-STEM, and EFTEM mapping. In the first focus of research, AgNi and PdNi were studied. Different ratios of Ag(x)Ni(1-x) alloy NPs and Pd(0.5)Ni(0.5) alloy NPs were prepared using a high dose rate from gamma irradiation. High-angle annular dark-field (HAADF) images show that the Ag-Ni NPs are not core-shell in structure but are homogeneous alloys in composition. Energy-filtered transmission electron microscopy (EFTEM) maps show the homogeneity of the metals in each alloy NP. Of particular interest are the normally immiscible Ag-Ni NPs. All evidence confirmed that the homogeneous Ag-Ni and Pd-Ni alloy NPs presented here were successfully synthesized by the high-dose-rate radiolytic methodology. A mechanism is provided to explain the homogeneous formation of the alloy NPs. Furthermore, studies of Pd-Ni NPs by in situ TEM (with a heated stage) show the ability to sinter these NPs at temperatures below 800 °C. In the second set of work, CoNi and WNi superalloy NPs were attempted at 50/50 concentration ratios using high dose rates from gamma irradiation. Preliminary results on synthesis and characterization have been completed and are presented. As with the earlier alloy NPs, no evidence of core-shell NP formation occurs. Microscopy results indicate that alloying occurred in the CoNi alloys. 
However, there appears to be incomplete reduction of the Na2WO4 to form the W(2+) ion in solution; the predominance of WO(+) appears to have resulted in a W-O-Ni complex that has not yet been fully characterized.

More Details

Quantum Cascade Lasers (QCLs) for standoff explosives detection : LDRD 138733 final report

Theisen, Lisa A.; Linker, Kevin L.

Continued acts of terrorism using explosive materials throughout the world have led to great interest in explosives detection technology, especially technologies that have a potential for remote or standoff detection. This LDRD was undertaken to investigate the potential benefits of utilizing quantum cascade lasers (QCLs) in standoff explosives detection equipment. Standoff detection of explosives is currently one of the most difficult problems facing the explosives detection community. Increased domestic and troop security could be achieved through the remote detection of explosives. An effective remote or standoff explosives detection capability would save lives and prevent losses of mission-critical resources by increasing the distance between the explosives and the intended targets and/or security forces. Many sectors of the US government are urgently attempting to obtain useful equipment to deploy to our troops currently serving in hostile environments. This report documents the potential opportunities that Sandia National Laboratories can contribute to the field of QCL development. The following is a list of areas where SNL can contribute: (1) Determine optimal wavelengths for standoff explosives detection utilizing QCLs; (2) Optimize the photon collection and detection efficiency of a detection system for optical spectroscopy; (3) Develop QCLs with broader wavelength tunability (current technology is a 10% change in wavelength) while maintaining high efficiency; (4) Perform system engineering in the design of a complete detection system and not just the laser head; and (5) Perform real-world testing with explosive materials with commercial prototype detection systems.

More Details

Radiation effects from first principles : the role of excitons in electronic-excited processes

Wong, Bryan M.

Electron-hole pairs, or excitons, are created within materials upon optical excitation or irradiation with X-rays/charged particles. The ability to control and predict the role of excitons in these energetically-induced processes would have a tremendous impact on understanding the effects of radiation on materials. In this report, the excitonic effects in large cycloparaphenylene carbon structures are investigated using various first-principles methods. These structures are particularly interesting since they allow a study of size-scaling properties of excitons in a prototypical semi-conducting material. In order to understand these properties, electron-hole transition density matrices and exciton binding energies were analyzed as a function of size. The transition density matrices allow a global view of electronic coherence during an electronic excitation, and the exciton binding energies give a quantitative measure of electron-hole interaction energies in these structures. Based on overall trends in exciton binding energies and their spatial delocalization, we find that excitonic effects play a vital role in understanding the unique photoinduced dynamics in these systems.

More Details

Polymer/inorganic superhydrophobic surfaces

Branson, Eric D.; Collord, Andrew D.; Apblett, Christopher A.; Brinker, C.J.


More Details

Benchmarks for GADRAS performance validation

Mattingly, John K.; Mitchell, Dean J.; Rhykerd, Charles L.

The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

More Details

Diagnostic development for determining the joint temperature/soot statistics in hydrocarbon-fueled pool fires : LDRD final report

Frederickson, Kraig; Grasser, Thomas W.; Castaneda, Jaime N.; Hewson, John C.; Luketa, Anay L.

A joint temperature/soot laser-based optical diagnostic was developed for the determination of the joint temperature/soot probability density function (PDF) for hydrocarbon-fueled meter-scale turbulent pool fires. This Laboratory Directed Research and Development (LDRD) effort was in support of the Advanced Simulation and Computing (ASC) program which seeks to produce computational models for the simulation of fire environments for risk assessment and analysis. The development of this laser-based optical diagnostic is motivated by the need for highly-resolved spatio-temporal information for which traditional diagnostic probes, such as thermocouples, are ill-suited. The in-flame gas temperature is determined from the shape of the nitrogen Coherent Anti-Stokes Raman Scattering (CARS) signature and the soot volume fraction is extracted from the intensity of the Laser-Induced Incandescence (LII) image of the CARS probed region. The current state of the diagnostic will be discussed including the uncertainty and physical limits of the measurements as well as the future applications of this probe.

More Details

Microstructure-based approach for predicting crack initiation and early growth in metals

Battaile, Corbett C.; Bartel, Timothy J.; Reedy, Earl D.; Cox, James C.; Foulk, James W.; Puskar, J.D.; Boyce, Brad B.; Emery, John M.

Fatigue cracking in metals has long been an area of great importance to the science and technology of structural materials. The earliest stages of fatigue crack nucleation and growth are dominated by the microstructure, and yet few models are able to predict the fatigue behavior during these stages because of a lack of microstructural physics in the models. This program has developed several new simulation tools to increase the microstructural physics available for fatigue prediction. In addition, this program has extended and developed microscale experimental methods to allow the validation of new microstructural models for deformation in metals. We have applied these developments to fatigue experiments in metals where the microstructure has been intentionally varied.

More Details

High fidelity nuclear energy system optimization towards an environmentally benign, sustainable, and secure energy source

Rochau, Gary E.; Rodriguez, Salvador B.

The impact associated with energy generation and utilization is immeasurable due to the immense, widespread, and myriad effects it has on the world and its inhabitants. The polar extremes are demonstrated on the one hand by the high quality of life enjoyed by individuals with access to abundant reliable energy sources, and on the other hand by the global-scale environmental degradation attributed to the effects of energy production and use. Thus, nations strive to increase their energy generation, but are faced with the challenge of doing so with a minimal impact on the environment and in a manner that is self-reliant. Consequently, a revival of interest in nuclear energy has followed, with much focus placed on technologies for transmuting nuclear spent fuel. The performed research investigates nuclear energy systems that optimize the destruction of nuclear waste. In the context of this effort, a nuclear energy system is defined as a configuration of nuclear reactors and corresponding fuel cycle components. The proposed system has unique characteristics that set it apart from other systems, most notably the dedicated High-Energy External Source Transmuter (HEST), which is envisioned as an advanced incinerator used in combination with thermal reactors. The system is configured for examining environmentally benign fuel cycle options by focusing on minimization or elimination of high level waste inventories. Detailed high-fidelity exact-geometry models were developed for representative reactor configurations. They were used in preliminary calculations with the Monte Carlo N-Particle eXtended (MCNPX) and Standardized Computer Analysis for Licensing Evaluation (SCALE) code systems. The reactor models have been benchmarked against existing experimental data and design data. 
Simulink®, an extension of MATLAB®, is envisioned as the interface environment for constructing the nuclear energy system model by linking the individual reactor and fuel component sub-models for overall analysis of the system. It also provides control over key user input parameters and the ability to effectively consolidate vital output results for uncertainty/sensitivity analysis and optimization procedures. The preliminary analysis has shown promising advanced fuel cycle scenarios that include Pressurized Water Reactors (PWRs), Very High Temperature Reactors (VHTRs) and dedicated HEST waste incineration facilities. If deployed, these scenarios may substantially reduce nuclear waste inventories, approaching environmentally benign nuclear energy system characteristics. Additionally, a spent fuel database of the isotopic compositions for multiple design and control parameters has been created for the VHTR-HEST input fuel streams. Computational approaches, analysis metrics, and benchmark strategies have been established for future detailed studies.

More Details

A smoothed two-and three-dimensional interface reconstruction method

Computing and Visualization in Science

Mosso, Stewart; Garasi, Christopher J.; Drake, Richard R.

The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. The algorithm is second-order accurate for smooth volume fraction distributions. © 2008 Springer-Verlag.
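The first step shared by gradient-based interface reconstruction methods of this family is estimating interface normals from the volume-fraction field. The sketch below is a generic illustration of that step (a Youngs-style gradient estimate), not the Patterned Interface Reconstruction algorithm itself; the function name and grid layout are assumptions for illustration.

```python
import numpy as np

def interface_normals(f, dx=1.0, dy=1.0):
    """Estimate interface normals on a 2-D Cartesian mesh as the
    (normalized) negative gradient of the volume-fraction field f.
    f has shape (ny, nx); returns an (ny, nx, 2) array of (nx_comp, ny_comp)."""
    gy, gx = np.gradient(f, dy, dx)   # derivatives along axis 0 (y), axis 1 (x)
    n = np.stack([-gx, -gy], axis=-1)
    mag = np.linalg.norm(n, axis=-1, keepdims=True)
    # leave zero vectors where the volume fraction is locally constant
    return np.divide(n, mag, out=np.zeros_like(n), where=mag > 0)
```

For a planar (linear) volume-fraction distribution this returns the exact constant normal, consistent with the abstract's statement that planar interfaces are recovered exactly.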


Experience with approximations in the trust-region parallel direct search algorithm

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Shontz, S.M.; Howle, V.E.; Hough, Patricia D.

Recent years have seen growth in the number of algorithms designed to solve challenging simulation-based nonlinear optimization problems. One such algorithm is the Trust-Region Parallel Direct Search (TRPDS) method developed by Hough and Meza. In this paper, we take advantage of the theoretical properties of TRPDS to make use of approximation models in order to reduce the computational cost of simulation-based optimization. We describe the extension, which we call mTRPDS, and present the results of a case study for two earth penetrator design problems. In the case study, we conduct computational experiments with an array of approximations within the mTRPDS algorithm and compare the numerical results to the original TRPDS algorithm and a trust-region method implemented using the speculative gradient approach described by Byrd, Schnabel, and Shultz. The results suggest new ways to improve the algorithm. © 2009 Springer Berlin Heidelberg.
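The trust-region mechanics underlying TRPDS-style methods can be sketched in a few lines: propose a step within the current radius, compare the actual reduction to the model's predicted reduction, then accept or reject the step and grow or shrink the radius. The sketch below is a generic serial illustration using a finite-difference Cauchy step as the model step; it is not the authors' parallel implementation or the mTRPDS approximation scheme.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Gradient of a black-box objective via central finite differences."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def cauchy_step(f, x, delta):
    """Steepest-descent step to the trust-region boundary, plus the
    linear model's predicted reduction over that step."""
    g = fd_gradient(f, x)
    gn = np.linalg.norm(g)
    if gn == 0.0:
        return np.zeros_like(x), 0.0
    return -delta * g / gn, delta * gn

def trust_region_minimize(f, x0, delta0=1.0, tol=1e-8, max_iter=500):
    """Generic trust-region loop: rho compares actual vs. predicted
    reduction; the step is accepted when rho > 0.1, and the radius is
    expanded or shrunk depending on how trustworthy the model was."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        s, pred = cauchy_step(f, x, delta)
        if pred < tol:
            break
        rho = (f(x) - f(x + s)) / pred
        if rho > 0.1:
            x = x + s          # accept the step
        if rho > 0.75:
            delta *= 2.0       # model predicted well: expand the region
        elif rho < 0.25:
            delta *= 0.5       # poor prediction: shrink the region
    return x
```

In mTRPDS the expensive objective evaluations inside such a loop are replaced, where the theory permits, by cheaper approximation models.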


Hardness assurance test guideline for qualifying devices for use in proton environments

IEEE Transactions on Nuclear Science

Schwank, James R.; Shaneyfelt, Marty R.; Dodd, Paul E.; Felix, James A.; Baggio, J.; Ferlet-Cavrois, V.; Paillet, P.; Label, K.A.; Pease, R.L.; Simons, M.; Cohn, L.M.

Proton-induced single-event effects hardness assurance guidelines are developed to address issues raised by recent test results in advanced IC technologies for use in space environments. Specifically, guidelines are developed that address the effects of proton energy and angle of incidence on single-event latchup and the effects of total dose on single-event upset. The guidelines address single-event upset (SEU), single-event latchup (SEL), and combined SEU and total ionizing dose (TID) effects. © 2006 IEEE.


Modeling of pulsating heat pipes

Givler, R.C.; Martinez, Mario J.

This report summarizes the results of a computer model that describes the behavior of pulsating heat pipes (PHP). The purpose of the project was to develop a highly efficient (as compared to the heat transfer capability of solid copper) thermal groundplane (TGP) using silicon carbide (SiC) as the substrate material and water as the working fluid. The objective of this project was to develop a multi-physics model for this complex phenomenon to assist with an understanding of how PHPs operate and of how various parameters (geometry, fill ratio, materials, working fluid, etc.) affect their performance. The physical processes describing a PHP are highly coupled. Understanding its operation is further complicated by the non-equilibrium nature of the interplay between evaporation/condensation, bubble growth and collapse or coalescence, and the coupled response of the multiphase fluid dynamics among the different channels. A comprehensive theory of operation and a set of design tools for PHPs remain unrealized. In the following we first analyze, in some detail, a simple model that has been proposed to describe PHP behavior. Although it includes fundamental features of a PHP, it also makes some assumptions to keep the model tractable. In an effort to improve on current modeling practice, we constructed a model for a PHP using some unique features available in FLOW-3D, version 9.2-3 (Flow Science, 2007). We believe that this flow modeling software retains more of the salient features of a PHP and thus provides a closer representation of its behavior.


Global situational awareness and early warning of high-consequence climate change

Boslough, Mark B.; Backus, George A.; Carr, Martin J.

Global monitoring systems that have high spatial and temporal resolution, with long observational baselines, are needed to provide situational awareness of the Earth's climate system. Continuous monitoring is required for early warning of high-consequence climate change and to help anticipate and minimize the threat. Global climate has changed abruptly in the past and will almost certainly do so again, even in the absence of anthropogenic interference. It is possible that the Earth's climate could change dramatically and suddenly within a few years. An unexpected loss of climate stability would be equivalent to the failure of an engineered system on a grand scale, and would affect billions of people by causing agricultural, economic, and environmental collapses that would cascade throughout the world. The probability of such an abrupt change happening in the near future may be small, but it is nonzero. Because the consequences would be catastrophic, we argue that the problem should be treated with science-informed engineering conservatism, which focuses on various ways a system can fail and emphasizes inspection and early detection. Such an approach will require high-fidelity continuous global monitoring, informed by scientific modeling.


SNL evaluation of Gigabit Passive Optical Networks (GPON)

Brenkosh, Joseph P.; Dirks, David H.; Gossage, Steven A.; Pratt, Thomas J.; Schutt, James A.; Heckart, David G.; Rudolfo, Gerald F.; Trujillo, Sandra T.

Gigabit Passive Optical Networks (GPON) is a networking technology which offers the potential to provide significant cost savings to Sandia National Laboratories in the area of network operations. However, a large scale GPON deployment requires a significant investment in equipment and infrastructure. Before a large scale GPON system was acquired and built, a small GPON system manufactured by Motorola was acquired and tested. The testing performed was to determine the suitability of GPON for use at SNL. This report documents that testing. This report presents test results of a GPON system consisting of Motorola and Juniper equipment. The GPON system was tested in the areas of data throughput, video conferencing, VOIP, security, and operations and management. The GPON system performed well in almost all areas. GPON will not meet the needs of the low percentage of users requiring a true 1-10 Gbps network connection. GPON will also most likely not meet the needs of some servers requiring dedicated throughput of 1-10 Gbps. Because of that, there will be some legacy network connections that must remain. If these legacy network connections cannot be reduced to a bare minimum and possibly consolidated to a few locations, any cost savings gained by switching to GPON will be negated by maintaining two networks. A contract has been recently awarded for new GPON equipment with larger buffers. This equipment should improve performance and further reduce the need for legacy network connections. Because GPON has fewer components than a typical hierarchical network, it should be easier to manage. For the system tested, the management was performed using the AXSVision client. Access to the client must be tightly controlled, because if client/server communications are compromised, security will be an issue. As with any network, the reliability of individual components will determine overall system reliability.
There were no failures with the routers, OLT, or Sun Workstation Management platform. There were, however, four ONTs that failed. Because of the small sample size of 64 and the fact that some of the ONTs were used units, no conclusions can be drawn. However, ONT reliability is an area of concern. Access to the fiber plant that GPON requires must be tightly controlled and all changes documented. The undocumented changes that were performed in the GPON test lab demonstrated the need for tight control and documentation. In summary, GPON should be able to meet the needs of most network users at Sandia National Laboratories. Because it supports voice, video, and data, it positions Sandia National Laboratories to deploy these services to the desktop. For the majority of corporate network users at Sandia National Laboratories, GPON should be a suitable replacement for the legacy network.


On the dissolution of iridium by aluminum

Hewson, John C.

The potential for liquid aluminum to dissolve an iridium solid is examined. Substantial uncertainties exist in material properties, and the available data for the iridium solubility and iridium diffusivity are discussed. The dissolution rate is expressed in terms of the regression velocity of the solid iridium when exposed to the solvent (aluminum). The temperature has the strongest influence on the dissolution rate. This dependence comes primarily from the solubility of iridium in aluminum and secondarily from the temperature dependence of the diffusion coefficient. This dissolution mass flux is geometry dependent and results are provided for simplified geometries at constant temperatures. For situations where there is negligible convective flow, simple time-dependent diffusion solutions are provided. Correlations for mass transfer are also given for natural convection and forced convection. These estimates suggest that dissolution of iridium can be significant for temperatures well below the melting temperature of iridium, but the uncertainties in actual rates are large because of uncertainties in the physical parameters and in the details of the relevant geometries.
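For the negligible-convection case mentioned above, the classic one-dimensional semi-infinite diffusion solution gives a surface regression velocity that decays as 1/√t. The sketch below states that textbook result; the function name and any numbers used with it are illustrative placeholders, not values from the report.

```python
import math

def regression_velocity(c_sat, rho_solid, D, t):
    """Diffusion-limited regression velocity of a dissolving solid surface
    into a stagnant semi-infinite solvent (textbook 1-D solution):
        v(t) = (c_sat / rho_solid) * sqrt(D / (pi * t))
    c_sat: solubility of the solid in the solvent (kg/m^3)
    rho_solid: density of the dissolving solid (kg/m^3)
    D: diffusion coefficient of the solute in the solvent (m^2/s)
    t: elapsed time (s)."""
    return (c_sat / rho_solid) * math.sqrt(D / (math.pi * t))
```

Both the solubility and the diffusivity entering this expression are strongly temperature dependent, which is why temperature dominates the dissolution rate.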


Dose estimates in a loss of lead shielding truck accident

Dennis, Matthew L.; Weiner, Ruth F.; Osborn, Douglas M.

The radiological transportation risk and consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first-responders during a spent nuclear fuel transportation accident. Results varied according to the following: type of accident scenario, percent of lead slump, distance to shipment, and time spent in the area. This document presents a method of creating dose estimates for first-responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high speed accidents or fires involving cask punctures.
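RADTRAN's LOS model is far more detailed, but a first-order point-source, inverse-square estimate conveys how two of the parameters listed above (distance to shipment and time spent in the area) enter a responder dose. This sketch is purely illustrative and is not RADTRAN's calculation.

```python
def responder_dose(dose_rate_1m, distance_m, exposure_h):
    """First-order external dose estimate for a point source:
    dose rate falls off as 1/r^2 from its value at 1 m.
    dose_rate_1m: dose rate at 1 m from the source (mSv/h)
    distance_m:   responder standoff distance (m)
    exposure_h:   time spent at that distance (h)
    Returns the accumulated dose (mSv). Shielding loss would raise
    dose_rate_1m; this sketch does not model the cask or lead slump."""
    return dose_rate_1m * exposure_h / distance_m ** 2
```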


HPC application fault-tolerance using transparent redundant computation

Ferreira, Kurt; Riesen, Rolf; Oldfield, Ron A.; Brightwell, Ronald B.; Laros, James H.; Pedretti, Kevin P.

As the core count of HPC machines continues to grow, issues such as fault tolerance and reliability are becoming limiting factors for application scalability. Current techniques to ensure progress across faults, for example coordinated checkpoint-restart, are unsuitable for machines of this scale due to their predicted high overheads. In this study, we present the design and implementation of a novel system for ensuring reliability which uses transparent, rank-level, redundant computation. Using this system, we show the overheads involved in redundant computation for a number of real-world HPC applications. Additionally, we relate the communication characteristics of an application to the overheads observed.


Molecule-based approach for computing chemical-reaction rates in upper atmosphere hypersonic flows

Gallis, Michail A.; Bond, Ryan B.; Torczynski, J.R.

This report summarizes the work completed during FY2009 for the LDRD project 09-1332 'Molecule-Based Approach for Computing Chemical-Reaction Rates in Upper-Atmosphere Hypersonic Flows'. The goal of this project was to apply a recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary nonequilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological non-equilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, the difference between the two models can exceed 10 orders of magnitude. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates. Extensions of the model to reactions typically found in combustion flows and ionizing reactions are also found to be in very good agreement with available measurements, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.
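Park's phenomenological model, used as the comparison baseline above, evaluates a standard Arrhenius rate expression at an effective temperature that blends the translational and vibrational temperatures (the geometric mean in the common two-temperature form). The sketch below shows that structure; the Arrhenius parameters in any call are illustrative, not fitted reaction data.

```python
import math

def park_rate(T_trans, T_vib, A, n, theta, q=0.5):
    """Park two-temperature reaction rate: an Arrhenius expression
        k = A * Ta**n * exp(-theta / Ta)
    evaluated at the effective temperature Ta = T_trans**q * T_vib**(1-q).
    q = 0.5 gives the geometric-mean form. theta is the activation
    temperature (K); A and n are illustrative Arrhenius parameters."""
    Ta = T_trans ** q * T_vib ** (1 - q)
    return A * Ta ** n * math.exp(-theta / Ta)
```

At equilibrium (equal translational and vibrational temperatures) this reduces to the ordinary Arrhenius rate, which is why Park's model and the DSMC predictions agree near equilibrium; far from equilibrium the vibrationally cold effective temperature suppresses the rate, and the two approaches can diverge sharply.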


Survey of four damage models for concrete

Leelavanichkul, Seubpong; Brannon, Rebecca M.

Four conventional damage plasticity models for concrete, the Karagozian and Case model (K&C), the Riedel-Hiermaier-Thoma model (RHT), the Brannon-Fossum model (BF1), and the Continuous Surface Cap Model (CSCM), are compared. The K&C and RHT models have been used in commercial finite element programs for many years, whereas the BF1 and CSCM models are relatively new. All four models are essentially isotropic plasticity models for which 'plasticity' is regarded as any form of inelasticity. All of the models support nonlinear elasticity, but with different formulations. All four models employ three shear strength surfaces. The 'yield surface' bounds an evolving set of elastically obtainable stress states. The 'limit surface' bounds stress states that can be reached by any means (elastic or plastic). To model softening, it is recognized that some stress states might be reached once, but, because of irreversible damage, might not be achievable again. In other words, softening is the process of collapse of the limit surface, ultimately down to a final 'residual surface' for fully failed material. The four models being compared differ in their softening evolution equations, as well as in their equations used to degrade the elastic stiffness. For all four models, the strength surfaces are cast in stress space. For all four models, it is recognized that scale effects are important for softening, but the models differ significantly in their approaches. The K&C documentation, for example, mentions that a particular material parameter affecting the damage evolution rate must be set by the user according to the mesh size to preserve energy to failure.
Similarly, the BF1 model presumes that all material parameters are set to values appropriate to the scale of the element, and automated assignment of scale-appropriate values is available only through an enhanced implementation of BF1 (called BFS) that regards scale effects to be coupled to statistical variability of material properties. The RHT model appears to similarly support optional uncertainty and automated settings for scale-dependent material parameters. The K&C, RHT, and CSCM models support rate dependence by allowing the strength to be a function of strain rate, whereas the BF1 model uses Duvaut-Lion viscoplasticity theory to give a smoother prediction of transient effects. During softening, all four models require a certain amount of strain to develop before allowing significant damage accumulation. For the K&C, RHT, and CSCM models, the strain-to-failure is tied to fracture energy release, whereas a similar effect is achieved indirectly in the BF1 model by a time-based criterion that is tied to crack propagation speed.


Robust real-time change detection in high jitter

Ma, Tian J.

A new method is introduced for real-time detection of transient change in scenes observed by staring sensors that are subject to platform jitter, pixel defects, variable focus, and other real-world challenges. The approach uses flexible statistical models for the scene background and its variability, which are continually updated to track gradual drift in the sensor's performance and the scene under observation. Two separate models represent temporal and spatial variations in pixel intensity. For the temporal model, each new frame is projected into a low-dimensional subspace designed to capture the behavior of the frame data over a recent observation window. Per-pixel temporal standard deviation estimates are based on projection residuals. The spatial model employs a simple representation of jitter to generate pixelwise moment estimates from a single frame. These estimates rely on spatial characteristics of the scene, and are used to gauge each pixel's susceptibility to jitter. The temporal model handles pixels that are naturally variable due to sensor noise or moving scene elements, along with jitter displacements comparable to those observed in the recent past. The spatial model captures jitter-induced changes that may not have been seen previously. Change is declared in pixels whose current values are inconsistent with both models.
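The temporal model described above can be sketched as a low-rank subspace fit over a recent frame window, with per-pixel residual thresholds. This is a simplified illustration of the idea (subspace projection plus residual test), not the author's implementation; the function name, rank, threshold, and variance floor are assumptions for illustration.

```python
import numpy as np

def change_mask(frames, new_frame, rank=3, nsig=5.0, sigma_floor=1e-6):
    """Flag pixels in new_frame that are inconsistent with a low-rank
    temporal background model fit to a recent window of frames.
    frames: list of 2-D arrays (the recent observation window).
    Returns a boolean array the same shape as new_frame."""
    X = np.stack([f.ravel() for f in frames]).astype(float)   # (T, npix)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:rank]                              # dominant temporal modes
    hist_resid = (X - mean) - (X - mean) @ basis.T @ basis
    sigma = hist_resid.std(axis=0) + sigma_floor   # per-pixel residual spread
    d = new_frame.ravel().astype(float) - mean
    resid = d - basis.T @ (basis @ d)              # out-of-subspace component
    return (np.abs(resid) > nsig * sigma).reshape(new_frame.shape)
```

Background behavior captured by the subspace (including jitter displacements seen in the window) projects out; genuinely new change survives as residual and is flagged.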


Design guidelines for SAR digital receiver/exciter boards

Dudley, Peter A.

High resolution radar systems generally require combining fast analog to digital converters and digital to analog converters with very high performance digital signal processing logic. These mixed analog and digital printed circuit boards present special challenges with respect to electromagnetic interference. This document first describes the mechanisms of interference on such boards then follows up with a discussion of prevention techniques and finally provides a checklist for designers to help avoid common mistakes.


Aerodynamic and Aeroacoustic Tests of a Flatback Version of the DU97-W-300 Airfoil

Berg, Dale E.

Results from an experimental study of the aerodynamic and aeroacoustic properties of a flatback version of the TU Delft DU97-W-300 airfoil are presented. Measurements were made for both the original DU97-W-300 and the flatback version. The chord Reynolds number varied from 1.6 x 10^6 to 3.2 x 10^6. The data were gathered in the Virginia Tech Stability Wind Tunnel, which includes a special aeroacoustic test section to enable measurements of airfoil self-noise. Corrected wind tunnel aerodynamic measurements for the DU97-W-300 are compared to previous solid wall wind tunnel data and are shown to give good agreement. Force coefficient and surface pressure distributions are compared for the flatback and the original airfoil for both free-transition and tripped boundary layer configurations. Aeroacoustic data are presented for the flatback airfoil, with a focus on the amplitude and frequency of noise associated with the vortex-shedding tone from the blunt trailing edge wake. The effect of a splitter plate trailing edge attachment on both the drag and noise of the flatback airfoil is also investigated.


Multiple phonon processes contributing to inelastic scattering during thermal boundary conductance at solid interfaces

Journal of Applied Physics

Hopkins, Patrick E.

A new model is developed that accounts for multiple phonon processes on interface transmission between two solids. By considering conservation of energy and phonon population, the decay of a high energy phonon in one material into several lower energy phonons in another material is modeled assuming diffuse scattering. The individual contributions of each of the higher order inelastic phonon processes to thermal boundary conductance are calculated and compared to the elastic contribution. The overall thermal boundary conductance from elastic and inelastic (three or more phonon processes) scattering is calculated and compared to experimental data on five different interfaces. Improvement in value and trend is observed by taking into account multiple phonon inelastic scattering. Three phonon interfacial processes are predicted to dominate the inelastic contribution to thermal boundary conductance. © 2009 American Institute of Physics.


Terascale direct numerical simulations of turbulent combustion using S3D

Computational Science and Discovery

Chen, J.H.; Choudhary, A.; De Supinski, B.; Devries, M.; Hawkes, E.R.; Klasky, S.; Liao, W.K.; Ma, K.L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C.S.

Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating their role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scalable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations.
In particular, performance monitoring is used to identify the performance of key kernels in the DNS code S3D, especially memory-intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and, hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics. © 2009 IOP Publishing Ltd.
