Evolutionary optimization of interatomic potentials using genetic programming
Abstract not provided.
Abstract not provided.
This paper describes mitigation technologies intended to enable the deployment of advanced hydrogen storage technologies for early market and automotive fuel cell applications. Solid-state hydrogen storage materials provide an opportunity for a dramatic increase in gravimetric and volumetric energy storage density. Systems and technologies based on these advanced materials have been developed and demonstrated within the laboratory [1,2] and, in some cases, integrated with fuel cell systems. The R&D community will continue to develop these technologies for an ever-increasing market of fuel cell applications, including forklift, light-cart, APU, and automotive systems. Solid-state hydrogen storage materials are designed to readily release, and in some cases react with, diatomic hydrogen. This favorable behavior is often accomplished with morphology design (high surface area), catalytic additives (titanium, for example), and high-purity metals (such as aluminum, lanthanum, or alkali metals). These favorable hydrogen reaction characteristics often have a related, less desirable effect: sensitivity and reactivity during exposure to ambient contamination and out-of-design environmental conditions. Accident scenarios resulting in this less favorable reaction behavior must also be managed by the system developer to enable technology deployment and market acceptance. Two important accident scenarios are identified through hazards and risk analysis methods. The first involves a breach in plumbing or tank resulting from a collision. The possible consequence of this scenario is analyzed through experimentally based chemical kinetic and transport modeling of metal hydride beds. An advancing reaction front between the metal hydride and ambient air is observed to proceed throughout the bed. This exothermic reaction front can result in loss of structural integrity of the containing vessel and lead to unfavorable overheating events.
The second important accident scenario considered is a pool fire or impinging fire resulting from a collision involving a hydrocarbon- or hydrogen-fueled vehicle. The possible consequence of this scenario is analyzed with experimentally based numerical simulation of a metal hydride system. During a fire scenario, the hydrogen storage material will rapidly decompose and release hydrogen at high pressure. Accident scenarios initiated by a vehicular collision leading to a pipe break or catastrophic failure of the hydride vessel, and by an external pool fire with flame engulfing the storage vessel, are developed using probabilistic modeling. The chronology of events occurring subsequent to each accident initiator is detailed in the probabilistic models. Technology developed to manage these scenarios includes: (1) the use of polymer supports to reduce the extent and rate of reaction with air and water, and (2) thermal radiation shielding. The polymer-supported materials are demonstrated to mitigate unwanted reaction while not impacting the hydrogen storage performance of the material. To mitigate the consequence of fire engulfment or impingement, thermal radiation shielding is considered to slow the rate of decomposition and delay the potential for loss of containment. In this paper we explore the use of these important mitigation technologies for a variety of accident scenarios.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Experimental results from nested cylindrical wire arrays (NCWA) consisting of brass (70% Cu, 30% Zn) wires in one array and Al (5056, 5% Mg) wires in the other, obtained on the UNR Zebra generator at 1.0 MA current, are compared and analyzed. Specifically, radiative properties of K-shell Al and Mg ions and L-shell Cu and Zn ions are compared as functions of the placement of the brass and Al wires on the inner and outer arrays. A full diagnostic set, which included more than ten different beam-lines, was implemented. Identical loads were fielded to allow the timing of time-gated pinhole and x-ray spectrometers to be shifted to obtain a more complete understanding of the evolution of plasma parameters over the x-ray pulse. The importance of studying NCWAs with different wire materials is discussed.
The analysis of implosions of Cu and Ag planar wire array (PWA) loads recently performed at the enhanced 1.7 MA Zebra generator at UNR is presented. Experiments were performed with a Load Current Multiplier and a 1 cm anode-cathode gap (half that of the standard 1 MA mode). A full diagnostic set included more than ten different beam-lines, with the major focus on time-gated and time-integrated x-ray imaging and spectra, total radiation yields, and fast, filtered x-ray detector data. In particular, the experimental results for a double PWA load consisting of twelve 10 µm Cu wires in each row (total mass M ≈ 175 µg) and a much heavier single PWA load consisting of ten 30 µm Ag wires (M ≈ 750 µg) were analyzed using a set of theoretical codes. The effects of both the decreased anode-cathode gap and the increased current on the radiative properties of these loads are discussed.
A series of experiments at the Z Accelerator was performed with 40 mm and 50 mm diameter nested wire arrays to investigate the interaction of the arrays and assess their radiative characteristics. These arrays were fielded with one array as Al:Mg (either the inner or the outer array) and the other as Ni-clad Ti (the outer or inner array, respectively). In all the arrays, the outer:inner mass and radius ratio was 2:1. The wire number ratio was also 2:1 in some cases, but the Al:Mg wire number was increased in some loads. This presentation will focus on analysis of the emitted radiation (in multiple photon energy bins) and measured plasma conditions (as inferred from x-ray spectra). A discussion of what these results indicate about nested array dynamics will also be presented.
Abstract not provided.
Ionomers--polymers containing a small fraction of covalently bound ionic groups--have potential application as solid electrolytes in batteries. Understanding ion transport in ionomers is essential for such applications. Due to strong electrostatic interactions in these materials, the ions form aggregates, tending to slow counterion diffusion. A key question is how ionomer properties affect ionic aggregation and counterion dynamics on a molecular level. Recent experimental advances have allowed synthesis and extensive characterization of ionomers with a precise, constant spacing of charged groups, making them ideal for controlled measurement and more direct comparison with molecular simulation. We have used coarse-grained molecular dynamics to simulate such ionomers with regularly spaced charged beads. The charged beads are placed either in the polymer backbone or as pendants on the backbone. The polymers, along with the counterions, are simulated at melt densities. The ionic aggregate structure was determined as a function of the dielectric constant, spacing of the charged beads on the polymer, and the sizes of the charged beads and counterions. The pendant ion architecture can yield qualitatively different aggregate structures from those of the linear polymers. For small pendant ions, roughly spherical aggregates have been found above the glass transition temperature. The implications of these aggregates for ion diffusion will be discussed.
The level of energy deposition on future inertial fusion energy (IFE) reactor first walls, particularly in direct-drive scenarios, makes the ultimate survivability of such wall materials a challenge. We investigate the survivability of three-dimensional (3-D) dendritic materials fabricated by chemical vapor deposition (CVD) and exposed to repeated intense helium beam pulses on the RHEPP-1 facility at Sandia National Laboratories. Prior exposures of flat materials have led to what appears to be unacceptable mass loss on timescales insufficient for economical reactor operation. Two potential advantages of such dendritic materials are (a) increased effective surface area, resulting in lowered fluences to most of the wall material surface, and (b) improved materials properties for such micro-engineered metals compared to bulk processing. Several dendritic fabrications made with either tungsten or tungsten with rhenium show little or no morphology change after up to 800 pulses of 1 MeV helium at reactor-level thermal wall loading. Since the rhenium is added in a thin surface layer, its use does not appear to raise environmental concerns for fusion designs.
Mitigating and overcoming environmental problems brought about by the current worldwide fossil fuel-based energy infrastructure requires the creation of innovative alternatives. In particular, such alternatives must actively contribute to the reduction of carbon emissions via carbon recycling and a shift to the use of renewable sources of energy. Carbon-neutral transformation of biomass to liquid fuels is one such alternative, but it is limited by the inherently low energy efficiency of photosynthesis with regard to the net production of biomass. Researchers have thus been looking for alternative, energy-efficient chemical routes inspired by the biological transformation of solar power, CO2, and H2O into useful chemicals, specifically liquid fuels. Methanol has been the focus of a fair number of publications for its versatility as a fuel and its use as an intermediate chemical in the synthesis of many compounds. In some of these studies (e.g., Joo et al. (2004), Mignard and Pritchard (2006), Galindo and Badr (2007)), CO2 and renewable H2 (e.g., electrolytic H2) are considered as the raw materials for the production of methanol and other liquid fuels. Several basic process flow diagrams have been proposed. One of the most promising is the so-called CAMERE process (Joo et al., 1999). In this process, carbon dioxide and renewable hydrogen are fed to a first reactor and transformed according to:

H2 + CO2 <=> H2O + CO   (Reverse Water Gas Shift, RWGS)

After eliminating the produced water, the resulting H2/CO2/CO mixture is then fed to a second reactor, where it is converted to methanol according to:

CO2 + 3 H2 <=> CH3OH + H2O   (Methanol Synthesis, MS)
CO + H2O <=> CO2 + H2   (Water Gas Shift, WGS)

The approach here is to produce enough CO to eliminate, via WGS, the water produced by MS. This is beneficial since water has been shown to block active sites in the MS catalyst.
In this work a different process alternative is presented: one that combines the CO2 recycling of the CAMERE process with the use of solar energy implicit in the biomass-based processes, but in this case with the potentially high energy efficiency of thermochemical transformations.
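The CO-balancing idea behind the CAMERE scheme above can be checked with a simple stoichiometric sketch. This is an idealized mole balance assuming complete conversion and no side reactions; the function names are illustrative, not from the cited work.

```python
# MS  : CO2 + 3 H2 <=> CH3OH + H2O   (produces 1 mol H2O per mol CH3OH)
# WGS : CO  + H2O  <=> CO2  + H2     (consumes 1 mol H2O per mol CO)
# If both reactions ran to completion, removing the MS-produced water
# would require one mole of RWGS-produced CO per mole of methanol.

def co_required(methanol_mol: float) -> float:
    """Moles of CO needed so WGS removes all MS-produced water
    (idealized: complete conversion, no side reactions)."""
    water_from_ms = methanol_mol  # 1:1 from MS stoichiometry
    return water_from_ms          # 1:1 from WGS stoichiometry

def net_h2_demand(methanol_mol: float) -> float:
    """Net H2 consumed: MS uses 3 mol H2 per mol CH3OH, while WGS
    returns 1 mol H2 per mol CO it consumes."""
    return 3 * methanol_mol - co_required(methanol_mol)

print(co_required(1.0))    # 1.0 mol CO per mol CH3OH
print(net_h2_demand(1.0))  # 2.0 mol H2 net, i.e. CO + 2 H2 -> CH3OH overall
```

The net balance recovers the familiar syngas-to-methanol stoichiometry CO + 2 H2 -> CH3OH, which is why the combined scheme avoids a net water product in the methanol reactor.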
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Finite elements for shell structures have been investigated extensively, with numerous formulations offered in the literature. These elements are vital in modern computational solid mechanics due to their computational efficiency and accuracy for thin and moderately thick shell structures, allowing larger and more comprehensive (e.g. multi-scale and multi-physics) simulations. Problems now of interest in the research and development community are routinely pushing our computational capabilities, and thus shell finite elements are being used to deliver efficient yet high quality computations. Much work in the literature is devoted to the formulation of shell elements and their numerical accuracy, but there is little published work on the computational characterization and comparison of shell elements for modern solid mechanics problems. The present study is a comparison of three disparate shell element formulations in the Sandia National Laboratories massively parallel Sierra Solid Mechanics code. A constant membrane and bending stress shell element (Key and Hoff, 1995), a thick shell hex element (Key et al., 2004) and a 7-parameter shell element (Buechter et al., 1994) are available in Sierra Solid Mechanics for explicit transient dynamic, implicit transient dynamic and quasistatic calculations. Herein these three elements are applied to a set of canonical dynamic and quasistatic problems, and their numerical accuracy, computational efficiency and scalability are investigated. The results show the trade-off between the relative inefficiency and improved accuracy of the latter two high quality element types when compared with the highly optimized and more widely used constant membrane and bending stress shell element.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Optics Express
Abstract not provided.
Abstract not provided.
Abstract not provided.
Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cutsize of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cutsize or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade-off between the two objectives and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.
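The two competing objectives in the abstract above (small cut size, large generation/consumption mismatch per part) can be evaluated for any candidate bisection. The helper below is a hypothetical illustration of the problem setup, not the paper's algorithm: generators carry positive volumes, consumers negative ones.

```python
def evaluate_bisection(edges, volume, part_a):
    """edges: iterable of (u, v) pairs; volume: dict vertex -> signed
    generation (+) / consumption (-) volume; part_a: set of vertices on
    one side of the bisection. Returns (cut size, mismatch of part A,
    mismatch of part B)."""
    cut_size = sum(1 for u, v in edges if (u in part_a) != (v in part_a))
    mismatch_a = sum(volume[v] for v in part_a)
    mismatch_b = sum(volume[v] for v in volume if v not in part_a)
    return cut_size, mismatch_a, mismatch_b

# Toy 4-cycle: two generators (+1) and two consumers (-1). Splitting the
# generators from the consumers gives a large mismatch at cut size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
volume = {0: 1.0, 1: 1.0, 2: -1.0, 3: -1.0}
print(evaluate_bisection(edges, volume, {0, 1}))  # (2, 2.0, -2.0)
```

A vulnerability analysis would search for bisections where both parts have a large absolute mismatch while the cut size stays small, since few line removals then strand a large supply/demand imbalance.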
Many problems of practical importance involve ductile materials that undergo very large strains, in many cases to the point of failure. Examples include structures subjected to impact or blast loads, energy-absorbing devices subjected to significant crushing, and cold-forming manufacturing processes, among others. One of the most fundamental pieces of data required in the analysis of this kind of problem is the fit of the uniaxial stress-strain curve of the material. A series of experiments in which mild steel plates were punctured with a conical indenter provided the motivation to characterize the true stress-strain curve of this material, which displayed significant ductility, up to the point of failure. The hardening curve was obtained using a finite element model of the tensile specimens that included a geometric imperfection in the form of a small reduction in specimen width to initiate necking. An automated procedure iteratively adjusted the true stress-strain curve fit used as input until the predicted engineering stress-strain curve matched experimental measurements. Whereas the fitting is relatively trivial prior to reaching the ultimate engineering stress, the fit of the softening part of the engineering stress-strain curve is highly dependent on finite element parameters such as element formulation and initial geometry. Results from two hexahedral elements are compared: the first is a standard, under-integrated, uniform-strain element with hourglass control; the second is a modified selectively-reduced-integration element. In addition, the effects of element size, aspect ratio, and hourglass control characteristics are investigated. The effect of adaptively refining the mesh based on the aspect ratio of the deformed elements is also considered. The results of the study indicate that for the plate puncture problem, characterizing the material with the same element formulation and size as used in the plate models is beneficial.
On the other hand, using different element formulations, sizes or initial aspect ratios can lead to unreliable results.
Several commercial computational fluid dynamics (CFD) codes now have the capability to analyze Eulerian two-phase flow using the Rohsenow nucleate boiling model. Analysis of boiling due to one-sided heating in plasma-facing components (PFCs) is now receiving attention during the design of water-cooled first wall panels for ITER that may encounter heat fluxes as high as 5 MW/m2. Empirical thermal-hydraulic design correlations developed for long fission reactor channels are not reliable when applied to PFCs because fully developed flow conditions seldom exist. Star-CCM+ is one of the commercial CFD codes that can model two-phase flows. Like others, it implements the RPI model for nucleate boiling, but it also seamlessly transitions to a volume-of-fluid model for film boiling. By benchmarking the results of our 3-D models against recent experiments on critical heat flux for both smooth rectangular channels and hypervapotrons, we determined the six unique input parameters that accurately characterize the boiling physics for ITER flow conditions under a wide range of absorbed heat flux. We can now exploit this capability to predict the onset of critical heat flux in these components. In addition, the results clearly illustrate the production and transport of vapor and its effect on heat transfer in PFCs from nucleate boiling through the transition to film boiling. This article describes the boiling physics implemented in CCM+ and compares the computational results to the benchmark experiments carried out independently in the United States and Russia. Temperature distributions agreed to within 10 °C for a wide range of heat fluxes from 3 MW/m2 to 10 MW/m2 and flow velocities from 1 m/s to 10 m/s in these devices.
Although the analysis is incapable of capturing the stochastic nature of critical heat flux (i.e., its time and location may depend on a local material defect or turbulence phenomenon), it is highly reliable in determining the heat flux at which boiling instabilities begin to dominate. Beyond this threshold, higher heat fluxes lead to the boiling crisis and eventual burnout. This predictive capability is essential in determining the critical heat flux margin for the design of complex 3-D components.
Abstract not provided.
Abstract not provided.
Microsystems packaging involves physically placing and electrically interconnecting a microelectronic device in a package that protects it from and interfaces it with the outside world. When the device requires a hermetic or controlled microenvironment, it is typically sealed within a cavity in the package. Sealing involves placing and attaching a lid, typically by welding, brazing, or soldering. Materials selection (e.g., the epoxy die attach), and process control (e.g., the epoxy curing temperature and time) are critical for reproducible and reliable microsystems packaging. This paper will review some hermetic and controlled microenvironment packaging at Sandia Labs, and will discuss materials, processes, and equipment used to package environmentally sensitive microelectronics (e.g., MEMS and sensors).
Abstract not provided.
These slides describe different strategies for installing Python software. Although I am a big fan of Python software development, robust strategies for software installation remain a challenge. This talk describes several different installation scenarios. The Good: the user has administrative privileges - installing on Windows with an installer executable, installing with a Linux application utility, installing a Python package from the PyPI repository, and installing a Python package from source. The Bad: the user does not have administrative privileges - using a virtual environment to isolate package installations, and using an installer executable on Windows with a virtual environment. The Ugly: the user needs to install an extension package from source - installing a Python extension package from source, and PyCoinInstall - managing builds for Python extension packages. The last item, referring to PyCoinInstall, describes a utility being developed for the COIN-OR software used within the operations research community. COIN-OR includes a variety of Python and C++ software packages, and this script uses a simple plug-in system to support the management of package builds and installation.
It is well known that the continuous Galerkin method (in its standard form) is not locally conservative, yet many stabilized methods are constructed by augmenting the standard Galerkin weak form. In particular, the Variational Multiscale (VMS) method has achieved popularity for combating numerical instabilities that arise for mixed formulations that do not otherwise satisfy the LBB condition. Among alternative methods that satisfy local and global conservation, many employ Raviart-Thomas function spaces. The lowest order Raviart-Thomas finite element formulation (RT0) consists of evaluating fluxes over the midpoint of element edges and constant pressures within the element. Although the RT0 element poses many advantages, it has only been shown viable for triangular or tetrahedral elements (quadrilateral variants of this method do not pass the patch test). In the context of heterogenous materials, both of these methods have been used to model the mixed form of the Darcy equation. This work aims, in a comparative fashion, to evaluate the strengths and weaknesses of either approach for modeling Darcy flow for problems with highly varying material permeabilities and predominantly open flow boundary conditions. Such problems include carbon sequestration and enhanced oil recovery simulations for which the far-field boundary is typically described with some type of pressure boundary condition. We intend to show the degree to which the VMS formulation violates local mass conservation for these types of problems and compare the performance of the VMS and RT0 methods at boundaries between disparate permeabilities.
We present results of molecular dynamics simulations of the flocculation of model algae particles under shear. We study the evolution of the cluster size distribution as well as the steady-state distribution as a function of shear rate and algae interaction parameters. Algal interactions are modeled through a DLVO-type potential, a combination of a hard-sphere colloid potential (Everaers) and a Yukawa/colloid electrostatic potential. The effect of hydrodynamic interactions on aggregation is explored. Cluster structure is determined from the algae-algae radial distribution function as well as the structure factor. DLVO parameters, including size, salt concentration, surface potential, and initial volume fraction, are varied to model different species of algae under a variety of environmental conditions.
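The salt-concentration dependence mentioned above can be illustrated with a schematic DLVO-style pair potential: a screened-Coulomb (Yukawa) repulsion plus a short-range attraction. This is NOT the Everaers colloid potential used in the simulations; the functional form and parameters below are illustrative assumptions only.

```python
import math

def dlvo(r, A=2.0, kappa=1.5, B=1.0, sigma=1.0):
    """Schematic DLVO-type pair energy at center-to-center distance r.
    A: electrostatic prefactor (set by surface potential);
    kappa: inverse Debye screening length (grows with salt concentration);
    B, sigma: depth and range of the attractive van der Waals-like tail."""
    electrostatic = A * math.exp(-kappa * r) / r  # Yukawa repulsion
    attraction = -B * (sigma / r) ** 6            # vdW-like tail
    return electrostatic + attraction

# Increasing salt (larger kappa) screens the repulsive barrier, which is
# the qualitative mechanism favoring aggregation/flocculation:
print(dlvo(1.2, kappa=0.5) > dlvo(1.2, kappa=3.0))  # True
```

Sweeping kappa, A, and the particle size in such a potential is the kind of parameter variation the abstract describes for modeling different algae species and water chemistries.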
Hydrogen has been proposed as an ideal carrier for the storage, transport, and conversion of energy. However, its storage is a key problem in the development of a hydrogen economy. Metal hydrides hold promise for effectively storing hydrogen and have therefore been the focus of intensive research. The chemical bonds in light metal hydrides are predominantly covalent, polar covalent, or ionic. These bonds are often strong, resulting in high thermodynamic stability and low equilibrium hydrogen pressures. In addition, the directionality of the covalent/ionic bonds in these systems leads to large activation barriers for atomic motion, resulting in slow hydrogen sorption kinetics and limited reversibility. One method for enhancing reaction kinetics is to reduce the size of the metal hydride particles to the nanoscale, exploiting the short diffusion distances and constrained environment of nanoscale hydride materials. Mechanical ball milling is widely used to reduce the particle size of metal hydrides; however, the microscopic mechanisms responsible for the resulting changes in kinetics are still being investigated. The objective of this work is to use metal-organic frameworks (MOFs) as templates for the synthesis of nanoscale NaAlH4 particles, to measure the H2 desorption kinetics and thermodynamics, and to determine quantitative differences from the corresponding bulk properties. MOFs offer an attractive alternative to traditional scaffolds because their ordered crystalline lattice provides a highly controlled and understandable environment. The present work demonstrates that MOFs are stable hosts for metal hydrides and their reactive precursors, and that they can be used as templates to form metal hydride nanoclusters on the scale of their pores (1-2 nm). We find that, using the MOF HKUST-1 as a template, NaAlH4 nanoclusters as small as 8 formula units can be synthesized inside the pores.
A detailed picture of the hydrogen desorption is obtained using a simultaneous thermogravimetric modulated-beam mass spectrometry instrument. The hydrogen desorption behavior of NaAlH4 nanoclusters is found to be very different from that of bulk NaAlH4. Bulk NaAlH4 desorbs about 70% of its hydrogen at approximately 250 °C. In contrast, confinement of NaAlH4 within the MOF pores dramatically increases the rate of H2 desorption at lower temperatures: about 80% of the total H2 desorbed from MOF-confined NaAlH4 is observed between 70 and 155 °C. In addition to HKUST-1, we find that other MOFs (e.g., MIL-68 and MOF-5) can be infiltrated with hydrides (LiAlH4, LiBH4) or hydride precursors (Mg(C4H9)2 and LiC2H5) without degradation. By varying the pore dimensions, metal centers, and linkers of MOFs, it will be possible to determine whether the destabilization of metal hydrides is dictated only by the size of the metal hydride clusters, by their local environment in a confined space, or by catalytic effects of the framework.
Light-activated polymers, which are capable of mechanically responding to light, promise exciting, innovative, and unique material capabilities. Such materials include: photo-radical-mediated cleavage and reformation of the polymer backbone in cross-linked elastomers, resulting in local stress relaxation; photo-switching cross-links in shape memory polymers; and photo-isomerization of azobenzene groups contained in liquid crystal elastomers. In this paper, using our recent material model that couples the multiphysical processes involved in light-activated polymers, we demonstrate that a variety of patterns can be created on light-activated polymer thin films by coupling mechanical deformation with light irradiation. The polymer thin film is first stretched uniaxially or biaxially, and light is then irradiated on its surface. After irradiation, removal of the external load partially recovers the initial stretching of the thin film and induces patterns. The geometry of the patterns can be controlled by a variety of parameters, such as the initial stretching and the light intensity. Photo-patterning with light-activated polymers therefore offers a novel way to create surface patterns.
We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get a tradeoff between approximation factor and running time. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.
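The "trivial 0.5 factor" mentioned above comes from random assignment: placing each vertex on a uniformly random side cuts every edge with probability 1/2, so the expected cut is half of all edges and hence at least half the optimum. The sketch below verifies that expectation by exhaustive enumeration on a toy graph; it is background for context, not the paper's random-walk algorithm.

```python
from itertools import product

def avg_cut(n, edges):
    """Average cut size over all 2^n vertex bipartitions of an n-vertex
    graph; by linearity of expectation this equals len(edges) / 2."""
    total = 0
    for side in product((0, 1), repeat=n):
        total += sum(1 for u, v in edges if side[u] != side[v])
    return total / 2 ** n

# 4-cycle plus one chord: 5 edges, so the random-assignment expectation
# is 2.5, i.e. a 0.5-approximation in expectation.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(avg_cut(4, edges))  # 2.5
```

Beating this baseline by any constant with a combinatorial (non-SDP) method is exactly the contribution the abstract claims.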
Let C be a depth-3 circuit with n variables, degree d, and top fanin k (a ΣΠΣ(k, d, n) circuit) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n) · d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank-based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities: we design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.
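For context, the easy *randomized* version of blackbox PIT follows from the Schwartz-Zippel lemma: a nonzero polynomial of degree d vanishes at a random point of S^n with probability at most d/|S|. The sketch below illustrates that randomized test; the paper's contribution is removing the randomness for ΣΠΣ(k, d, n) circuits, which this sketch does not attempt.

```python
import random

def randomized_pit(blackbox, n, trials=20, field_size=10**9 + 7, seed=1):
    """Schwartz-Zippel style blackbox test: evaluate at random points mod a
    large prime. Returns True if p appears identically zero, False if a
    nonzero evaluation is witnessed."""
    rng = random.Random(seed)
    for _ in range(trials):
        point = [rng.randrange(field_size) for _ in range(n)]
        if blackbox(point) % field_size != 0:
            return False  # witnessed nonzero => certainly not identically zero
    return True  # identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; (x + y)^2 - x^2 is not.
zero = lambda p: (p[0] + p[1]) ** 2 - (p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2)
nonzero = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2
print(randomized_pit(zero, 2))     # True
print(randomized_pit(nonzero, 2))  # False
```

Note the asymmetry: a "not zero" answer is always correct, while a "zero" answer carries a small error probability; derandomizing this decision is precisely what makes deterministic blackbox PIT hard.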
Abstract not provided.
Journal of Magnetism and Magnetic Materials
Abstract not provided.
Nano Letters
Abstract not provided.
Abstract not provided.
Nano Letters
Abstract not provided.
Journal of the American Chemical Society
Abstract not provided.
Abstract not provided.
Capillary pinch-off calculations carried out with the Many-Body Dissipative Particle Dynamics (MDPD) method are compared with a two-phase continuum discretization of hydrodynamics. The MDPD method provides a mesoscale description of the liquid-gas interface: molecules can be thought of as grouped into particles with modeled Brownian and dissipative effects. No liquid-gas interface is explicitly defined; surface properties, such as surface tension, result from the MDPD interaction parameters. In side-by-side comparisons, the behavior of the MDPD liquid is demonstrated to replicate the macroscale behavior (thin-interface assumption) calculated by the Combined Level Set-Volume of Fluid (CLSVOF) method. For instance, in both the continuum and mesoscale discretizations the most unstable wavelength perturbation leads to pinch-off, whereas a smaller wavelength-to-diameter ratio, as expected, does not. The behavior of the virial pressure in MDPD will be discussed in relation to the hydrodynamic capillary pressure that results from the thin-interface assumption.
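The wavelength-to-diameter criterion invoked above is the classical Rayleigh-Plateau result: an axisymmetric perturbation on a liquid cylinder of diameter D grows (leading to pinch-off) only when its wavelength exceeds the circumference πD, i.e. when the wavelength-to-diameter ratio exceeds π. A minimal check of that thin-interface criterion:

```python
import math

def is_unstable(wavelength, diameter):
    """Rayleigh-Plateau criterion for a liquid cylinder: a perturbation
    grows only if its wavelength exceeds the circumference pi * D."""
    return wavelength / diameter > math.pi

print(is_unstable(4.0, 1.0))  # True: ratio 4 > pi, perturbation grows
print(is_unstable(2.0, 1.0))  # False: ratio 2 < pi, no pinch-off
```

This is the macroscale prediction the MDPD runs are benchmarked against: perturbations above the threshold pinch off in both descriptions, while sub-threshold ones do not.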
Science
Abstract not provided.
Nature Communications
Abstract not provided.
Abstract not provided.
The use of nanowires for thermoelectric energy generation has gained momentum in recent years as an approach to improve the figure of merit (ZT), due in part to increased phonon scattering at the boundary, which reduces thermal conductivity while leaving electrical conductivity largely unaffected. Silicon-germanium (SiGe) alloy nanowires are promising candidates to further reduce thermal conductivity by phonon scattering because bulk SiGe alloys already have thermal conductivity comparable to reported Si nanowires. In this work, we show that thermal and electrical conductivity can be measured on the same single nanowire, eliminating the uncertainties in ZT estimation that arise from measuring thermal conduction on one set of wires and electrical conduction on another. To do so, we use nanomanipulation to place vapor-liquid-solid boron-doped SiGe alloy nanowires on predefined surface structures. Furthermore, we developed a contact-annealing technique to achieve negligible electrical contact resistance for the placed nanowires, which allows us, for the first time, to measure electrical and thermal properties on the same device. We observe that the thermal conductivity of SiGe nanowires is dominated by alloy scattering for nanowires down to 100 nm in diameter over the temperature range 40-300 K. The electronic contribution to the thermal conductivity, as estimated from the Wiedemann-Franz relationship, is about one order of magnitude smaller than the measured thermal conductivity, indicating that phonons carry a large portion of the heat even at such small dimensions.
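The Wiedemann-Franz estimate used above is κ_e = L·σ·T, with the Sommerfeld value of the Lorenz number L ≈ 2.44×10⁻⁸ W·Ω/K². The conductivity value in the example below is illustrative, not measured data from this work.

```python
LORENZ = 2.44e-8  # Sommerfeld Lorenz number, W*Ohm/K^2

def electronic_thermal_conductivity(sigma, temperature):
    """Wiedemann-Franz estimate kappa_e = L * sigma * T.
    sigma: electrical conductivity in S/m; temperature in K."""
    return LORENZ * sigma * temperature

# e.g. an illustrative sigma = 5e4 S/m at 300 K:
print(round(electronic_thermal_conductivity(5e4, 300.0), 6))  # 0.366 W/(m*K)
```

Comparing such a κ_e against a measured total thermal conductivity an order of magnitude larger is how the abstract concludes that phonons carry most of the heat.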
Abstract not provided.
Scripta Materialia
Abstract not provided.
Authentication between mobile devices in ad hoc computing environments is a challenging problem. Without pre-shared knowledge, existing applications rely on additional communication methods, such as out-of-band or location-limited channels, for device authentication. However, no formal analysis has been conducted to determine whether out-of-band channels are actually necessary. We answer this question through formal analysis, using BAN logic to show that device authentication over a single channel is not possible.
Abstract not provided.
Abstract not provided.
Physics of Fluids
Abstract not provided.
Abstract not provided.
Journal of Fluid Mechanics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Recent work at Sandia National Laboratories has focused on preparing strong predictive models for the simulation of ductile failure in metals. The focus of this talk is the development of engineering-ready models that use a phenomenological approach to represent ductile fracture processes. As such, an empirical tearing parameter that accounts for mean-stress effects along the crack front is presented. A critical value of the tearing parameter is used in finite element calculations as the criterion for crack growth. Regularization is achieved with three different methods, and the results are compared. In the first method, upon reaching the critical tearing value, the stress within a solid element is decayed by uniformly shrinking the yield surface over a user-specified amount of strain; this yields mesh-size-dependent results. In the second method, cohesive surface elements are inserted using an automatic remeshing technique. In the third method, strain-localization elements are inserted with the automated remeshing.
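The first regularization method described, decaying element stress by uniformly shrinking the yield surface over a specified strain increment, can be sketched in a few lines. The function name and numerical values below are illustrative assumptions, not the actual constitutive implementation:

```python
def decayed_yield_stress(sigma_y0, strain_since_failure, decay_strain=0.05):
    """Uniformly shrink the yield surface after the tearing criterion is met.

    sigma_y0: yield stress at the instant the critical tearing value is reached
    strain_since_failure: accumulated strain since that instant
    decay_strain: user-specified strain over which stress decays to zero
    """
    scale = max(0.0, 1.0 - strain_since_failure / decay_strain)
    return sigma_y0 * scale

assert decayed_yield_stress(400.0, 0.0) == 400.0    # just reached criterion
assert decayed_yield_stress(400.0, 0.025) == 200.0  # halfway through decay
assert decayed_yield_stress(400.0, 0.1) == 0.0      # fully decayed element
```

Because the decay is parameterized in strain rather than in a length scale, the dissipated energy varies with element size, which is consistent with the mesh-size dependence the abstract reports for this method.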
Abstract not provided.
Abstract not provided.
The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in the materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity, due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy against the computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first-principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems will be discussed.
This project demonstrated the feasibility of a 'pump-probe' optical detection method for standoff sensing of chemicals on surfaces. Such a measurement uses two optical pulses - one to remove the analyte (or a fragment of it) from the surface and a second to sense the removed material. As a particular example, this project targeted photofragmentation laser-induced fluorescence (PF-LIF) to detect surface deposits of low-volatility chemical warfare agents (LVAs). Feasibility was demonstrated for four agent surrogates on eight realistic surfaces. Sensitivity was established for measurements on concrete and aluminum. Extrapolations were made to demonstrate relevance to the needs of outside users. Several aspects of the surface PF-LIF physical mechanism were investigated and compared to that of vapor-phase measurements. The use of PF-LIF as a rapid screening tool to 'cue' more specific sensors was recommended. Its sensitivity was compared to that of Raman spectroscopy, which is both a potential 'confirmer' of PF-LIF 'hits' and a competing screening technology.
Risk, Hazards & Crisis in Public Policy
Abstract not provided.
Abstract not provided.
Significant challenges exist for achieving peak, or even consistent, levels of performance when using IO systems at scale. They stem from sharing IO system resources across the processes of single large-scale applications and/or multiple simultaneous programs, causing internal and external interference, which, in turn, causes substantial reductions in IO performance. This paper presents measurements of interference effects for two different file systems at multiple supercomputing sites. These measurements motivate a 'managed' IO approach that uses adaptive algorithms to vary the IO system workload based on current load levels and usage areas. An implementation of these methods, deployed for the shared, general scratch storage system on Oak Ridge National Laboratory machines, achieves higher overall performance and less variability both in a typical usage environment and with artificially introduced levels of 'noise'. The latter serves to clearly delineate and illustrate the potential problems arising from shared system usage and the advantages derived from actively managing it.
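One simple form an adaptive "managed IO" policy could take is to scale the offered writer concurrency up while observed bandwidth holds or improves, and back off when interference degrades it. The sketch below is an illustrative additive-increase/multiplicative-decrease scheme of this kind, not the paper's actual implementation; all names and thresholds are assumptions:

```python
def adapt_writers(current_writers, bw_now, bw_prev,
                  min_writers=1, max_writers=64):
    """Additive-increase / multiplicative-decrease on concurrent IO writers."""
    if bw_now >= bw_prev:
        # Throughput holding or improving: cautiously probe with one more writer.
        return min(current_writers + 1, max_writers)
    # Throughput dropped: assume contention on the shared file system, back off.
    return max(current_writers // 2, min_writers)

# Simulated bandwidth observations (GB/s) over successive IO phases:
observations = [4.0, 4.5, 4.7, 3.1, 3.3]
writers, bw_prev = 8, observations[0]
for bw in observations[1:]:
    writers = adapt_writers(writers, bw, bw_prev)
    bw_prev = bw
print(writers)  # -> 6: grew to 10, halved to 5 on the dip, then probed up
```

The multiplicative back-off makes the policy react quickly to externally induced interference while the additive probe recovers bandwidth gradually once contention subsides.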
Abstract not provided.
Recent experiments on the refurbished Z machine were conducted using large-diameter stainless steel arrays, which produced x-ray powers of 260 TW. Follow-up experiments were then conducted using tungsten wires with approximately the same total mass, with the hypothesis that the total x-ray power would increase. In the large-diameter tungsten experiments, the x-ray power averaged over 300 TW and the total x-ray energy was greater than 2 MJ. The different analysis techniques for inferring the x-ray power will be described in detail.
Abstract not provided.
Abstract not provided.
The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.
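A near-neighbor selection scheme of the kind compared above restricts the collision-partner pool to the fraction of the cell population closest to the first particle, which reduces the mean collision separation relative to selecting uniformly from the whole cell. The following is a minimal illustrative sketch, not Bird's DSMC code; the fraction parameter is an assumption:

```python
import math
import random

def select_collision_partner(positions, i, neighbor_fraction=0.25):
    """Pick a collision partner for particle i from the nearest fraction
    of the other particles in the cell (a 'near-neighbor' scheme)."""
    others = [j for j in range(len(positions)) if j != i]
    others.sort(key=lambda j: math.dist(positions[i], positions[j]))
    pool_size = max(1, int(neighbor_fraction * len(others)))
    return random.choice(others[:pool_size])

random.seed(0)
cell = [(random.random(), random.random(), random.random()) for _ in range(20)]
partner = select_collision_partner(cell, 0)
# partner is drawn from the nearest 25% of the cell population around particle 0
```

Setting `neighbor_fraction` to cover a single particle recovers a nearest-neighbor scheme, while a fraction of one recovers standard whole-cell selection, matching the spectrum of schemes the abstract evaluates.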
The peridynamic theory is an extension of traditional solid mechanics that treats discontinuous media, including the evolution of discontinuities due to fracture, on the same mathematical basis as classically smooth media. A recent advance in the linearized peridynamic theory permits the reduction of the number of degrees of freedom modeled within a body. Under equilibrium conditions, this coarse graining method exactly reproduces the internal forces on the coarsened degrees of freedom, including the effect of the omitted material that is no longer explicitly modeled. The method applies to heterogeneous as well as homogeneous media and accounts for defects in the material. The coarse graining procedure can be repeated over and over, resulting in a hierarchically coarsened description that, at each stage, continues to reproduce the exact internal forces present in the original, detailed model. Each coarsening step results in reduced computational cost. This talk will describe the new peridynamic coarsening method and show computational examples.
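For a linear model at equilibrium, exactly eliminating degrees of freedom while preserving the internal forces on the retained ones is the familiar static-condensation (Schur complement) construction; the peridynamic coarsening described in this talk plays an analogous role for nonlocal models. The sketch below illustrates the linear-algebra idea only, on an illustrative spring-chain stiffness matrix, and is not the peridynamic formulation itself:

```python
import numpy as np

def condense(K, f, keep, drop):
    """Exactly reduce K u = f to the 'keep' DOFs via a Schur complement."""
    Kkk = K[np.ix_(keep, keep)]
    Kkd = K[np.ix_(keep, drop)]
    Kdk = K[np.ix_(drop, keep)]
    Kdd = K[np.ix_(drop, drop)]
    K_red = Kkk - Kkd @ np.linalg.solve(Kdd, Kdk)
    f_red = f[keep] - Kkd @ np.linalg.solve(Kdd, f[drop])
    return K_red, f_red

# Fixed-free chain of three unit springs, end load of 1:
K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
f = np.array([0., 0., 1.])
K_red, f_red = condense(K, f, keep=[0, 2], drop=[1])

# The coarsened model reproduces the retained displacements exactly,
# including the effect of the omitted middle DOF.
assert np.allclose(np.linalg.solve(K_red, f_red), np.linalg.solve(K, f)[[0, 2]])
```

As in the hierarchical coarsening the abstract describes, `condense` can be applied repeatedly to its own output, each step shrinking the system while remaining exact at equilibrium.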
Abstract not provided.
Abstract not provided.
Abstract not provided.
Because of their penetrating power, energetic neutrons and gamma rays (~1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well-understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast-neutron and gamma-ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask composed of neutron detectors. Here we describe our prototype detector and present initial results from laboratory tests and demonstrations.
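The principle behind coded aperture imaging is that the detector records the scene convolved with the mask pattern, and correlating the recorded counts with a balanced decoding kernel recovers the source positions. The 1D toy below uses a quadratic-residue mask of length 11 to illustrate the decoding step; it is a textbook sketch, not the CANIS mask or reconstruction code:

```python
import numpy as np

P = 11                                          # prime mask length
qr = {(k * k) % P for k in range(1, P)}         # quadratic residues mod 11
mask = np.array([1 if i in qr else 0 for i in range(P)])   # 5 open elements
decode = 2 * mask - 1                           # balanced decoding kernel

def shadow(source_pos):
    """Detector counts for a point source: the cyclically shifted mask."""
    return np.roll(mask, source_pos)

def reconstruct(counts):
    """Cyclic cross-correlation of the counts with the decoding kernel."""
    return np.array([int(np.dot(counts, np.roll(decode, i))) for i in range(P)])

image = reconstruct(shadow(3))
assert int(np.argmax(image)) == 3   # peak lands at the source position
```

For this mask family the off-peak correlation is perfectly flat (here, value -1 against a peak of 5), which is why coded apertures give high efficiency without sacrificing imaging: every open element contributes counts, and decoding concentrates them at the source.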
Abstract not provided.
Abstract not provided.
A new mesh data structure is introduced for mesh processing in Application Programming Interface (API) infrastructures. This data structure uses a reduced mesh representation to handle significantly larger meshes than a full mesh representation allows. In spite of the reduced representation, each mesh entity (vertex, edge, face, and region) is represented by a unique handle, at no extra storage cost, which is a crucial requirement in most API libraries. The concept of mesh layers makes the data structure more flexible for mesh generation and mesh modification operations. This flexibility can have a favorable impact on solver-based queries in finite volume and multigrid methods. The capabilities of LBMD make it even more attractive for parallel implementations using the Message Passing Interface (MPI) or Graphics Processing Units (GPUs). The data structure is associated with a new classification method to relate mesh entities to their corresponding geometric entities. The classification technique stores the related information at the node level without introducing any ambiguities. Several examples are presented to illustrate the strengths of this new data structure.
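One common way to give every mesh entity a unique handle with no extra storage is to pack the entity type (vertex, edge, face, region) into the low bits of an integer index. The sketch below shows that general technique; the particular bit layout is an illustrative assumption, not necessarily the scheme the abstract's data structure uses:

```python
# Entity kinds, encoded in 2 bits since there are exactly four of them.
VERTEX, EDGE, FACE, REGION = 0, 1, 2, 3
TYPE_BITS = 2

def make_handle(entity_type, index):
    """Pack (type, index) into a single integer handle; no storage beyond
    the integer itself is required."""
    return (index << TYPE_BITS) | entity_type

def handle_type(handle):
    """Recover the entity type from the low bits."""
    return handle & ((1 << TYPE_BITS) - 1)

def handle_index(handle):
    """Recover the per-type index from the remaining bits."""
    return handle >> TYPE_BITS

h = make_handle(FACE, 41)
assert handle_type(h) == FACE and handle_index(h) == 41
```

Because the handle is just an integer, arrays indexed by `handle_index` serve as the only storage, which is how a reduced representation can still expose per-entity handles to API clients.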
Abstract not provided.
AASC is designing multiple-shell gas-puff loads for Z. Here we assess the influence of the load's initial gas distribution on its K-shell yield performance. Emphasis is placed on designing an optimal central-jet initial gas distribution, since it is believed to have a controlling effect on pinch stability, pinch conditions, and radiation physics. We are looking at distributions that optimize total Ar K-shell emission and high-energy (>10 keV) continuum radiation. This investigation is performed with the Mach2 MHD code with non-LTE kinetics and ray-trace-based radiation transport.
Fast z-pinches provide intense 1-10 keV photon energy radiation sources. Here, we analyze time-, space-, and spectrally-resolved ~2 keV K-shell emissions from Al (5% Mg) wire array implosions on Sandia's Z machine pulsed power driver. The stagnating plasma is modeled as three separate radial zones, and collisional-radiative modeling with radiation transport calculations is used to constrain the temperatures and densities in these regions, accounting for K-shell line opacity and Doppler effects. We discuss plasma conditions and dynamics at the onset of stagnation, and compare inferences from the atomic modeling to three-dimensional magneto-hydrodynamic simulations.
Abstract not provided.
Predicting the failure of thin-walled structures under explosive loading is a very complex task. The problem can be divided into two parts: the detonation of the explosive to produce the loading on the structure, and the structural response. First, the factors that affect the explosive loading include the size, shape, stand-off, confinement, and chemistry of the explosive. The goal of the first part of the analysis is to predict the pressure on the structure based on these factors; the hydrodynamic code CTH is used to conduct these calculations. Second, the response of the structure to the explosive loading is predicted using a detailed finite element model within the explicit analysis code Presto. Material response, to failure, must be established in the analysis to model the failure of this class of structures; validation of this behavior is also required for these analyses to be predictive for their intended use. The presentation details the validation tests used to support this program. Validation tests using explosively loaded thin flat aluminum plates were used to study all the aspects mentioned above. Experimental measurements of the pressures generated by the explosive and of the resulting plate deformations provided data for comparison against analytical predictions, including pressure-time histories and digital image correlation of the full-field plate deflections. The issues studied in the structural analysis were mesh sensitivity, strain-based failure metrics, and the coupling methodologies between the blast and structural models. The models have been successfully validated against these tests, increasing confidence in the predicted failure thresholds of complex structures, including aircraft.
IEEE SIGMETRICS PER
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia's scientific and engineering expertise in the fields of computational biology, high-performance prosthetic limbs, biodetection, and bioinformatics has been applied to specific problems at the forefront of cancer research. Molecular modeling was employed to design stable mutations of the enzyme L-asparaginase with improved selectivity for asparagine over other amino acids, with the potential for improved cancer chemotherapy. New electrospun polymer composites with improved electrical conductivity and mechanical compliance have been demonstrated, with the promise of direct interfacing between the peripheral nervous system and the control electronics of advanced prosthetics. The capture of rare circulating tumor cells has been demonstrated on a microfluidic chip produced with a versatile fabrication process capable of integration with existing lab-on-a-chip and biosensor technology. Finally, software tools have been developed to increase the calculation speed of clustered heat maps for the display of relationships in large arrays of protein data. All of these projects were carried out in collaboration with researchers at the University of Texas M. D. Anderson Cancer Center in Houston, TX.
Abstract not provided.
Abstract not provided.
Threshold stress intensity factors were measured in high-pressure hydrogen gas for a variety of low-alloy ferritic steels using both constant crack opening displacement and rising crack opening displacement procedures. The sustained-load cracking procedures are generally consistent with those in ASME Article KD-10 of Section VIII Division 3 of the Boiler and Pressure Vessel Code, which was recently published to guide design of high-pressure hydrogen vessels. Three definitions of threshold were established for the two test methods: K_THi* is the maximum applied stress intensity factor for which no crack extension was observed under constant displacement; K_THa is the stress intensity factor at the arrest position for a crack that extended under constant displacement; and K_JH is the stress intensity factor at the onset of crack extension under rising displacement. The apparent crack initiation threshold under constant displacement, K_THi*, and the crack arrest threshold, K_THa, were both found to be non-conservative due to the hydrogen exposure and crack-tip deformation histories associated with typical procedures for sustained-load cracking tests under constant displacement. In contrast, K_JH, which is measured under concurrent rising displacement and hydrogen gas exposure, provides a more conservative hydrogen-assisted fracture threshold that is relevant to structural components in which sub-critical crack extension is driven by internal hydrogen gas pressure.
This report documents the results of a Strategic Partnership (aka University Collaboration) LDRD program between Sandia National Laboratories and the University of Illinois at Urbana-Champaign. The project is titled 'Data-Driven Optimization of Dynamic Reconfigurable Systems of Systems' and was conducted during FY 2009 and FY 2010. The purpose of this study was to determine and implement ways to incorporate real-time data mining and information discovery into existing Systems of Systems (SoS) modeling capabilities. Current SoS modeling is typically conducted in an iterative manner in which replications are carried out in order to quantify variation in the simulation results. The expense of many replications for large simulations, especially when considering the need for optimization, sensitivity analysis, and uncertainty quantification, can be prohibitive. In addition, extracting useful information from the resulting large datasets is a challenging task. This work demonstrates methods of identifying trends and other forms of information in datasets that can be used in a wide range of applications, such as quantifying the strength of various inputs on outputs, identifying the sources of variation in the simulation, and potentially steering an optimization process for improved efficiency.
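One of the tasks listed, quantifying the strength of simulation inputs on outputs across replications, can be illustrated with a rank-correlation screen. The toy data, variable names, and choice of Spearman correlation below are illustrative assumptions, not the project's actual methods:

```python
import random

def rank(values):
    """Return the rank (0-based position in sorted order) of each value."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman rank correlation (no tie correction; assumes distinct values)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy replication data: x1 drives the output, x2 is an inert input.
random.seed(1)
x1 = [random.random() for _ in range(50)]
x2 = [random.random() for _ in range(50)]
y = [3.0 * a + 0.1 * random.random() for a in x1]

# Ranking |spearman| across inputs screens for the influential ones:
# spearman(x1, y) is near 1, while spearman(x2, y) is near 0.
strengths = {"x1": spearman(x1, y), "x2": spearman(x2, y)}
```

Such a cheap screen over existing replication data is one way to steer where additional expensive simulation runs are spent, in the spirit of the optimization-steering application mentioned above.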
Nanostructuring of thermoelectric materials is expected to enhance thermoelectric properties by reducing the thermal conductivity and improving the power factor relative to homogeneous bulk materials. In multiphase, nanostructured thermoelectric materials, an understanding of precipitation mechanisms and phase stability is crucial for engineering systems with optimal thermoelectric performance. In this presentation we discuss our investigations of the morphological evolution, orientation relationship, and composition of Ag₂Te precipitates in PbTe using transmission electron microscopy (TEM) and atom probe tomography (APT). Annealing in the region of two-phase equilibrium between Ag₂Te and PbTe results in the formation of monoclinic β-Ag₂Te precipitates, as determined by x-ray and electron diffraction studies. These precipitates are aligned to the PbTe matrix with an orientation relationship that aligns the Te sub-lattices in the monoclinic and rock-salt structures. This relationship is the same as we have reported earlier for β-Ag₂Te precipitates in rock-salt AgSbTe₂. Observations using TEM and APT suggest that the Ag₂Te precipitates initially form as coherent spherical precipitates which, upon coarsening, evolve into flattened semi-coherent disks along the <100> PbTe directions, consistent with theoretical predictions for elastically strained precipitates in a matrix. Our HRTEM observations show that sufficiently small precipitates are coherently embedded, while larger precipitates exhibit misfit dislocations and multiple monoclinic variants to relieve the elastic strain. Analysis of the composition of both precipitate groups using APT indicates that the larger precipitates exhibit compositions close to equilibrium, while the smaller nanoscale precipitates exhibit enhanced Pb compositions. This detailed analysis of the orientation relationship, morphology, composition, and coarsening behavior of embedded Ag₂Te precipitates may be helpful in understanding the precipitation mechanisms and microstructure of related thermoelectric materials, such as LAST.