The United States is now re-assessing its nuclear waste disposal policy and re-evaluating the option of moving away from the current once-through open fuel cycle to a closed fuel cycle. In a closed fuel cycle, used fuels will be reprocessed and useful components such as uranium or transuranics will be recovered for reuse. During this process, a variety of waste streams will be generated. Immobilizing these waste streams into appropriate waste forms for either interim storage or long-term disposal is technically challenging. Highly volatile or soluble radionuclides such as iodine (¹²⁹I) and technetium (⁹⁹Tc) are particularly problematic, because both have long half-lives and can exist as gaseous or anionic species that are highly soluble and poorly sorbed by natural materials. Under the support of Sandia National Laboratories (SNL) Laboratory-Directed Research & Development (LDRD), we have developed a suite of inorganic nanocomposite materials (SNL-NCP) that can effectively entrap various radionuclides, especially ¹²⁹I and ⁹⁹Tc. In particular, these materials have high sorption capabilities for iodine gas. After the sorption of radionuclides, these materials can be directly converted into nanostructured waste forms. This new generation of waste forms incorporates radionuclides as nano-scale inclusions in a host matrix and thus effectively relaxes the constraint of crystal structure on waste loadings. Therefore, the new waste forms have an unprecedented flexibility to accommodate a wide range of radionuclides with high waste loadings and low leaching rates. Specifically, we have developed a general route for synthesizing nanoporous metal oxides from inexpensive inorganic precursors. More than 300 materials have been synthesized and characterized with x-ray diffraction (XRD), BET surface area measurements, and transmission electron microscopy (TEM). The sorption capabilities of the synthesized materials have been quantified by using stable isotopes of I and Re as analogs to ¹²⁹I and ⁹⁹Tc. The results have confirmed our original finding that nanoporous Al oxide and its derivatives have high I sorption capabilities due to the combined effects of surface chemistry and nanopore confinement. We have developed a suite of techniques for the fixation of radionuclides in metal oxide nanopores. The key to this fixation is to chemically convert a target radionuclide into a less volatile or soluble form. We have developed a technique to convert a radionuclide-loaded nanoporous material into a durable glass-ceramic waste form through calcination. We have shown that mixing a radionuclide-loaded getter material with a Na-silicate solution can effectively seal the nanopores in the material, thus enhancing radionuclide retention during waste form formation. Our leaching tests have demonstrated the existence of an optimal vitrification temperature for the enhancement of waste form durability. Our work also indicates that silver may not be needed for I immobilization and encapsulation.
Uncertainty quantification in climate models is challenged by the prohibitive cost of the large number of model evaluations needed for sampling. Another feature that often prevents classical uncertainty analysis from being readily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO₂ forcing. In order to propagate uncertainties from model parameters to model output, we use polynomial chaos (PC) expansions to represent the maximum overturning stream function in terms of the uncertain climate sensitivity and CO₂ forcing parameters. Since the spectral methodology assumes a certain degree of smoothness, the presence of discontinuities suggests that separate PC expansions on each side of the discontinuity will lead to more accurate descriptions of the climate model output than global PC expansions. We propose a methodology that first finds a probabilistic description of the discontinuity given a number of data points. Assuming the discontinuity curve is a polynomial, the algorithm is based on Bayesian inference of its coefficients. Markov chain Monte Carlo sampling is used to obtain joint distributions for the polynomial coefficients, effectively parameterizing the distribution over all possible discontinuity curves. Next, we apply the Rosenblatt transformation to the irregular parameter domains on each side of the discontinuity. This transformation maps a space of uncertain parameters with specific probability distributions to a space of i.i.d. standard random variables where orthogonal projections can be used to obtain PC coefficients. In particular, we use uniform random variables that are compatible with PC expansions based on Legendre polynomials. The Rosenblatt transformation and the corresponding PC expansions for the model output on either side of the discontinuity are applied successively for several realizations of the discontinuity curve. The climate model output and its associated uncertainty at specific design points are then computed by taking a quadrature-based integration average over PC expansions corresponding to possible realizations of the discontinuity curve.
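As a concrete illustration of the projection step described above, the sketch below builds a one-dimensional Legendre polynomial chaos surrogate by Gauss-Legendre quadrature; the toy model function is a stand-in for the climate model output, not the stream-function data used in the study.

    import numpy as np
    from numpy.polynomial import legendre

    def pc_coefficients(model, order, nquad=32):
        """Project f(xi), xi ~ U[-1,1], onto Legendre polynomials via
        Gauss-Legendre quadrature: c_k = <f, P_k> / <P_k, P_k>."""
        nodes, weights = legendre.leggauss(nquad)
        coeffs = []
        for k in range(order + 1):
            Pk = legendre.Legendre.basis(k)(nodes)
            num = np.sum(weights * model(nodes) * Pk) / 2.0   # <f, P_k> with density 1/2
            den = 1.0 / (2 * k + 1)                           # <P_k, P_k> for U[-1,1]
            coeffs.append(num / den)
        return np.array(coeffs)

    def pc_eval(coeffs, xi):
        """Evaluate the PC surrogate at points xi in [-1,1]."""
        return sum(c * legendre.Legendre.basis(k)(xi) for k, c in enumerate(coeffs))

    model = lambda xi: np.exp(0.5 * xi)            # toy smooth stand-in for the model output
    c = pc_coefficients(model, order=6)
    print(pc_eval(c, np.array([0.3])), model(0.3)) # surrogate vs. true value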
Conventional methods for uncertainty quantification are generally challenged in the 'tails' of probability distributions. This is a particular issue for many climate observables, since extensive sampling to obtain reasonable accuracy in tail regions is especially costly in climate models. Moreover, the accuracy of spectral representations of uncertainty is weighted in favor of the more probable ranges of the underlying basis variable, which, for conventional bases, does not particularly target tail regions. Therefore, what is ideally desired is a methodology that requires only a limited number of full computational model evaluations while remaining accurate enough in the tail region. To develop such a methodology, we explore the use of surrogate models based on non-intrusive Polynomial Chaos expansions and Galerkin projection. We consider non-conventional and custom basis functions, orthogonal with respect to probability distributions that exhibit fat-tailed regions. We illustrate how the use of non-conventional basis functions and surrogate model analysis improves the accuracy of the spectral expansions in the tail regions. Finally, we demonstrate these methodologies using precipitation data from CCSM simulations.
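To make the idea of a custom basis concrete, the sketch below orthogonalizes monomials against samples from a heavier-than-Gaussian (Laplace) weight by modified Gram-Schmidt; the weight, sample size, and truncation order are illustrative choices, not the basis used with the CCSM precipitation data.

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.laplace(size=200_000)      # heavier-tailed than Gaussian (illustrative)

    def inner(p, q):
        """Monte Carlo inner product <p, q> under the sampled weight."""
        return np.mean(p(samples) * q(samples))

    def custom_basis(order):
        """Modified Gram-Schmidt on monomials x**k, orthonormal w.r.t. the samples."""
        basis = []
        for k in range(order + 1):
            p = np.polynomial.Polynomial.basis(k)
            for q in basis:                  # remove projections onto earlier polynomials
                p = p - inner(p, q) * q
            basis.append(p / np.sqrt(inner(p, p)))
        return basis

    basis = custom_basis(4)
    # Gram matrix should be (numerically) the identity under the empirical measure
    print(np.round([[inner(p, q) for q in basis] for p in basis], 3))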
Research and development of advanced reprocessing plant designs can greatly benefit from a reprocessing plant model capable of simulating transient solvent extraction chemistry. This type of model can be used to optimize the operations of a plant as well as the designs for safeguards, security, and safety. Previous work integrated a transient solvent extraction simulation module, based on the Solvent Extraction Process Having Interaction Solutes (SEPHIS) code developed at Oak Ridge National Laboratory, with the Separations and Safeguards Performance Model (SSPM) developed at Sandia National Laboratories, as a first step toward creating a more versatile design and evaluation tool. The goal of this work was to strengthen the integration by linking more variables between the two codes. The results from this integrated model show expected operational performance through plant transients. Additionally, ORIGEN source term files were integrated into the SSPM to provide concentration, radioactivity, neutron emission rate, and thermal power data for various spent fuels. These data were used to generate measurement blocks that can determine the radioactivity, neutron emission rate, or thermal power of any stream or vessel in the plant model. This work also examined how the code could be expanded to integrate other separation steps and how the results could be benchmarked against other data. Recommendations for future work will be presented.
The fluctuating kinetic energy spectrum in the region near the Richtmyer-Meshkov instability (RMI) is experimentally investigated using particle image velocimetry (PIV). The velocity field is measured at a high spatial resolution in the light gas to observe the effects of turbulence production and dissipation. It is found that the RMI acts as a source of turbulence production near the unstable interface, where energy is transferred from the scales of the perturbation to smaller scales until dissipation. The interface also has an effect on the kinetic energy spectrum farther away by means of the distorted reflected shock wave. The energy spectrum far from the interface initially has a higher energy content than that of similar experiments with a flat interface. These differences are quick to disappear as dissipation dominates the flow far from the interface.
Qubits have been demonstrated using GaAs double quantum dots (DQDs), with the qubit basis states being the singlet and triplet stationary states. Long spin decoherence times in silicon motivate translating the GaAs qubit into silicon. In the near term the goals are: (1) Develop surface-gate enhancement-mode double quantum dots (MOS & strained-Si/SiGe) to demonstrate few electrons and spin read-out, and to examine impurity-doped quantum dots as an alternative architecture; (2) Use mobility, C-V, ESR, quantum dot performance & modeling to feed back and improve upon processing, including development of atomic precision fabrication at SNL; (3) Examine integrated electronics approaches to the RF-SET; (4) Use combinations of numerical packages for multi-scale simulation of quantum dot systems (NEMO3D, EMT, TCAD, SPICE); and (5) Continue micro-architecture evaluation for different device and transport architectures.
Fixed facilities control everything they can to drive down risk: they control the environment, work processes, work pace, and workers. The transportation sector, by contrast, drives state and US highways with high kinetic energy and less-controllable risks such as: (1) other drivers (beginners, impaired, distracted, etc.); (2) other vehicles (tankers, hazmat, super-heavies); (3) road environments (bridges/tunnels/abutments/construction); and (4) degraded weather.
With the goal of studying the conversion of optical energy to electrical energy at the nanoscale, we developed and tested devices based on single-walled carbon nanotubes functionalized with azobenzene chromophores, where the chromophores serve as photoabsorbers and the nanotube as the electronic read-out. By synthesizing chromophores with specific absorption windows in the visible spectrum and anchoring them to the nanotube surface, we demonstrated the controlled detection of visible light of low intensity in narrow ranges of wavelengths. Our measurements suggested that upon photoabsorption, the chromophores isomerize to give a large change in dipole moment, changing the electrostatic environment of the nanotube. All-electron ab initio calculations were used to study the chromophore-nanotube hybrids, and show that the chromophores bind strongly to the nanotubes without disturbing the electronic structure of either species. Calculated values of the dipole moments supported the notion of dipole changes as the optical detection mechanism.
In this work, we developed a self-organizing map (SOM) technique that uses web-based text analysis to forecast when a group is undergoing a phase change. By 'phase change', we mean that an organization has fundamentally shifted attitudes or behaviors. For instance, when ice melts into water, the characteristics of the substance change. A formerly peaceful group may suddenly adopt violence, or a violent organization may unexpectedly agree to a ceasefire. SOM techniques were used to analyze text obtained from organization postings on the World Wide Web. Results suggest it may be possible to forecast phase changes and to determine whether an example of writing can be attributed to a group of interest.
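For orientation, a minimal numpy version of the SOM training loop is sketched below; the grid size, decay schedules, and the random stand-in for document feature vectors are illustrative and do not reproduce the text-processing pipeline used in this work.

    import numpy as np

    def train_som(X, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
        """Minimal self-organizing map: X is an (n_docs, n_features) array,
        e.g. term-frequency vectors of web postings; returns the weight grid."""
        rng = np.random.default_rng(seed)
        rows, cols = grid
        W = rng.random((rows, cols, X.shape[1]))
        gy, gx = np.mgrid[0:rows, 0:cols]           # grid coordinates for the neighborhood
        for t in range(iters):
            x = X[rng.integers(len(X))]             # random training document
            d = np.linalg.norm(W - x, axis=2)       # distance to every map unit
            by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            lr = lr0 * np.exp(-t / iters)           # decaying learning rate
            sigma = sigma0 * np.exp(-t / iters)     # decaying neighborhood radius
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)
        return W

    docs = np.random.default_rng(1).random((200, 50))   # placeholder document vectors
    W = train_som(docs)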
Three salt compositions for potential use in trough-based solar collectors were tested to determine their mechanical properties as a function of temperature. The mechanical properties determined were unconfined compressive strength, Young's modulus, Poisson's ratio, and indirect tensile strength. Seventeen uniaxial compression and indirect tension tests were completed. It was found that as test temperature increases, unconfined compressive strength and Young's modulus decrease for all salt types. Empirical relationships were developed quantifying the aforementioned behaviors. Poisson's ratio tends to increase with increasing temperature except for one salt type, for which there is no obvious trend. The variability in measured indirect tensile strength is large, but not atypical for this index test. The average tensile strength for all salt types tested is substantially higher than the upper range of tensile strengths for naturally occurring rock salts. Interest in raising the operating temperature of concentrating solar technologies and in the incorporation of thermal storage has motivated studies on the implementation of molten salt as the system working fluid. Recently, salt has been considered for use in trough-based solar collectors and has been shown to offer a reduction in levelized cost of energy as well as increased availability (Kearney et al., 2003). Concerns regarding the use of molten salt are often related to issues with salt solidification and recovery from freeze events. Differences among salts used for convective heat transfer and storage are typically designated by a comparison of thermal properties. However, the potential for a freeze event necessitates an understanding of salt mechanical properties in order to characterize and mitigate possible detrimental effects, including stress imparted by the expanding salt. Samples of solar salt, HITEC salt (Coastal Chemical Co.), and a low melting point quaternary salt were cast for characterization tests to determine unconfined compressive strength, indirect tensile strength, coefficient of thermal expansion (CTE), Young's modulus, and Poisson's ratio. Experiments were conducted at multiple temperatures below the melting point to determine temperature dependence.
Two samples of jacketed Microtherm®HT were hydrostatically pressurized to maximum pressures of 29,000 psi to evaluate both pressure-volume response and change in bulk modulus as a function of density. During testing, each of the two samples exhibited large irreversible compactive volumetric strains with only small increases in pressure; however, at volumetric strains of approximately 50%, the Microtherm®HT stiffened noticeably at ever increasing rates. At the maximum pressure of 29,000 psi, the volumetric strains for both samples were approximately 70%. Bulk modulus, as determined from hydrostatic unload/reload loops, increased by more than two orders of magnitude (from about 4,500 psi to over 500,000 psi) as the material density increased from an initial value of ~0.3 g/cc to a final value of ~1.1 g/cc. An empirical fit to the density vs. bulk modulus data is K = 492769 ρ^4.6548, where K is the bulk modulus in psi and ρ is the material density in g/cm³. The porosity decreased from 88% to ~20%, indicating that much higher pressures would be required to compact the material fully.
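For convenience, the reported empirical fit can be evaluated directly; the short snippet below simply tabulates K over the tested density range.

    def bulk_modulus_psi(rho_g_cc):
        """Empirical fit reported above: K = 492769 * rho**4.6548 (K in psi, rho in g/cm^3)."""
        return 492769.0 * rho_g_cc ** 4.6548

    for rho in (0.3, 0.5, 0.7, 0.9, 1.1):
        print(f"rho = {rho:.1f} g/cc  ->  K ~ {bulk_modulus_psi(rho):,.0f} psi")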
An optical circulator is a multi-port, nonreciprocal device that routes light from one specific port to another. Optical circulators have at least 3 or 4 ports, with up to 6 ports possible (JDS Uniphase, Huihong Fiber). Circulators do not disregard backward-propagating light, but direct it to another port. Optical circulators are commonly found in bi-directional transmission systems, WDM networks, fiber amplifiers, and optical time domain reflectometers (OTDRs). 3-port optical circulators are commonly used in PDV systems: 1550 nm laser light is launched into Port 1 and exits out of Port 2 to the target, and Doppler-shifted light off the moving surface is reflected back into Port 2 and exits out of Port 3. Surprisingly, a circulator requires a large number of parts to operate efficiently. Transparent circulators offer higher isolation than reflective-style circulators that use PBSs. A lower PMD is obtained using birefringent crystals rather than PBSs due to the similar path lengths between e and o rays. Many circulator designs exist, but all achieve the same non-reciprocal result.
Tracing system calls makes debugging easy and fast. On Plan 9, system call tracing has traditionally been implemented with Acid. New systems do not always implement all the capabilities needed for Acid, particularly the ability to rewrite the process code space to insert breakpoints. Architecture support libraries are not always available for Acid, or may not work even on a supported architecture. The requirement that Acid's libraries be available can be a problem on systems with a very small memory footprint, such as High Performance Computing systems where every kilobyte counts. Finally, Acid tracing is inconvenient in the presence of forks, which makes tracing shell pipelines particularly troublesome. The strace program available on most Unix systems is far more convenient to use and more capable than Acid for system call tracing. A similar system on Plan 9 can simplify troubleshooting. We have built a system call tracing capability into the Plan 9 kernel. It has proven more convenient than strace and required less programming effort: one can write a shell script to implement tracing, and the C code needed to implement an strace equivalent is several orders of magnitude smaller.
The limiting performance of PDV is determined by power spectrum location resolution: the uncertainty principle overestimates error, while peak fit confidences underestimate error. Simulations indicate that PDV is: (1) inaccurate and imprecise at low frequencies; (2) accurate and (potentially) precise otherwise; and (3) limited in performance by sampling rate, noise fraction, and analysis duration. Frequency conversion is a good thing. PDV is competitive with VISAR, despite the wavelength difference.
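The sketch below illustrates the kind of simulation referred to above: a synthetic PDV beat signal is analyzed with a windowed FFT and the spectral peak is converted back to velocity. The sampling rate, noise fraction, analysis duration, and 1550 nm wavelength are illustrative values; the 1/T frequency resolution sets the corresponding velocity resolution floor.

    import numpy as np

    fs = 25e9                 # sampling rate [Hz] (illustrative)
    lam = 1550e-9             # laser wavelength [m]
    v_true = 1200.0           # surface velocity [m/s]
    f_beat = 2 * v_true / lam # PDV beat frequency (~1.55 GHz here)

    T = 20e-9                 # analysis duration
    t = np.arange(0, T, 1 / fs)
    rng = np.random.default_rng(0)
    sig = np.cos(2 * np.pi * f_beat * t) + 0.3 * rng.standard_normal(t.size)

    win = np.hanning(t.size)                       # windowed power spectrum
    spec = np.abs(np.fft.rfft(sig * win)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    f_peak = freqs[np.argmax(spec)]                # peak location -> velocity estimate
    print("estimated velocity:", f_peak * lam / 2, "m/s")
    print("resolution floor  :", lam / (2 * T), "m/s")   # dv ~ lambda / (2 T)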
Imported oil exacerbates our trade deficit and funds anti-American regimes. Nuclear Energy (NE) is a demonstrated technology with high efficiency. NE's two biggest political detriments are possible accidents and nuclear waste disposal; for NE policy, proliferation is the biggest obstacle. Nuclear waste can be reduced through reprocessing, where fuel rods are separated into various streams, some of which can be reused in reactors. The current process, developed in the 1950s, is dirty and expensive, and U/Pu separation is the most critical step. Fuel rods are sheared and dissolved in acid, and the fissile material is extracted in centrifugal contactors. Plants have many contactors in series, along with other separations. We have taken a science- and simulation-based approach to developing a modern reprocessing plant. Models of reprocessing plants are needed to support nuclear materials accountancy, nonproliferation, plant design, and plant scale-up.
The Building Restoration Operations Optimization Model (BROOM) is a software product developed to assist in the restoration of major transport facilities in the event of an attack involving chemical or biological materials. As shown in Figure 3-1, the objective of this work is to replace a manual, paper-based data entry and tracking system with an electronic system that should be much less error-prone. It will also manage the sampling data efficiently and produce contamination maps in a more timely manner.
Nano-imprinting is an increasingly popular method of creating structured, nanometer scale patterns on a variety of surfaces. Applications are numerous, including non-volatile memory devices, printed flexible circuits, light-management films for displays and sundry energy-conversion devices. While there have been many extensive studies of fluid transport through the individual features of a pattern template, computational models of the entire machine-scale process, where features may number in the trillions per square inch, are currently computationally intractable. In this presentation we discuss a multiscale model aimed at addressing machine-scale issues in a nano-imprinting process. Individual pattern features are coarse-grained and represented as a structured porous medium, and the entire process is modeled using lubrication theory in a two-dimensional finite element method simulation. Machine pressures, optimal initial liquid distributions, pattern fill fractions (shown in figure 1), and final coating distributions of a typical process are investigated. This model will be of interest to those wishing to understand and carefully design the mechanics of nano-imprinting processes.
Controlled assembly in soft-particle colloidal suspensions is a technology poised to advance manufacturing methods for nano-scale templating, coating, and bio-conjugate devices. Applications for soft-particle colloids include photovoltaics, nanoelectronics, functionalized thin-film coatings, and a wide range of bio-conjugate devices such as sensors, assays, and bio-fuel cells. This presentation covers the topics of modeling and simulation of soft-particle colloidal systems over dewetting, evaporation, and irradiation gradients, including deposition of particles to surfaces. By tuning particle/solvent and environmental parameters, we transition from the regime of self-assembly to that of controlled assembly, and enable finer resolution of features at both the nano-scale and meso-scale. We report models of interparticle potentials and order parametrization techniques including results from simulations of colloids utilizing soft-particle field potentials. Using LAMMPS (Large-Scale Atomic/Molecular Massively Parallel Simulator), we demonstrate effects of volume fraction, shear and drag profiles, adsorbed and bulk polymer parameters, solvent chi parameter, and deposition profiles. Results are compared to theoretical models and correlation to TEM images from soft-particle irradiation experiments.
Long-chain alcohols possess major advantages over ethanol as bio-components for gasoline, including higher energy content, better engine compatibility, and lower water solubility. Rapid developments in biofuel technology have made it possible to produce C₄-C₅ alcohols efficiently. These higher alcohols could significantly expand the biofuel content of gasoline and potentially replace ethanol in future gasoline mixtures. This study characterizes some fundamental properties of a C₅ alcohol, isopentanol, as a fuel for homogeneous-charge compression-ignition (HCCI) engines. Wide ranges of engine speed, intake temperature, intake pressure, and equivalence ratio are investigated. The elementary autoignition reactions of isopentanol are investigated by analyzing product formation from laser-photolytic Cl-initiated isopentanol oxidation. Carbon-carbon bond-scission reactions in the low-temperature oxidation chemistry may provide an explanation for the intermediate-temperature heat release observed in the engine experiments. Overall, the results indicate that isopentanol has good potential as an HCCI fuel, either in neat form or blended with gasoline.
One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics such as conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e., the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue for further research. An important tool for such studies is an algorithm that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and a practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Markov chain Monte Carlo method for sampling them. We also show that the state space of simple graphs with a fixed joint degree distribution is connected via end-point switches. We empirically evaluate the mixing time of this Markov chain using experiments based on the autocorrelation of each edge. These experiments show that our Markov chain mixes quickly on real graphs, allowing for the utilization of our techniques in practice.
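To make the sampling idea concrete, the sketch below performs end-point-style swaps that are accepted only when the swapped endpoints have equal degree, which leaves the joint degree distribution unchanged. It is a simplified illustration rather than the Markov chain analyzed in the paper, and it makes no claim about detailed balance or mixing.

    import random
    import networkx as nx

    def jdd_preserving_swaps(G, nswaps=10000, seed=0):
        """Pick two edges (u,v),(x,y); rewire to (u,y),(x,v) only if deg(v) == deg(y),
        so node degrees and the joint degree distribution are both preserved, and
        only if the result stays a simple graph."""
        rng = random.Random(seed)
        G = G.copy()
        edges = list(G.edges())
        for _ in range(nswaps):
            (u, v), (x, y) = rng.sample(edges, 2)
            if G.degree(v) != G.degree(y):
                continue
            if len({u, v, x, y}) < 4:                      # avoid self-loops
                continue
            if G.has_edge(u, y) or G.has_edge(x, v):       # avoid multi-edges
                continue
            G.remove_edges_from([(u, v), (x, y)])
            G.add_edges_from([(u, y), (x, v)])
            edges = list(G.edges())
        return G

    G = nx.gnm_random_graph(200, 600, seed=1)
    H = jdd_preserving_swaps(G)
    assert sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree())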
We will present a study of the structure-property relations in Reststrahlen materials that possess a band of negative permittivities in the infrared. It will be shown that sub-micron defects strongly affect the optical response, resulting in significantly diminished permittivities. This work has implications for the use of ionic materials in IR metamaterials.
The Plutonium Air Transportable Package, Model PAT-1, is certified under Title 10, Code of Federal Regulations Part 71 by the U.S. Nuclear Regulatory Commission (NRC) per Certificate of Compliance (CoC) USA/0361B(U)F-96 (currently Revision 9). The purpose of this SAR Addendum is to incorporate plutonium (Pu) metal as a new payload for the PAT-1 package. The Pu metal is packed in an inner container (designated the T-Ampoule) that replaces the PC-1 inner container. The documentation and analysis results contained in this addendum demonstrate that the replacement of the PC-1 and associated packaging material with the T-Ampoule and associated packaging, together with the addition of the plutonium metal content, is not significant with respect to the design, operating characteristics, or safe performance of the containment system and the prevention of criticality when the package is subjected to the tests specified in 10 CFR 71.71, 71.73, and 71.74.
The Plutonium Air Transportable Package, Model PAT-1, is certified under Title 10, Code of Federal Regulations Part 71 by the U.S. Nuclear Regulatory Commission (NRC) per Certificate of Compliance (CoC) USA/0361B(U)F-96 (currently Revision 9). The National Nuclear Security Administration (NNSA) submitted SAND Report SAND2009-5822 to the NRC documenting the incorporation of plutonium (Pu) metal as a new payload for the PAT-1 package. The NRC responded with a Request for Additional Information (RAI), identifying information needed in connection with its review of the application. The purpose of this SAND report is to provide the authors' responses to each RAI. SAND Report SAND2010-6106, containing the proposed changes to the Addendum, is provided separately.
This report provides documentation for the completion of the Sandia portion of the ASC Level II 'Visualization on the Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. The milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, i.e., running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering; for petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and will evaluate the performance of common analysis libraries, coupled with the simulation, that analyze and write data to disk during a running simulation. The milestone will explore, evaluate, and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four sequential steps: meshing, partitioning, solver, and visualization. Not all of these components are necessarily run on the supercomputer; in particular, meshing and visualization typically happen on smaller but more interactive computing resources. However, the previous decade has seen growth in both the need and the ability to perform scalable parallel analysis, and this motivates coupling the solver and visualization.
The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield RV trajectory. Actual processed flight data will be presented to demonstrate the implementation of these methods.
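The core of the quaternion-based attitude propagation can be sketched as follows; the constant-rate usage example and the simple per-step renormalization are illustrative, not the flight-data processing chain described above.

    import numpy as np

    def quat_mult(q, r):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def integrate_gyro(omega, dt, q0=np.array([1.0, 0.0, 0.0, 0.0])):
        """Compose incremental rotation quaternions from body-rate samples
        omega[k] = (wx, wy, wz) [rad/s]; renormalize each step to limit drift."""
        q = q0.copy()
        history = [q.copy()]
        for w in omega:
            angle = np.linalg.norm(w) * dt
            if angle > 0.0:
                axis = w / np.linalg.norm(w)
                dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
            else:
                dq = np.array([1.0, 0.0, 0.0, 0.0])
            q = quat_mult(q, dq)
            q /= np.linalg.norm(q)
            history.append(q.copy())
        return np.array(history)

    # usage sketch: constant 10 deg/s roll for 1 s sampled at 100 Hz
    omega = np.tile([np.radians(10.0), 0.0, 0.0], (100, 1))
    print(integrate_gyro(omega, dt=0.01)[-1])   # ~10 degree rotation about body x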
Cost-effective implementation of microalgae as a solar-to-chemical energy conversion platform requires extensive system optimization, which computer modeling can help provide. This work uses modified versions of the U.S. Environmental Protection Agency's (EPA's) Environmental Fluid Dynamics Code (EFDC) in conjunction with the U.S. Army Corps of Engineers water-quality code (CE-QUAL) to simulate hydrodynamics coupled to growth kinetics of algae (Phaeodactylum tricornutum) in open-channel raceways. The model allows the flexibility to manipulate a host of variables associated with raceway design, algal growth, water quality, hydrodynamics, and atmospheric conditions. The model provides realistic results wherein growth rates follow the diurnal fluctuation of solar irradiation and temperature. The greatest benefit that numerical simulation of the flow system offers is the ability to design the raceway before construction, saving considerable cost and time. Moreover, experiment operators can evaluate the impacts of various changes to system conditions (e.g., depth, temperature, flow speeds) without risking the algal biomass under study.
Erbium is known to load effectively with hydrogen when held at high temperature in a hydrogen atmosphere. To make the storage of hydrogen kinetically feasible, a thermal activation step is required. Activation is a routine practice, but very little is known about the physical, chemical, and/or electronic processes that occur during activation. This work presents in situ characterization of erbium activation using variable-energy photoelectron spectroscopy at various stages of the activation process. Modification of the passive surface oxide plays a significant role in activation. The chemical and electronic changes observed from core-level and valence band spectra will be discussed, along with corroborating ion scattering spectroscopy measurements.
As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate large scale HPC system understanding. OVIS incorporates an intuitive graphical user interface, an extensive and extendable data analysis suite, and a 3-D visualization engine that allows visual inspection of both raw and derived data on a geometrically correct representation of a HPC system. This talk will cover system instrumentation, data collection (including log files and the complications of meaningful parsing), analysis, visualization of both raw and derived information, and how data can be combined to increase system understanding and efficiency.
One of the tenets of nanotechnology is that the electrical/optical/chemical/biological properties of a material may change profoundly when the material is reduced to sufficiently small dimensions, and we can exploit these new properties to achieve novel or greatly improved materials performance. However, there may be mechanical or thermodynamic driving forces that hinder the synthesis of the structure, impair the stability of the structure, or reduce the intended performance of the structure. Examples of these phenomena include de-wetting of films due to high surface tension, thermally-driven instability of nano-grain structure, and defect-related internal dissipation. If we have fundamental knowledge of the mechanical processes at small length scales, we can exploit these new properties to achieve robust nanodevices. To state it simply, the goal of this program is the fundamental understanding of the mechanical properties of materials at small length scales. The research embodied by this program lies at the heart of modern materials science with a guiding focus on structure-property relationships. We have divided this program into three Tasks, summarized as follows: (1) Mechanics of Nanostructured Materials (PI Blythe Clark). This Task aims to develop a fundamental understanding of the mechanical properties and thermal stability of nanostructured metals, and of the relationship between nano/microstructure and bulk mechanical behavior, through a combination of special materials synthesis methods, nanoindentation coupled with finite-element modeling, detailed electron microscopic characterization, and in-situ transmission electron microscopy experiments. (2) Theory of Microstructures and Ensemble Controlled Deformation (PI Elizabeth A. Holm). The goal of this Task is to combine experiment, modeling, and simulation to construct, analyze, and utilize three-dimensional (3D) polycrystalline nanostructures. These full 3D models are critical for elucidating the complete structural geometry, topology, and arrangements that control experimentally observed phenomena, such as abnormal grain growth, grain rotation, and internal dissipation measured in nanocrystalline metal. (3) Mechanics and Dynamics of Nanostructured and Nanoscale Materials (PI John P. Sullivan). The objective of this Task is to develop atomic-scale understanding of dynamic processes, including internal dissipation in nanoscale and nanostructured metals, and phonon transport and boundary scattering in nanoscale structures, via internal friction measurements.
Carbon capture and sequestration (CCS) is an option to mitigate impacts of atmospheric carbon emission. Numerous factors are important in determining the overall effectiveness of long-term geologic storage of carbon, including leakage rates, volume of storage available, and system costs. Recent efforts have been made to apply an existing probabilistic performance assessment (PA) methodology developed for deep nuclear waste geologic repositories to evaluate the effectiveness of subsurface carbon storage (Viswanathan et al., 2008; Stauffer et al., 2009). However, to address the most pressing management, regulatory, and scientific concerns with subsurface carbon storage (CS), the existing PA methodology and tools must be enhanced and upgraded. For example, in the evaluation of a nuclear waste repository, a PA model is essentially a forward model that samples input parameters and runs multiple realizations to estimate future consequences and determine important parameters driving the system performance. In the CS evaluation, however, a PA model must be able to run both forward and inverse calculations to support optimization of CO₂ injection and real-time site monitoring as an integral part of the system design and operation. The monitoring data must be continually fused into the PA model through model inversion and parameter estimation. Model calculations will in turn guide the design of optimal monitoring and carbon-injection strategies (e.g., in terms of monitoring techniques, locations, and time intervals). Under the support of Laboratory-Directed Research & Development (LDRD), a late-start LDRD project was initiated in June of Fiscal Year 2010 to explore the concept of an enhanced performance assessment system (EPAS) for carbon sequestration and storage. In spite of the tight time constraints, significant progress has been made on the project: (1) Following the general PA methodology, a preliminary Feature, Event, and Process (FEP) analysis was performed for a hypothetical CS system. Through this FEP analysis, relevant scenarios for CO₂ release were defined. (2) A prototype of the EPAS was developed by wrapping an existing multi-phase, multi-component reservoir simulator (TOUGH2) with an uncertainty quantification and optimization code (DAKOTA). (3) For demonstration, a probabilistic PA analysis was successfully performed for a hypothetical CS system based on an existing project in a brine-bearing sandstone. The work lays the foundation for the development of a new generation of PA tools for effective management of CS activities. At a top level, the work supports energy security and climate change/adaptation by furthering the capability to effectively manage proposed carbon capture and sequestration activities (both research and development as well as operational), and it greatly enhances the technical capability to address this national problem. The next phase of the work will include (1) full capability demonstration of the EPAS, especially for data fusion, carbon storage system optimization, and process optimization of CO₂ injection, and (2) application of the EPAS to actual carbon storage systems.
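The forward-sampling loop at the heart of such a PA calculation can be sketched as follows; run_reservoir_model is a hypothetical stand-in for the wrapped TOUGH2 simulator, and the parameter ranges and response are invented for illustration.

    import numpy as np
    from scipy.stats import qmc

    def run_reservoir_model(permeability, porosity, injection_rate):
        """Hypothetical stand-in for one wrapped reservoir simulation; returns a
        scalar performance measure (e.g., a CO2 leakage indicator)."""
        return 1e-3 * injection_rate * permeability ** 0.5 / porosity

    # Latin hypercube sample of the uncertain inputs (ranges are illustrative only)
    lo = [1e-14, 0.05, 1.0]     # permeability [m^2], porosity [-], rate [kg/s]
    hi = [1e-12, 0.30, 10.0]
    sampler = qmc.LatinHypercube(d=3, seed=0)
    X = qmc.scale(sampler.random(n=256), lo, hi)

    response = np.array([run_reservoir_model(*x) for x in X])
    print("mean:", response.mean(), " 95th percentile:", np.percentile(response, 95))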
Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO₂ forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have a discontinuous character. Our approach is two-fold. First, we detect the discontinuity location with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Second, we develop a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.
The peridynamic model of solid mechanics is a mathematical theory designed to provide consistent mathematical treatment of deformations involving discontinuities, especially cracks. Unlike the partial differential equations (PDEs) of the standard theory, the fundamental equations of the peridynamic theory remain applicable on singularities such as crack surfaces and tips. These basic relations are integro-differential equations that do not require the existence of spatial derivatives of the deformation, or even continuity of the deformation. In the peridynamic theory, material points in a continuous body separated from each other by finite distances can interact directly through force densities. The interaction between each pair of points is called a bond. The dependence of the force density in a bond on the deformation provides the constitutive model for a material. By allowing the force density in a bond to depend on the deformation of other nearby bonds, as well as its own deformation, a wide spectrum of material response can be modelled. Damage is included in the constitutive model through the irreversible breakage of bonds according to some criterion. This criterion determines the critical energy release rate for a peridynamic material. In this talk, we present a general discussion of the peridynamic method and recent progress in its application to penetration and fracture in nonlinearly elastic solids. Constitutive models are presented for rubbery materials, including damage evolution laws. The deformation near a crack tip is discussed and compared with results from the standard theory. Examples demonstrating the spontaneous nucleation and growth of cracks are presented. It is also shown how the method can be applied to anisotropic media, including fiber reinforced composites. Examples show prediction of impact damage in composites and comparison against experimental measurements of damage and delamination.
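A minimal one-dimensional, bond-based sketch conveys the basic ingredients discussed here, namely pairwise bond forces within a horizon and irreversible bond breakage at a critical stretch; all values are nondimensional toy numbers, not a calibrated material model.

    import numpy as np

    n, dx = 200, 1.0                  # nodes and spacing (nondimensional)
    horizon = 3.0 * dx                # peridynamic horizon
    c, s_crit = 1.0, 0.01             # bond stiffness and critical stretch
    m, dt = 1.0, 0.1                  # nodal mass and time step

    x = np.arange(n) * dx
    u = np.zeros(n)
    v = np.zeros(n)
    v[-10:] = 0.001                   # pull the right end to nucleate damage

    # bonds: all node pairs within the horizon, with an irreversible "broken" flag
    pairs = [(i, j) for i in range(n) for j in range(i + 1, min(i + 4, n))
             if x[j] - x[i] <= horizon]
    broken = np.zeros(len(pairs), dtype=bool)

    for step in range(2000):
        f = np.zeros(n)
        for k, (i, j) in enumerate(pairs):
            if broken[k]:
                continue
            xi = x[j] - x[i]                  # reference bond length
            s = (u[j] - u[i]) / xi            # bond stretch
            if abs(s) > s_crit:               # damage = irreversible bond breakage
                broken[k] = True
                continue
            fk = c * s / xi                   # pairwise bond force (toy form)
            f[i] += fk
            f[j] -= fk
        v += dt * f / m                       # explicit time integration
        u += dt * v

    print("fraction of broken bonds:", broken.sum() / len(broken))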
This work compares the sorption and swelling processes associated with CO2-coal and CO2-clay interactions. We investigated the mechanisms of interaction related to CO2 adsorption in micropores, intercalation into sub-micropores, dissolution in the solid matrix, the role of water, and the associated changes in reservoir permeability, for applications in CO2 sequestration and enhanced coal bed methane recovery. The structural changes caused by CO2 have been investigated. A high-pressure micro-dilatometer was used to investigate the effect of CO2 pressure on the thermoplastic properties of coal. Using an identical dilatometer, Rashid Khan (1985) performed experiments with CO2 that revealed a dramatic reduction in the softening temperature of coal when exposed to high-pressure CO2. A set of experiments was designed for -20+45-mesh samples of Argonne Premium Pocahontas No. 3 coal, which is similar in proximate and ultimate analysis to the Lower Kittanning seam coal that Khan used in his experiments. No dramatic decrease in coal softening temperature has been observed in high-pressure CO2 that would corroborate the prior work of Khan. Thus, conventional polymer (or 'geopolymer') theories may not be directly applicable to CO2 interaction with coals. Clays are similar to coals in that they represent abundant geomaterials with well-developed microporous structure. We evaluated the CO2 sequestration potential of clays relative to coals and investigated the factors that affect the sorption capacity, rates, and permanence of CO2 trapping. For the geomaterials comparison studies, we used source clay samples from The Clay Minerals Society. Preliminary results showed that expandable clays have CO2 sorption capacities comparable to those of coal. We analyzed sorption isotherms, XRD, DRIFTS (infrared reflectance spectra at non-ambient conditions), and TGA-MS (thermal gravimetric analysis) data to compare the effects of various factors on CO2 trapping. In montmorillonite, CO2 molecules may remain trapped for several months following several hours of exposure to high pressure (supercritical conditions), high temperature (above the boiling point of water), or both. Such trapping is well preserved in either an inert gas or the ambient environment and appears to eventually result in carbonate formation. We performed computer simulations of CO2 interaction with free cations (normal modes of CO2 and Na+CO2 were calculated using B3LYP/aug-cc-pVDZ and MP2/aug-cc-pVDZ methods) and with clay structures containing interlayer cations (MD simulations with Clayff potentials for clay and a modified CO2 potential). Additionally, the interaction of CO2 with hydrated Na-montmorillonite was studied using density functional theory with dispersion corrections. The sorption energies and the swelling behavior were investigated. Preliminary modeling results and experimental observations indicate that the presence of water molecules in the interlayer region is necessary for intercalation of CO2. Our preliminary conclusion is that CO2 molecules may intercalate into the interlayer region of swelling clay and remain there via coordination to the interlayer cations.
Reliability-Centered Maintenance (RCM) is a process used to determine what must be done to ensure that any physical asset continues to do whatever its users want it to do in its present operating context. There are 7 basic questions of RCM: (1) what are the functions of the asset; (2) in what ways does it fail to fulfill its functions; (3) what causes each functional failure; (4) what happens when each failure occurs; (5) in what way does each failure matter; (6) what can be done to predict or prevent each failure; and (7) what should be done if a suitable proactive task cannot be found. SNL's RCM experiences: (1) acid exhaust system - (a) reduced risk of system failure (safety and operational consequences), (b) reduced annual corrective maintenance hours from 138 in FY06 to zero in FY07, FY08, FY09, FY10, and FY11 so far, (c) identified a single point of failure, mitigated the risk, and recommended a permanent solution; (2) fire alarm system - (a) reduced false alarms, which cause costly evacuations, (b) prevented a 1- to 2-day evacuation by identifying and obtaining a critical spare for a network card; (3) heating water system - (a) reduced PM hours on fire-tube boilers by 60%, (b) developed operator tasks and a PM plan for modular boilers, which can be applied to many installations; and (4) GIF source elevator system - (a) reduced frequency of PM tasks from 6 months to 1 year, (b) established a predictive maintenance task that identified an overheating cabinet and prevented a potential electrical failure or fire.
In a (future) quantum computer a single logical quantum bit (qubit) will be made of multiple physical qubits. These extra physical qubits implement mandatory extensive error checking. The efficiency of error correction will fundamentally influence the performance of a future quantum computer, both in latency/speed and in error threshold (the worst error tolerated for an individual gate). Executing this quantum error correction requires scheduling the individual operations subject to architectural constraints. Since our last talk on this subject, a team of researchers at Sandia National Laboratories has designed a logical qubit architecture that considers all relevant architectural issues, including layout, the effects of supporting classical electronics, and the types of gates that the underlying physical qubit implementation supports most naturally. This is a two-dimensional system where 2-qubit operations occur locally, so there is no need to calculate more complex qubit/information transportation. Using integer programming, we found a schedule of qubit operations that obeys the hardware constraints, implements the local-check code in the native gate set, and minimizes qubit idle periods. Even with an optimal schedule, however, parallel Monte Carlo simulation shows that there is no finite error probability for the native gates such that the error-correction system would be beneficial. However, by adding dynamic decoupling, a series of timed pulses that can reverse some errors, we found that there may be a threshold. Thus, finding optimal schedules for increasingly refined scheduling problems has proven critical for the overall design of the logical qubit system. We describe the evolving scheduling problems and the ideas behind the integer programming-based solution methods. This talk assumes no prior knowledge of quantum computing.
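A drastically simplified, time-indexed integer program of the same flavor is sketched below; the operation names, qubit assignments, precedence list, and objective are invented for illustration, and the real model additionally encodes layout and classical-electronics constraints.

    import pulp

    ops = ["prep_a", "cnot_ab", "cnot_cb", "meas_b"]           # hypothetical operations
    qubits = {"prep_a": {"a"}, "cnot_ab": {"a", "b"},
              "cnot_cb": {"c", "b"}, "meas_b": {"b"}}
    precedes = [("prep_a", "cnot_ab"), ("cnot_ab", "cnot_cb"), ("cnot_cb", "meas_b")]
    T = range(6)                                               # available time slots

    prob = pulp.LpProblem("qubit_schedule", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (ops, T), cat="Binary")     # x[o][t]=1: op o in slot t
    start = {o: pulp.lpSum(t * x[o][t] for t in T) for o in ops}

    prob += pulp.lpSum(start.values())                         # proxy for minimizing idle time
    for o in ops:                                              # each op scheduled exactly once
        prob += pulp.lpSum(x[o][t] for t in T) == 1
    for a, b in precedes:                                      # precedence constraints
        prob += start[b] >= start[a] + 1
    for t in T:                                                # one op per qubit per slot
        for q in ("a", "b", "c"):
            prob += pulp.lpSum(x[o][t] for o in ops if q in qubits[o]) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({o: int(pulp.value(start[o])) for o in ops})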
Airworthiness Assurance NDI Validation Center (AANC) objectives are: (1) Enhance aircraft safety and reliability; (2) Aid developing advanced aircraft designs and maintenance techniques; (3) Provide our customers with comprehensive, independent, and quantitative/qualitative evaluations of new and enhanced inspection, maintenance, and repair techniques; (4) Facilitate transferring effective technologies into the aviation industry; (5) Support FAA rulemaking process by providing guidance on content & necessary tools to meet requirements or recommendations of FARs, ADs, ACs, SBs, SSIDs, CPCP, and WFD; and (6) Coordinate with and respond to Airworthiness Assurance Working Group (AAWG) in support of FAA Aviation Rulemaking Advisory Committee (ARAC).
Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single-year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the influence of the parameters on the solution.
A slight modification of a package to transport solid metal contents requires inclusion of a thin titanium liner to protect against possible eutectic formation in 10 CFR 71.74 regulatory fire accident conditions. Under severe transport regulatory impact conditions, the package contents could impart high localized loading of the liner, momentarily pinching it between the contents and the thick containment vessel, and inducing some plasticity near the contact point. Actuator and drop table testing of simulated contents impacts against liner/containment vessel structures nearly bounded the potential plastic strain and stress triaxiality conditions, without any ductile tearing of the eutectic barrier. Additional bounding was necessary in some cases beyond the capability of the actuator and drop table tests, and in these cases a stress-modified evolution integral over the plastic strain history was successfully used as a failure criterion to demonstrate that structural integrity was maintained. The Heaviside brackets only allow the evolution integral to accumulate value when the maximum principal stress is positive, since failure is never observed under pure hydrostatic pressure, where the maximum principal stress is negative. Detailed finite element analyses of myriad possible impact orientations and locations between package contents and the thin eutectic barrier under regulatory impact conditions have shown that not even the initiation of a ductile tear occurs. Although localized plasticity does occur in the eutectic barrier, it is not the primary containment boundary and is thus not subject to ASME stress allowables from NRC Regulatory Guide 7.6. These analyses were used to successfully demonstrate that structural integrity of the eutectic barrier was maintained in all 10 CFR 71.73 and 71.74 regulatory accident conditions. The NRC is currently reviewing the Safety Analysis Report.
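The accumulation of such a stress-modified integral over the plastic strain history can be sketched as follows; the exponential triaxiality weighting and the single-element load history are illustrative assumptions, since the exact integrand used in the qualification analyses is not reproduced here.

    import numpy as np

    def damage_integral(eps_p, triaxiality, sigma_max, a=1.5):
        """Accumulate an illustrative stress-modified evolution integral over the
        plastic strain history; the Heaviside bracket zeroes the increment whenever
        the maximum principal stress is non-positive."""
        heaviside = (sigma_max > 0.0).astype(float)
        integrand = heaviside * np.exp(a * triaxiality)        # hypothetical weighting
        return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(eps_p))

    eps_p = np.linspace(0.0, 0.08, 50)              # equivalent plastic strain history
    triax = 0.4 * np.ones_like(eps_p)               # stress triaxiality
    sig1 = np.where(eps_p < 0.02, -50e6, 120e6)     # max principal stress [Pa]
    print("damage measure:", damage_integral(eps_p, triax, sig1))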
The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.
Critical infrastructure resilience has become a national priority for the U.S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the identification of optimal recovery strategies that maximize resilience. Toward this goal, we formulate a bi-level optimization problem for infrastructure network models. In the 'inner' problem, we solve for network flows, and we use the 'outer' problem to identify the optimal recovery modes and sequences. We draw from the literature on multi-mode project scheduling problems to create an effective solution strategy for the resilience optimization model. We demonstrate the application of this approach to a set of network models, including a national railroad model and a supply chain for Army munitions production.
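A toy version of the bi-level structure is sketched below: the outer loop enumerates recovery sequences for damaged links, and the inner problem evaluates network performance (here simply the maximum s-t flow) after each restoration step. The network data are invented, and the actual model uses multi-mode scheduling formulations rather than enumeration.

    from itertools import permutations
    import networkx as nx

    BASE = [("s", "a", 10), ("a", "t", 5), ("b", "t", 10)]     # intact links (capacities)
    DAMAGED = {("a", "b"): 6, ("s", "b"): 4}                   # links awaiting recovery

    def build_network(restored):
        G = nx.DiGraph()
        for u, v, cap in BASE:
            G.add_edge(u, v, capacity=cap)
        for link in restored:
            G.add_edge(*link, capacity=DAMAGED[link])
        return G

    def resilience(sequence):
        """Total flow delivered over the recovery horizon (one link restored per period)."""
        return sum(nx.maximum_flow_value(build_network(sequence[:k]), "s", "t")
                   for k in range(len(sequence) + 1))

    best = max(permutations(DAMAGED), key=resilience)          # 'outer' problem by enumeration
    print("best recovery order:", best, "score:", resilience(best))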
The term Smart Grid has come to describe a next-generation electrical power system that is typified by the increased use of communications and information technology in the generation, delivery, and consumption of electrical energy. Much of the present Smart Grid analysis focuses on utility and consumer interaction, i.e., smart appliances, home automation systems, rate structures, consumer demand response, etc. An identified need is to assess the upstream and midstream operations of natural gas as a result of the Smart Grid. The nature of the Smart Grid, including demand response and the role of information, may require changes in upstream and midstream natural gas operations to ensure availability and efficiency. Utility reliance on natural gas will continue and likely increase, given the backup requirements for intermittent renewable energy sources. Efficient generation and delivery of electricity on the Smart Grid could affect how natural gas is utilized. Things that we already know about the Smart Grid are: (1) The role of information and data integrity is increasingly important. (2) The Smart Grid includes a fully distributed system with two-way communication. (3) The Smart Grid, a complex network, may change the way energy is supplied, stored, and demanded. (4) The Smart Grid has evolved through consumer-driven decisions. (5) The Smart Grid and the US critical infrastructure will include many intermittent renewables.
This report documents research carried out by the author throughout his three-year Truman Fellowship. The overarching goal was to develop multiscale schemes that permit not only the predictive description but also the computational design of improved materials. Identifying new materials through changes in atomic composition and configuration requires the use of versatile first-principles methods, such as density functional theory (DFT). The predictive reliability of DFT has been investigated with respect to pseudopotential construction, band gap, van der Waals forces, and nuclear quantum effects. Continuous variation of chemical composition and derivation of accurate energy gradients in compound space have been developed within a DFT framework for free energies of solvation, reaction energetics, and frontier orbital eigenvalues. Similar variations have been leveraged within classical molecular dynamics in order to address thermal properties of molten salt candidates for heat transfer fluids used in solar thermal power facilities. Finally, a combination of DFT and statistical methods has been used to devise quantitative structure-property relationships for the rapid prediction of charge mobilities in polyaromatic hydrocarbons.
MPI is the dominant programming model for distributed-memory parallel computers and is often used as the intra-node programming model on multi-core compute nodes. However, application developers are increasingly turning to hybrid models that use threading within a node and MPI between nodes. In contrast to MPI, most current threaded models do not require application developers to deal explicitly with data locality. With the increasing core counts and deeper NUMA hierarchies of the upcoming LANL/SNL 'Cielo' capability supercomputer, data distribution places an upper bound on intra-node scalability within threaded applications. Data locality therefore has to be managed at runtime using static memory allocation policies such as first-touch or next-touch, or specified by the application user at launch time. We evaluate several existing techniques for managing data distribution using micro-benchmarks on an AMD 'Magny-Cours' system with 24 cores spread across 4 NUMA domains, and we argue for the adoption of a dynamic runtime system implemented at the kernel level, employing a novel page-table replication scheme to gather per-NUMA-domain memory access traces.
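As a minimal sketch of the first-touch policy evaluated here (illustrative only; this is not the report's micro-benchmark code), the operating system backs a page with memory on the NUMA domain of the thread that first writes it, so initializing an array in parallel with the same loop schedule used later for computation co-locates data with the threads that consume it:

```c
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>

#define N (64L * 1024 * 1024)

int main(void)
{
    double *a = malloc(N * sizeof *a);   /* virtual allocation only; no pages are touched yet */
    if (!a) return 1;

    /* First touch: each thread writes its own chunk, so the OS places those
       pages on that thread's NUMA domain (the default Linux policy). */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 1.0;

    /* A later compute phase with the same static schedule reuses local pages. */
    double sum = 0.0;
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    free(a);
    return 0;
}
```

Under a next-touch policy, by contrast, pages would migrate to the domain of the next thread to touch them once the policy is (re)applied, which is one of the alternatives the micro-benchmarks compare.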
Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come, and uncertainty in the local-to-national impacts means that perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for only a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.
This report summarizes the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis, specifically methods for information retrieval and visualization that could scale to extremely large document collections. Toward that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, the improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information-theoretic methods in user analysis and interpretation in cross-language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in the proposal stage.
The current state of the art in antineutrino detection is such that it is now possible to remotely monitor the operational status, power levels, and fissile content of nuclear reactors in real time. This non-invasive and incorruptible technique has been demonstrated at civilian power reactors in both Russia and the United States and has been of interest to the IAEA Novel Technologies Unit for several years. Experts' meetings were convened at IAEA headquarters in 2003 and again in 2008. The latter produced a report in which antineutrino detection was called a 'highly promising technology for safeguards applications' at nuclear reactors, and several near-term goals and suggested developments were identified to facilitate wider applicability. Over the last few years, we have been working to achieve some of these goals and improvements. Specifically, we have already demonstrated the successful operation of non-toxic detectors, and most recently we have been testing a transportable, above-ground detector system fully contained within a standard 6-meter ISO container. If successful, such a system could allow easy deployment at any reactor facility around the world. In addition, our previously demonstrated ability to remotely monitor the data and respond in real time to reactor operational changes could allow the verification of operator declarations without the need for costly site visits. As the global nuclear power industry expands, the burden of maintaining operational histories and safeguarding inventories will increase greatly. A system that provides remote data to verify operators' declarations could greatly reduce the need for frequent site inspections while still providing robust warning of anomalies requiring further investigation.
Statistical Latent Dirichlet Analysis produces mixture-model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular among statisticians; which one is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we examine these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances - {chi}{sup 2}, the Jensen-Shannon divergence, and the square of the Hellinger distance - are shown to be nearly equivalent, both in terms of their functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is shown to be the square of the Euclidean distance, and a similar geometric relationship is shown between Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in the Hellinger distance is briefly compared to standard normalization for the Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes, and we provide some constructions that nearly achieve the worst-case ratios, relevant for the contours.
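For reference, one common set of conventions for these quantities (normalizations vary and may differ from those used in the report) is, for probability vectors \(p\) and \(q\) on the simplex with \(m = (p+q)/2\):

\[
\chi^{2}(p,q)=\sum_i \frac{(p_i-q_i)^2}{p_i+q_i},\qquad
\mathrm{JS}(p,q)=\tfrac12 D_{\mathrm{KL}}(p\,\|\,m)+\tfrac12 D_{\mathrm{KL}}(q\,\|\,m),\qquad
H^{2}(p,q)=\tfrac12\sum_i\big(\sqrt{p_i}-\sqrt{q_i}\big)^2=1-\sum_i\sqrt{p_i q_i}.
\]

A quick way to see the near-equivalence: writing \(q_i = p_i(1+\epsilon_i)\) for small \(\epsilon_i\), each of the three reduces to the same weighted quadratic form \(\sum_i p_i\epsilon_i^{2}\) up to a constant factor, so their constant-value contours nearly coincide for nearby points.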
This paper presents a mixed-potential integral-equation formulation for analyzing 1-D periodic leaky-wave antennas in layered media. The structures are periodic in one dimension and finite in the other two dimensions. The unit cell consists of an arbitrarily shaped metallic/dielectric structure. The formulation has been implemented in the EIGER{trademark} code in order to obtain the real and complex propagation wavenumbers of the bound and leaky modes of such structures. Validation results presented here include a 1-D periodic planar leaky-wave antenna and a fully 3-D waveguide test case.
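For context (standard periodic-structure theory rather than material from this paper), the wavenumbers in question are Floquet propagation constants: for a structure with period \(p\) along the axis of periodicity \(z\), the fields satisfy

\[
\mathbf{E}(x,y,z+p)=\mathbf{E}(x,y,z)\,e^{-jk_{z0}p},\qquad k_{zn}=k_{z0}+\frac{2\pi n}{p},\quad n\in\mathbb{Z},
\]

so the field is a superposition of space harmonics with wavenumbers \(k_{zn}\). Bound modes correspond to real \(k_{z0}\), while leaky modes have complex \(k_{z0}=\beta-j\alpha\), with the attenuation constant \(\alpha\) describing power lost to radiation as the mode propagates.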
Heightened interest in micro-scale and nano-scale patterning by imprinting, embossing, and nanoparticle-suspension coating stems from a recent surge in the development of higher-throughput manufacturing methods for integrated devices. Energy applications addressing alternative, renewable energy sources offer many examples of the need for improved manufacturing technology for micro- and nano-structured films. In this presentation we address one approach to micro- and nano-patterned coating that uses film deposition and differential wetting of nanoparticle suspensions. Rather than printing nanoparticle or colloidal inks in discontinuous patches, which typically employs ink-jet printing technology, patterns can be formed by controlled dewetting of a continuously coated film. Here we report the dynamics of a volatile organic solvent laden with nanoparticles dispensed on the surfaces of water droplets, whose contact angles (surface energy) and perimeters are defined by lithographic patterning of initially (super)hydrophobic surfaces. The lubrication flow equation, together with an averaged particle transport equation, is employed to predict the film thickness and average particle concentration profiles during subsequent drying of the organic and water solvents. The predictions are validated by contact angle measurements, in situ grazing-incidence small-angle x-ray scattering experiments, and TEM images of the final nanoparticle assemblies.
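A canonical form of the governing equations invoked here (written under standard lubrication scaling; the model in this work may include additional physics, and all symbols below are introduced for illustration) couples film-thickness evolution to a depth-averaged particle balance:

\[
\frac{\partial h}{\partial t}=\nabla\cdot\!\left(\frac{h^{3}}{3\mu}\,\nabla p\right)-E,
\qquad p=-\gamma\nabla^{2}h-\Pi(h),
\qquad
\frac{\partial (h\bar c)}{\partial t}+\nabla\cdot\big(h\bar c\,\bar{\mathbf u}\big)=\nabla\cdot\big(hD\,\nabla\bar c\big),
\]

where \(h\) is the film thickness, \(\mu\) the solvent viscosity, \(\gamma\) the surface tension, \(\Pi(h)\) a disjoining pressure, \(E\) the evaporation rate, \(\bar c\) the depth-averaged particle concentration, \(\bar{\mathbf u}\) the depth-averaged velocity, and \(D\) an effective particle diffusivity. As the solvent evaporates, \(\bar c\) rises until the particles arrest into the final assembly.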
External modifications can transform a conventional photonic Doppler velocimetry (PDV) system into other useful configurations, such as non-standard probes and frequency-conversion measurements. This approach is easier than supporting every conceivable measurement in the core PDV design. Circulator specifications may be important: -30 dB isolation (common) is probably not enough, -50 dB isolation is available, and some bench testing may be needed.
This report explores the behavior of nodal-based tetrahedral elements on six sample problems and compares their solutions to those of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedra are of good quality, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those that use pressure-dependent material models (e.g., equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements, as they currently stand, need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.
A new experimental technique to measure material shear strength at high pressures has been developed for use on magnetohydrodynamic (MHD) drive pulsed-power platforms. By applying an external static magnetic field to the sample region, the MHD drive directly induces a shear stress wave in addition to the usual longitudinal stress wave. Strength is probed by passing this shear wave through a sample material in which the transmissible shear stress is limited by the sample strength. The magnitude of the transmitted shear wave is measured with a transverse VISAR system, from which the sample strength is determined.
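The underlying relation can be summarized with the standard plane-wave impedance result for a linear elastic medium (a simplification quoted for context, not taken from this report, and neglecting window and release corrections): the shear stress carried by a traveling shear wave is

\[
\tau=\rho\,c_{s}\,u_{t},
\]

where \(\rho\) is the density, \(c_{s}\) the shear wave speed, and \(u_{t}\) the transverse particle velocity recorded by the transverse VISAR. Because the transmitted \(\tau\) cannot exceed the sample's shear strength, the measured \(u_{t}\) provides the strength estimate.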
While individual neurons function at relatively low firing rates, naturally occurring nervous systems not only surpass man-made systems in computing power but accomplish this feat using relatively little energy. It is asserted that the next major breakthrough in computing power will be achieved through the application of neuromorphic approaches that mimic the mechanisms by which neural systems integrate and store massive quantities of data for real-time decision making. The proposed LDRD provides a conceptual foundation for SNL to make unique advances toward exascale computing. First, a team of experts from the HPC, MESA, cognitive and biological sciences, and nanotechnology domains will be coordinated to conduct an exercise whose outcome is a concept for applying neuromorphic computing to achieve exascale computing. It is anticipated that this concept will involve innovative extension and integration of SNL capabilities in MicroFab, materials science, high-performance computing, and the modeling and simulation of neural processes and systems.
Adagio is a Lagrangian, three-dimensional, implicit code for the analysis of solids and structures. It uses a multi-level iterative solver, which enables it to solve problems with large deformations, nonlinear material behavior, and contact. It also has a versatile library of continuum and structural elements, and an extensive library of material models. Adagio is written for parallel computing environments, and its solvers allow for scalable solutions of very large problems. Adagio uses the SIERRA Framework, which allows for coupling with other SIERRA mechanics codes. This document describes the functionality and input structure for Adagio.
Presto is a Lagrangian, three-dimensional, explicit transient dynamics code that is used to analyze solids subjected to large, suddenly applied loads. The code is designed for a parallel computing environment and for problems with large deformations, nonlinear material behavior, and contact. Presto also has a versatile element library that incorporates both continuum elements and structural elements. This user's guide describes the input for Presto that gives users access to all the current functionality in the code. The environment in which Presto is built allows it to be coupled with other engineering analysis codes. Using a concept called scope, the input structure reflects the fact that Presto can be used in a coupled environment. The user's guide describes how scope is implemented from the outermost to the innermost scopes. Within a given scope, the descriptions of input commands are grouped by code functionality. For example, all material input command lines are described in a single chapter of the user's guide covering all the material models that can be used in Presto.
Statistical analysis is typically used to reduce the dimensionality of, and to infer meaning from, data. A key challenge for any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, from which numerous derived statistics such as joint and marginal probabilities, pointwise mutual information, information entropy, and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
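For contrast with the contingency-table case, the kind of single-pass pairwise update alluded to for moment-based statistics can be written as follows (a standard formulation, not necessarily the exact formulas used in the paper). Combining partial results from two data partitions A and B:

\[
n_{AB}=n_A+n_B,\qquad
\delta=\bar x_B-\bar x_A,\qquad
\bar x_{AB}=\bar x_A+\delta\,\frac{n_B}{n_{AB}},\qquad
M_{2,AB}=M_{2,A}+M_{2,B}+\delta^{2}\,\frac{n_A n_B}{n_{AB}},
\]

where \(M_2\) denotes the sum of squared deviations from the mean (so the sample variance is \(M_2/(n-1)\)). Each reduction step exchanges a constant amount of data regardless of how many observations each partition holds, which is precisely the property that large contingency tables lack.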
This report summarizes activities undertaken during FY08-FY10 for the LDRD 'Peridynamics as a Rigorous Coarse-Graining of Atomistics for Multiscale Materials Design'. The goal of our project was to develop a coarse-graining of finite-temperature molecular dynamics (MD) that successfully transitions from statistical mechanics to continuum mechanics. Our coarse-graining overcomes the intrinsic limitation of coupling atomistics with classical continuum mechanics via the FEM (finite element method), SPH (smoothed particle hydrodynamics), or MPM (material point method); namely, that classical continuum mechanics assumes a local force interaction that is incompatible with the nonlocal force model of atomistic methods. FEM, SPH, and MPM therefore inherit this limitation. This seemingly innocuous dichotomy has far-reaching consequences; for example, classical continuum mechanics cannot resolve the short-wavelength behavior associated with atomistics. Other consequences include spurious forces, invalid phonon dispersion relationships, and irreconcilable treatments of temperature. We propose a statistically based coarse-graining of atomistics via peridynamics and so develop a first-of-a-kind mesoscopic capability to enable consistent, thermodynamically sound, atomistic-to-continuum (AtC) multiscale material simulation. Peridynamics (PD) is a microcontinuum theory that assumes nonlocal forces for describing long-range material interaction. Force interactions occurring at finite distances are naturally accounted for in PD. Moreover, PD's nonlocal force model is entirely consistent with those used by atomistic methods, in stark contrast to classical continuum mechanics. Hence, PD can be employed for mesoscopic phenomena that are beyond the reach of both classical continuum mechanics and atomistic simulations such as molecular dynamics and density functional theory (DFT), the latter two being handicapped by the onerous length and time scales associated with simulating mesoscopic materials. Simulating such mesoscopic materials is likely to require, and would greatly benefit from, multiscale simulations coupling DFT, MD, PD, and explicit transient-dynamics finite element methods (e.g., Presto). The work described here fills the gap needed to enable such multiscale materials simulations.
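For reference, the bond-based peridynamic equation of motion (the standard form due to Silling, quoted here for context rather than reproduced from this report) replaces the local divergence of the stress tensor with an integral over a finite horizon \(\mathcal{H}_{\mathbf{x}}\):

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)=\int_{\mathcal{H}_{\mathbf{x}}}\mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'}+\mathbf{b}(\mathbf{x},t),
\]

where \(\mathbf{f}\) is a pairwise force density and \(\mathbf{b}\) a body force density. Because interactions act over finite separations \(|\mathbf{x}'-\mathbf{x}|\), the force model has the same nonlocal character as interatomic potentials, which is what makes PD a natural target for the coarse-graining described above.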
We present the results of a three-year LDRD project that focused on understanding the impact of defects on the electrical, optical, and thermal properties of GaN-based nanowires (NWs). We describe the development and application of a host of experimental techniques to quantify and understand the physics of defects and thermal transport in GaN NWs. We also present the development of analytical models and computational studies of thermal conductivity in GaN NWs. Finally, we present an atomistic model for GaN NW electrical breakdown supported by experimental evidence. GaN-based nanowires are attractive for applications requiring compact, high-current-density devices such as ultraviolet laser arrays. Understanding GaN nanowire failure at high current density is crucial to developing nanowire (NW) devices. Nanowire device failure is likely more complex than in thin films due to the prominence of surface effects and enhanced interaction among point defects. Understanding the impact of surfaces and point defects on nanowire thermal and electrical transport is the first step toward rational control and mitigation of device failure mechanisms. However, investigating defects in GaN NWs is extremely challenging because conventional defect spectroscopy techniques are unsuitable for wide-bandgap nanostructures. To understand NW breakdown, the project investigated the influence on NW properties of pre-existing defects and of defects that emerge during high-current stress. Acute sensitivity of NW thermal conductivity to point-defect density is expected due to the lack of threading dislocation (TD) gettering sites, and enhanced phonon-surface scattering further inhibits thermal transport. Excess defect creation during Joule heating could further degrade thermal conductivity, producing a vicious cycle culminating in catastrophic breakdown. To investigate these issues, a unique combination of electron microscopy, scanning luminescence, and photoconductivity implemented at the nanoscale was used in concert with sophisticated molecular-dynamics calculations of surface- and defect-mediated NW thermal transport. The project sought to elucidate long-standing materials science questions for GaN while addressing issues critical to realizing reliable GaN NW devices.
This report examines the current policy, legal, and regulatory framework pertaining to used nuclear fuel and high-level waste management in the United States. The goal is to identify potential changes that, if made, could add flexibility and possibly improve the chances of successfully implementing the technical aspects of a nuclear waste policy. Experience suggests that the regulatory framework should be established prior to initiating future repository development. Concerning the specifics of that framework, 'reasonable expectation' as the standard of proof was successfully implemented and could be retained in the future; however, the current classification system for radioactive waste, including hazardous constituents, warrants reexamination. Whether or not multiple sites are considered simultaneously in the future, inclusion of mechanisms such as the deliberate use of performance assessment to manage site characterization would be wise. Given experience gained here and abroad, diversity of geologic media is not a particularly necessary criterion in site-selection guidelines for multiple sites. Stepwise development of the repository program that includes flexibility also warrants serious consideration. Furthermore, integration of the waste management system, from storage through transportation to disposition, should be examined and would be facilitated by integration of the legal and regulatory framework. Finally, in order to enhance the acceptability of future repository development, the national policy should be cognizant of those policy and technical attributes that enhance initial acceptance, and of those that maintain and broaden credibility.
This report summarizes a 3-year LDRD program at Sandia National Laboratories exploring mutual injection locking of composite-cavity lasers for enhanced modulation responses. The program focused on developing a fundamental understanding of the frequency enhancement previously demonstrated for optically injection locked lasers. This was then applied to the development of a theoretical description of strongly coupled laser microsystems. This understanding was validated experimentally with a novel 'photonic lab bench on a chip'.
This report contains documentation from an interoperability study conducted under the late-start LDRD 149630, 'Exploration of Cloud Computing'. A small late-start LDRD from the previous year resulted in a study (Raincoat) on using Virtual Private Networks (VPNs) to enhance security in a hybrid cloud environment. Raincoat initially explored the use of OpenVPN over IPv4 and demonstrated that it is possible to secure the communication channel between two small 'test' clouds (a few nodes each) at New Mexico Tech and Sandia. We extended the Raincoat study to add IPsec support via Vyatta routers, to interface with a public cloud (Amazon Elastic Compute Cloud (EC2)), and to be significantly more scalable than the previous iteration. The study contributed to our understanding of interoperability in a hybrid cloud.
This LDRD Senior's Council project focused on the development, implementation, and evaluation of reduced order models (ROMs) for application in the thermal analysis of complex engineering problems. Two basic approaches to developing a ROM for combined thermal conduction and enclosure radiation problems are considered. As a prerequisite to a ROM, a fully coupled solution method for conduction/radiation models is required; a parallel implementation is explored for this class of problems. High-fidelity models of large, complex systems are now used routinely to verify design and performance. However, there are applications where the high-fidelity model is too large to be used repetitively in a design mode. One such application is the design of a control system that oversees the functioning of the complex, high-fidelity model. Examples include control systems for manufacturing processes such as brazing and annealing furnaces, as well as control systems for the thermal management of optical systems. A reduced order model seeks to reduce the number of degrees of freedom needed to represent the overall behavior of the large system without a significant loss in accuracy. The reduction in the number of degrees of freedom leads to immediate gains in computational efficiency and allows many design parameters and perturbations to be quickly and effectively evaluated. Reduced order models are routinely used in solid mechanics, where techniques such as modal analysis have reached a high state of refinement. Similar techniques have recently been applied to standard thermal conduction problems, though the general use of ROMs for heat transfer is not yet widespread. One major difficulty with the development of ROMs for general thermal analysis is the need to include the very nonlinear effects of enclosure radiation in many applications; many ROM methods have considered only linear or mildly nonlinear problems. In the present study a reduced order model is considered for the combined problem of thermal conduction and enclosure radiation. The main objective is to develop a procedure that can be implemented in an existing thermal analysis code. The main analysis objective is to allow thermal controller software to be used in the design of a control system for a large optical system that resides within a complex, radiation-dominated enclosure. In the remainder of this section a brief outline of ROM methods is provided. The following chapter describes the fully coupled conduction/radiation method that is required prior to considering a ROM approach. Considerable effort was expended to implement and test the combined solution method; the ROM project ended shortly after the completion of this milestone, and thus the ROM results are incomplete. The report concludes with some observations and recommendations.
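As a schematic of the projection-based route to such a ROM (one common approach, sketched with notation introduced here; the procedure pursued in the report may differ), the semi-discrete conduction/radiation system is projected onto a reduced basis \(\Phi=[\phi_1,\dots,\phi_m]\) built, for example, from snapshots of the full model:

\[
\mathbf{C}\,\dot{\mathbf{T}}+\mathbf{K}\,\mathbf{T}+\mathbf{r}(\mathbf{T})=\mathbf{q}(t),\qquad
\mathbf{T}(t)\approx\Phi\,\mathbf{a}(t),\qquad
\Phi^{\top}\mathbf{C}\Phi\,\dot{\mathbf{a}}+\Phi^{\top}\mathbf{K}\Phi\,\mathbf{a}+\Phi^{\top}\mathbf{r}(\Phi\mathbf{a})=\Phi^{\top}\mathbf{q}(t),
\]

where \(\mathbf{C}\) and \(\mathbf{K}\) are the capacitance and conductance matrices, \(\mathbf{r}(\mathbf{T})\) collects the strongly nonlinear enclosure-radiation exchange terms (which scale with the fourth power of temperature), \(\mathbf{q}\) is the load vector, and \(m\) is much smaller than the number of nodes. The difficulty noted above is visible here: the reduced operators \(\Phi^{\top}\mathbf{C}\Phi\) and \(\Phi^{\top}\mathbf{K}\Phi\) can be precomputed, but \(\Phi^{\top}\mathbf{r}(\Phi\mathbf{a})\) still requires full-order evaluation unless the radiation term is itself approximated.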