I report the progress to date of my work on scaling the CPAPR algorithm and the necessary supporting code to enable processing and benchmarking of large (gigabyte to 100 gigabyte) data sets. Where possible, I also report background information of possible relevance to future modifications of the code. The results include: minor repairs and additions to the TTB library for portability; algorithmic improvements relevant to both serial and multithreaded implementations; algorithmic improvements that take advantage of multithreading hardware; and support library additions (binary IO routines) needed for efficient and reproducible benchmarking of the algorithms. Because no large-scale data sets are available for this optimization work, the scalability of data synthesis algorithms is addressed as well. A sketch of the kind of data synthesis and binary IO involved follows.
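To make the benchmarking setup concrete, the following is a minimal Python sketch of synthetic sparse count-tensor generation and simple binary output. The coordinate (COO) layout, file header, and generator parameters are illustrative assumptions only and do not reflect the TTB implementation or its file formats.

```python
# Minimal sketch of synthetic sparse count-tensor generation for benchmarking.
# The COO layout, header format, and parameters are assumptions for illustration;
# duplicate coordinates are not merged in this sketch.
import numpy as np

def synthesize_sparse_count_tensor(shape, nnz, rate=3.0, seed=0):
    """Draw nnz random coordinates and Poisson-distributed counts."""
    rng = np.random.default_rng(seed)
    coords = np.column_stack([rng.integers(0, d, size=nnz) for d in shape])
    values = rng.poisson(lam=rate, size=nnz).astype(np.int64)
    keep = values > 0                      # drop zero draws to keep the tensor sparse
    return coords[keep], values[keep]

def write_binary(path, shape, coords, values):
    """Write a simple binary file: header (ndims, shape, nnz), then coords and values."""
    with open(path, "wb") as f:
        np.array([len(shape), *shape, len(values)], dtype=np.int64).tofile(f)
        coords.astype(np.int64).tofile(f)
        values.tofile(f)

coords, values = synthesize_sparse_count_tensor((1000, 1000, 500), nnz=1_000_000)
write_binary("synthetic_tensor.bin", (1000, 1000, 500), coords, values)
```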
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
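As an illustration of the sampling-based workflow underlying such a study, the sketch below samples uncertain inputs, evaluates a stand-in response function, and rank-correlates inputs with the output. The parameter names, ranges, and toy response are assumptions; the actual analysis coupled DAKOTA to VIPRE-W and BOA rather than to this placeholder.

```python
# Illustrative sketch of a sampling-based sensitivity study: sample uncertain
# inputs, run the model, rank-correlate inputs with outputs. The parameter names,
# ranges, and stand-in model below are assumptions, not the VIPRE-W/BOA coupling.
import numpy as np
from scipy.stats import qmc, spearmanr

params = {"inlet_temp": (550.0, 600.0),        # hypothetical ranges
          "boron_conc": (800.0, 1600.0),
          "crud_thermal_cond": (0.5, 1.5)}

def toy_cips_indicator(x):
    # Stand-in for a thermal-hydraulics/crud evaluation returning a CIPS indicator.
    t, b, k = x
    return 0.02 * (t - 550.0) + 0.001 * b - 1.5 * k

sampler = qmc.LatinHypercube(d=len(params), seed=1)
unit = sampler.random(n=200)
lows = np.array([lo for lo, _ in params.values()])
highs = np.array([hi for _, hi in params.values()])
samples = qmc.scale(unit, lows, highs)
outputs = np.array([toy_cips_indicator(x) for x in samples])

for name, col in zip(params, samples.T):
    rho, _ = spearmanr(col, outputs)
    print(f"{name}: Spearman rank correlation = {rho:+.2f}")
```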
This paper introduces the concept of systems resilience as a new framework for thinking about the future of nonproliferation. Resilience refers to the ability of a system to maintain its vital functions in the face of continuous and unpredictable change. The nonproliferation regime can be viewed as a complex system, and key themes from the literature on systems resilience can be applied to the nonproliferation system. Most existing nonproliferation strategies are aimed at stability rather than resilience, and the current nonproliferation system may be over-constrained by the cumulative evolution of strategies, increasing its vulnerability to collapse. The resilience of the nonproliferation system can be enhanced by diversifying nonproliferation strategies to include general international capabilities to respond to proliferation and by focusing more attention on reducing the motivation to acquire nuclear weapons in the first place. Ideas for future research include understanding unintended consequences and feedbacks among nonproliferation strategies, developing methodologies for measuring the resilience of the nonproliferation system, and accounting for interactions of the nonproliferation system with other systems on larger and smaller scales.
Graph algorithms are becoming increasingly important for solving many problems in scientific computing, data mining and other domains. As these problems grow in scale, parallel computing resources are required to meet their computational and memory requirements. Unfortunately, the algorithms, software, and hardware that have worked well for developing mainstream parallel scientific applications are not necessarily effective for large-scale graph problems. In this paper we present the inter-relationships between graph problems, software, and parallel hardware in the current state of the art and discuss how those issues present inherent challenges in solving large-scale graph problems. The range of these challenges suggests a research agenda for the development of scalable high-performance software for graph problems.
A coordination chemistry analysis of oil-calcite adhesion allows waterflood chemistry controls over enhanced oil recovery from limestones to be understood. The model relies on temperature-dependent surface complexation models of calcite and oil. The primary electrostatic bridges holding oil to calcite are calculated to be [-COO-][>CaOH2+], [-COO-][>COOCa+], [>CaSO4-][-COOCa+] and [-COOCa+][>COO-] (“>” denotes calcite surface groups; “-” denotes polar oil surface groups; Mg2+ can substitute for Ca2+). The [-COO-][>CaOH2+] bridge between oil carboxylate and protonated calcite calcium sites is the most sensitive to changes in waterflood chemistry. Model calculations predict that increased levels of Ca2+, Mg2+, and SO42-, alone or in combination, will increase oil recovery from limestones by decreasing the number of [-COO-][>CaOH2+] bridges. Divalent cations decrease the local interfacial potential by decreasing the net negative charge on oil carboxylate groups; SO42- coordinates to protonated calcite calcium sites to decrease charge and electrostatic attraction. Increases in ionic strength should increase adhesion by increasing the net charge on each surface, though the effect will be smaller on calcite. The model presented here requires no fitting parameters yet accurately reproduces observed oil mobilization trends, suggesting that the model is a potentially valuable tool for designing the chemistries of waterfloods employed in limestones.
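For context, surface complexation models of this kind rest on mass-action relations with an electrostatic (Boltzmann) correction for the surface potential; the generic form is sketched below for the calcite protonation reaction. The specific reactions, site densities, and temperature-dependent constants are those of the models cited in the report, not the illustrative expression shown here.

```latex
% Generic surface complexation mass-action relation (illustrative form only):
% protonation of a calcite calcium site, with a Boltzmann factor for the
% surface potential \Psi_0; \Delta z is the change in surface charge (+1 here).
\begin{equation*}
  \mathrm{{>}CaOH + H^+ \;\rightleftharpoons\; {>}CaOH_2^+},
  \qquad
  K_{\mathrm{int}}(T)
  = \frac{[\mathrm{{>}CaOH_2^+}]}{[\mathrm{{>}CaOH}]\,a_{\mathrm{H^+}}}
    \exp\!\left(\frac{\Delta z\,F\,\Psi_0}{R\,T}\right).
\end{equation*}
```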
The amount of publicly available source code on the Internet makes it attractive as a potential message carrier for steganographic applications. Unfortunately, it is often overlooked since embedding information in an undetectable way is challenging. We investigate term rewriting as a method for embedding messages into programs via transformations on source code. We elaborate on several possible transformation strategies and discuss how they might be applied in a steganographic setting. We continue with a discussion on (a) the implications and trade-offs of preserving semantic properties, (b) the relationship between messages and transformations, and (c) how to incorporate existing natural language processing techniques. The goal of this work is to elicit constructive feedback and present ideas that stimulate future work.
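To make the embedding idea concrete, the following is a minimal Python sketch in which each rewrite site offers two semantically equivalent source forms and the chosen form encodes one message bit. The specific transformation pairs are illustrative assumptions, not the transformation strategies evaluated in the paper.

```python
# Minimal sketch of steganographic embedding via semantics-preserving rewrites:
# each rewrite site offers two equivalent source forms, and the chosen form
# encodes one message bit. The transformation pairs are illustrative only.
import re

# (pattern, form_for_bit_0, form_for_bit_1) -- both forms are semantically equivalent
REWRITES = [
    (re.compile(r"(\w+) \+= 1\b"), r"\1 += 1", r"\1 = \1 + 1"),
    (re.compile(r"(\w+) \*= 2\b"), r"\1 *= 2", r"\1 = \1 * 2"),
]

def embed(source: str, bits: list[int]) -> str:
    """Rewrite successive matching sites so their form encodes successive bits."""
    it = iter(bits)
    out = source
    for pattern, form0, form1 in REWRITES:
        def repl(m, p=(pattern, form0, form1)):
            bit = next(it, None)
            if bit is None:                 # message exhausted: leave site unchanged
                return m.group(0)
            return m.expand(p[1] if bit == 0 else p[2])
        out = pattern.sub(repl, out)
    return out

code = "total += 1\ncount += 1\nscale *= 2\n"
print(embed(code, bits=[1, 0, 1]))
# -> "total = total + 1\ncount += 1\nscale = scale * 2\n"
```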
Membrane projection lithography is extended from a single-layer fabrication technique to a multilayer process, adding polymeric backfill and planarization after each layer is completed. Unaligned contact lithography is used as a rapid prototyping tool to aid in process development, patterning resist membranes in seconds without requiring long e-beam write times. The fabricated multilayer structures show good resistance to solvent attack from subsequent process steps and demonstrate in-plane and out-of-plane multilayer metallic inclusions in a dielectric host, a critical step on the path to developing bulk-like metamaterials at optical frequencies.
A study was conducted to demonstrate that the Raman response can be exploited in several different ways to deduce temperature. Each approach was derived from a different physical mechanism and offered particular advantages and disadvantages. In Raman thermometry, temperature was deduced through analysis of the inelastic energy transfer between the incident laser source and the quantized lattice vibrations. The peak position of the Raman signal was set by the energy of the zone-center optical phonons probed during the Raman experiment. The linewidth of a Raman spectrum evolved as a result of the finite lifetime of the zone-center phonons being investigated; this broadening is a consequence of the Heisenberg uncertainty principle, which stipulates that the energy of a phonon can be determined only to within a finite precision when the mode being investigated exists for only a finite time.
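The standard textbook relations behind these mechanisms are summarized below for reference, where $\omega_0$ is the zone-center phonon frequency, $\tau(T)$ its lifetime, $\Gamma$ the Raman linewidth, and $I_{AS}/I_S$ the anti-Stokes to Stokes intensity ratio; the calibration details used in the study are not reproduced here.

```latex
% Textbook relations underlying Raman thermometry (illustrative summary):
\begin{align*}
  \frac{I_{AS}}{I_S} &\propto \exp\!\left(-\frac{\hbar\omega_0}{k_B T}\right)
    && \text{(intensity ratio set by thermal phonon occupation)} \\
  \Gamma(T) &\approx \frac{\hbar}{\tau(T)}
    && \text{(linewidth set by the finite phonon lifetime, via } \Delta E\,\Delta t \gtrsim \hbar\text{)}
\end{align*}
```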
The nexus between thermoelectric power production and water use is not uniform across the U.S., but rather differs according to regional physiography, demography, power plant fleet composition, and the transmission network. That is, in some regions water demand for thermoelectric production is relatively small, while in other regions it represents the dominant use. The latter is the case for the Great Lakes region, which has important implications for the water resources and aquatic ecology of the Great Lakes watershed. This is the situation today, but what about the future? Projected demographic trends, shifting lifestyles, and economic growth, coupled with the threat of global climate change and mounting pressure for greater U.S. energy security, could have profound effects on the region's energy future. Planning for such an uncertain future is further complicated by the fact that energy and environmental planning and regulatory decision-making are largely bifurcated in the region, with environmental and water resource concerns generally taken into account only after new energy facilities and technologies have been proposed, or practices are already in place. Given these challenges, the objective of this effort is to develop Great Lakes-specific methods and tools to integrate energy and water resource planning and thereby support the dual goals of smarter energy planning and development and protection of Great Lakes water resources. Guiding policies for this planning are the Great Lakes and St. Lawrence River Basin Water Resources Compact and the Great Lakes Water Quality Agreement. The desired outcome of integrated energy-water-aquatic resource planning is a more sustainable regional energy mix for the Great Lakes basin ecosystem.
In 2011 the Department of Energy's Office of Electricity embarked on a comprehensive program to assist our Nation's three primary electric interconnections with long-term transmission planning. Given the growing concern over water resources in the western U.S., the Western Electricity Coordinating Council (WECC) requested assistance with integrating water resource considerations into their broader electric transmission planning. The result is a project with three overarching objectives: (1) Develop an integrated Energy-Water Decision Support System (DSS) that will enable planners in the Western Interconnection to analyze the potential implications of water stress for transmission and resource planning. (2) Pursue the formulation and development of the Energy-Water DSS through a strongly collaborative process between the Western Electricity Coordinating Council (WECC), the Western Governors Association (WGA), the Western States Water Council (WSWC), and their associated stakeholder teams. (3) Exercise the Energy-Water DSS to investigate water stress implications of the transmission planning scenarios put forward by WECC, WGA, and WSWC. The foundation for the Energy-Water DSS is Sandia National Laboratories' Energy-Power-Water Simulation (EPWSim) model (Tidwell et al. 2009). The modeling framework targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state, and local levels. This framework provides an interactive environment to explore trade-offs and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., state, county, watershed, interconnection). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs, and maps. The framework currently supports modules for calculating water withdrawal and consumption for current and planned electric power generation; projected water demand from competing use sectors; and surface and groundwater availability. WECC's long-range planning is organized according to two target planning horizons, 10-year and 20-year. This study supports WECC in the 10-year planning endeavor. In this case, the water implications associated with four of WECC's alternative future study cases (described below) are calculated and reported. In future phases of planning we will work with WECC to craft study cases that aim to reduce the thermoelectric footprint of the interconnection and/or limit production in the most water-stressed regions of the West.
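As a concrete illustration of the water-use module's role, the sketch below computes basin-level withdrawal and consumption as generation-weighted sums of technology-specific water-use rates. The plant records and gallons-per-MWh factors are placeholder assumptions, not WECC or EPWSim data.

```python
# Illustrative sketch of a thermoelectric water-use calculation: withdrawal and
# consumption as generation-weighted sums of technology-specific rates.
# The plant list and gal/MWh factors below are placeholders, not WECC/EPWSim data.
WATER_RATES = {  # (withdrawal, consumption) in gallons per MWh, assumed values
    ("coal", "once-through"):    (27000.0, 250.0),
    ("coal", "recirculating"):   (600.0,   500.0),
    ("gas_cc", "recirculating"): (250.0,   200.0),
    ("gas_cc", "dry"):           (2.0,     2.0),
}

def basin_water_use(plants):
    """Sum withdrawal and consumption (gal) over a list of plant records."""
    withdrawal = consumption = 0.0
    for fuel, cooling, generation_mwh in plants:
        w_rate, c_rate = WATER_RATES[(fuel, cooling)]
        withdrawal += generation_mwh * w_rate
        consumption += generation_mwh * c_rate
    return withdrawal, consumption

plants = [("coal", "recirculating", 3.5e6), ("gas_cc", "dry", 1.2e6)]
w, c = basin_water_use(plants)
print(f"withdrawal: {w:.3e} gal, consumption: {c:.3e} gal")
```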
This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.
Sandia National Laboratories (SNL) is the world leader in the development of the detailed science underpinning the application of a probabilistic risk assessment methodology, referred to in this report as performance assessment (PA), for (1) understanding and forecasting the long-term behavior of a radioactive waste disposal system, (2) estimating the ability of the disposal system and its various components to isolate the waste, (3) developing regulations, (4) implementing programs to estimate the safety that the system can afford to individuals and to the environment, and (5) demonstrating compliance with the attendant regulatory requirements. This report documents the evolution of the SNL PA methodology from its inception in the mid-1970s, summarizing major SNL PA applications including: the Subseabed Disposal Project PAs for high-level radioactive waste; the Waste Isolation Pilot Plant PAs for disposal of defense transuranic waste; the Yucca Mountain Project total system PAs for deep geologic disposal of spent nuclear fuel and high-level radioactive waste; PAs for the Greater Confinement Borehole Disposal boreholes at the Nevada National Security Site; and PA evaluations for disposal of high-level wastes and Department of Energy spent nuclear fuels stored at Idaho National Laboratory. In addition, the report summarizes smaller PA programs for long-term cover systems implemented for the Monticello, Utah, mill-tailings repository; a PA for the SNL Mixed Waste Landfill in support of environmental restoration; PA support for radioactive waste management efforts in Egypt, Iraq, and Taiwan; and, most recently, PAs for analysis of alternative high-level radioactive waste disposal strategies, including deep borehole disposal and geologic repositories in shale and granite. Finally, this report summarizes the extension of the PA methodology for radioactive waste disposal toward development of an enhanced PA system for carbon sequestration and storage systems. These efforts have produced a generic PA methodology for the evaluation of waste management systems that has gained wide acceptance within the international community. This report documents how this methodology has been used as an effective management tool to evaluate different disposal designs and sites; inform development of regulatory requirements; identify, prioritize, and guide research aimed at reducing uncertainties for objective estimations of risk; and support safety assessments.
Salinas provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static, and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to the Salinas User's Notes. Many of the constructs in Salinas are pulled directly from published material; where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation, and we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer's notes manual, the user's notes, and, of course, material in the open literature.
Vertical wafer stacking will enable a wide variety of new system architectures by enabling the integration of dissimilar technologies in one small form factor package. With this LDRD, we explored the combination of processes and integration techniques required to achieve stacking of three or more layers. The specific topics that we investigated include design and layout of a reticle set for use as a process development vehicle, through silicon via formation, bonding media, wafer thinning, dielectric deposition for via isolation on the wafer backside, and pad formation.
This report summarizes the accomplishments of a Laboratory-Directed Research and Development (LDRD) project focused on developing and applying new x-ray spectroscopies to understand and improve electric charge transfer in electrochemical devices. Our approach studies the device materials as they function at elevated temperature and in the presence of sufficient gas to generate meaningful currents through the device. We developed hardware and methods to allow x-ray photoelectron spectroscopy to be applied under these conditions. We then showed that the approach can measure the local electric potentials of the materials, identify the chemical nature of the electrochemical intermediate reaction species, and determine the chemical state of the active materials. When performed simultaneously with traditional impedance-based analysis, the approach provides an unprecedented characterization of an operating electrochemical system.
Two versions of a current driver for single-turn, single-use 1-cm diameter magnetic field coils have been built and tested at Sandia National Laboratories for use with cluster fusion experiments at the University of Texas at Austin. These coils are used to provide axial magnetic fields to slow radial loss of electrons from laser-produced deuterium plasmas. Typical peak field strength achievable is 50 T for the two-capacitor system and 200 T for the ten-capacitor system. Current rise time for both systems is about 1.7 μs, with peak currents of 500 kA and 2 MA, respectively. Because the coil must be brought to the laser, the driver needs to be portable and drive currents in vacuum. The drivers are complete, but laser-plasma experiments are still in progress. Therefore, in this report, we focus on system design, initial tests, and performance characteristics of the two-capacitor and ten-capacitor systems. The questions of whether a 200-T magnetic field can retard the breakup of a cluster-fusion plasma, and whether this field can enhance neutron production, have not yet been answered. However, tools have been developed that will enable producing the magnetic fields needed to answer these questions. These are a two-capacitor, 400-kA system that was delivered to the University of Texas in 2010, and a 2-MA, ten-capacitor system delivered this year. The first system allowed initial testing, and the second system will be able to produce the 200-T magnetic fields needed for cluster fusion experiments with a petawatt laser. The prototype 400-kA magnetic field driver system was designed and built to test the design concept for the system, and to verify that a portable driver system could be built that delivers current to a magnetic field coil in vacuum. This system was built by copying a design from a fixed-facility, high-field machine at LANL, but made to be portable and to use a Z-machine-like vacuum insulator and vacuum transmission line. This system was sent to the University of Texas at Austin, where magnetic fields up to 50 T have been produced in vacuum. Peak charge voltage and current for this system have been 100 kV and 490 kA. It was used this past year to verify injection of deuterium and surrogate clusters into these small, single-turn coils without shorting the coil. Initial tests confirmed the need to insulate the inner surface of the coil, which requires that the clusters be injected through small holes in an insulator. Tests with a low-power laser confirmed that it is possible to inject clusters into the magnetic field coils through these holes without destroying the clusters. The university team also learned the necessity of maintaining good vacuum to avoid insulator, transmission line, and coil shorting. A 200-T, 2-MA system was also constructed using the experience from the first design to make the pulsed-power system more robust. This machine is a copy of the prototype design, but with ten 100-kV capacitors versus the two used in the prototype. It has additional inductance in the switch/capacitor unit to avoid breakdown seen in the prototype design. It also has slightly more inductance at the cable connection to the vacuum chamber. With this design we have been able to demonstrate 1 MA of current into a 1-cm diameter coil with the vacuum chamber at air pressure. Circuit code simulations, including the additional inductance of the new design, agree well with the measured current at a charge voltage of 40 kV with a short-circuit load, and at 50 kV with a coil.
The code also predicts that with a charge voltage of 97 kV we will be able to get 2 MA into a 1-cm diameter coil, which will be sufficient for 200-T fields. Smaller-diameter or multiple-turn coils will be able to achieve even higher fields, or to reach 200-T fields at lower charge voltage. Work is now proceeding at the university under separate funding to verify operation at the 2-MA level, and to address issues of debris mitigation, measurement of the magnetic field, and operation in vacuum. We anticipate operation at full current with single-turn magnetic field coils this fall, with 200-T experiments on the Texas Petawatt laser in the spring of 2012.
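As a rough consistency check on the quoted fields, the ideal on-axis field at the center of a single-turn loop of radius $a$ carrying current $I$ is $B = \mu_0 I / (2a)$; this estimate neglects finite coil length, current diffusion, and coil expansion, and so somewhat overestimates the achievable field.

```latex
% Ideal single-turn loop estimate at 2 MA in a 1-cm diameter (a = 0.5 cm) coil:
\begin{equation*}
  B = \frac{\mu_0 I}{2a}
    = \frac{(4\pi\times10^{-7}\ \mathrm{T\,m/A})\,(2\times10^{6}\ \mathrm{A})}
           {2\,(5\times10^{-3}\ \mathrm{m})}
    \approx 250\ \mathrm{T}.
\end{equation*}
```

This is consistent with the roughly 200 T expected at 2 MA once real-coil effects are included, and the same expression gives about 60 T at 500 kA versus the 50 T observed.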
Peridynamics is a nonlocal extension of classical continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document provides a brief overview of the peridynamic model of a continuum, then discusses how the peridynamic model is discretized within LAMMPS. An example problem is also included.
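For reference, the bond-based peridynamic equation of motion and its meshfree discretization take the standard forms shown below, where $\mathcal{H}_{\mathbf{x}}$ is the family of points within the horizon of $\mathbf{x}$, $\mathbf{f}$ is the pairwise bond force density, and $V_j$ is the volume associated with node $j$:

```latex
% Bond-based peridynamic equation of motion and its meshfree discretization
% (standard forms from the peridynamics literature):
\begin{align*}
  \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
    &= \int_{\mathcal{H}_{\mathbf{x}}}
       \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
                      \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'}
       + \mathbf{b}(\mathbf{x},t), \\
  \rho_i\,\ddot{\mathbf{u}}_i^{\,n}
    &= \sum_{j \in \mathcal{H}_i}
       \mathbf{f}\big(\mathbf{u}_j^{\,n}-\mathbf{u}_i^{\,n},\,
                      \mathbf{x}_j-\mathbf{x}_i\big)\, V_j + \mathbf{b}_i^{\,n}.
\end{align*}
```

The per-node sum over neighbor volumes gives the discrete model the same neighbor-list structure as a molecular dynamics force loop, which is why it maps naturally onto LAMMPS.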
LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
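As a conceptual illustration of the coupling pattern LIME enables, the Python sketch below wraps two stand-in physics solvers behind simple interfaces and iterates them to self-consistency. This is an illustration of the idea only; LIME itself is written in C++, its interfaces differ, and the stand-in solvers and coupling scheme are placeholders.

```python
# Conceptual sketch of multiphysics coupling: each existing physics code is
# wrapped behind a small interface, and a driver iterates the coupled system
# to self-consistency. Python illustration only; not the LIME C++ API.
class ThermalCode:
    """Wrapper around an existing heat-conduction solver (stand-in)."""
    def solve(self, power: float) -> float:
        return 300.0 + 0.05 * power          # temperature as a function of power

class NeutronicsCode:
    """Wrapper around an existing neutronics solver (stand-in)."""
    def solve(self, temperature: float) -> float:
        return 1000.0 / (1.0 + 0.002 * (temperature - 300.0))   # power vs. temperature feedback

def coupled_solve(thermal, neutronics, tol=1e-8, max_iter=50):
    """Fixed-point (Picard) iteration between the two wrapped codes."""
    power = 1000.0                            # initial guess
    for _ in range(max_iter):
        temperature = thermal.solve(power)
        new_power = neutronics.solve(temperature)
        if abs(new_power - power) < tol:
            return temperature, new_power
        power = new_power
    raise RuntimeError("coupled iteration did not converge")

print(coupled_solve(ThermalCode(), NeutronicsCode()))
```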
A dynamic assessment model has been developed for evaluating the potential algal biomass and extracted biocrude productivity and costs, using nutrient and water resources available from waste streams in four regions of Canada (western British Columbia, Alberta oil fields, southern Ontario, and Nova Scotia). The purpose of this model is to help identify optimal locations in Canada for algae cultivation and biofuel production. The model uses spatially referenced data across the four regions for nitrogen and phosphorus loads in municipal wastewaters, and for CO2 in exhaust streams from a variety of large industrial sources. Other data inputs include land cover and solar insolation. Model users can develop estimates of resource potential by manipulating model assumptions in a graphic user interface, and updated results are viewed in real time. Resource potential by location can be viewed in terms of biomass production potential, potential CO2 fixed, biocrude production potential, and area required. The cost of producing algal biomass can be estimated using an approximation of the distance to move CO2 and water to the desired land parcel and an estimation of capital and operating costs for a theoretical open pond facility. Preliminary results suggest that in most cases the CO2 resource is plentiful compared to other necessary nutrients (especially nitrogen), and that siting and prospects for successful large-scale algae cultivation efforts in Canada will be driven by the availability of those other nutrients and the efficiency with which they can be used and re-used. Cost curves based on optimal possible siting of an open pond system are shown. The cost of energy for maintaining optimal growth temperatures is not considered in this effort, and additional research in this area, which has not been well studied at these latitudes, will be important in refining the costs of algal biomass production. The model will be used by NRC-IMB Canada to identify promising locations for both demonstration- and pilot-scale algal cultivation projects, including the production potential of using wastewater, and potential land use considerations.
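To illustrate the resource-potential arithmetic for a single land parcel, the sketch below limits biomass by the scarcer of nitrogen and CO2 and converts the result to biocrude potential and pond area. All coefficients (biomass nitrogen content, CO2 demand, areal productivity, biocrude fraction) are assumed rule-of-thumb values, not the calibrated inputs of the model described above.

```python
# Illustrative sketch of per-parcel resource-potential arithmetic. All
# coefficients are assumed rule-of-thumb values, not the model's inputs.
N_FRACTION = 0.07          # kg N per kg dry algal biomass (assumed)
CO2_DEMAND = 1.8           # kg CO2 fixed per kg dry biomass (assumed)
PRODUCTIVITY = 15.0        # g dry biomass / m^2 / day, open pond (assumed)
BIOCRUDE_FRACTION = 0.25   # fraction of dry biomass convertible to biocrude (assumed)

def parcel_potential(n_load_kg_per_day, co2_supply_kg_per_day):
    """Biomass limited by the scarcer of N and CO2; returns daily potentials."""
    biomass_n_limited = n_load_kg_per_day / N_FRACTION
    biomass_co2_limited = co2_supply_kg_per_day / CO2_DEMAND
    biomass = min(biomass_n_limited, biomass_co2_limited)      # kg dry biomass / day
    area_ha = biomass / (PRODUCTIVITY / 1000.0) / 10000.0      # m^2 converted to ha
    return {"biomass_kg_day": biomass,
            "biocrude_kg_day": biomass * BIOCRUDE_FRACTION,
            "co2_fixed_kg_day": biomass * CO2_DEMAND,
            "area_ha": area_ha}

# Example parcel: N is limiting here, echoing the finding that CO2 is plentiful.
print(parcel_potential(n_load_kg_per_day=2000.0, co2_supply_kg_per_day=500000.0))
```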
The issue of radioactive waste management can be characterized as complex, multifaceted, and uncertain. These characteristics tend to lead people to perceive radioactive waste management as a 'risk'. This study was initiated in response to a desire to understand the perceptions of risk that the Korean public holds toward radioactive waste and the relevant policies and policy-making processes. The study further attempts to identify the factors influencing risk perceptions and the relationships between risk perception and social acceptance.