Trough heat collection element deformation and solar intercept impact
An expert panel was assembled to identify gaps in fuels and materials research prior to licensing a sodium-cooled fast reactor (SFR) design. The panel considered both metal and oxide fuels; various cladding, duct, and structural materials; fuel performance codes; fabrication capability and records; and the transient behavior of each fuel type. A methodology was developed to rate phenomena and properties with respect to both their importance to a regulatory body and the maturity of the underlying technology base. The technology base for fuels and cladding was divided into three regimes: information of high maturity under conservative operating conditions, information of low maturity under more aggressive operating conditions, and future design expectations for which only meager data exist.
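To make the rating methodology concrete, a hypothetical sketch is given below; the panel's actual scales, phenomena lists, and weighting scheme are not reproduced here, and the names and 1-3 scales are illustrative assumptions only.

    # Hypothetical PIRT-style rating sketch; the panel's actual scales,
    # phenomena, and regime definitions are not reproduced here.
    REGIMES = ("high-maturity/conservative", "low-maturity/aggressive", "future-design")

    def gap_score(importance, maturity):
        """Rank knowledge gaps: high regulatory importance plus low maturity
        yields the largest gap. Scores assumed on a 1-3 scale."""
        return importance * (4 - maturity)

    phenomena = [
        # (name, regime, importance to regulator 1-3, maturity of data 1-3)
        ("cladding creep rupture", REGIMES[1], 3, 1),
        ("fuel-cladding chemical interaction", REGIMES[0], 3, 3),
        ("transient overpower margin", REGIMES[2], 2, 1),
    ]
    for name, regime, imp, mat in sorted(phenomena, key=lambda p: -gap_score(p[2], p[3])):
        print(f"{name:38s} [{regime}] gap={gap_score(imp, mat)}")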
The future of reprocessing in the United States is strongly driven by plant economics. With increasing safeguards, security, and safety requirements, future plant monitoring systems must demonstrate more efficient operations while improving on the current state of the art. The goal of this work was to design and examine the incorporation of advanced plant monitoring technologies into safeguards systems with attention to the burden on the operator. The technologies examined include microfluidic sampling for more rapid analytical measurements and spectroscopy-based techniques for on-line process monitoring. The Separations and Safeguards Performance Model was used to design the layout and to test the effect of adding these technologies to a reprocessing plant. The results show that both technologies fill key gaps in existing materials accountability, enabling detection of diversion events that might not be detected in a timely manner in existing plants. The plant architecture and the results under diversion scenarios are described. As an offshoot of this work, both the AMUSE and SEPHIS solvent extraction codes were examined for integration into the model to improve the realism of diversion scenarios. The AMUSE integration was the more successful of the two and provided useful results; the SEPHIS integration is still a work in progress and may provide an alternative option.
A secondary containment vessel, made of 316 stainless steel, failed due to severe nitrate salt corrosion. Corrosion in the form of pitting was observed during high-temperature chemical stability experiments. Optical microscopy, scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS) were used to diagnose the cause of the failure. Failure was caused by potassium oxide that crept into the gap between the primary vessel (alumina) and the stainless steel vessel. Molten nitrate solar salt (89% KNO3, 11% NaNO3 by weight) was used during the chemical stability experiments, with an oxygen cover gas, at salt temperatures of 350-700 °C. The nitrate salt was primarily contained in an alumina vessel; however, salt crept into the gap between the alumina and the 316 stainless steel. Corrosion occurred over a period of approximately 2000 hours and ended in full wall penetration of the stainless steel vessel; see Figures 1 and 2 for images of the corrosion damage. The wall thickness was 0.0625 inches, which, based on previous data for direct contact with salt at 677 °C (0.081 inch/year), should have been adequate to avoid corrosion-induced failure. Salt temperatures exceeded 650 °C for approximately 14 days. However, the previous corrosion data were taken with air as the cover gas; high temperature combined with an oxygen cover gas evidently drove corrosion rates much higher. The corrosion took the form of uniform pitting. Based on SEM and EDS data, the pits contained primarily potassium oxide and potassium chromate, reinforcing the link between oxides and severe corrosion. In addition to the pitting, a large blister composed mainly of potassium, chromium, and oxygen formed on the side wall. All data indicated that corrosion initiated internally and moved outward. There was no evidence of intergranular corrosion, nor any indication of fast pathways along grain boundaries. Much of the pitting occurred near welds; however, this was also the hottest region in the chamber, and pitting was observed up to two inches above the weld, indicating independence from weld effects.
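A rough arithmetic check, using only the figures quoted above and ignoring the detailed temperature history, shows how far the observed attack outpaced the published air-cover-gas rate:

    # Back-of-the-envelope check using only the figures quoted above.
    wall = 0.0625                 # vessel wall thickness, inches
    rate_air = 0.081              # published rate in air at 677 C, inch/year
    hours = 2000.0                # approximate exposure before penetration

    expected_hours = (wall / rate_air) * 8766.0   # ~6760 h to penetrate in air
    observed_rate = wall / (hours / 8766.0)       # effective rate, inch/year
    print(expected_hours)                         # ~6760 hours expected
    print(observed_rate)                          # ~0.27 inch/year observed
    print(observed_rate / rate_air)               # ~3.4x acceleration under oxygen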
The goal of this project was to expand upon previously demonstrated single-carbon-nanotube devices by preparing a more practical device based on multiple single-walled carbon nanotubes (SWNTs). As a late-start, proof-of-concept project, the work focused on the fabrication and testing of chromophore-functionalized, aligned-SWNT field-effect transistors (SWNT-FETs), which have not yet been demonstrated. The advantages of aligned-SWNT devices include an increased device cross-section to improve sensitivity to light, the elimination of the increased electrical resistance at nanotube junctions in random-mat devices, and the ability to model device responses. Although the ultimate goal of fabricating and testing chromophore-modified SWNT arrays was not achieved, the work did establish a new carbon nanotube growth capability at Sandia/CA that will benefit future projects. The synthesis of dense arrays of horizontally aligned SWNTs is a developing area of research with significant potential for new discoveries. In particular, the ability to prepare arrays of carbon nanotubes of specific electronic types (metallic or semiconducting) could yield new classes of nanoscale devices.
Physical Unclonable Functions (PUFs), or Physical One-Way Functions (P-OWFs), are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. The unclonability property comes from the accepted hardness of replicating the multitude of characteristics introduced during the manufacturing process. This makes PUFs useful for solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection in offline settings. We first argue that traditional (black-box) PUFs are not useful for protecting software in settings where communication with a vendor's server or a third-party network device is infeasible or impossible. Instead, we argue that intrinsic PUFs are needed to solve the above-mentioned problems because they are intrinsically involved in processing the information that is to be protected. Finally, we describe how sources of randomness in any computing device can be used to create intrinsic-personal PUFs (IP-PUFs) and present experimental results using standard off-the-shelf computers as IP-PUFs.
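As a hedged illustration of the IP-PUF idea (not the paper's actual measurement procedure), the sketch below condenses the timing jitter of a challenge-dependent workload into a device response; a deployable design would need error correction to tolerate measurement noise.

    import time, statistics, hashlib

    def challenge_response(seed, trials=200):
        """Toy IP-PUF: time a fixed, challenge-dependent workload and condense
        the device-specific timing statistics into a response. Illustrative
        only; real IP-PUFs require error correction for measurement noise."""
        samples = []
        for _ in range(trials):
            t0 = time.perf_counter_ns()
            x = seed
            for _ in range(1000):      # workload derived from the challenge
                x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
            samples.append(time.perf_counter_ns() - t0)
        # Quantize coarse statistics so repeated runs on one machine agree.
        bucket = int(statistics.median(samples) // 1000)
        return hashlib.sha256(f"{seed}:{bucket}".encode()).hexdigest()

    print(challenge_response(42))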
In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military societies. We derive the model for the case where the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The fitted model form is chosen for its similarity to published models for HIV and plague, enabling the application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral and predicts the number of people who exhibit symptoms and seek care over a given time period, whose start and end form the limits of integration. The disease-progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation-period model is obtained from the literature, and the parameters of the infection rate are fitted from historical data covering both military and civilian populations. The calibrated infection-rate models display a marked difference between the 1918 Spanish Influenza pandemic on the one hand and the US influenza seasons of 2001-2008 and the progression of H1N1 in Catalunya, Spain, on the other. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection was about the same in all cases; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much steeper. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed. Each outbreak to which the influenza model was fitted yields a separate set of parameter values. We suggest 'consensus' parameter values for military and civilian populations in the form of normal distributions so that they may be used in other applications. Representing the parameter values as distributions, instead of point values, captures the uncertainty and scatter in the parameters, which allows these models to be used further in inverse problems, predictions under uncertainty, and various other studies involving risk.
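One plausible form of the disease-progression integral described above, with notation assumed here rather than taken from the report: if \lambda(\tau) is the time-dependent infection rate (known to a multiplicative constant) and f(s) is the incubation-period density taken from the literature, the expected number of people who present for care between t_1 and t_2 is

    N(t_1, t_2) = \int_{t_1}^{t_2} \int_{0}^{t} \lambda(\tau)\, f(t - \tau)\, d\tau \, dt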
This document summarizes research performed under the SNL LDRD entitled 'Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission.' The main accomplishment was the development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems. The code was developed within the SNL Sierra software system. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, and chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To address these complex problems, the project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled-solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed, together with the supporting research underlying the models. Key accomplishments are: (1) a general capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) a general capability to model multiphase reactive transport of species in heterogeneous porous media; (3) constitutive models, calibrated to laboratory data, for describing real, general geomaterials under multiphase conditions; (4) a general capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) phase-behavior thermodynamics for the CO2-H2O-NaCl system, with a general implementation that enables modeling of other fluid mixtures and adaptive look-up tables that extend the thermodynamic capability to other simulators; (6) a capability for statistical modeling of heterogeneity in geologic materials; and (7) a simulator that utilizes unstructured grids on parallel-processing computers.
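As a hedged indication of what capabilities (1) and (2) entail, a generic balance equation for component i over fluid phases \alpha is shown below; this is a textbook form, not necessarily the code's exact equation set (\phi porosity, S_\alpha saturation, \rho_\alpha density, X_i^\alpha mass fraction, \mathbf{q}_\alpha Darcy flux, D_i dispersion coefficient, R_i reaction source):

    \frac{\partial}{\partial t}\Big(\phi \sum_\alpha S_\alpha \rho_\alpha X_i^\alpha\Big)
      + \nabla \cdot \sum_\alpha \Big(\rho_\alpha X_i^\alpha \mathbf{q}_\alpha
      - \phi S_\alpha \rho_\alpha D_i \nabla X_i^\alpha\Big) = R_i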
The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes; the geometries, and the meshes thereof, are limited only by the patience of the user in meshing and by the computing resources available for the solution. UTDEM applies finite-element method (FEM) techniques on tetrahedral elements with vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and must be treated self-consistently. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) accelerated by the electromagnetic (Lorentz) force on the particle. The motion of these particles results in a current, which in turn is a source for the fields in Maxwell's equations.
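The macroparticle update at the heart of any electromagnetic PIC loop can be sketched as below; this is a minimal non-relativistic Boris-rotation example, and the integrator, units, and data layout actually used by UTDEM PIC are not specified here.

    import numpy as np

    def push(x, v, E, B, q, m, dt):
        """Boris rotation: advance one macroparticle under the Lorentz force.
        Illustrative non-relativistic sketch; production integrators differ."""
        vm = v + 0.5 * (q / m) * E * dt          # first half electric kick
        t = 0.5 * (q / m) * B * dt               # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        vp = vm + np.cross(vm + np.cross(vm, t), s)
        v_new = vp + 0.5 * (q / m) * E * dt      # second half electric kick
        return x + v_new * dt, v_new

    # The moving charge contributes a current density J ~ q * v / cell_volume,
    # which closes the loop as the source term in Maxwell's equations.
    x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
    x, v = push(x, v, E=np.array([0.0, 1.0e3, 0.0]), B=np.array([0.0, 0.0, 0.01]),
                q=-1.602e-19, m=9.109e-31, dt=1.0e-12)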
The CABle ANAlysis (CABANA) portion of the EMPHASIS™ suite is designed specifically for the simulation of cable system-generated electromagnetic pulse (SGEMP). The code can be used to evaluate the response of a specific cable design to a threat, or to compare and minimize the relative responses of different designs. This document provides user-specific information to facilitate the application of the code to cables of interest. CABANA solves the electrical portion of a cable SGEMP simulation: it takes specific results from the deterministic radiation-transport code CEPTRE as sources and computes the resulting electrical response at an arbitrary cable load. The cable geometry itself is also arbitrary and is limited only by the patience of the user in meshing and by the computing resources available for the solution. The CABANA simulation involves solution of the quasi-static Maxwell equations using finite-element method (FEM) techniques.
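Conceptually, the final stage amounts to driving a load with the radiation-induced source current. The toy lumped-element analogue below is offered only to fix ideas (the pulse shape and element values are invented); CABANA itself performs a quasi-static FEM solve over the meshed cable geometry.

    import numpy as np

    # Toy lumped analogue: a radiation-induced drive current I_src(t) (from the
    # transport calculation) charges the conductor capacitance C, which
    # discharges through an arbitrary load resistance R_load. Values invented.
    dt, n = 1e-10, 2000
    t = np.arange(n) * dt
    I_src = 1e-3 * np.exp(-((t - 50e-9) / 15e-9) ** 2)   # assumed pulse shape
    C, R_load = 100e-12, 50.0
    V = np.zeros(n)
    for k in range(n - 1):                               # explicit RC update
        V[k + 1] = V[k] + dt * (I_src[k] - V[k] / R_load) / C
    print(f"peak load voltage ~ {V.max():.3f} V")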
Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited by timesteps on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine-scale atomistic description of a system with a slower, coarse-scale description in order to project the system forward over long times.
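The simplest instance of the 'equation-free' idea is coarse projective integration: run the fine-scale model in short bursts, estimate the slow time derivative of a coarse observable, and leap forward. The sketch below uses a stand-in fine-scale model purely for illustration.

    def fine_burst(c, steps=200, dt=1e-4):
        """Stand-in for an expensive atomistic burst; relaxes fast modes while
        the coarse observable c (e.g., a concentration) drifts slowly."""
        for _ in range(steps):
            c = c + dt * (-0.1 * c)      # placeholder slow dynamics
        return c, steps * dt

    def projective_step(c, leap=50):
        """Coarse projective integration: burst, estimate dc/dt, leap ahead."""
        c1, tb = fine_burst(c)
        c2, _ = fine_burst(c1)
        slope = (c2 - c1) / tb           # coarse time derivative from bursts
        return c2 + slope * (leap * tb)  # projective (extrapolation) leap

    c = 1.0
    for _ in range(20):
        c = projective_step(c)
    print(c)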
Physical Unclonable Functions (PUFs), or Physical One-Way Functions (P-OWFs), are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. The unclonability property is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics, which makes PUFs useful for solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real-world adversaries in offline settings. Our contributions are the following: we provide two real-world adversary models (weak and strong variants) and present definitions of security against these adversaries. We then propose schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme, based on trusted hardware, that is secure against strong adversaries.
The literature is currently sparse on how to implement systematic and comprehensive processes for modern V&V/UQ (VU) within large computational simulation projects. Important design requirements have been identified for constructing a viable 'system' of processes; the significant processes needed include discovery, accumulation, and assessment. A preliminary design is presented for a VU Discovery process that accounts for an important subset of the requirements. The design uses a hierarchical approach to set context and a series of place-holders that identify the evidence and artifacts that must be created in order to tell the VU story and to perform assessments. The hierarchy incorporates VU elements from a Predictive Capability Maturity Model and uses questionnaires to define critical issues in VU. The place-holders organize VU data within a central repository that serves as the official VU record of the project. A review process ensures that those who will contribute to the record have agreed to provide the evidence identified by the Discovery process. VU expertise is an essential part of this process and ensures that the roadmap provided by the Discovery process is adequate. Both the requirements and the design were developed to support the Nuclear Energy Advanced Modeling and Simulation Waste project, which is developing a set of advanced codes for simulating the performance of nuclear waste storage sites. The Waste project served as an example to keep the design of the VU Discovery process grounded in practicalities; however, the system is represented abstractly so that it can be applied to other M&S projects.
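One way the hierarchy of place-holders might be represented in such a repository is sketched below; the class and field names are illustrative assumptions, not the Waste project's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class Placeholder:
        """One slot in the VU record: the evidence the Discovery process says
        must exist, who agreed to provide it, and its review status. Field
        names are illustrative, not the project's actual schema."""
        question: str                    # critical issue from a questionnaire
        pcmm_element: str                # e.g., solution verification, validation
        evidence: list = field(default_factory=list)
        owner: str = ""
        reviewed: bool = False

    record = [
        Placeholder("Is spatial discretization error quantified?", "solution verification"),
        Placeholder("Do validation data cover the regime of use?", "validation"),
    ]
    print(sum(1 for p in record if not p.reviewed), "open place-holders")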
The class of discontinuous Petrov-Galerkin finite element methods (DPG) proposed by L. Demkowicz and J. Gopalakrishnan guarantees the optimality of the solution in an energy norm and produces a symmetric positive definite stiffness matrix, among other desirable properties. In this paper, we describe a toolbox, implemented atop Sandia's Trilinos library, for rapid development of solvers for DPG methods. We use this toolbox to develop solvers for the Poisson and Stokes problems.
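The discrete structure behind these guarantees, in the standard DPG formulation (sketched here in generic notation, not necessarily the toolbox's internal factorization): with B the discrete trial-to-test operator, G the Gram matrix of the test-space inner product, and l the load, the optimal-test-function system is

    K u = f, \qquad K = B^{\mathsf T} G^{-1} B, \qquad f = B^{\mathsf T} G^{-1} l,

and K is symmetric positive definite by construction, since G is the Gram matrix of an inner product.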
Development of an effective strategy for shelter and evacuation is among the most important planning tasks in preparing to respond to a low-yield nuclear detonation in an urban area. Extensive studies have been performed, and guidance published, that highlight the key principles for saving lives following such an event. However, region-specific data are also important in the planning process. This study examines some of the unique regional factors that affect planning for a 10 kt detonation in Chicago. The work uses a single scenario to examine regional impacts as well as the shelter-versus-evacuate decision alternatives at selected exemplary points. For many Chicago neighborhoods, the excellent assessed quality of available shelter makes shelter-in-place, or selective transit to a nearby shelter, a compelling post-detonation strategy.
This report provides the results of a scoping study evaluating the potential risk-reduction value of a hypothetical earthquake early-warning system. The study was based on an analysis of the actions that could be taken to reduce risks to populations and infrastructure, how much time would be required to take each action, and the potential consequences of false alarms given the nature of the action. The results of the scoping analysis indicate that risks could be reduced by improving existing event-notification systems and individual responses to notifications, and by producing and using more detailed risk maps for local planning. Detailed maps and training programs, based on existing knowledge of geologic conditions and processes, would reduce uncertainty in the consequence portion of the risk analysis. Uncertainties in the timing, magnitude, and location of earthquakes, and the potential impacts of false alarms, will present major challenges to the value of an early-warning system.
In September 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to provide more quantitative and/or qualitative descriptions of how realized or non-realized surveillance activities affect our confidence in reporting reliability and assessing the stockpile. As part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance-limit calculations and power calculations, intended to answer level-of-confidence questions about the ability to detect certain undesirable behaviors (catastrophic defects, margin-insufficiency defects, and deviations from a model). Note that the metrics are intended to gauge not product performance but the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
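To convey the flavor of the two calculation types, the hedged sketch below pairs a simple attribute-sampling power calculation with a distribution-free one-sided tolerance bound; the Tri-Lab team's exact formulations are not reproduced here, and the numbers are illustrative.

    import math

    def detection_power(n, p_defect):
        """P(at least one defect seen in n random samples | true rate p)."""
        return 1.0 - (1.0 - p_defect) ** n

    def tolerance_sample_size(coverage=0.90, confidence=0.90):
        """Smallest n such that, with the stated confidence, the extreme of n
        observations bounds the given population fraction (distribution-free)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

    print(detection_power(n=11, p_defect=0.10))   # ~0.69 for 11 illustrative units
    print(tolerance_sample_size())                # 22, the classic 90/90 rule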
We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients diagnosed with the disease, as may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to the estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model onto a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is one to two orders of magnitude (O(10)-O(10^2)) smaller than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological model in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as on observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.
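The surrogate-plus-MCMC pattern can be shown in miniature, as below, with a stand-in one-parameter 'epidemic model' and a Legendre polynomial chaos basis; the report's models, priors, and data are far richer than this sketch.

    import numpy as np

    def model(theta):
        """Stand-in for the expensive epidemic simulator (one parameter)."""
        return np.exp(1.5 * theta)

    # Offline: project the model onto Legendre polynomial chaos over [-1, 1].
    nodes, weights = np.polynomial.legendre.leggauss(12)
    coeffs = []
    for k in range(9):
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        norm = 2.0 / (2 * k + 1)                 # ||P_k||^2 on [-1, 1]
        coeffs.append(np.sum(weights * model(nodes) * Pk) / norm)
    surrogate = np.polynomial.legendre.Legendre(coeffs)   # cheap polynomial

    # Online: Metropolis MCMC against noisy data, evaluating only the surrogate.
    rng = np.random.default_rng(0)
    data = model(0.3) + rng.normal(0, 0.05, size=20)

    def log_post(theta):
        if abs(theta) > 1:                       # uniform prior on [-1, 1]
            return -np.inf
        return -0.5 * np.sum((data - surrogate(theta)) ** 2) / 0.05**2

    theta, lp, chain = 0.0, log_post(0.0), []
    for _ in range(5000):
        prop = theta + 0.1 * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print("posterior mean:", np.mean(chain[1000:]))   # should be near 0.3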