With the Lemnos framework, interoperability of control systems security equipment becomes straightforward. Today, achieving interoperability between proprietary security appliances requires one or both vendors to write cumbersome 'translation code,' which breaks whenever either party changes something. The Lemnos project is developing and testing a framework built on widely available open-source security functions and protocols: IPsec to form a secure communications channel, and Syslog to exchange security log messages. Using this model, security appliances from two or more different vendors can clearly and securely exchange information, helping to better protect the total system. The framework can also simplify regulatory compliance in a complicated security environment. As an electric utility, are you struggling to implement the NERC CIP standards and other regulations? Are you weighing the misery of multiple management interfaces against committing to a single-vendor solution? When vendors build their security appliances to interoperate using the Lemnos framework, it becomes practical to match best-of-breed offerings from an assortment of vendors to your specific control systems needs.
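As a concrete illustration of the Syslog half of this model, the sketch below formats an RFC 5424 syslog message of the kind two interoperating appliances might exchange over the IPsec-protected channel. This is not Lemnos project code; the appliance names and message text are hypothetical, and the timestamp is a fixed placeholder to keep the example deterministic.

```python
# Illustrative sketch (not Lemnos project code): building an RFC 5424
# syslog message that one security appliance could send to another.

def syslog_pri(facility: int, severity: int) -> int:
    """RFC 5424 PRI value: facility * 8 + severity."""
    return facility * 8 + severity

def format_syslog(facility, severity, host, app, msg):
    # Header layout per RFC 5424; VERSION is 1. The timestamp is a
    # placeholder so the example is deterministic.
    pri = syslog_pri(facility, severity)
    return f"<{pri}>1 2024-01-01T00:00:00Z {host} {app} - - - {msg}"

# Facility 4 (security/auth), severity 4 (warning) -> PRI 36
message = format_syslog(4, 4, "appliance-a", "lemnos-demo",
                        "IPsec SA established with peer appliance-b")
```

Because both ends agree on this standard framing, no vendor-specific translation code is needed to parse the exchanged security events.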
Instrumented, fully coupled thermal-mechanical experiments were conducted to provide validation data for finite element simulations of failure in pressurized, high temperature systems. The design and implementation of the experimental methodology are described in another paper of this conference. Experimental coupling was accomplished on tubular 304L stainless steel specimens by mechanical loading imparted by internal pressurization and thermal loading by side radiant heating. Experimental parameters, including temperature and pressurization ramp rates, maximum temperature and pressure, phasing of the thermal and mechanical loading, and specimen geometry details, were studied. Experiments were conducted to increasing degrees of deformation, up to and including failure. Mechanical characterization experiments on the 304L stainless steel tube material were also completed for development of a thermal elastic-plastic material constitutive model used in the finite element simulations of the validation experiments. The material was characterized in tension at a strain rate of 0.001/s from room temperature to 800 °C. The tensile behavior of the tube material was found to differ substantially from that of 304L bar stock material, with the plasticity characteristics and strain to failure differing at every test temperature.
Coupled thermal-mechanical experiments with well-defined, controlled boundary conditions were designed through an iterative process involving a team of experimentalists, material modelers, and computational analysts. First, the basic experimental premise was selected: an axisymmetric tubular specimen mechanically loaded by internal pressurization and thermally loaded asymmetrically by side radiant heating. Then several integrated experimental-analytical steps were taken to determine the experimental details. The boundary conditions were mostly thermally driven and were chosen so they could be modeled accurately; the experimental fixtures were designed to ensure that the boundary conditions were met. Preliminary, uncoupled analyses were used to size the specimen diameter, height, and thickness with experimental consideration of maximum pressure loads and fixture design. Iterations of analyses and experiments were used to efficiently determine heating parameters, including the lamp and heating shroud design, the set-off distance between the lamps and shroud and between the shroud and specimen, obtainable ramp rates, and the number and spatial placement of thermocouples. The design process and the experimental implementation of the final coupled thermomechanical failure experiment design will be presented.
The blades of a modern wind turbine are critical components central to capturing and transmitting most of the load experienced by the system. They are complex structural items composed of many layers of fiber and resin composite material and, typically, one or more shear webs. Large turbine blades being developed today are beyond the effective trial-and-error design practices of the past, and design for reliability is extremely important. Section analysis tools are used to reduce the three-dimensional continuum blade structure to a simpler beam representation for use in system response calculations to support full system design and certification. One model simplification approach is to analyze two-dimensional blade cross sections to determine the properties for the beam. Another technique is to determine beam properties using static deflections of a full three-dimensional finite element model of a blade. This paper provides insight into discrepancies observed in the outputs of each approach. Simple two-dimensional geometries and three-dimensional blade models are analyzed in this investigation. Finally, a subset of computational and experimental section properties for a full turbine blade is compared.
Resonant plasmonic detectors are potentially important for terahertz (THz) spectroscopic imaging. We have fabricated and characterized antenna-coupled detectors that integrate a broad-band antenna, which improves coupling of THz radiation. The vertex of the antenna contains the tuning gates and the bolometric barrier gate. Incident THz radiation may excite 2D plasmons with wave-vectors defined by either a periodic grating gate or a plasmonic cavity determined by ohmic contacts and gate terminals. The latter approach of exciting plasmons in a cavity defined by a short micron-scale channel appears most promising. With this short-channel geometry, we have observed multiple harmonics of THz plasmons. At 20 K with detector bias optimized, we report a responsivity on resonance of 2.5 kV/W and an NEP of 5 x 10^-10 W/Hz^1/2.
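The two reported figures are linked by the textbook relation NEP = v_n / R, where v_n is the voltage noise spectral density and R the responsivity. The short sketch below applies that standard relation to the numbers above to recover the implied noise density; only this arithmetic cross-check is ours, not a result from the work itself.

```python
# Back-of-envelope check of the reported detector figures, using the
# standard relation NEP = v_n / R  =>  v_n = NEP * R.
responsivity = 2.5e3      # V/W, on resonance at 20 K (from the text)
nep = 5e-10               # W/Hz^0.5 (from the text)

v_noise = nep * responsivity   # implied voltage noise density, V/Hz^0.5
# i.e. about 1.25 microvolts per root-hertz
```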
Separation distances are used in hydrogen refueling stations to protect people, structures, and equipment from the consequences of accidental hydrogen releases. Specifically, hydrogen jet flames resulting from ignition of unintended releases can be extensive in length and pose significant radiation and impingement hazards. Depending on the leak diameter and source pressure, the resulting separation distances can be unacceptably large. One possible mitigation strategy to reduce exposure to hydrogen flames is to incorporate barriers around hydrogen storage, process piping, and delivery equipment. The effectiveness of barrier walls in reducing hazards at hydrogen facilities has been previously evaluated using experimental and modeling information developed at Sandia National Laboratories. The effect of barriers on the risk from different types of hazards, including direct flame contact, radiation heat fluxes, and overpressures associated with delayed hydrogen ignition, has subsequently been evaluated and used to identify potential reductions in separation distances at hydrogen facilities. Both the frequencies and consequences used in this risk assessment, as well as the resulting risk estimates, are described. The results of the barrier risk analysis can also be used to help establish risk-informed barrier design requirements for use in hydrogen codes and standards.
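The risk accounting described above can be sketched as frequency times consequence, summed over hazard types, with a barrier applied as a per-hazard mitigation factor. All numbers below are hypothetical placeholders for illustration, not values from the Sandia assessment.

```python
# Toy illustration (hypothetical numbers, not the actual risk model):
# scenario risk = sum over hazards of frequency x consequence, with a
# barrier applied as a simple per-hazard mitigation factor.
hazards = {
    #                 freq (/yr)  consequence   barrier factor
    "flame contact":  (1e-3,      1.0,          0.05),
    "radiant heat":   (1e-3,      0.5,          0.20),
    "overpressure":   (1e-4,      0.8,          0.60),
}

def total_risk(with_barrier: bool) -> float:
    return sum(f * c * (m if with_barrier else 1.0)
               for f, c, m in hazards.values())

# Fractional risk reduction attributable to the barrier.
reduction = 1.0 - total_risk(True) / total_risk(False)
```

A reduction of this kind is what would justify shrinking separation distances while holding total risk constant.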
This study investigates a pathway to nanoporous structures created by hydrogen and helium implantation in aluminum. Previous experiments for fusion applications have indicated that hydrogen and helium ion implantations are capable of producing bicontinuous nanoporous structures in a variety of metals. This study focuses specifically on implantations of hydrogen and helium ions at 25 keV in aluminum. The hydrogen and helium systems result in remarkably different nanostructures of aluminum at the surface. Scanning electron microscopy, focused ion beam, and transmission electron microscopy show that both implantations result in porosity that persists approximately 200 nm deep. However, hydrogen implantations tend to produce larger and more irregular voids that preferentially reside at defects. Implantations of helium at a fluence of 10^18 cm^-2 produce much smaller porosity, on the order of 10 nm, that is regular and creates a bicontinuous structure in the porous region. The primary difference driving the formation of the contrasting structures is likely the relatively high mobility of hydrogen and the ability of hydrogen to form alanes that are capable of desorbing and etching Al (111) faces.
The internal structure of stars depends on the radiative opacity of the stellar matter. However, opacity models have never been experimentally tested at the conditions that exist inside stars. Experiments at the Sandia Z facility are underway to measure the x-ray transmission of iron, an important stellar constituent, at temperatures and densities high enough to evaluate the physical underpinnings of stellar opacity models. Initial experiments provided information on the charge state distribution and the energy level structure of the iron ions that exist at the solar radiation/convection boundary. Data analysis and new experiments at higher densities and temperatures will be described.
There is an increasing need to assess the performance of high consequence systems using a modeling and simulation based approach. Central to this approach is the need to quantify the uncertainties present in the system and to compare the system response to an expected performance measure. At Sandia National Laboratories, this process is referred to as quantification of margins and uncertainties, or QMU. Depending on the outcome of the assessment, there might be a need to increase confidence in the predicted response of a system model, and thus a need to understand where resources should be allocated to increase this confidence. This paper examines the problem of resource allocation within the context of QMU. An optimization-based approach to solving the resource allocation problem is considered, and sources of aleatoric and epistemic uncertainty are included in the calculations.
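A minimal Monte Carlo sketch of a QMU-style calculation follows, assuming a toy response model and a threshold-type requirement. The margin-over-uncertainty ratio shown is one common way such results are summarized, not necessarily the formulation used in this paper.

```python
import random

# Illustrative QMU-style calculation (toy model, hypothetical numbers):
# margin M = requirement - mean predicted response; uncertainty U from
# the spread of the response under sampled (aleatoric) inputs.
random.seed(1)

def response(x):
    # Hypothetical system response model.
    return 0.8 * x + 0.1

requirement = 1.5  # performance threshold the response must stay below

samples = [response(random.gauss(1.0, 0.1)) for _ in range(10_000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

margin = requirement - mean
uncertainty = 3 * std                    # e.g. a 3-sigma band
confidence_ratio = margin / uncertainty  # > 1 suggests adequate margin
```

In a resource-allocation setting, spending that shrinks the dominant contributors to `uncertainty` raises this ratio, which is the optimization objective the abstract alludes to.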
Changing paradigms from paper laboratory notebooks to electronic ones creates challenges. Meeting regulatory requirements in an R&D environment demands thorough documentation. Creating complete experimental records is easier using electronic laboratory notebooks (ELNs), and supporting investigations by re-creating experimental conditions is greatly facilitated using an ELN.
Pandemic influenza has become a serious global health concern; in response, governments around the world have allocated increasing funds to containment of public health threats from this disease. Pandemic influenza also has serious economic implications: illness and absenteeism reduce worker productivity and economic output, and the associated mortality robs nations of their most valuable asset, human resources. This paper reports two studies that investigate both the short- and long-term economic implications of a pandemic influenza outbreak; the resulting policy implications are also discussed. Policy makers can use the growing number of economic impact estimates to decide how much to spend to combat pandemic influenza outbreaks. The research uses the Regional Economic Modeling, Inc. (REMI) Policy Insight + Model, which provides a dynamic, regional, North American Industry Classification System (NAICS) industry-structured framework for forecasting. It is supported by a population dynamics model that is well adapted to investigating the macroeconomic implications of pandemic influenza, including possible demand-side effects. The studies reported in this paper exercise all of these capabilities.
The relatively recent development of short (nsec) and ultra-short (fsec) pulsed laser systems has introduced process capabilities which are particularly suited for micro-manufacturing applications. Micrometer feature resolutions and minimal heat affected zones are commonly cited benefits, although unique material interactions also prove attractive for many applications. A background of short and ultra-short pulsed laser system capabilities and material interactions will be presented for micro-scale processing. Processing strengths and limitations will be discussed and demonstrated within the framework of applications related to micro-machining, material surface modifications, and fundamental material science research.
This document provides common best practices for the efficient utilization of parallel file systems for analysts and application developers. A multi-program, parallel supercomputer provides effective compute power by aggregating a host of lower-power processors using a network. The idea, in general, is that one either constructs the application to distribute parts to the different nodes and processors available and then collects the result (a parallel application), or one launches a large number of small jobs, each doing similar work on different subsets (a campaign). The I/O system on these machines is usually implemented as a tightly coupled parallel application itself, providing the host applications with the concept of a 'file': an addressable store of bytes whose address space is global in nature. Beyond the simple reality that the I/O system is normally composed of a small, less capable collection of hardware, that global address space will cause problems if not carefully used. How much of a problem, and the ways in which those problems manifest, will vary, but that it is problem prone has been well established. Worse, the file system is a shared resource on the machine, a system service: what an application does when it uses the file system impacts all users. No portion of the available resource is reserved for a given application; instead, the I/O system responds to requests by scheduling and queuing based on instantaneous demand. Using the system well contributes to the overall throughput of the machine; even from a solely self-centered perspective, using it well reduces the time that the application or campaign is subject to impact by others.
The developer's goal should be to accomplish I/O in a way that minimizes interaction with the I/O system, maximizes the amount of data moved per call, and provides the I/O system the most information about the I/O transfer per request.
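That guidance can be sketched as a simple write aggregator: many small records are buffered and issued to the underlying stream as a few large requests. This is an illustrative pattern, not code from the document; real HPC applications would typically get the same effect from MPI-IO collective buffering or a similar library facility.

```python
import io

# Sketch of the guidance above: buffer many small writes and issue
# them to the (shared) file system as a few large requests.
class AggregatingWriter:
    def __init__(self, stream, chunk_size=4 * 1024 * 1024):
        self.stream = stream
        self.chunk_size = chunk_size   # e.g. align with stripe size
        self.buffer = bytearray()
        self.requests = 0              # requests actually issued

    def write(self, data: bytes):
        self.buffer += data
        while len(self.buffer) >= self.chunk_size:
            self.stream.write(self.buffer[:self.chunk_size])
            del self.buffer[:self.chunk_size]
            self.requests += 1

    def flush(self):
        if self.buffer:
            self.stream.write(bytes(self.buffer))
            self.buffer.clear()
            self.requests += 1

# 10,000 small records become a handful of large requests.
sink = io.BytesIO()
w = AggregatingWriter(sink, chunk_size=1 * 1024 * 1024)
for _ in range(10_000):
    w.write(b"x" * 512)          # 512-byte record
w.flush()
```

Here 10,000 512-byte writes reach the underlying stream as only five requests, which is exactly the fewer-but-larger pattern the I/O system schedules well.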
The amounts of charge collected using single-photon absorption and two-photon absorption laser testing techniques have been directly compared using specially made SOI diodes. Details of this comparison are discussed.
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
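The spatial-decomposition strategy mentioned above can be illustrated with a toy example (this is not LAMMPS code): the simulation box is divided into equal subdomains, and each processor owns the atoms that fall inside its own subdomain, so most interactions are computed locally with message passing only at subdomain boundaries.

```python
# Toy illustration of spatial decomposition (not LAMMPS source code):
# the box is split into equal slabs along x, and each rank owns the
# atoms inside its slab.
def owner_rank(x, box_length, nranks):
    """Map an atom's x coordinate to the rank owning that slab."""
    slab = box_length / nranks
    return min(int(x // slab), nranks - 1)

box = 10.0
atoms = [0.5, 2.7, 4.9, 5.1, 9.99]          # atom x coordinates
ranks = [owner_rank(x, box, 4) for x in atoms]
# slabs of width 2.5: [0, 2.5), [2.5, 5.0), [5.0, 7.5), [7.5, 10.0)
```

LAMMPS generalizes this to a 3D grid of subdomains and migrates atoms between processors as they move, but the ownership idea is the same.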
We report reflectivity, design and laser damage comparisons of our AR coatings for use at 1054 nm and/or 527 nm, and at angles of incidence between 0 and 45 degrees.
Axial Ge/Si heterostructure nanowires (NWs) allow energy band-edge engineering along the axis of the NW, which is the charge transport direction, and the realization of asymmetric devices for novel device architectures. This work reports two significant advances in the area of heterostructure NWs and tunnel FETs: (i) the realization of 100% compositionally modulated Si/Ge axial heterostructure NWs with lengths suitable for device fabrication and (ii) the design and implementation of Schottky barrier tunnel FETs on these NWs for high on-currents and suppressed ambipolar behavior. Initial prototype devices with a 10 nm PECVD SiNx gate dielectric yielded a very high current drive in excess of 100 µA/µm (I/πD) and I_on/I_off ratios of 10^5. Prior work on the synthesis of Ge/Si axial NW heterostructures through the VLS mechanism has yielded axial Si/Si1-xGex NW heterostructures with x_max ~ 0.3; more recently, 100% composition modulation was achieved with a solid growth catalyst. In that case, however, the thickness of the heterostructure cannot exceed a few atomic layers because the slow axial growth rate and concurrent radial deposition on the NW sidewalls lead to a mixture of axial and radial deposition, which poses a significant challenge for fabricating useful devices from these NWs in the near future. Here, we report the VLS growth of 100% doping- and composition-modulated axial Ge/Si heterostructure NWs with lengths appropriate for device fabrication, achieved by devising a growth procedure that eliminates Au diffusion on the NW sidewalls and minimizes random kinking in the heterostructure NWs, as deduced from detailed microscopy analysis. Fig. 1a shows a cross-sectional SEM image of epitaxial Ge/Si axial NW heterostructures grown on a Ge(111) surface. The interface abruptness in these Ge/Si heterostructure NWs is of the order of the NW diameter.
Some of these NWs develop a crystallographic kink that is ~20° off the <111> axis at about 300 nm from the Ge/Si interface. This provides a natural marker for placing the gate contact electrodes and gate metal at appropriate locations for the desired high on-current and reduced ambipolarity, as shown in Fig. 2. The 1D heterostructures allow band-edge engineering in the transport direction, not easily accessible in planar devices, providing an additional degree of freedom for designing tunnel FETs (TFETs). For instance, a Ge tunnel source can be used for efficient electron/hole tunneling and a Si drain can be used for reduced back-tunneling and ambipolar behavior. Interface abruptness, on the other hand (particularly for doping), imposes challenges in these and other structures for realizing high performance TFETs with p-i-n junctions. Since metal-semiconductor contacts provide a sharp interface with band-edge control, we use properly designed Schottky contacts (aided by 3D Silvaco simulations) as the tunnel barriers at both the source and drain, and utilize the asymmetry in the Ge/Si channel bandgap to reduce the ambipolar transport behavior generally observed in TFETs. Fig. 3 shows the room-temperature transfer curves of a Ge/Si heterostructure TFET (H-TFET) for different V_DS values, showing a maximum on-current of ~7 µA, an inverse subthreshold slope of ~170 mV/decade, and I_on/I_off ratios of five orders of magnitude for all V_DS biases considered here. This high on-current is ~1750X higher than that obtained with Si p-i-n+ NW TFETs and ~35X higher than that obtained with CNT TFETs. The I_on/I_off ratio and inverse subthreshold slope compare favorably to those of Si NW TFETs (~10^3 I_on/I_off and ~800 mV/decade) but lag behind those of CNT TFETs due to poor PECVD nitride gate oxide quality (relative permittivity ~3-4).
The asymmetry in the Schottky barrier heights used here eliminates the stringent requirements of abrupt doped interfaces used in p-i-n based TFETs, which is hard to achieve both in thin-film and in NW growth. These initial promising results are expected to be further improved by using a high-k gate dielectric.
Numerical simulations indicate that significant fusion yields (>100 kJ) may be obtained by pulsed-power-driven implosions of cylindrical metal liners onto magnetized and preheated deuterium-tritium fuel. The primary physics risk to this approach is the Magneto-Rayleigh-Taylor (MRT) instability, which operates during both the acceleration and deceleration phases of the liner implosion. We have designed and performed experiments to study the MRT instability during the acceleration phase, where the light fluid is purely magnetic. Results from our first series of experiments and plans for future experiments will be presented. According to simulations, an initial axial magnetic field of 10 T is compressed to >100 MG within the liner during the implosion. The magnetic pressure becomes comparable to the plasma pressure during deceleration, which could significantly affect the growth of the MRT instability at the fuel/liner interface. The MRT instability is also important in some astronomical objects such as the Crab Nebula (NGC 1952). In particular, the morphological structure of the observed filaments may be determined by the ratio of the magnetic to material pressure and the alignment of the magnetic field with the direction of acceleration [Hester, ApJ 456, 225 (1996)]. Potential experiments to study this MRT behavior using the Z facility will be presented.
3-D cubic unit cell arrays containing split ring resonators (SRRs) were fabricated and characterized. The unit cells are ~3 orders of magnitude smaller than those of microwave SRR-based metamaterials and exhibit both electrically and magnetically excited resonances for normally incident TEM waves, in addition to showing improved isotropic response.
An AlN MEMS resonator technology has been developed, enabling massively parallel filter arrays on a single chip. Low-loss filter banks covering the 10 MHz to 10 GHz frequency range have been demonstrated, as has monolithic integration with inductors and CMOS circuitry. The high level of integration enables miniature multi-band, spectrally aware, and cognitive radios.
We describe a time-domain spectroscopy system in the thermal infrared used for complete transmission and reflection characterization of metamaterials in amplitude and phase. The system uses a triple-output near-infrared ultrafast fiber laser, phase-locked difference frequency generation, and phase-matched electro-optic sampling. We will present measurements of several metamaterial designs.
Four approaches to modeling multi-junction concentrating photovoltaic (CPV) system performance are assessed by comparing modeled performance to measured performance. Measured weather, irradiance, and system performance data were collected on two systems over a one-month period. Large photovoltaic systems are typically developed as projects which supply electricity to a utility and are owned by independent power producers; obtaining financing at favorable rates and attracting investors requires confidence in the projected energy yield from the plant. In this paper, various performance models for projecting annual energy yield from CPV systems are assessed by comparing measured system output to model predictions based on measured weather and irradiance data. Residual analysis is used to assess the models, identify systematic error sources, and identify opportunities for model improvement.
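The residual analysis described above can be sketched as follows, with hypothetical measured and modeled output values: residuals are measured minus modeled, a nonzero mean bias error (MBE) flags a systematic error source, and the root-mean-square error (RMSE) summarizes overall scatter.

```python
# Sketch of residual analysis for a performance model (all values
# hypothetical, not data from the two systems in the study).
measured = [210.0, 198.0, 225.0, 240.0, 231.0]   # kW, hypothetical
modeled  = [205.0, 202.0, 220.0, 233.0, 228.0]   # kW, hypothetical

residuals = [m - p for m, p in zip(measured, modeled)]
mbe = sum(residuals) / len(residuals)            # mean bias error
rmse = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
```

A consistently positive MBE like this one would point at a systematic under-prediction to be investigated, for example an irradiance-translation or temperature-correction term in the model.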
Problem Statement: (1) uncertainties in PV system performance and reliability impact business decisions, including project cost and financing estimates, pricing of service contracts and guarantees, and deployment and O&M strategies; (2) understanding and reducing these uncertainties will help make the PV industry more competitive; (3) performance has typically been estimated without much attention to the reliability of components; and (4) tools are needed to assess all inputs to the value proposition (e.g., LCOE, cash flow, reputation). Goals and objectives: (1) develop a stochastic simulation model (in GoldSim) that can represent PV system performance as a function of system design, weather, reliability, and O&M policies; (2) evaluate performance for an example system to quantify sources of uncertainty and identify dominant parameters via a sensitivity study; and (3) example system: 1 inverter, 225 kW DC array at latitude tilt (90 strings of 12 modules, 1080 modules total), with weather from Tucumcari, NM (TMY2 with annual uncertainty).
Despite decades of international consensus that deep geological disposal is the best option for permanent management of long-lived high-level radioactive wastes, no repositories for used nuclear fuel or high-level waste are in operation. Detailed long-term safety assessments have been completed worldwide for a wide range of repository designs and disposal concepts, however, and valuable insights from these assessments are available to inform future decisions about managing radioactive wastes. Qualitative comparisons among the existing safety assessments for disposal concepts in clay, granite, salt, and unsaturated volcanic tuff show how different geologic settings can be matched with appropriate engineered barrier systems to provide a high degree of confidence in the long-term safety of geologic disposal. Review of individual assessments provides insights regarding the release pathways and radionuclides that are most likely to contribute to estimated doses to humans in the far future for different disposal concepts, and can help focus research and development programs to improve management and disposal technologies. Lessons learned from existing safety assessments may be particularly relevant for informing decisions during the process of selecting potential repository sites.
The Fracture-Matrix Transport (FMT) code developed at Sandia National Laboratories solves chemical equilibrium problems using the Pitzer activity coefficient model with a database containing actinide species. The code is capable of predicting actinide solubilities at 25 °C in various ionic-strength solutions, from dilute groundwaters to high-ionic-strength brines. The code uses oxidation state analogies: Am(III) is used to predict solubilities of actinides in the +III oxidation state, Th(IV) is used for actinides in the +IV state, and Np(V) is used for actinides in the +V state. The code was qualified for predicting actinide solubilities for the Waste Isolation Pilot Plant (WIPP) Compliance Certification Application in 1996 and the Compliance Recertification Applications in 2004 and 2009. We have established revised actinide-solubility uncertainty ranges and probability distributions for Performance Assessment (PA) by comparing actinide solubilities predicted by the FMT code with solubility data in various solutions from the open literature. The literature data used in this study include solubilities in simple solutions (NaCl, NaHCO3, Na2CO3, NaClO4, KCl, K2CO3, etc.), binary mixing solutions (NaCl + NaHCO3, NaCl + Na2CO3, KCl + K2CO3, etc.), ternary mixing solutions (NaCl + Na2CO3 + KCl, NaHCO3 + Na2CO3 + NaClO4, etc.), and multi-component synthetic brines relevant to the WIPP.
Modern space-based optical sensors place substantial demands on the focal plane array readout integrated circuit. Active pixel readout designs offer direct access to individual pixel data but require analog-to-digital conversion at or near each pixel. Thus, circuit designers must create precise, fundamentally analog circuitry within tightly constrained areas on the integrated circuit. Rapidly changing phenomena necessitate tradeoffs between sampling and conversion speed, data precision, and heat generation adjacent to the detector array, a particular concern for thermally sensitive space-grade infrared detectors. A simplified parametric model is presented that illustrates seeker system performance and analog-to-digital conversion requirement trends in the visible through mid-wave infrared for varying sample rates. Notional limiting-case Earth optical backgrounds were generated using MODTRAN4 with a range of cloud extremes and approximate practical albedo limits for typical surface features from a composite of the MOSART and ASTER spectral albedo databases. The dynamic range requirements imposed by these background spectra are discussed in the context of optical band selection and readout design impacts.
This document provides detailed test results for ballistic impact experiments performed on two types of high performance concrete samples: (1) Ductal® concrete, a fiber-reinforced high performance concrete patented by the Lafarge Group, and (2) an ultra-high performance concrete (UHPC) produced in-house by DoD. The tests were performed as part of a research demonstration project overseen by USACE and ERDC at the Sandia National Laboratories Shock Thermodynamic Applied Research (STAR) facility, using a single-stage 50 caliber research powder gun to study the penetration resistance of the concrete samples. Ballistic penetration tests used a full metal jacket M33 ball projectile with a nominal velocity of 914 m/s (3000 ft/s). Testing was observed by Beverly DiPaolo from ERDC-GSL. In all, 31 tests were performed to achieve the test objectives, which were: (1) recovery of concrete test specimens for post mortem analysis and characterization at outside labs; (2) measurement of projectile impact velocity and post-penetration residual velocity by electronic and radiographic techniques; (3) high-speed photography of the projectile prior to impact, at impact, and at exit of the rear surface of the concrete construct; and (4) summary of the results.
Climate change is a long-term process that will trigger a range of multi-dimensional demographic, economic, geopolitical, and national security issues with many unknowns and significant uncertainties. At first glance, climate-change-related national security dimensions seem far removed from today's major national security threats. Yet climate change has already set in motion forces that will require U.S. attention and preparedness. The extent and uncertainty associated with these situations necessitate a move away from conventional security practices, toward a small but flexible portfolio of assets to maintain U.S. interests. Thoughtful action is required now if we are to acquire the capabilities, tools, systems, and institutions needed to meet U.S. national security requirements as they evolve with the emerging stresses and shifts of climate change.
Herranz, T.; McCarty, K.F.; Santos, B.; Monti, M.; De La Figuera, J.
The growth and decomposition of thin layers of Mg and magnesium hydride on Ru(0001) were studied using an in situ technique, low energy electron microscopy (LEEM), which provides real-space, real-time observations of the formation of hydride islands. Using LEEM, the growth of films was followed up to 10 atomic layers (AL) for temperatures between 300 and 430 K. With increased exposure time, there is little further nucleation. The LEED pattern of the 4 AL Mg film shows two sets of six-fold diffraction spots, one set from the film and the other from the substrate. After being exposed to hydrogen, the films were heated in UHV while the surface was simultaneously imaged by LEEM and gas generation was monitored by thermal desorption (TD). In the LEEM images, no changes were observed up to temperatures around 450 K; above this temperature, desorption of the Mg layers started. An increase in the decomposition temperature is observed for thicker original Mg films.
Portable remote sensing devices are increasingly needed to cost-effectively characterize the meteorology at a potential wind energy site as the size of modern wind turbines increases. A short-term project co-locating a Sound Detection and Ranging (SODAR) system with a 200-meter instrumented meteorological tower at the Texas Tech Wind Technology Field Site was performed to collect and summarize wind information through an atmospheric layer typical of utility-scale rotor plane depths. The data collected identified large speed shears and directional shears that may lead to unbalanced loads on the rotors. This report offers suggestions for incorporating additional data in wind resource assessments and some thoughts on the potential for using a SODAR, or SODAR data, to quantify or investigate other parameters that may be significant to the wind industry.
This assessment takes the result of the FY08 performance target baseline of mercury at Sandia National Laboratories/New Mexico and records the steps taken in FY09 to collect additional data, encourage the voluntary reduction of mercury, and measure success. Elemental (metallic) mercury and all of its compounds are toxic, and exposure to excessive levels can permanently damage or fatally injure the brain and kidneys. Elemental mercury can also be absorbed through the skin and cause allergic reactions. Ingestion of inorganic mercury compounds can cause severe renal and gastrointestinal damage. Organic compounds of mercury such as methyl mercury, created when elemental mercury enters the environment, are considered the most toxic forms of the element; exposure to very small amounts of these compounds can result in devastating neurological damage and death. SNL/NM is required to report annually on the site-wide inventory of mercury for the Environmental Protection Agency's (EPA) Toxics Release Inventory (TRI) Program, as the site's inventory is in excess of the ten-pound reportable threshold quantity. In the fiscal year 2008 (FY08) Pollution Prevention Program Plan, Section 5.3, Reduction of Environmental Releases, a stated performance target was to establish a baseline of mercury, its principal uses, and annual quantity or inventory. This was accomplished on July 29, 2008 by recording the current status of mercury in the Chemical Information System (CIS).
Knight & Carver was contracted by Sandia National Laboratories to develop a Sweep Twist Adaptive Rotor (STAR) blade that reduced operating loads, thereby allowing a larger, more productive rotor. The blade design used outer blade sweep to create twist coupling without angled fiber. Knight & Carver successfully designed, fabricated, tested and evaluated STAR prototype blades. Through laboratory and field tests, Knight & Carver showed the STAR blade met the engineering design criteria and economic goals for the program. A STAR prototype was successfully tested in Tehachapi during 2008 and a large data set was collected to support engineering and commercial development of the technology. This report documents the methodology used to develop the STAR blade design and reviews the approach used for laboratory and field testing. The effort demonstrated that STAR technology can provide significantly greater energy capture without higher operating loads on the turbine.
A Sewer System Management Plan (SSMP) is required by the State Water Resources Control Board (SWRCB) Order No. 2006-0003-DWQ Statewide General Waste Discharge Requirements (WDR) for Sanitary Sewer Systems (General Permit). DOE, National Nuclear Security Administration (NNSA), Sandia Site Office has filed a Notice of Intent to be covered under this General Permit. The General Permit requires a proactive approach to reduce the number and frequency of sanitary sewer overflows (SSOs) within the State. SSMPs must include provisions to provide proper and efficient management, operation, and maintenance of sanitary sewer systems and must contain a spill response plan. Elements of this Plan are under development in accordance with the SWRCB's schedule.
The intent of this study is to provide an analysis of the scattering from a crevasse in Antarctic ice, utilizing a physics-based model for the scattering process. Of primary interest is a crevasse covered with a snow bridge, which makes the crevasse undetectable in visible-light images. It is demonstrated that a crevasse covered with a snow bridge can be visible in synthetic-aperture-radar (SAR) images. The model of the crevasse and snow bridge incorporates a complex dielectric permittivity model for dry snow and ice that takes into account the density profile of the glacier. The surface structure is based on a fractal model that can produce sastrugi-like features found on the surface of Antarctic glaciers. Simulated phase histories, computed with the Shooting and Bouncing Ray (SBR) method, are processed into SAR images. The viability of the SBR method for predicting scattering from a crevasse covered with a snow bridge is demonstrated. Some suggestions for improving the model are given.
Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design, and some could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques other than TMR that can help mitigate upsets in the embedded hard-core PPC440 processor within the Virtex-5 FPGA. Reliably implementing various mitigation schemes within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within the Virtex-5 FPGAs, describes the design of the cache mitigation scheme in detail, and describes the testing conducted at the radiation effects facility on the Texas A&M campus.
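For context on the TMR approach that is unavailable for the hard-core processor, the following is a minimal sketch of what TMR voting does in replicated soft logic: three copies of a value are compared bit by bit, and each output bit takes the majority value, masking a single-event upset in any one copy. This is an illustrative sketch only, not the report's cache mitigation scheme.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three redundant copies: each output bit
    takes the value held by at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

# One copy suffers a single-event upset (one flipped bit);
# the majority vote masks it and returns the original value.
good = 0b10110010
upset = good ^ 0b00001000  # flip bit 3
print(bin(tmr_vote(good, good, upset)))  # 0b10110010 -- the upset is masked
```

The same principle applied in hardware triplicates registers and logic and votes on the outputs, which is why it requires access to the underlying fabric and cannot be applied to a fixed hard core.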
An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast-moving worms. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device, as would be possible if these functions were implemented on the end host. The SPS concept also provides protection to simple devices such as printers, scanners, and legacy hardware. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.
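The report does not specify the cryptographic mechanism that protects remote updates; a minimal sketch of the principle, using HMAC-SHA-256 from the Python standard library, is shown below. The key, the update payload, and the function names are all hypothetical, purely for illustration: the device accepts a new filter update only if its authentication tag verifies under a key shared with the management server.

```python
import hashlib
import hmac

# Hypothetical shared key; a deployed system would provision and protect
# this key in hardware, not embed it in code.
SHARED_KEY = b"demo-key-for-illustration-only"

def sign_update(update_blob: bytes) -> bytes:
    """Management server side: compute the authentication tag."""
    return hmac.new(SHARED_KEY, update_blob, hashlib.sha256).digest()

def verify_and_apply(update_blob: bytes, tag: bytes) -> bool:
    """Device side: apply the update only if the tag verifies."""
    expected = hmac.new(SHARED_KEY, update_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # reject a tampered or unauthorized update
    # ... apply the update to the switch's filtering rules ...
    return True

blob = b"block tcp port 4444"  # hypothetical filter update
tag = sign_update(blob)
print(verify_and_apply(blob, tag))         # True  (authentic update accepted)
print(verify_and_apply(blob + b"x", tag))  # False (modified update rejected)
```

Because verification happens inside the switch rather than on the end host, a compromised desktop cannot disable it, which is the property the SPS design relies on.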
We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) underestimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement, where the binomial probability distribution applies.
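The hypergeometric calculation described above can be sketched in a few lines. This is not the SNL Mathematica program, only a minimal illustration of the standard confidence computation for sampling without replacement, generalized to allow defects in the sample; the numerical values in the example are hypothetical.

```python
from math import comb

def hypergeom_pmf(k: int, N: int, D: int, n: int) -> float:
    """P(k defects in a sample of n, drawn without replacement from a
    population of N units of which D are defective)."""
    return comb(D, k) * comb(N - D, n - k) / comb(N, n)

def confidence(N: int, n: int, d: int, D: int) -> float:
    """Confidence that the population contains fewer than D defectives,
    given that d defects were observed in n samples:
    1 - P(observe <= d defects | exactly D defectives present)."""
    return 1.0 - sum(hypergeom_pmf(k, N, D, n) for k in range(d + 1))

# Hypothetical small population: 8 samples drawn from 100 units with no
# defects observed. Confidence that fewer than 10 units are defective:
print(round(confidence(N=100, n=8, d=0, D=10), 3))
```

For a small population, this hypergeometric confidence differs noticeably from the binomial (with-replacement) result, which is the discrepancy noted in the evaluation.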
This paper describes a new hybrid modeling and simulation architecture developed at Sandia for understanding and developing protections against and mitigations for cyber threats upon control systems. It first outlines the challenges to process control system (PCS) security that can be addressed using these technologies. The paper then describes the Virtual Control System Environments (VCSE) that use this approach and briefly discusses security research that Sandia has performed using VCSE. It closes with recommendations to the control systems security community for applying this valuable technology.
Uncertainty may be introduced into RADTRAN analyses by assigning probability distributions to input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the shape, minimum, and maximum of each parameter's distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.
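The sample-and-write-batch-file workflow can be illustrated with a minimal sketch. This is not the MELCOR Uncertainty Engine itself; the parameter name "WINDSPEED" and the distribution bounds are hypothetical, and a real batch file would follow RADTRAN's input syntax.

```python
import random

# Sample one uncertain input parameter from a triangular distribution
# defined by its minimum, mode, and maximum, and emit one value per
# batch case. All names and numbers here are illustrative only.
random.seed(42)  # reproducible draws for this sketch

N_CASES = 5
MINIMUM, MODE, MAXIMUM = 1.0, 4.0, 10.0

# random.triangular takes (low, high, mode)
samples = [random.triangular(MINIMUM, MAXIMUM, MODE) for _ in range(N_CASES)]

batch_lines = [f"WINDSPEED {s:.3f}" for s in samples]
for line in batch_lines:
    print(line)
```

Running the resulting batch cases through RADTRAN and collecting the outputs yields a distribution of results rather than a single point estimate, which is the purpose of the Uncertainty Analysis Module.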
Specimens of poled and unpoled PZST ceramic were tested under hydrostatic loading conditions at temperatures of -55, 25, and 75 C. The objective of this experimental study was to obtain the electro-mechanical properties of the ceramic and the criteria of FE (ferroelectric) to AFE (antiferroelectric) phase transformations of the PZST ceramic, to aid grain-scale modeling efforts in developing and testing realistic response models for use in simulation codes. As seen in previous studies, the poled PZST ceramic undergoes anisotropic deformation during the transition from the FE to the AFE phase at -55 C. Tests at warmer temperatures exhibit anisotropic deformation in both the FE and AFE phases. The phase transformation is permanent at -55 C for all ceramics tested, whereas the transformation can be completely reversed at 25 and 75 C. The phase transformation pressures at each temperature were practically identical for unpoled and poled PZST specimens. Bulk modulus for both poled and unpoled material was lowest in the FE phase, intermediate in the transition phase, and highest in the AFE phase. Additionally, bulk modulus varies with temperature: PZST is stiffer as temperature decreases. Results from one poled-biased test for PZST and four poled-biased tests from PNZT 95/5-2Nb are presented. A bias of 1 kV did not produce noticeable differences in phase transformation pressure for the PZST material. However, for PNZT 95/5-2Nb, phase transformation pressure increased with increasing voltage bias up to 4.5 kV.
This report is a revision of SAND2009-0852. SAND2009-0852 was revised because it was discovered that a gage used in the original testing was miscalibrated. Following the recalibration, all affected raw data were recalculated and re-presented. Most of the revised data are similar to, but slightly different from, the original data. Following the data re-analysis, none of the inferences or conclusions about the data or site relative to the SAND2009-0852 data have changed. A laboratory testing program was developed to examine the mechanical behavior of salt from the Richton salt dome. The resulting information is intended for use in design and evaluation of a proposed Strategic Petroleum Reserve storage facility in that dome. Core from drill hole MRIG-9 was obtained from the Texas Bureau of Economic Geology. Mechanical properties testing included: (1) acoustic velocity wave measurements; (2) indirect tensile strength tests; (3) unconfined compressive strength tests; (4) ambient temperature quasi-static triaxial compression tests to evaluate dilational stress states at confining pressures of 725, 1450, 2175, and 2900 psi; and (5) confined triaxial creep experiments to evaluate the time-dependent behavior of the salt at axial stress differences of 4000 psi, 3500 psi, 3000 psi, 2175 psi and 2000 psi at 55 C and 4000 psi at 35 C, all at a constant confining pressure of 4000 psi. All comments, inferences, and discussions of the Richton characterization and analysis are caveated by the small number of tests. Additional core and testing from a deeper well located at the proposed site is planned. The Richton rock salt is generally inhomogeneous, as expressed by the density and velocity measurements with depth. In fact, we treated the salt as two populations, one clean and relatively pure (> 98% halite), the other salt with abundant (at times) anhydrite. The density has been related to the insoluble content.
The limited mechanical testing completed has allowed us to conclude that the dilational criteria are distinct for the halite-rich and other salts, and that the dilation criteria are pressure dependent. The indirect tensile strengths and unconfined compressive strengths determined are consistently lower than those of other coastal domal salts. The steady-state-only creep model being developed suggests that Richton salt is intermediate in creep resistance when compared to other domal and bedded salts. The results of the study provide only limited information for the structural modeling needed to evaluate the integrity and safety of the proposed cavern field. This study should be augmented with more extensive testing. This report documents a series of test methods, philosophies, and empirical relationships that are used to define and extend our understanding of the mechanical behavior of the Richton salt. This understanding could be used in conjunction with planned further studies or on its own for initial assessments.
Los Alamos and Sandia National Laboratories are partners in an effort to survey the super-cooled liquid water in clouds over the state of New Mexico in a project sponsored by the New Mexico Small Business Assistance Program. This report summarizes the scientific work performed at Sandia National Laboratories during 2009. In this second year of the project, a practical methodology for estimating cloud super-cooled liquid water was created. This was accomplished through the analysis of certain MODIS sensor satellite-derived cloud products and vetted parameterization techniques. A software code was developed to analyze multiple cases automatically. The eighty-one storm events from 2006-2007 identified in the previous year's effort were again the focus. Six derived MODIS products were first obtained through careful MODIS image evaluation. Both cloud and clear-sky properties were determined from this dataset over New Mexico. Sensitivity studies were performed that identified the parameters which most influence the estimation of cloud super-cooled liquid water. Limited validation was undertaken to ensure the soundness of the cloud super-cooled liquid water estimates. Finally, a path forward was formulated to ensure the successful completion of the initial scientific goals, which include analyzing additional annual datasets, validating the developed algorithm, and creating a user-friendly, interactive tool for estimating cloud super-cooled liquid water.
This guide describes a high-level, technology-neutral framework for assessing potential benefits from and economic market potential for energy storage used for electric-utility-related applications. The overarching theme addressed is the concept of combining applications/benefits into attractive value propositions that include use of energy storage, possibly including distributed and/or modular systems. Other topics addressed include: high-level estimates of application-specific lifecycle benefit (10 years) in $/kW and maximum market potential (10 years) in MW. Combined, these criteria indicate the economic potential (in $Millions) for a given energy storage application/benefit. The benefits and value propositions characterized provide an important indication of storage system cost targets for system and subsystem developers, vendors, and prospective users. Maximum market potential estimates provide developers, vendors, and energy policymakers with an indication of the upper bound of the potential demand for storage. The combination of the value of an individual benefit (in $/kW) and the corresponding maximum market potential estimate (in MW) indicates the possible impact that storage could have on the U.S. economy. The intended audience for this document includes persons or organizations needing a framework for making first-cut or high-level estimates of benefits for a specific storage project and/or those seeking a high-level estimate of viable price points and/or maximum market potential for their products. Thus, the intended audience includes: electric utility planners, electricity end users, non-utility electric energy and electric services providers, electric utility regulators and policymakers, intermittent renewables advocates and developers, Smart Grid advocates and developers, storage technology and project developers, and energy storage advocates.
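The arithmetic behind the economic potential criterion above can be made explicit. A minimal sketch follows; the benefit value and market size in the example are hypothetical, not figures from the guide.

```python
def economic_potential_millions(benefit_dollars_per_kw: float,
                                market_mw: float) -> float:
    """Economic potential in $Millions:
    ($/kW benefit) x (MW market x 1000 kW/MW) / (1e6 $ per $Million)."""
    return benefit_dollars_per_kw * market_mw * 1000.0 / 1e6

# Hypothetical example: a benefit worth $500/kW over ten years, with a
# maximum market potential of 200 MW over the same period:
print(economic_potential_millions(500.0, 200.0))  # 100.0 -> $100 Million
```

Because the conversion factors cancel to a simple divide-by-1000, the rule of thumb is: economic potential in $Millions equals ($/kW) times (MW) divided by 1000.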
Radial wire arrays provide an alternative x-ray source for Z-pinch driven Inertial Confinement Fusion. These arrays, where wires are positioned radially outwards from a central cathode to a concentric anode, have the potential to drive a more compact ICF hohlraum. A number of experiments were performed on the 7 MA Saturn generator. These experiments studied a number of potential risks in scaling radial wire arrays up from the 1 MA level, where they have been shown to provide similar x-ray outputs to larger diameter cylindrical arrays, to the higher current levels required for ICF. Data indicates that at 7 MA radial arrays can obtain higher power densities than cylindrical wire arrays, so they may be of use for x-ray driven ICF on future facilities. Even at the 7 MA level, data using Saturn's short pulse mode indicates that a radial array should be able to drive a compact hohlraum to temperatures of approximately 92 eV, which may be of interest for opacity experiments. These arrays are also shown to have applications to jet production for laboratory astrophysics. MHD simulations require additional physics to match the observed behavior.
This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.
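The kind of strong coupling LIME orchestrates between full physics codes can be illustrated at toy scale with two single-variable "models" solved to a joint fixed point by Picard iteration. This is not LIME's API; the model equations and feedback coefficients below are invented purely for illustration.

```python
# Toy illustration of strongly coupling two "physics" models by
# fixed-point (Picard) iteration. In a real coupled solve, each
# function would be a full code (thermal-hydraulics, neutronics, ...)
# and LIME would drive the nonlinear solution across all of them.

def thermal_model(power: float) -> float:
    """Coolant temperature as a function of power (hypothetical)."""
    return 300.0 + 0.5 * power

def neutronics_model(temperature: float) -> float:
    """Power as a function of temperature (hypothetical negative
    feedback: power drops as temperature rises)."""
    return 100.0 - 0.1 * (temperature - 300.0)

power = 100.0  # initial guess
for iteration in range(100):
    temperature = thermal_model(power)
    new_power = neutronics_model(temperature)
    if abs(new_power - power) < 1e-10:  # converged: consistent state
        break
    power = new_power

print(round(power, 4), round(temperature, 4))
```

When the feedback between codes is stronger, simple Picard iteration converges slowly or diverges, which is why LIME supports advanced nonlinear methods (e.g. Newton-based solves, and nonlinear elimination for codes that cannot expose residuals).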
Between November 30 and December 11, 2009, an evaluation was performed of the probability of containment failure, and of the time to clean up contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available on the likelihood of failure of containment components or on the time to cleanup. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state-of-knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty, and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors to both the time to cleanup and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.
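A minimal sketch of a belief/plausibility calculation over intervals (Dempster-Shafer evidence theory) is shown below. The focal elements and mass assignments are hypothetical stand-ins for SME-elicited cleanup-time ranges; they are not values from the evaluation.

```python
# Hypothetical SME-elicited focal elements: (interval in months, mass).
# Masses (basic probability assignments) must sum to 1.
focal_elements = [
    ((1.0, 6.0), 0.5),
    ((3.0, 12.0), 0.3),
    ((6.0, 24.0), 0.2),
]

def belief(lo: float, hi: float) -> float:
    """Bel([lo, hi]): total mass of focal elements wholly inside [lo, hi]
    -- the evidence that certainly supports the proposition."""
    return sum(m for (a, b), m in focal_elements if lo <= a and b <= hi)

def plausibility(lo: float, hi: float) -> float:
    """Pl([lo, hi]): total mass of focal elements intersecting [lo, hi]
    -- the evidence that does not contradict the proposition."""
    return sum(m for (a, b), m in focal_elements if a <= hi and b >= lo)

# Proposition: cleanup takes no more than a year (0 to 12 months).
print(belief(0.0, 12.0), plausibility(0.0, 12.0))
```

The gap between belief and plausibility is exactly the epistemic uncertainty the elicitation preserves: with coarse SME knowledge the interval [Bel, Pl] is wide, and it collapses to a single probability only when the evidence becomes precise.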
During the past several years, there has been a growing recognition of the threats posed by the use of shallow tunnels against both international border security and the integrity of critical facilities. This has led to the development and testing of a variety of geophysical and surveillance techniques for the detection of these clandestine tunnels. The challenges of detecting these tunnels, arising from the complexity of the near-surface environment, the subtlety of the tunnel signatures themselves, and the frequent siting of these tunnels in urban environments with high levels of cultural noise, have shown time and again that no single technique is robust enough to solve the tunnel detection problem in all cases. The question then arises as to how best to combine the multiple techniques currently available to create an integrated system that offers the best chance of detecting these tunnels in a variety of clutter environments and geologies. This study uses Taguchi analysis with simulated sensor detection performance to address this question. The analysis results show that ambient noise has a greater effect on detection performance than tunnel characteristics and geological factors.
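Taguchi analysis ranks factor effects by comparing signal-to-noise (S/N) ratios across experimental runs. As a minimal sketch of the standard "larger-is-better" S/N statistic applied to this setting, consider hypothetical detection probabilities from repeated simulated runs under two ambient-noise levels; the numbers are invented for illustration.

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-is-better signal-to-noise ratio in dB:
    S/N = -10 * log10( mean(1 / y_i^2) ).
    Higher S/N means consistently higher (better) responses."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / n)

# Hypothetical detection probabilities across repeated simulated runs:
low_ambient_noise = [0.90, 0.85, 0.88]
high_ambient_noise = [0.55, 0.40, 0.60]

print(round(sn_larger_is_better(low_ambient_noise), 2))
print(round(sn_larger_is_better(high_ambient_noise), 2))
```

In a full Taguchi study, each factor (noise level, tunnel depth, geology, ...) is varied over an orthogonal array of runs, and the spread of mean S/N between a factor's levels measures that factor's influence; a large S/N drop between noise levels, as in this toy example, is the signature of a dominant factor.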