Materials with switchable states are desirable in many areas of science and technology. The ability to thermally transform a dielectric material to a conductive state should allow for the creation of electronics with built-in safety features. Specifically, the undesirable build-up and discharge of electricity in the event of a fire or overheating would be averted by utilizing thermo-switchable dielectrics in the capacitors of electrical devices (preventing the capacitors from charging at elevated temperatures). We have designed a series of polymers that effectively switch from a non-conductive to a conductive state. The thermal transition is governed by the stability of the leaving group after it departs as a free entity. Here, we present the synthesis and characterization of a series of precursor polymers that eliminate to form poly(p-phenylene vinylene)s (PPVs).
Trilinos is an object-oriented software framework that enables the solution of large-scale, complex multiphysics engineering and scientific problems. Different Trilinos packages build on each other to create a stack providing the necessary capabilities: (1) non-linear solvers; (2) linear solvers/preconditioners; (3) distributed linear algebra; and (4) local linear algebra.
This paper provides a summary of observations drawn from twenty years of personal experience in working with regulatory criteria for the permanent disposal of radioactive waste for both the Waste Isolation Pilot Plant repository for transuranic defense waste and the proposed Yucca Mountain repository for spent nuclear fuel and high-level wastes. Rather than providing specific recommendations for regulatory criteria, my goal here is to provide a perspective on topics that are fundamental to how high-level radioactive waste disposal regulations have been implemented in the past. What are the main questions raised relevant to long-term disposal regulations? What has proven effective in the past? Where have regulatory requirements perhaps had unintended consequences? New regulations for radioactive waste disposal may prove necessary, but the drafting of these regulations may be premature until a broad range of policy issues is better addressed. In the interim, the perspective offered here may be helpful for framing policy discussions.
Aircraft impacts at flight speeds are relevant environments for aircraft safety studies. This type of environment pertains to normal environments such as wildlife impacts and rough landings, but also the abnormal environment that has more recently been evidenced in cases such as the Pentagon and World Trade Center events of September 11, 2001, and the FBI building impact in Austin. For more severe impacts, the environment is combined because it involves not just the structural mechanics, but also the release of the fuel and the subsequent fire. Impacts normally last on the order of milliseconds to seconds, whereas the fire dynamics may last for minutes to hours, or longer. This presents a serious challenge for physical models that employ discrete time stepping to model the dynamics with accuracy. Another challenge is that the capabilities to model the fire and structural impact are seldom found in a common simulation tool. Sandia National Labs maintains two codes under a common architecture that have been used to model the dynamics of aircraft impact and fire scenarios. Only recently have these codes been coupled directly to provide a fire prediction that is better informed on the basis of a detailed structural calculation. To enable this technology, several facilitating models are necessary, as is a methodology for determining and executing the transfer of information from the structural code to the fire code. A methodology has been developed and implemented. Previous test programs at the Sandia National Labs sled track provide unique data for the dynamic response of an aluminum tank of liquid water impacting a barricade at flight speeds. These data are used to validate the modeling effort, and suggest reasonable accuracy for the dispersion of a non-combustible fluid in an impact environment. The capability is also demonstrated with a notional impact of a fuel-filled container at flight speed. 
Both of these scenarios are used to evaluate numerical approximations and help provide an understanding of the quantitative accuracy of the modeling methods.
A unitary quantum gate is the basic functioning element of a quantum computer. Summary of results: (1) robustness of a general n-qubit gate: 1 - F ∝ 2^n; (2) robustness of a universal gate with complete isolation of one- and two-qubit subgates: 1 - F ∝ n; and (3) robustness of a universal gate with small unwanted couplings between the qubits is unclear.
The Reflex Triode can efficiently produce and transmit medium-energy (10-100 keV) x-rays. Perfect reflexing through a thin converter can increase transmission of 10-100 keV x-rays. Gamble II experiments at 1 MV, 1 MA, 60 ns gave maximum dose with 25 micron tantalum. Electron orbits depend on the foil thickness. Electron orbits from LSP were used to calculate path length inside the tantalum. A simple formula predicts the optimum foil thickness for reflexing converters. The I(V) characteristics of the diode can be understood using simple models. Critical current dominates high-voltage triodes; bipolar current is more important at low voltage. Higher-current (2.5 MA), lower-voltage (250 kV) triodes are being tested on Saturn at Sandia. Small, precise anode-cathode gaps enable low-impedance operation. Sample Saturn results at 2.5 MA, 250 kV are presented. The Saturn dose rate could be about two times greater. A cylindrical triode may improve x-ray transmission. The cylindrical triode design will be tested at 1/2 scale on Gamble II. For higher current on Saturn, two cylindrical triodes could be used in parallel. Three triodes in parallel require positive-polarity operation. 'Triodes in series' would improve matching of low-impedance triodes to the generator.
Conclusions of this presentation are: (1) Physics of reflex triodes from Gamble II experiments (1 MA, 1 MV) - (a) Converter thickness 1/20 of CSDA range optimizes x-ray dose; (b) Simple model based on electron orbits predicts optimum thickness from LSP/ITS calculations and experiment; (c) I(V) analysis: beam dynamics different between 1 MV and 250 kV; (2) Multi-MA triode experiments on Saturn (2.5 MA, 250 kV) - (a) Polarity inversion in vacuum, (b) No-convolute configuration, accurate gap settings, (c) About half of current produces useful x-rays, (d) Cylindrical triode one option to increase x-ray transmission; and (3) Potential to increase Saturn current toward 10 MA, maintaining voltage and outer diameter - (a) 2 (or 3) cylindrical triodes in parallel, (b) Triodes in series to improve matching, (c) These concepts will be tested first on Gamble II.
Subsurface containment of CO2 is predicated on effective caprock sealing. Many previous studies have relied on macroscopic measurements of capillary breakthrough pressure and other petrophysical properties without direct examination of the solid phases that line pore networks and directly contact fluids. However, pore-lining phases strongly contribute to sealing behavior through interfacial interactions among CO2, brine, and the mineral or non-mineral phases. Our high-resolution (i.e., sub-micron) examination of the composition of pore-lining phases of several continental and marine mudstones indicates that sealing efficiency (i.e., breakthrough pressure) is governed by pore shapes and pore-lining phases that are not identifiable except through direct characterization of pores. Bulk X-ray diffraction data do not indicate which phases line the pores and may be especially lacking for mudstones with organic material. Organics can line pores and may represent once-mobile phases that modify the wettability of an originally clay-lined pore network. For shallow formations (i.e., < ~800 m depth), interfacial tension and contact angles result in breakthrough pressures that may be as high as those needed to fracture the rock; thus, in the absence of fractures, capillary sealing efficiency is indicated. Deeper seals have poorer capillary sealing if mica-like wetting dominates the wettability.
There are many applications that need a meso-scale rotational actuator. These applications have been left by the wayside because of the lack of actuation at this scale. Sandia National Laboratories has many unique fabrication technologies that could be used to create an electromagnetic actuator at this scale. There are also many designs to be explored. This internship explored these designs and fabrication technologies to find an inexpensive design that can be used for prototyping the electromagnetic rotational actuator.
To help determine the capability range of a MEMS optical microphone design in harsh conditions, computer simulations were carried out. Thermal stress modeling was performed up to temperatures of 1000 °C. A particular concern was the stress and strain profiles due to the coefficient of thermal expansion mismatch between the polysilicon device and the alumina packaging. Preliminary results with simplified models indicate acceptable levels of deformation within the device.
Partial characterization of a series of electrostatically actuated active microfluidic valves is performed. Tests are performed on a series of 24 valves from two different MEMS sets. Focus is on the physical deformation of the structures under variable pressure loadings as well as voltage levels. Other issues that inhibit proper performance of the valves are observed, addressed, and documented as well. Many microfluidic applications need the distribution of gases at finely specified pressures and times. To this end, a series of electrostatically actuated active valves has been fabricated. Eight separate silicon die are discussed, each with a series of four active valves present. The devices are designed such that the valve boss is held at ground, with a voltage applied to lower contacts. The resulting electrostatic forces pull the boss down against a series of stops, intended to create a seal as well as to prevent accidental shorting of the device. The valves have been uniquely packaged atop a stack of material layers, which have inlaid channels for application of fluid flow to the backside of the valve. Electrical contact is supplied from the underlying printed circuit board, attached to external supplies and along traces on the silicon. Pressure is supplied from a reservoir of house compressed air at up to 100 psig. This is routed through a Norgren R07-200-RGKA pressure regulator, rated to 150 psig. From there, flow passes a manually operated ball valve and then a flow meter. Two flow meters were utilized: initially an Omega FMA1802 rated at 10 sccm, followed by a Flocat model for higher flow rates up to 100 sccm. An Omega DPG4000-500 pressure gauge produced pressure measurements. Optical measurements were returned via a WYKO interferometry probe station, allowing for determination of physical deformations of the device under a variety of voltage and pressure loads.
This knowledge could lead to insight into the failure mechanisms of the device, yielding improvements for subsequent fabrications.
This paper describes our approach to adapting a text-document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric [11] to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex-5 LX implementation requires two orders of magnitude less memory than the original algorithm. At 166 MB/s (80× the software implementation), the hardware is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
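The TFIDF-and-similarity computation that the hardware simplifies can be sketched in software. The following is a minimal illustrative version (tokenized documents, natural-log IDF, cosine similarity over sparse dict vectors), not the paper's actual streaming algorithm or its model representation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse TF-IDF vectors (dicts) for tokenized documents.

    Illustrative sketch only; the hardware classifier in the paper uses
    a simplified variant with a compact decision representation.
    """
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)               # raw term frequency
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine_similarity(a, b):
    """Cosine similarity between two sparse TF-IDF vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

In the attack-detection setting, each incoming HTTP request would be tokenized the same way and scored against vectors built from training data.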
The dose limits for emissions from the nuclear fuel cycle were established by the Environmental Protection Agency in 40 CFR Part 190 in 1977. These limits were based on assumptions regarding the growth of nuclear power and the technical capabilities of decontamination systems as well as the then-current knowledge of atmospheric dispersion and the biological effects of ionizing radiation. In the more than thirty years since the adoption of the limits, much has changed with respect to the scale of nuclear energy deployment in the United States and the scientific knowledge associated with modeling health effects from radioactivity release. Sandia National Laboratories conducted a study to examine and understand the methodologies and technical bases of 40 CFR 190 and also to determine if the conclusions of the earlier work would be different today given the current projected growth of nuclear power and the advances in scientific understanding. This report documents the results of that work.
This Recycling Opportunity Assessment (ROA) is a revision and expansion of the FY04 ROA. The original 16 materials are updated through FY08, and then 56 material streams are examined through FY09 with action items for ongoing improvement listed for most. In addition to expanding the list of solid waste materials examined, two new sections have been added to cover hazardous waste materials. Appendices include energy equivalencies of materials recycled, trends and recycle data, and summary tables of high, medium, and low priority action items.
The nearest neighbor search is a significant problem in transportation modeling and simulation. This paper describes how the nearest neighbor search is implemented efficiently with respect to running time in the NISAC Agent-Based Laboratory for Economics. The paper shows two methods to optimize the running time of the nearest neighbor search. The first optimization uses a different distance metric that is more computationally efficient. The concept of a magnitude-comparable distance is described, and the paper gives a specific magnitude-comparable distance that is more computationally efficient than the actual distance function. The paper also shows how the given magnitude-comparable distance can be used to speed up the actual distance calculation. The second optimization reduces the number of points the search examines by using a spatial data structure. The paper concludes with tests of the different techniques discussed and their results.
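The first optimization can be illustrated with the classic magnitude-comparable distance, squared Euclidean distance (the paper's specific metric may differ): because the square root is monotonic, comparing squared distances ranks candidates identically while skipping a sqrt call per point. A minimal sketch:

```python
def squared_distance(p, q):
    """Squared Euclidean distance: magnitude-comparable with the true
    distance, since sqrt is monotonic, but cheaper to compute."""
    dx = p[0] - q[0]
    dy = p[1] - q[1]
    return dx * dx + dy * dy

def nearest_neighbor(query, points):
    """Brute-force nearest-neighbor search over 2-D points using the
    squared distance; the paper's second optimization would replace
    this linear scan with a spatial data structure to prune candidates."""
    best, best_d2 = None, float("inf")
    for p in points:
        d2 = squared_distance(query, p)
        if d2 < best_d2:
            best, best_d2 = p, d2
    # The true distance is recovered once, at the end, if it is needed.
    return best, best_d2 ** 0.5
```

This also shows the paper's related point that the magnitude-comparable value can speed up the actual distance: the sqrt is applied once to the winning candidate rather than to every point examined.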
We discuss two recent diagnostic-development efforts in our laboratory: femtosecond pure-rotational coherent anti-Stokes Raman scattering (CARS) for thermometry and species detection in nitrogen and air, and nanosecond vibrational CARS measurements of electric fields in air. Transient pure-rotational fs-CARS data show the evolution of the rotational Raman polarization in nitrogen and air over the first 20 ps after impulsive pump/Stokes excitation. The Raman-resonant signal strength at long time delays is large, and we additionally observe a large time separation between the fs-CARS signatures of nitrogen and oxygen, so that the pure-rotational approach to fs-CARS has promise for simultaneous species and temperature measurements with suppressed nonresonant background. Nanosecond vibrational CARS of nitrogen for electric-field measurements is also demonstrated. In the presence of an electric field, a dipole is induced in the otherwise nonpolar nitrogen molecule, which can be probed with the introduction of strong collinear pump and Stokes fields, resulting in CARS signal radiation in the infrared. The electric-field diagnostic is demonstrated in air, where the strength of the coherent infrared emission and the sensitivity of our field measurements are quantified, and the scaling of the infrared signal with field strength is verified.
A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using transient PVT methodology has been conducted to provide a data set for optimizing heat transfer correlations in high-pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW, that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However, to properly evaluate the mass that has transferred as a function of time, these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work, new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high-quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling are also discussed.
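The PVT idea behind these experiments can be sketched simply: the mass resident in a vessel is inferred from its measured pressure, volume, and temperature. The vessel size and pressure/temperature states below are hypothetical, and a real analysis would use a helium equation of state (and tools like NETFLOW) rather than this ideal-gas sketch:

```python
R = 8.314         # J/(mol*K), universal gas constant
M_HE = 4.0026e-3  # kg/mol, molar mass of helium

def gas_mass(p_pa, v_m3, t_k, z=1.0):
    """Mass of gas in a vessel from measured P, V, T.

    z is the compressibility factor; z = 1 recovers the ideal-gas law.
    At tens of MPa helium deviates from ideal behavior, so a real
    analysis would supply z from an equation of state.
    """
    return p_pa * v_m3 * M_HE / (z * R * t_k)

# Hypothetical transfer into a 0.5 L receiver, instrumented for P and T:
m_initial = gas_mass(0.1e6, 0.5e-3, 293.0)   # receiver before the transfer
m_final = gas_mass(5.0e6, 0.5e-3, 320.0)     # receiver warms while filling
m_transferred = m_final - m_initial          # mass delivered, in kg
```

The experimental difficulty the report addresses is the temperature argument: during a rapid transfer the gas temperature is transient and set by wall heat transfer, which is exactly what the correlations being validated must capture.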
The presentation briefly addresses three topics. First, science has played an important role throughout the history of the WIPP project, beginning with site selection in the middle 1970s. Science was a key part of site characterization in the 1980s, providing basic information on geology, hydrology, geochemistry, and the mechanical behavior of the salt, among other topics. Science programs also made significant contributions to facility design, specifically in the area of shaft seal design and testing. By the middle 1990s, emphasis shifted from site characterization to regulatory evaluations, and the science program provided one of the essential bases for certification by the Environmental Protection Agency in 1998. Current science activities support ongoing disposal operations and regulatory recertification evaluations mandated by the EPA. Second, the EPA regulatory standards for long-term performance frame the scientific evaluations that provide the basis for certification. Unlike long-term dose standards applied to Yucca Mountain and proposed repositories in other nations, the WIPP regulations focused on cumulative releases during a fixed time interval of 10,000 years, and placed a high emphasis on the consequences of future inadvertent drilling intrusions into the repository. Close attention to the details of the regulatory requirements facilitated EPA's review of the DOE's 1996 Compliance Certification Application. Third, the scientific understanding developed for WIPP provided the basis for modeling studies that evaluated the long-term performance of the repository in the context of regulatory requirements. These performance assessment analyses formed a critical part of the demonstration that the site met the specific regulatory requirements as well as providing insight into the overall understanding of the long-term performance of the system.
The presentation concludes with observations on the role of science in the process of developing a disposal system, including the importance of establishing the regulatory framework, building confidence in the long-term safety of the system, and the critical role of the regulator in decision making.
The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.
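One way to picture a near-neighbor scheme, intermediate between uniform-in-cell selection and the strict nearest-neighbor scheme, is the sketch below. The sampling fraction, data layout, and function name are illustrative assumptions, not details from the paper:

```python
import random

def select_collision_partner(i, cell_particles, positions, fraction=0.25):
    """Near-neighbor collision-partner selection for DSMC (sketch).

    Instead of picking a partner uniformly from the whole cell (which
    inflates the mean collision separation) or always taking the single
    nearest neighbor, draw a random candidate subset and take its
    closest member. `fraction` is the share of the cell population
    sampled; it is an illustrative parameter.
    """
    candidates = [j for j in cell_particles if j != i]
    k = max(1, int(fraction * len(candidates)))
    sample = random.sample(candidates, k)
    xi = positions[i]
    # Closest candidate by squared distance (ranking needs no sqrt).
    return min(sample, key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(xi, positions[j])))
```

With `fraction=1.0` this reduces to the nearest-neighbor scheme; with `k` fixed at 1 it reduces to uniform selection, so the parameter spans the family of schemes whose discretization errors the paper compares.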
The Sunshine to Petrol effort at Sandia aims to convert carbon dioxide and water to precursors for liquid hydrocarbon fuels using concentrated solar power. Significant advances have been made in the field of solar thermochemical CO2-splitting technologies utilizing yttria-stabilized zirconia (YSZ)-supported ferrite composites. Conceptually, such materials work via the basic redox reactions: Fe3O4 → 3FeO + 0.5 O2 (thermal reduction, >1350 °C) and 3FeO + CO2 → Fe3O4 + CO (CO2-splitting oxidation, <1200 °C). There has been limited fundamental characterization of the ferrite-based materials at the high temperatures and conditions present in these cycles. A systematic study of these composites is underway in an effort to begin to elucidate microstructure, structure-property relationships, and the role of the support on redox behavior under high-temperature reducing and oxidizing environments. In this paper the synthesis, structural characterization (including scanning electron microscopy and room-temperature and in-situ x-ray diffraction), and thermogravimetric analysis of YSZ-supported ferrites will be reported.
In the fire safety community, the trend is toward implementing performance-based standards in place of existing prescriptive ones. Prescriptive standards can be difficult to adapt to changing design methods, materials, and application situations of systems that ultimately must perform well in unwanted fire situations. In general, this trend has produced positive results and is embraced by the fire protection community. The question arises as to whether this approach could be used to advantage in cook-off testing. Prescribed fuel fire cook-off tests have been instituted because of historical incidents that led to extensive damage to structures and loss of life. They are designed to evaluate the propensity for a violent response. The prescribed protocol has several advantages: it can be defined in terms of controllable parameters (wind speed, fuel type, pool size, etc.), and it may be conservative for a particular scenario. However, fires are inherently variable, and prescribed tests are not necessarily representative of a particular accident scenario. Moreover, prescribed protocols are not necessarily adaptable and may not be conservative. We also consider performance-based testing. This requires more knowledge and thought regarding not only the fire environment, but also the behavior of the munitions themselves. Sandia uses a performance-based approach in assuring the safe behavior of systems of interest that contain energetic materials. Sandia also conducts prescriptive fire testing for the IAEA, NRC, and DOT. Here we comment on the strengths and weaknesses of both approaches and suggest a path forward should it be desirable to pursue a performance-based cook-off standard.
The magneto-Rayleigh-Taylor (MRT) instability is the most important instability for determining whether a cylindrical liner can be compressed to its axis in a relatively intact form, a requirement for achieving the high pressures needed for inertial confinement fusion (ICF) and other high-energy-density physics applications. While there are many published RT studies, there are only a handful of well-characterized MRT experiments at time scales >1 µs and none for 100 ns z-pinch implosions. Experiments used solid Al liners with outer radii of 3.16 mm and thicknesses of 292 µm, dimensions similar to magnetically-driven ICF target designs [1]. In most tests the MRT instability was seeded with sinusoidal perturbations (λ = 200, 400 µm, peak-to-valley amplitudes of 10, 20 µm, respectively), wavelengths similar to those predicted to dominate near stagnation. Radiographs show the evolution of the MRT instability and the effects of current-induced ablation of mass from the liner surface. Additional Al liner tests used 25-200 µm wavelengths and flat surfaces. Codes being used to design magnetized liner ICF loads [1] match the features seen except at the smallest scales (<50 µm). Recent experiments used Be liners to enable penetrating radiography using the same 6.151 keV diagnostics and provide an in-flight measurement of the liner density profile.
RF toxicity and Information Warfare (IW) are becoming omnipresent, posing threats to the protection of nuclear assets and within theatres of hostility or combat, where tactical operation of wireless communication without detection and interception is important and sometimes critical for survival. As a result, a requirement for deployment of many security systems is a highly secure wireless technology manifesting stealth or covert operation, suitable for either permanent or tactical deployment where operation without detection or interruption is important. The possible use of ultra-wideband (UWB) spectrum technology as an alternative physical medium for wireless network communication offers many advantages over conventional narrowband and spread spectrum wireless communication. UWB, also known as fast-frequency chirp, is nonsinusoidal and sends information directly by transmitting sub-nanosecond pulses, without mixing baseband information onto a sinusoidal carrier. Thus UWB sends information using radar-like impulses, spreading its energy thinly over a vast spectrum, and can operate at extremely low transmission power within the noise floor, where other forms of RF find it difficult or impossible to operate. As a result, UWB offers low probability of detection (LPD), low probability of interception (LPI), and anti-jamming (AJ) properties in signal space. This paper analyzes and compares the vulnerability of UWB to narrowband and spread spectrum wireless network communication.