Proposed for publication in the SIAM Journal on Optimization.
Identifying small groups of lines, whose removal would cause a severe blackout, is critical for the secure operation of the electric power grid. We show how power grid vulnerability analysis can be studied as a mixed integer nonlinear programming (MINLP) problem. Our analysis reveals a special structure in the formulation that can be exploited to avoid nonlinearity and approximate the original problem as a pure combinatorial problem. The key new observation behind our analysis is the correspondence between the Jacobian matrix (a representation of the feasibility boundary of the equations that describe the flow of power in the network) and the Laplacian matrix in spectral graph theory (a representation of the graph of the power grid). The reduced combinatorial problem is known as the network inhibition problem, for which we present a mixed integer linear programming (MILP) formulation. Our experiments on benchmark power grids show that the reduced combinatorial model provides an accurate approximation and enables vulnerability analyses of realistically sized problems with more than 10,000 power lines.
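For readers unfamiliar with the correspondence, it can be made concrete in the DC power-flow approximation (a standard simplification used here only for illustration; the paper's analysis addresses the full nonlinear equations). With line susceptances w_{ij} = 1/x_{ij} taken as edge weights, the DC power-flow matrix is exactly the weighted graph Laplacian:

L_{ii} = \sum_{j \ne i} w_{ij}, \qquad L_{ij} = -w_{ij} \ (i \ne j), \qquad P = L\,\theta,

so removing lines perturbs the same matrix object that spectral graph theory studies.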
We design and implement a multipixel spatial modulator for terahertz beams using active terahertz metamaterials. Our first-generation device consists of a 4 x 4 pixel array, where each pixel is an array of subwavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. Through terahertz transmission experiments, we show that the spatial modulator has a uniform modulation depth of around 40% across all pixels, and negligible crosstalk, at the resonant frequency. This device can operate under small voltage levels, at room temperature, with low power consumption and reasonably high switching speed.
A need exists for developing codes and standards to support the widespread delivery of liquid hydrogen bulk fuel and fueling station storage. To develop these codes and standards, the consequences of planned and unplanned hydrogen releases must be understood. The systems under consideration are mainly those used in supplying hydrogen for transportation. These systems include production storage tanks, tanker trucks and tanks located at vehicle fueling stations. Typically, these systems store hydrogen in the saturated state at approximately 11 atmospheres. Storage vessels are heavily insulated and sometimes actively cooled to minimize the rate of hydrogen boil-off (intended hydrogen release).
Proposed for publication in a journal compiled by the National Defense University as a product of a regional network of strategic studies centers for North Africa, the Middle East, and South Asia, planned for publication in 2010.
We conducted a series of modified Hopkinson pressure bar (HPB) experiments to evaluate a new, damped, high-shock accelerometer recently developed by PCB Piezotronics Inc. Pulse shapers were used to create a long-duration, non-dispersive stress pulse in an aluminum bar that interacted with a tungsten disk at the end of the incident bar. We measured stress at the aluminum bar-disk interface with a quartz gage and measured acceleration at the free end of the disk with an Endevco model 7270A accelerometer and the new PCB 3991 accelerometer. The rise time of the incident stress pulse in the aluminum bar was long enough, and the disk length short enough, that the response of the disk can be closely approximated as rigid-body motion; an experimentally verified analytical model has been shown previously to support this assumption. Since the cross-sectional area and mass of the disk were known, we calculated the acceleration of the rigid disk from the quartz-gage force measurement and Newton's second law of motion. Comparisons of accelerations calculated from the quartz-gage data with the measured acceleration data show excellent agreement for the PCB accelerometer for peak amplitudes between 4,000 and 40,000 G, rise times as short as 40 microseconds, and pulse durations between 150 and 320 microseconds.
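The rigid-body reduction described above is just Newton's second law applied to the disk: with m the disk mass, A its cross-sectional area, and σ(t) the interface stress from the quartz gage, the reference acceleration is

a(t) = \frac{F(t)}{m} = \frac{\sigma(t)\,A}{m},

which is the quantity compared against the accelerometer outputs.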
With the reader's indulgence, the basis of this work is a paraphrase of a well-known aphorism regarding system models (Box, 1987), extended here to the object being modeled: “Essentially, all systems are broken, but some do useful work.”
Pure, amine-derivatized and nickel-doped sol-gel silica membranes have been developed on tubular Membralox-type commercial ceramic supports for the purpose of carbon dioxide separation from nitrogen under coal-fired power plant flue gas conditions. An extensive synthetic and permeation test study was carried out in order to optimize membrane CO₂ permeance, CO₂:N₂ separation factor and resistance against densification. Pure silica membranes prepared under optimized conditions exhibited an attractive combination of CO₂ permeance of 2.0 MPU (1 MPU = 1 cm³(STP)·cm⁻²·min⁻¹·atm⁻¹) and CO₂:N₂ separation factor of 80 with a dry 10:90 (v/v) CO₂:N₂ feed at 25 C. However, these membranes exhibited flux decline phenomena under prolonged exposure to humidified feeds, especially in the presence of trace SO₂ gas in the feed. Doping the membranes with nickel(II) nitrate salt was effective in retarding densification, as manifested by combined higher permeance and higher separation factor of the doped membrane compared to the pure (undoped) silica membrane after 168 hours exposure to simulated flue gas conditions.
Over the past several years, verifying and validating complex codes at Sandia National Laboratories has become a major part of code development. These activities address two important aspects of simulation modeling: determining whether the models have been implemented correctly (verification) and determining whether the correct models have been selected (validation). In this talk, we will focus on verification and discuss the basics of code verification and its application to a few codes and problems at Sandia.
This paper describes the preliminary evaluation of a 6-degree-of-freedom electrodynamic shaker system suitable for small component testing; the principal purpose of the system is to demonstrate the technology. The 8 by 8 inch (20.3 by 20.3 cm) table is driven by 12 electrodynamic shakers, each with a force capability of about 50 lbs (220 N), producing motion in all 6 rigid body modes. The system was developed through an informal cooperative agreement between Sandia National Laboratories, Team Corp., and Spectral Dynamics Corporation. Sandia provided the laboratory space and some development funds, Team provided the mechanical system, and Spectral Dynamics provided the control system. Spectral Dynamics was chosen to provide the control system partly because of their experience in MIMO control and partly because Sandia already had part of the system in house. The shaker system was conceived and manufactured by TEAM Corp. Figure 1 shows the overall system. The vibration table, electrodynamic shakers, hydraulic pumps, and amplifiers are all housed in a single cabinet. Figure 2 is a drawing showing how the electrodynamic shakers are coupled to the table: each shaker is coupled through a hydraulic spherical pad bearing providing 5 free degrees of freedom and one stiff degree of freedom. The pad bearings must be preloaded with a static force because they cannot provide any tension force. The horizontal bearings are preloaded with steel springs. The drawing shows a spring providing the vertical preload; this was changed in the final design, in which the vertical preload is provided by multiple strands of an O-ring material, as shown in Figure 4. Four shakers provide excitation in each of the three orthogonal axes; the specifications of the shakers are outlined in Table 1. By choosing the phase relationships between the shakers, all six rigid body modes (three translations and three rotations) can be excited. The system is overdetermined: there are more shakers than degrees of freedom, which presented an interesting control problem. The problem was approached using the input-output transformation matrices provided in the Spectral control system. Twelve control accelerometers were used, a tri-axial accelerometer at each corner of the table (see Figure 5); Figure 6 shows the nomenclature used to identify the shakers and control accelerometers. A fifth tri-axial accelerometer was placed at the center of the table but was not used for control. Thus we had 12 control accelerometers and 12 shakers to control a 6-dof shaker. The 12 control channels were reduced to a 6-dof control using a simple input transformation matrix. The control was defined by a 6x6 spectral density matrix. The six outputs in the control variable coordinates were transformed to twelve physical drive signals using another simple output transformation matrix. It was assumed that the accelerometers and shakers were well matched, such that the transformation matrices were independent of frequency and could be deduced from rigid body considerations. The input/output transformations are shown in Equations 1 and 2.
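As a rough illustration of the frequency-independent, rigid-body transformation matrices mentioned above (a minimal numpy sketch with hypothetical geometry, not the Spectral Dynamics implementation or Equations 1 and 2), the input transformation can be formed from the pseudo-inverse of a geometry matrix that maps the six rigid-body accelerations to the twelve accelerometer channels, and the output transformation from the analogous matrix for the shaker attachment points:

```python
import numpy as np

# Hypothetical geometry: tri-axial accelerometers at the four table corners (meters).
corners = np.array([[ 0.1,  0.1, 0.0], [-0.1,  0.1, 0.0],
                    [-0.1, -0.1, 0.0], [ 0.1, -0.1, 0.0]])
axes = np.eye(3)  # each corner measures x, y, z

# Rigid-body kinematics: channel reading = e . (a_cg + alpha x r) = [e, r x e] . [a_cg; alpha]
rows = []
for r in corners:
    for e in axes:
        rows.append(np.concatenate([e, np.cross(r, e)]))
G = np.array(rows)                 # 12 x 6 geometry matrix

T_in = np.linalg.pinv(G)           # 6 x 12: accelerometer channels -> 6 rigid-body accelerations

# Output side: 12 drive signals from 6 generalized commands; here the shaker attachment
# points/directions are assumed collocated with the accelerometers purely for illustration.
S = G                              # 12 x 6 shaker geometry matrix (placeholder assumption)
T_out = np.linalg.pinv(S.T)        # 12 x 6 output transformation (minimum-norm drive split)

# Quick consistency checks with a pure pitch command (rotation about y).
cmd = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0])
drives = T_out @ cmd
print(np.allclose(T_in @ (G @ cmd), cmd))   # measurements map back to the commanded motion
print(np.allclose(S.T @ drives, cmd))       # 12 drives reproduce the commanded generalized force
```

Because the system is overdetermined (twelve actuators for six rigid-body coordinates), the pseudo-inverse yields a minimum-norm distribution of drive signals.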
This report supplements audit 2008-E-0009, conducted by the ES&H, Quality, Safeguards & Security Audits Department, 12870, during the fall and winter of FY 2008. The study evaluates slips, trips and falls, the leading cause of reportable injuries at Sandia. In 2007, almost half of the more than 100 such incidents occurred in parking lots. During the course of the audit, over 5,000 observations were collected in 10 parking lots across SNL/NM. Based on benchmarks and trends of pedestrian behavior, the report proposes pedestrian-friendly features and attributes to improve pedestrian safety in parking lots. Less safe pedestrian behavior is associated with older parking lots lacking pedestrian-friendly features and attributes, such as those for buildings 823, 887 and 811. Conversely, safer pedestrian behavior is associated with newer parking lots that have designated walkways, intra-lot walkways and sidewalks. Observations also revealed widespread noncompliance by motorists with parking lot speed limits and stop signs and markers.
During the 110th Congress (calendar years 2007 and 2008), Matthew Allen, a Sandian nuclear scientist, served as a Congressional Fellow on the Committee on Homeland Security in the House of Representatives. This report is an informative account of the role staffers play in assisting the members of Congress in their oversight and legislative duties. It is also a personal account of Matthew Allen's experience as a committee staffer in the House of Representatives.
A simplified ESBWR MELCOR model was developed to track the transport of iodine released from damaged reactor fuel in a hypothesized core damage accident. To account for the effects of iodine pool chemistry, radiolysis of air and cable insulation, and surface coatings (i.e., paint), the iodine pool model in MELCOR was activated. Modifications were made to MELCOR to add sodium pentaborate as a buffer in the iodine pool chemistry model. An issue of specific interest was whether iodine vapor removed from the drywell vapor space by the PCCS heat exchangers would be sequestered in water pools or would be rereleased as vapor back into the drywell. Because iodine vapor is not included in the deposition models for diffusiophoresis or thermophoresis in the current version of MELCOR, a parametric study was conducted to evaluate the impact of a range of iodine removal coefficients in the PCCS heat exchangers. The study found that higher removal coefficients resulted in a lower mass of iodine vapor in the drywell vapor space.
Because of past military operations, lack of upkeep and looting, there are now enormous radioactive waste problems in Iraq. These waste problems include destroyed nuclear facilities, uncharacterized radioactive wastes, liquid radioactive waste in underground tanks, wastes related to the production of yellowcake, sealed radioactive sources, activated metals and contaminated metals that must be constantly guarded. Iraq currently lacks the trained personnel and the regulatory and physical infrastructure to safely and securely manage these facilities and wastes. In 2005 the International Atomic Energy Agency (IAEA) agreed to organize an international cooperative program to assist Iraq with these issues. Soon after, the Iraq Nuclear Facility Dismantlement and Disposal Program (the Iraq NDs Program) was initiated by the U.S. Department of State (DOS) to support the IAEA and assist the Government of Iraq (GOI) in eliminating the threats from poorly controlled radioactive materials. The Iraq NDs Program is providing support for the IAEA plus training, consultation and limited equipment to the GOI. The GOI owns the problems and will be responsible for implementation of the Iraq NDs Program. Sandia National Laboratories (Sandia) is a part of the DOS team implementing the Iraq NDs Program. This report documents Sandia's support of the Iraq NDs Program, which has developed into three principal work streams: (1) training and technical consultation; (2) introducing Iraqis to modern decommissioning and waste management practices; and (3) supporting the IAEA as it assists the GOI. Examples of each of these work streams include: (1) presentation of a three-day training workshop on 'Practical Concepts for Safe Disposal of Low-Level Radioactive Waste in Arid Settings;' (2) leading GOI representatives on a tour of two operating low-level radioactive waste disposal facilities in the U.S.; and (3) supporting the IAEA's Technical Meeting with the GOI from April 21-25, 2008. As noted in the report, there was significant teaming between the various participants to best help the GOI. On-the-ground progress is the focus of the Iraq NDs Program, and much of the work is a transfer of technical and practical skills and knowledge that Sandia uses day to day. On-the-ground progress was achieved in July 2008 when the GOI began the physical cleanup and dismantlement of the Active Metallurgical Testing Laboratory (LAMA) facility at Al Tuwaitha, near Baghdad.
This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying, within polynomial time, the chamber cone containing a polynomial system in n variables with n+k terms - a significant improvement over previous algorithms, all of which have exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 terms and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.
This report presents progress on identifying and classifying features involving combustion in turbulent flow using principal component analysis (PCA) and k-means clustering within an in situ analysis framework. We describe a process for extracting temporally and spatially varying information from the simulation, classifying the information, and then applying the classification algorithm either to other portions of the simulation not used for training the classifier or to further simulations. Because the regions classified as being of interest take up a small portion of the overall simulation domain, it will consume fewer resources to perform further analysis or to save these regions at a higher fidelity than previously possible. The implementation of this process is partially complete, and results obtained from PCA of test data are presented that indicate the process may have merit: the basis vectors that PCA provides are significantly different in regions where combustion is occurring, and even when all 21 species of a lifted flame simulation are correlated, the computational cost of PCA is minimal. What remains to be determined is whether k-means (or other) clustering techniques will be able to identify combined combustion and flow features with an accuracy that makes further characterization of these regions feasible and meaningful.
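A bare-bones offline sketch of the PCA-plus-k-means pipeline described above is given below (illustrative only, with random stand-in data; the actual analysis runs in situ against simulation fields): species values at each grid point are projected onto the leading principal components, and the projections are clustered to label candidate regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for simulation data: n_points samples of n_species mass fractions.
n_points, n_species = 5000, 21
X = rng.random((n_points, n_species))

# --- PCA via SVD on mean-centered data ---
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
n_comp = 3
scores = Xc @ Vt[:n_comp].T          # projection onto leading principal components

# --- simple k-means on the PCA scores ---
def kmeans(data, k, iters=50):
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(scores, k=4)
# Clusters occupying a small fraction of points could be flagged for
# higher-fidelity output or further characterization.
print(np.bincount(labels))
```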
The physical foundations and domain of applicability of the Kayenta constitutive model are presented along with descriptions of the source code and user instructions. Kayenta, which is an outgrowth of the Sandia GeoModel, includes features and fitting functions appropriate to a broad class of materials including rocks, rock-like engineered materials (such as concretes and ceramics), and metals. Fundamentally, Kayenta is a computational framework for generalized plasticity models. As such, it includes a yield surface, but the term 'yield' is generalized to include any form of inelastic material response including microcrack growth and pore collapse. Kayenta supports optional anisotropic elasticity associated with ubiquitous joint sets. Kayenta supports optional deformation-induced anisotropy through kinematic hardening (in which the initially isotropic yield surface is permitted to translate in deviatoric stress space to model Bauschinger effects). The governing equations are otherwise isotropic. Because Kayenta is a unification and generalization of simpler models, it can be run using as few as 2 parameters (for linear elasticity) to as many as 40 material and control parameters in the exceptionally rare case when all features are used. For high-strain-rate applications, Kayenta supports rate dependence through an overstress model. Isotropic damage is modeled through loss of stiffness and strength.
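Schematically, the kinematic-hardening feature mentioned above follows the familiar generalized-plasticity pattern (the generic textbook form, not Kayenta's specific fitting functions): the yield function is evaluated on the shifted deviatoric stress

f(\boldsymbol{\xi}) \le 0, \qquad \boldsymbol{\xi} = \mathbf{S} - \boldsymbol{\alpha},

where \mathbf{S} is the stress deviator and the backstress \boldsymbol{\alpha} evolves with inelastic deformation, translating the initially isotropic yield surface in deviatoric stress space to capture Bauschinger effects.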
This document is a place to put commentary on the whitepaper and is meant to be largely ad hoc. Because the whitepaper describes a potential program in DOE ASCR and because it concerns many researchers in the field, these notes are meant to be extendable by anyone willing to put in the effort. Of course, criticisms of the contents of the notes themselves are also welcome.
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the specific needs of the laboratory.
Enhanced knowledge preservation for DOE DP technical component activities has recently received much attention. As part of this recent knowledge preservation effort, improved documentation of the sample preparation and electrical testing procedures for lead magnesium niobate-lead titanate (PMN/PT) qualification pellets was completed. The qualification pellets are fabricated from the same parent powders used to produce PMN/PT lightning arrestor connector (LAC) granules at HWF&T. In our report, the procedures for fired pellet surface preparation, electrode deposition, electrical testing and data recording are described. The dielectric measurements described in our report are an information-only test. Technical reasons for selecting the electrode material, electrode size and geometry are presented. The electrical testing is based on measuring the dielectric constant and dissipation factor of the pellet during cooling from 280 C to 220 C. The most important data are the temperature at which the peak dielectric constant occurs (the Curie point temperature) and the peak dielectric constant magnitude. We determined that the peak dielectric constant for our procedure would be that measured at 1 kHz at the Curie point. Both the peak dielectric constant and the Curie point parameters provide semi-quantitative information concerning the chemical and microstructural homogeneity of the parent material used for the production of PMN/PT granules for LACs. Finally, we have proposed flag limits for the dielectric data for the pellets. Specifically, if the temperature of the peak dielectric constant falls outside the range of 250 C ± 30 C, we propose that a flag limit be imposed that will initiate communication between production agency and design agency personnel. If the peak dielectric constant measured falls outside the range 25,000 ± 10,000, we also propose that a flag limit be imposed.
The composite material research and development performed over the last year has greatly enhanced the capabilities of CTH for non-isotropic materials. These enhancements give users and developers substantially improved tools for addressing non-isotropic materials and developing their constitutive models. The enhancements to CTH are intended to address various composite material applications such as armor systems, rocket motor cases, etc. A new method for inserting non-isotropic materials was developed using Diatom capabilities. This new insertion method makes it possible to add a layering capability to a shock physics hydrocode, allowing users to explicitly model each lamina of a laminate composite without the overhead of treating each lamina as a separate material. This capability is designed for computational speed and modeling efficiency when studying composite material applications. In addition, the layering capability allows a user to model interlaminar mechanisms. Finally, non-isotropic coupling methods have been investigated. The coupling methods are specific to shock physics, where the equation of state (EOS) is used with a non-isotropic constitutive model. This capability elastically corrects the (typically isotropic) EOS pressure for deviatoric pressure coupling in non-isotropic materials.
This paper is a continuation of the work presented in SAND2007-2591, 'Planar LTCC Transformers for High Voltage Flyback Converters'. The designs in that SAND report were all based on a ferrite tape/dielectric paste system originally developed by NASCENTechnology, Inc., which collaborated in the design and manufacturing of the planar LTCC flyback converters. The output/volume requirements were targeted to DoD applications for hard-target/mini fuzing at around 1500 V for reasonable primary peak currents. Higher voltages could be obtained, but with considerably higher current. Work had begun on higher-voltage systems, and that is where this report begins. Limits in material properties and processing capabilities show that the state of the art constrains the practical output voltage obtainable from such a small part volume. In other words, the technology is currently limited within the allowable funding and interest.
Inelastic neutron scattering, density functional theory, ab initio molecular dynamics, and classical molecular dynamics were used to examine the behavior of nanoconfined water in palygorskite and sepiolite. These complementary methods provide a strong basis for illustrating and correlating the significant differences observed in the spectroscopic signatures of water in two unique clay minerals. The distorted silicate tetrahedra in the smaller-pore palygorskite give rise to a limited number of hydrogen bonds having relatively short bond lengths. In contrast, without the distorted silicate tetrahedra, an increased number of hydrogen bonds are observed in the larger-pore sepiolite, with correspondingly longer bond distances. Because there is more hydrogen bonding at the pore interface in sepiolite than in palygorskite, we expect librational modes to have higher overall frequencies (i.e., more restricted rotational motions); the experimental neutron scattering data clearly illustrate this shift in spectroscopic signatures. It follows that distortions of the silicate tetrahedra in these minerals effectively disrupt hydrogen-bonding patterns at the silicate-water interface, and this has a greater impact on the dynamical behavior of nanoconfined water than the actual size of the pore or the presence of coordinatively unsaturated magnesium edge sites.
Sol-gel thermites, formulated from nanoporous oxides and dispersed fuel particles, may provide materials useful for small-scale, intense thermal sources, but understanding the factors affecting performance is critical prior to use. Work was conducted on understanding the synthesis conditions, thermal treatments, and additives that lead to different performance characteristics in iron oxide sol-gel thermites. Additionally, the safety properties of sol-gel thermites were investigated, especially those related to air sensitivity. Sol-gel thermites were synthesized using a variety of different techniques, and there appear to be many viable routes to relatively equivalent thermites. These thermites were subjected to several different thermal treatments under argon in a differential scanning calorimeter, and it was shown that a 65 C hold for up to 200 minutes was effective for the removal of residual solvent, thus preventing boiling during the final thermal activation step. Vacuum drying prior to this heating was shown to be even more effective at removing residual solvent. The addition of aluminum and molybdenum trioxide (MoO₃) reduced the total heat release per unit mass upon exposure to air, probably due to a decrease in the amount of reduced iron oxide species in the thermite. For the thermal activation step of heat treatment, three different temperatures were investigated. Thermal activation at 200 C resulted in increased ignition sensitivity over thermal activation at 232 C, and thermal activation at 300 C resulted in non-ignitable material. Non-sol-gel iron oxide did not exhibit any of the air sensitivity observed in sol-gel iron oxide. In the DSC experiments, no bulk ignition of sol-gel thermites was observed upon exposure to air after thermal activation in argon; however, ignition did occur when the material was heated in air after thermal treatment. In larger-scale experiments, up to a few hundred milligrams, no ignition was observed upon exposure to air after thermal activation in vacuum; however, ignition by a resistively heated tungsten wire was possible. Thin films of thermite were fabricated using a dispersed mixture of aluminum and iron oxide particles, but ignition and propagation of these films proved difficult; the only ignition and propagation observed was in a preheated sample.
Salinas provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to the Salinas Users Notes. Many of the constructs in Salinas are pulled directly from published material; where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation, and we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer's notes manual, the user's notes and, of course, the material in the open literature.
Bi-Te-based thermoelectric (TE) alloys are excellent candidates for power generation modules. We are interested in reliable TE modules for long-term use at or below 200 C. It is known that the metallurgical characteristics of TE materials and of interconnect components affect the performance of TE modules. Thus, we have conducted an extensive scientific investigation of several commercial TE modules to determine whether they meet our technical requirements. Our main focus is on the metallurgy and thermal stability of (Bi,Sb)₂(Te,Se)₃ TE compounds and of other materials used in TE modules in the temperature range between 25 C and 200 C. Our study confirms the material suite used in the construction of TE modules. The module consists of three major components: AlN cover plates; electrical interconnects; and the TE legs, P-doped (Bi₈Sb₃₂)(Te₆₀) and N-doped (Bi₃₇Sb₃)(Te₅₆Se₄). The interconnect assembly contains Sn (Sb ≈ 1 wt%) solder sandwiched between Cu conductors with Ni diffusion barriers on the outside. Potential failure modes of the TE modules in this temperature range were discovered and analyzed. The results show that the metallurgical characteristics of the alloys used in the P and N legs are stable up to 200 C. However, whole TE modules are thermally unstable at temperatures above 160 C, lower than the nominal melting point of the solder suggested by the manufacturer. Two failure modes were observed when the modules were heated above 160 C: solder melting and flowing out of the interconnect assembly, and solder reacting with the TE leg, causing dimensional swelling of the TE legs. The reaction of the solder with the TE leg occurs because there is no nickel diffusion barrier on the side of the TE leg, where displaced solder and/or preexisting solder beads directly contact the TE material. This study concludes that the present TE modules are not suitable for long-term use at temperatures above 160 C because of the reactivity between the Sn solder and the (Bi,Sb)₂(Te,Se)₃ TE alloys. In order to deploy a reliable TE power generator for use at or below 200 C, alternate interconnect materials must be used and/or a modified module fabrication technique must be developed.
Spatially distributed arrays of seismometers are often utilized to infer the speed and direction of incident seismic waves. Conventionally, individual seismometers of the array measure one or more orthogonal components of rectilinear particle motion (displacement, velocity, or acceleration). The present work demonstrates that measurement of both the particle velocity vector and the particle rotation vector at a single point receiver yields sufficient information to discern the type (compressional or shear), speed, and direction of an incident plane seismic wave. Hence, the approach offers the intriguing possibility of dispensing with spatially extended receiver arrays, with their many problematic deployment, maintenance, relocation, and post-acquisition data processing issues. This study outlines the straightforward mathematical theory underlying the point seismic array concept and implements a simple cross-correlation scanning algorithm for determining the azimuth of incident seismic waves from measured acceleration and rotation rate data. The algorithm is successfully applied to synthetic seismic data generated by an advanced finite-difference seismic wave propagation modeling algorithm. Application of the same azimuth scanning approach to data acquired at a site near Yucca Mountain, Nevada yields ambiguous, albeit encouraging, results. Practical issues associated with rotational seismometry are recognized as important, but are not addressed in this investigation.
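To convey the flavor of the cross-correlation azimuth scan (a toy numpy reconstruction on synthetic signals, not the paper's algorithm or data), one can exploit the plane-wave relation that, for a horizontally propagating SH wave, the vertical-axis rotation rate is proportional to the transverse acceleration; scanning trial azimuths and correlating the rotated transverse component against the rotation rate peaks near the true azimuth (with the usual 180-degree ambiguity):

```python
import numpy as np

rng = np.random.default_rng(1)
c, theta_true = 3000.0, np.deg2rad(40.0)   # shear speed (m/s) and true propagation azimuth

# Synthetic transverse acceleration pulse plus noise
t = np.linspace(0.0, 2.0, 2001)
a_t = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - 1.0))

# Horizontal accelerations and vertical-axis rotation rate for an SH plane wave
ax = -np.sin(theta_true) * a_t + 0.02 * rng.standard_normal(t.size)
ay =  np.cos(theta_true) * a_t + 0.02 * rng.standard_normal(t.size)
rot_rate_z = -a_t / (2 * c) + 1e-6 * rng.standard_normal(t.size)

# Scan trial azimuths, correlating the rotated transverse component with rotation rate
trials = np.deg2rad(np.arange(0.0, 180.0, 1.0))
corr = []
for psi in trials:
    a_trans = -np.sin(psi) * ax + np.cos(psi) * ay
    corr.append(abs(np.dot(a_trans, rot_rate_z)) /
                (np.linalg.norm(a_trans) * np.linalg.norm(rot_rate_z)))
best = np.degrees(trials[int(np.argmax(corr))])
print(f"estimated azimuth: {best:.0f} deg (true {np.degrees(theta_true):.0f} deg)")
```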
The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.
The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
A laboratory testing program was developed to examine the mechanical behavior of salt from the Richton salt dome. The resulting information is intended for use in design and evaluation of a proposed Strategic Petroleum Reserve storage facility in that dome. Core from drill hole MRIG-9 was obtained from the Texas Bureau of Economic Geology. Mechanical properties testing included: (1) acoustic velocity wave measurements; (2) indirect tensile strength tests; (3) unconfined compressive strength tests; (4) ambient-temperature quasi-static triaxial compression tests to evaluate dilational stress states at confining pressures of 725, 1450, 2175, and 2900 psi; and (5) confined triaxial creep experiments to evaluate the time-dependent behavior of the salt at axial stress differences of 4000 psi, 3500 psi, 3000 psi, 2175 psi and 2000 psi at 55 C and 4000 psi at 35 C, all at a constant confining pressure of 4000 psi. All comments, inferences, and discussions of the Richton characterization and analysis are caveated by the small number of tests; additional core and testing from a deeper well located at the proposed site is planned. The Richton rock salt is generally inhomogeneous, as expressed by the density and velocity measurements with depth. In fact, we treated the salt as two populations, one clean and relatively pure (> 98% halite), the other salt with (at times) abundant anhydrite. The density has been related to the insoluble content. The limited mechanical testing completed has allowed us to conclude that the dilatational criteria are distinct for the halite-rich and other salts, and that the dilation criteria are pressure dependent. The indirect tensile strengths and unconfined compressive strengths determined are consistently lower than those of other coastal domal salts. The steady-state-only creep model being developed suggests that Richton salt is intermediate in creep resistance when compared to other domal and bedded salts. The results of the study provide only limited information for the structural modeling needed to evaluate the integrity and safety of the proposed cavern field, and the study should be augmented with more extensive testing. This report documents the test methods, philosophies, and empirical relationships that are used to define and extend our understanding of the mechanical behavior of the Richton salt. This understanding could be used in conjunction with planned further studies or on its own for initial assessments.
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
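For context, the ALS baseline referred to above can be written in a few lines of numpy for a third-order tensor (an illustrative sketch only; the paper's gradient-based methods and their derivative computations are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 30, 25, 20, 4

# Build a synthetic rank-R tensor plus a little noise
A0, B0, C0 = rng.standard_normal((I, R)), rng.standard_normal((J, R)), rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((I, J, K))

# Alternating least squares for the CP decomposition
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
for _ in range(100):
    A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
    B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
    C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((B.T @ B) * (A.T @ A))

fit = 1 - np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(f"CP-ALS fit: {fit:.4f}")
```

Each factor update solves a linear least-squares problem against the matricized tensor, which is what makes individual ALS iterations cheap.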
In the years leading up to the Compliance Certification Application in 1996, scientists working on the Waste Isolation Pilot Plant (WIPP) conducted an extensive suite of laboratory and field experiments. Additionally, full-scale experiments in the underground established performance standards and expectations, while the fundamental science of salt deformation was explored in the laboratory. Field experiments included several at elevated temperature to ascertain salt response under conditions anticipated for the operating repository, which at the outset included heat-generating defense waste. Simulations and predictions of the field tests were made using finite element computer models that incorporated sophisticated models for salt deformation. Parameters for the salt model were derived from laboratory experiments on natural salt extracted from the repository horizon. All of these science investigations provided confidence in the predicted behavior of the salt at WIPP. Now, on this tenth anniversary of WIPP operations, this paper recounts some of the geomechanics investigations conducted during site characterization, highlights three key geomechanics issues experienced over the decade of operations, and concludes that our basic understanding of salt mechanics portends a promising future for radioactive waste disposal in salt.
This document describes the testing and facility requirements to support the Yucca Mountain Project long-term corrosion testing program. The purpose of this document is to describe a corrosion testing program that will (a) reduce model uncertainty and variability, (b) reduce the reliance upon overly conservative assumptions, and (c) improve model defensibility. Test matrices were developed for 17 topical areas (tasks); each matrix corresponds to a specific test activity that is a subset of the total work performed in a task. A future document will identify which of these activities are considered to be performance confirmation activities. Detailed matrices are provided for FY08, FY09 and FY10, and rough-order estimates are provided for FY11-17. Criteria for the selection of appropriate test facilities were developed at a meeting of Lead Lab and DOE personnel on October 16-17, 2007. These criteria were applied to the testing activities, and recommendations were made for the facility types appropriate to carry out each activity. The facility requirements for each activity were assessed, and activities were identified that cannot be performed with currently available facilities. Based on this assessment, a total of approximately 10,000 square feet of facility space is recommended to accommodate all future testing, given that all testing is consolidated at a single location. This report is a revision of SAND2008-4922 to address DOE comments.
This report summarizes the work completed during FY2007 and FY2008 for the LDRD project 'Hybrid Plasma Modeling'. The goal of this project was to develop hybrid methods to model plasmas across the non-continuum-to-continuum collisionality spectrum. The primary methodology to span these regimes was to couple a kinetic method (e.g., Particle-In-Cell) in the non-continuum regions to a continuum PDE-based method (e.g., finite differences) in continuum regions. The interface between the two would be adjusted dynamically based on statistical sampling of the kinetic results. Although originally a three-year project, it became clear during the second year (FY2008) that there were not sufficient resources to complete the project, and it was terminated mid-year.
IWA (Isentropic Wave Analysis) is a program for analyzing velocity profiles of isentropic compression experiments. IWA applies incremental impedance matching correction to measured velocity profiles to obtain in-situ particle velocity profiles for Lagrangian wave analysis. From the in-situ velocity profiles, material properties such as wave velocities, stress, strain, strain rate, and strength are calculated. The program can be run in any current version of MATLAB (2008a or later) or as a Windows XP executable.
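The Lagrangian analysis step referred to above rests on standard simple-wave relations (written here generically; consult the IWA documentation for the exact forms implemented): with \rho_0 the initial density and C_L(u) the Lagrangian wave speed at in-situ particle velocity u,

\sigma(u) = \rho_0 \int_0^{u} C_L(u')\,du', \qquad \varepsilon(u) = \int_0^{u} \frac{du'}{C_L(u')}, \qquad \dot{\varepsilon} = \frac{\dot{u}}{C_L(u)},

from which stress, strain, and strain rate follow once the in-situ velocity profiles are known.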
The annual program report provides detailed information about all aspects of the Sandia National Laboratories, California (SNL/CA) Waste Management Program. It functions as supporting documentation to the SNL/CA Environmental Management System Program Manual. This annual program report describes the activities undertaken during the past year, and activities planned in future years, to implement the Waste Management (WM) Program, one of six programs that support environmental management at SNL/CA.
The annual program report provides detailed information about all aspects of the Sandia National Laboratories, California (SNL/CA) Hazardous Materials Management Program. It functions as supporting documentation to the SNL/CA Environmental Management System Program Manual. This annual program report describes the activities undertaken during the past year, and activities planned in future years, to implement the Hazardous Materials Management Program, one of six programs that support environmental management at SNL/CA.
This guide describes the R&A process, Common Look and Feel requirements, and preparation and publishing procedures for communication products at Sandia National Laboratories. Samples of forms and examples of published communications products are provided.
This paper delivers a brief survey of renewable energy technologies applicable to Alaska's climate, latitude, geography, and geology. We first identify Alaska's natural renewable energy resources and the renewable energy technologies that would be most productive. We survey the current state of renewable energy technologies and research efforts within the U.S. and, where appropriate, internationally. We also present information on the current state of Alaska's renewable energy assets, incentives, and commercial enterprises. Finally, we describe areas where research efforts at Sandia National Laboratories could assist the state of Alaska with its renewable energy technology investment efforts.
Computational and mathematical models are developed in engineering to represent the behavior of physical systems under various system inputs and conditions. These models are often used to make predictions at conditions other than those tested, rather than simply to reproduce the behavior observed at the experimental conditions. For example, the boundary or initial conditions, time of prediction, geometry, material properties, and other model parameters can differ between the test conditions and an anticipated application of the model. Situations for which the conditions may differ include those for which (1) one is in the design phase and a prototype of the system has not been constructed and tested under the anticipated conditions, (2) only one version of a final system can be built and destructive testing is not feasible, or (3) the anticipated design conditions are variable and one cannot easily reproduce the range of conditions with a limited number of carefully controlled experiments. Because data from these supporting experiments have value in model validation, even if the model was tested at different conditions than an anticipated application, methodology is required to evaluate the ability of the validation experiments to resolve the critical behavior for the anticipated application. The methodology presented uses models for the validation experiments and a model for the application to address how well the validation experiments resolve the application. More specifically, the methodology investigates the tradeoff that exists between the uncertainty (variability) in the behavior of the resolved critical variables for the anticipated application and the ability of the validation experiments to resolve this behavior. The important features of this approach are demonstrated through simple linear and non-linear heat conduction examples.
Ion traps present a potential architecture for future quantum computers. These computers are of interest due to their increased power over classical computers stemming from the superposition of states and the resulting capability to simultaneously perform many computations. This paper describes a software application used to prepare and visualize simulations of trapping and maneuvering ions in ion traps.
To analyze the risks due to cyber attack against control systems used in the United States electrical infrastructure, new algorithms are needed to determine the possible impacts. This research is studying the Reliability Impact of Cyber Attack (RICA) in a two-pronged approach. First, malevolent cyber actions are analyzed in terms of reduced grid reliability. Second, power system impacts are investigated using an abstraction of the grid's dynamic model. This second year of research extends the work done during the first year.
This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized multi-correlative and principal component analysis engines. It is a sequel to [PT08], which studied the parallel descriptive and correlative engines. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
The purpose of this report is to provide an overview of micro-system technology as it applies to inertial sensing. Transduction methods are reviewed, with capacitive and piezoresistive transduction being the most often used in COTS micro-electro-mechanical system (MEMS) inertial sensors. Optical transduction is the most recent transduction method having significant impact on improving sensor resolution. A few other methods are mentioned that are still in an R&D status and that may eventually allow MEMS inertial sensors to become viable as navigation-grade sensors. The accelerometer, gyroscope and gravity gradiometer are the types of inertial sensors reviewed in this report. Their methods of operation and a sampling of COTS sensors and their grades are reviewed as well.
The outline of this presentation is: (1) applications of Kovar alloy in metal/ceramic brazing; (2) diffusion bonding of precision-photoetched Kovar parts; (3) sample composition and annealing conditions; (4) intermediate-temperature creep properties (350-650 C); (5) power law creep correlations, with and without modulus correction; (6) compressive stress-strain properties (23-900 C); (7) effect of creep deformation on grain growth; and (8) application of the power law creep correlation to the diffusion bonding application. The summary and conclusions are as follows. Elevated-temperature creep properties of Kovar from 750-900 C obey a power law creep equation with a stress exponent equal to 4.9 and a modulus-compensated activation energy of 47.96 kcal/mole. Grain growth in Kovar creep samples tested at 750 and 800 C is quite sluggish; significant grain growth occurs at 850 C and above, which is consistent with isothermal grain growth studies performed on Kovar alloy wires. Finite element analysis of the diffusion bonding of Kovar predicts that stresses of 30 MPa and higher are needed for good bonding at 850 C; we believe that 'sintering' effects must be accounted for to allow FEA to be predictive of actual processing conditions. Additional creep tests are planned at 250-650 C.
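In equation form, the correlation summarized above is of the standard modulus-compensated power-law creep type (the prefactor A is a fitted constant; n ≈ 4.9 and Q ≈ 47.96 kcal/mole are the values quoted above):

\dot{\varepsilon} = A \left( \frac{\sigma}{E(T)} \right)^{n} \exp\!\left( -\frac{Q}{RT} \right).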
A study proposed that metal-organic frameworks (MOFs) can potentially offer the desired level of structural control, leading to the formation of a new class of radiation detection materials. It was found that the rigid structure of MOFs can create permanent porosity. It was demonstrated that this permanent porosity has potential for gas storage, separations, catalysis, and sensing. It was demonstrated that this feature of MOFs can be beneficial in scintillation materials, enabling MOFs to serve as hosts for wavelength shifters or for elements designed to improve the detection cross-section. It was observed that MOFs, along with scintillation materials, present a significant opportunity to perform crystal engineering, creating the potential for rational design of new scintillation materials. Spectroscopic measurements of these MOFs, using single crystals, demonstrated that they respond to ionizing radiation by emitting light, establishing a new class of scintillation materials.
A subscale experiment has been constructed using fins mounted on one wall of a transonic wind tunnel to investigate the influence of fin trailing vortices upon downstream control surfaces. Data were collected using a fin balance instrumenting the downstream fin to measure the aerodynamic forces of the interaction, combined with stereoscopic particle image velocimetry to determine vortex properties. The fin balance data show that the response of the downstream fin is essentially shifted from the baseline single-fin data by an amount dependent upon the angle of attack of the upstream fin. Freestream Mach number and the spacing between fins have secondary effects. The velocimetry shows the increase in vortex strength with upstream fin angle of attack, but no variation with Mach number can be discerned in the normalized velocity data. Correlations between the force data and the velocimetry indicate that the interaction is fundamentally a result of an angle of attack superposed upon the downstream fin by the vortex shed from the upstream fin tip. The Mach number influence arises from differing vortex lift on the leading edge of the downstream fin even when the impinging vortex is Mach invariant.
The development of a manufactured solution for enclosure radiation in an infinitely long circular cylinder with a nonparticipating medium is presented. This solution is then used to verify the correct implementation of the commonly used discrete enclosure equations. The circular cross section is approximated by a faceted geometry; the numbers of facets used are 4, 8, 16, 32, 64, and 128. The crossed-string method, which is exact in this application, is used to compute the view factors. Computational results using six levels of grid refinement suggest that the error norm between the integral equation solution and the discrete equation solution behaves as h², where h is a characteristic mesh size.
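An illustrative numpy check of the crossed-strings construction on the faceted geometry (a sketch, not the paper's code): for facets of a convex polygonal enclosure the method is exact, so each row of the resulting view-factor matrix should sum to one to machine precision.

```python
import numpy as np

def view_factors(n_facets, radius=1.0):
    """Crossed-strings view factors between facets of a regular polygon
    approximating a circular enclosure (2-D, exact for convex facet pairs)."""
    ang = 2 * np.pi * np.arange(n_facets + 1) / n_facets
    pts = radius * np.column_stack([np.cos(ang), np.sin(ang)])
    a, b = pts[:-1], pts[1:]                       # facet endpoints, consistent orientation
    L = np.linalg.norm(b - a, axis=1)              # facet lengths
    F = np.zeros((n_facets, n_facets))
    for i in range(n_facets):
        for j in range(n_facets):
            if i == j:
                continue                            # flat facets cannot see themselves
            crossed = np.linalg.norm(a[i] - a[j]) + np.linalg.norm(b[i] - b[j])
            uncrossed = np.linalg.norm(a[i] - b[j]) + np.linalg.norm(b[i] - a[j])
            F[i, j] = (crossed - uncrossed) / (2 * L[i])
    return F

for n in (4, 8, 16, 32, 64, 128):
    F = view_factors(n)
    print(n, np.max(np.abs(F.sum(axis=1) - 1.0)))   # row sums should equal 1 (closure)
```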
This paper presents a derivation of an expression to estimate the accommodation coefficient for gas collisions with a graphite surface, which is meant for use in models of laser-induced incandescence (LII) of soot. Energy transfer between gas molecules and solid surfaces has been studied extensively, and a considerable amount is known about the physical mechanisms important in thermal accommodation. Values of accommodation coefficients currently used in LII models are temperature independent and are based on a small subset of information available in the literature. The expression derived in this study is based on published data from state-to-state gas-surface scattering experiments. The present study compiles data on the temperature dependence of translational, rotational, and vibrational energy transfer for diatomic molecules (predominantly NO) colliding with graphite surfaces. The data were used to infer partial accommodation coefficients for translational, rotational, and vibrational degrees of freedom, which were consolidated to derive an overall accommodation coefficient that accounts for accommodation of all degrees of freedom of the scattered gas distributions. This accommodation coefficient can be used to calculate conductive cooling rates following laser heating of soot particles.
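For reference, the partial and overall coefficients discussed above follow the standard definition of a thermal accommodation coefficient (written generically here, not as the paper's specific expression):

\alpha = \frac{E_i - E_s}{E_i - E_w},

where E_i is the mean energy carried to the surface by incident molecules, E_s the mean energy carried away by the scattered molecules, and E_w the energy they would carry away if fully equilibrated at the surface temperature.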