Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active set NNLS method on the basis of combinatorial reasoning. This rearrangement substantially reduces the computational burden for NNLS problems having large numbers of observation vectors.
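For reference, the following is a minimal Python sketch of the standard active-set NNLS method (Lawson-Hanson) that the new algorithm rearranges. Function and parameter names are illustrative, and the combinatorial rearrangement that shares work across many observation vectors is not shown:

```python
import numpy as np

def nnls(A, b, tol=1e-10, max_iter=200):
    """Minimal Lawson-Hanson active-set NNLS sketch: min ||Ax - b|| s.t. x >= 0."""
    m, n = A.shape
    x = np.zeros(n)
    passive = np.zeros(n, dtype=bool)   # passive set: unconstrained variables
    w = A.T @ (b - A @ x)               # negative gradient of the residual
    for _ in range(max_iter):
        if passive.all() or w[~passive].max() <= tol:
            break                        # optimality reached
        # move the most promising constrained variable into the passive set
        j = np.argmax(np.where(passive, -np.inf, w))
        passive[j] = True
        while True:
            # unconstrained least-squares solve restricted to the passive set
            s = np.zeros(n)
            s[passive] = np.linalg.lstsq(A[:, passive], b, rcond=None)[0]
            if s[passive].min() > 0:
                x = s
                break
            # step back toward feasibility and shrink the passive set
            mask = passive & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            passive &= x > tol
        w = A.T @ (b - A @ x)
    return x
```

In an MCR-ALS iteration this solve is repeated once per observation vector; rearrangements of the kind described above typically gain their speedup by grouping observation vectors that share the same passive set, so the associated factorization is computed once per group rather than once per vector.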
Optically detected magnetic resonance (ODMR) has been performed on Si-doped GaN homoepitaxial layers grown by organometallic chemical vapor deposition on free-standing GaN templates. In addition to intense excitonic band-edge emission with narrow linewidths (<0.4 meV), these films exhibit strong shallow donor-shallow acceptor recombination at 3.27 eV. Most notably, ODMR on this photoluminescence band reveals a highly anisotropic resonance with g‖ = 2.193 ± 0.001 and g⊥ ≈ 0, as expected for effective-mass shallow acceptors in wurtzitic GaN from k·p theory. This previously elusive result is attributed to the much reduced dislocation density and impurity levels compared to those typically found in the widely investigated Mg-doped GaN heteroepitaxial layers. The possible chemical origin of the shallow acceptors in these homoepitaxial films will be discussed.
Sealed lead acid (SLA) cells are used in many projects in Sandia National Laboratories Department 2660 Telemetry and Instrumentation systems. These cells are assembled into battery packs that power the electronics used to conduct tests remotely. Because many tests are carried out in flight or are launched, temperature is a major factor, and the battery packs must be properly charged so that a test completes before the pack can no longer supply sufficient power. Department 2665 conducted studies to determine the effects of temperature on cycle time, as well as charging techniques to maximize cycle life and cycle time, for sealed lead acid cells. The studies showed that both temperature and charging technique are critical to battery life in support of successful field testing and expensive flight and launch tests. This report presents the effects of temperature on cycle time for SLA cells, as well as the charging techniques that yield the longest life and cycle time from SLA cells in battery packs.
Intracellular molecular machines synthesize molecules, tear apart others, transport materials, transform energy into different forms, and carry out a host of other coordinated processes. Many molecular processes have been shown to work outside of cells, and the idea of harnessing these molecular machines to build nanostructures is attractive. Two examples are microtubules and motor proteins, which aid cell movement, help determine cell shape and internal structure, and transport vesicles and organelles within the cell. These molecular machines work in a stochastic, noisy fashion: microtubules switch randomly between growing and shrinking in a process known as dynamic instability; motor protein movement along microtubules is randomly interrupted by the motor proteins falling off. A common strategy in attempting to gain control over these highly dynamic, stochastic processes is to eliminate some processes (e.g., work with stabilized microtubules) in order to focus on others (interaction of microtubules with motor proteins). In this paper, we illustrate a different strategy for building nanostructures, which, rather than attempting to control or eliminate some dynamic processes, uses them to advantage in building nanostructures. Specifically, using stochastic agent-based simulations, we show how the natural dynamic instability of microtubules can be harnessed in building nanostructures, and discuss strategies for ensuring that 'unreliable' stochastic processes yield a robust outcome.
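As a toy illustration of the kind of stochastic agent-based simulation referred to above, the Python sketch below models a single microtubule switching randomly between growth and shrinkage (dynamic instability). All rates and speeds are made-up placeholders, not parameters from the paper:

```python
import random

def simulate_microtubule(steps=10000, dt=0.01, v_grow=1.0, v_shrink=-2.0,
                         k_cat=0.05, k_res=0.1, seed=0):
    """Two-state Monte Carlo sketch of dynamic instability: the microtubule
    grows at v_grow, shrinks at v_shrink, and switches state stochastically
    (catastrophe rate k_cat, rescue rate k_res). Returns the length history."""
    rng = random.Random(seed)
    length, growing = 0.0, True
    history = []
    for _ in range(steps):
        if growing:
            length += v_grow * dt
            if rng.random() < k_cat * dt:   # catastrophe: switch to shrinking
                growing = False
        else:
            length += v_shrink * dt
            if rng.random() < k_res * dt:   # rescue: switch back to growing
                growing = True
            length = max(length, 0.0)       # length cannot go below zero
        history.append(length)
    return history
```

Averaging many such runs, or coupling many such agents, is the basic mechanism by which the stochastic simulations discussed in the paper explore whether noisy individual trajectories can still yield a robust collective outcome.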
Coupling between transient simulation codes of different fidelity can often be performed at the nonlinear solver level, if the time scales of the two codes are similar. A good example is electrical mixed-mode simulation, in which an analog circuit simulator is coupled to a PDE-based semiconductor device simulator. Semiconductor simulation problems, such as single-event upset (SEU), often require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (somewhat conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the PDE device, while optimizing the numerics for both. The research was done within Xyce, a massively parallel electronic simulator under development at Sandia National Laboratories.
Dynamic memory management in C++ is one of the most common sources of difficulty and errors for amateur and expert C++ developers alike. The improper use of operator new and operator delete is arguably the most common cause of incorrect program behavior and segmentation faults in C++ programs. Here we introduce a templated concrete C++ class, Teuchos::RefCountPtr<>, part of the Trilinos tools package Teuchos, that combines the concepts of smart pointers and reference counting to build a low-overhead but effective tool for simplifying dynamic memory management in C++. We discuss why memory management with raw pointers, through explicit calls to operator new and operator delete, is so difficult to accomplish without making mistakes, and how programs that use raw pointers for memory management can easily be modified to use RefCountPtr<>. In addition, explicit calls to operator delete are fragile and result in memory leaks in the presence of C++ exceptions. In its most basic usage, RefCountPtr<> automatically determines when operator delete should be called to free an object allocated with operator new, and it is not fragile in the presence of exceptions. The class also supports more sophisticated use cases. This document describes just the most basic usage of RefCountPtr<> so that developers can get started using it right away. More detailed information on the design and advanced features of RefCountPtr<> is provided in the companion document 'Teuchos::RefCountPtr: The Trilinos Smart Reference-Counted Pointer Class for (Almost) Automatic Dynamic Memory Management in C++'.
The parameterization of the stably stratified atmospheric boundary layer is a difficult issue, having a significant impact on medium-range weather forecasts and climate integrations. To pursue this further, a moderately stratified Arctic case is simulated by nineteen single-column turbulence schemes. Statistics from a large-eddy simulation intercomparison made for the same case by eleven different models are used as a guiding reference. The single-column parameterizations include research and operational schemes from major forecast and climate research centers. The schemes comprise first-order closures, a large number of turbulence kinetic energy closures, and other model types. There is a large spread in the results; in general, the operational schemes mix over a deeper layer than the research schemes, and the turbulence kinetic energy and other higher-order closures give results closer to the statistics obtained from the large-eddy simulations. The sensitivities of the schemes to the parameters of their turbulence closures are partially explored.
The spinning ball rheometer has been proposed as a method to measure rheological properties of concentrated suspensions. Recent experiments have shown that the measured extra torque on the spinning ball decreases as the radius of the spinning ball becomes comparable to the size of the suspended particles. We have performed a series of three-dimensional boundary element calculations of the rheometer geometry to probe the microstructure effects that contribute to the apparent 'slip.' We present a series of snapshot results as well as several transient calculations, which are compared to the available experimental data. The computational limitations of these large-scale simulations are also discussed.
Receptivity of compressible mixing layers to general source distributions is examined by a combined theoretical/computational approach. The properties of solutions to the adjoint Navier-Stokes equations are exploited to derive expressions for receptivity in terms of the local value of the adjoint solution. The result is a description of receptivity for arbitrary small-amplitude mass, momentum, and heat sources in the vicinity of a mixing-layer flow, including the edge-scattering effects due to the presence of a splitter plate of finite width. The adjoint solutions are examined in detail for a Mach 1.2 mixing-layer flow. The near field of the adjoint solution reveals regions of relatively high receptivity to direct forcing within the mixing layer, with receptivity to nearby acoustic sources depending on the source type and position. Receptivity 'nodes' are present at certain locations near the splitter plate edge where the flow is not sensitive to forcing. The presence of the nodes is explained by interpretation of the adjoint solution as the superposition of incident and scattered fields. The adjoint solution within the boundary layer upstream of the splitter-plate trailing edge reveals a mechanism for transfer of energy from boundary-layer stability modes to Kelvin-Helmholtz modes. Extension of the adjoint solution to the far field using a Kirchhoff surface gives the receptivity of the mixing layer to incident sound from distant sources.
The goal of the Blade System Design Study (BSDS) was investigation and evaluation of design and manufacturing issues for wind turbine blades in the one to ten megawatt size range. A series of analysis tasks was completed in support of the design effort. We began with a parametric scaling study to assess blade structure using current technology. This was followed by an economic study of the cost to manufacture, transport, and install large blades. Subsequently we identified several innovative design approaches that showed potential for overcoming fundamental physical and manufacturing constraints. The final stage of the project was used to develop several preliminary 50 m blade designs. The key design impacts identified in this study are: (1) blade cross-sections, (2) alternative materials, (3) IEC design class, and (4) root attachment. The results show that thick blade cross-sections can provide a large reduction in blade weight while maintaining high aerodynamic performance. Increasing blade thickness for inboard sections is a key method for improving structural efficiency and reducing blade weight. Carbon/glass hybrid blades were found to provide good improvements in blade weight, stiffness, and deflection when used in the main structural elements of the blade. The addition of carbon resulted in modest cost increases and provided significant benefits, particularly with respect to deflection. The change in design loads between IEC classes is quite significant, and optimized blades should be designed for each IEC design class. A significant portion of blade weight is related to the root buildup and metal hardware for typical root attachment designs. The results show that increasing the number of blade fasteners reduces total blade weight, because it reduces the required root laminate thickness.
Exploration of the fundamental chemical behavior of the AlCl₃/SO₂Cl₂ catholyte system for the ARDEC Self-Destruct Fuze Reserve Battery Project under accelerated aging conditions was completed using a variety of analytical tools. Four different molecular species were identified in this solution, three of which are major. The relative concentrations of the molecular species formed were found to depend on aging time, initial concentrations, and storage temperature, with each variable affecting the kinetics and thermodynamics of this complex reaction system. We also evaluated the effect of water on the system and determined that it does not play a role in dictating the observed molecular species present in solution. The first Al-containing species formed was identified as the dimer [Al(μ-Cl)Cl₂]₂ and was found to be in equilibrium with the monomer, AlCl₃. The second species formed in the reaction scheme was identified by single-crystal X-ray diffraction studies as [Cl₂Al(μ-O₂SCl)]₂ (I), a scrambled AlCl₃·SO₂ adduct. The SO₂(g) present, as well as Cl₂(g), was formed through decomposition of SO₂Cl₂. The SO₂(g) generated was readily consumed by AlCl₃ to form the adduct I, which was experimentally verified when I was also isolated from the reaction of SO₂(g) and AlCl₃. The third species found was tentatively identified as a compound having the general formula {[Al(O)Cl₂][OSCl₂]}ₙ. This assignment was based on ²⁷Al NMR data that revealed a species with tetrahedrally coordinated Al metal centers with increased oxygen coordination, and on the fact that the precipitate, or gel, that forms over time was shown by Raman spectroscopic studies to possess a component consistent with SOCl₂.
The precursor to the precipitate should have similar constituents, hence the assignment of {[Al(O)Cl₂][OSCl₂]}ₙ. The precipitate was further identified by solid-state ²⁷Al MAS NMR data to possess predominantly octahedral Al metal centers, which implies that {[Al(O)Cl₂][OSCl₂]}ₙ must undergo some internal rearrangement. A reaction sequence has been proposed to account for the various molecular species identified in this complex reaction mixture during the aging process. The metallurgical welds were of high quality. These results were determined visually; no mechanical testing was performed. However, it is recommended that the end plate geometry and weld be changed. If the present weld strength, based on 0.003-0.005 in. penetration, is sufficient for unit performance, the end plate thickness can be reduced from 0.020 in. to 0.005 in. This will enable the plug to be stamped so that it forms a cap rather than a plug, solving existing problems and increasing the amount of catholyte, which may be beneficial to battery performance.
Accurate modeling of nucleation, growth, and clustering of helium bubbles within metal tritide alloys is of high scientific and technological importance. Of interest is the ability to predict both the distribution of these bubbles and the manner in which these bubbles interact at a critical concentration of helium-to-metal atoms to produce an accelerated release of helium gas. One technique that has been used in the past to model these materials, and is revisited in this research, is percolation theory. Previous efforts have used classical percolation theory to qualitatively and quantitatively model the behavior of interstitial helium atoms in a metal tritide lattice; however, higher-fidelity models are needed to predict the distribution of helium bubbles and to include features that capture the underlying physical mechanisms present in these materials. In this work, we enhance classical percolation theory by developing the dynamic point-source percolation model. This model alters the traditionally binary character of site occupation probabilities by enabling them to vary depending on proximity to existing occupied sites, i.e., nucleated bubbles. This revised model produces characteristics for one- and two-dimensional systems that compare closely with measurements from three-dimensional physical samples. Future directions for continued development of the dynamic model are also outlined.
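To make the central idea concrete, the following Python sketch implements a one-dimensional toy version of proximity-dependent site occupation: unlike classical percolation, where each site is occupied with a fixed probability, the occupation probability here rises near already-occupied sites. The lattice size, probabilities, and neighborhood rule are made-up placeholders, not the model's actual parameters:

```python
import random

def dynamic_percolation_1d(n_sites=200, base_p=0.02, boost=0.15,
                           n_rounds=50, seed=1):
    """Toy dynamic point-source percolation on a 1D lattice: a site's
    occupation probability is base_p, raised by `boost` when an adjacent
    site is already occupied (mimicking helium joining existing bubbles)."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    for _ in range(n_rounds):
        snapshot = occupied[:]                      # update synchronously
        for i in range(n_sites):
            if snapshot[i]:
                continue
            near = any(snapshot[j] for j in (i - 1, i + 1) if 0 <= j < n_sites)
            p = base_p + (boost if near else 0.0)   # proximity-boosted probability
            if rng.random() < p:
                occupied[i] = True
    return occupied
```

Runs of this kind produce clustered occupation patterns rather than the uniform scatter of classical percolation, which is the qualitative behavior the dynamic model is designed to capture.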
Co-firing tests were conducted in a pilot-scale reactor at Sandia National Laboratories and in a boiler at the Hawaiian Commercial & Sugar factory at Puunene, Hawaii. Combustion tests were performed in the Sandia Multi-Fuel Combustor using Australian coal; whole fiber cane, including tops and leaves, processed at three different levels (milled only, milled and leached, and milled followed by leaching and subsequent milling); and fiber cane stripped of its tops and leaves and heavily processed through subsequent milling, leaching, and milling cycles. Testing was performed for pure fuels and for biomass co-firing with the coal at levels of 30% and 70% by mass. The laboratory tests revealed the following information: (1) The biomass fuels convert their native nitrogen into NO more efficiently than coal because of higher volatile content and more reactive nitrogen complexes. (2) Adding coal to whole fiber cane to reduce its tendency to form deposits should not adversely affect NO emissions. (3) Stripped cane does not offer a NO advantage over whole cane when co-fired with coal. During the field test, Sandia measured O2, CO2, CO, SO2, and NO concentrations in the stack and gas velocities near the superheater. Gas concentrations and velocities fluctuated more during biomass co-firing than during coal combustion. The mean O2 concentration was lower and the mean CO2 concentration was higher during biomass co-firing than during coal combustion. When normalized to a constant exhaust O2 concentration, the mean CO concentration was higher and the mean NO concentration was lower for biomass co-firing than for coal. The SO2 concentration tracked the use of Bunker C fuel oil. When normalized by the amount of boiler energy input, the amounts of NO and SO2 formed were lower during biomass co-firing than during coal combustion. The difference between NOx trends in the lab and in the field is most likely a result of less effective heat and mass transfer in the boiler.
Particles were sampled near the superheater tube using an impaction probe and were analyzed using scanning electron microscopy. Particle loading appeared higher for biomass co-firing than for coal combustion, especially for the smaller particle diameters. Laser-induced breakdown spectroscopy (LIBS) was used to detect silicon, aluminum, titanium, iron, calcium, magnesium, sodium, and potassium concentrations near the superheater. LIBS provided an abundant amount of real-time information. The major constituents of the fuel ash (silicon and aluminum) were also the major measured inorganic constituents of the combustion products. The combustion products were enriched in sodium relative to the fuel ash during all tests, and they were enriched in potassium for the biomass co-firing tests. Alkali metals are enriched because compounds containing these elements are more readily releasable into the combustion products than refractory components that remain in large particles such as silicon, aluminum, and titanium. Relative to the measured deposit chemistry, the combustion flows were enriched in iron, sodium, and potassium, constituents that are known to form fumes laden with fine particles and/or vapors. The LIBS results yield insight into the deposition mechanism: Impaction of larger particles dominates over fume deposition. The present application of LIBS reveals its potential to provide real-time field information on the deposition propensity of different fuels and the effects of different fuels and boiler operating conditions.
A multinational test program is in progress to quantify the aerosol particulates produced when a high energy density device, HEDD, impacts surrogate material and actual spent fuel test rodlets. This program provides needed data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments; the program also provides significant political benefits in international cooperation. We are quantifying the spent fuel ratio, SFR, the ratio of the aerosol particles released from HEDD-impacted actual spent fuel to the aerosol particles produced from surrogate materials, measured under closely matched test conditions. In addition, we are measuring the amounts, nuclide content, and size distribution of the released aerosol materials, and the enhanced sorption of volatile fission product nuclides onto specific aerosol particle size fractions. These data are crucial for predicting radiological impacts. This document includes a thorough description of the test program, including the current, detailed test plan, concept, and design, plus a description of all test components and the requirements for future components and related nuclear facility needs. It also serves as a program status report as of the end of FY 2003. All available test results, observations, and analyses, primarily for surrogate-material Phase 2 tests using cerium oxide sintered ceramic pellets, are included. This spent fuel sabotage aerosol test program is coordinated with the international Working Group for Sabotage Concerns of Transport and Storage Casks, WGSTSC, and supported by both the U.S. Department of Energy and the Nuclear Regulatory Commission.
Bulk and surface energies are calculated for endmembers of the isostructural rhombohedral carbonate mineral family, including Ca, Cd, Co, Fe, Mg, Mn, Ni, and Zn compositions. The calculations for the bulk agree with the densities, bond distances, bond angles, and lattice enthalpies reported in the literature. The calculated energies also correlate with measured dissolution rates: the lattice energies show a log-linear relationship to the macroscopic dissolution rates at circumneutral pH. Moreover, the energies of ion pairs translated along surface steps are calculated and found to predict experimentally observed microscopic step retreat velocities. Finally, pit formation excess energies decrease with increasing pit size, which is consistent with the nonlinear dissolution kinetics hypothesized for the initial stages of pit formation.
The threat from biological weapons is assessed through both a comparative historical analysis of the patterns of biological weapons use and an assessment of the technological hurdles to proliferation and use that must be overcome. The history of biological weapons is studied to learn how agents have been acquired and what types of states and substate actors have used agents. Substate actors have generally been more willing than states to use pathogens and toxins and they have focused on those agents that are more readily available. There has been an increasing trend of bioterrorism incidents over the past century, but states and substate actors have struggled with one or more of the necessary technological steps. These steps include acquisition of a suitable agent, production of an appropriate quantity and form, and effective deployment. The technological hurdles associated with the steps present a real barrier to producing a high consequence event. However, the ever increasing technological sophistication of society continually lowers the barriers, resulting in a low but increasing probability of a high consequence bioterrorism event.
The traditional mono-color statistical pressure snake was modified to function on a color image with target errors defined in HSV color space. Large variations in target lighting and shading are permitted if the target color is specified only in terms of hue. This method works well with custom targets in which the target is surrounded by a color of a very different hue. A significant increase in robustness is achieved in the computer vision capability to track a specific target in an unstructured, outdoor environment. By specifying the target color in terms of hue, saturation, and intensity values, it is possible to establish a reasonably robust method to track general image features of a single color. This method conveniently allows the operator to select arbitrary targets, or sections of a target, that have a common color. Further, a modification to the standard pixel-averaging routine is introduced that allows the target to be specified not only in terms of a single color, but also using a list of colors. These algorithms were tested and verified using a web camera attached to a personal computer.
A nonlinear visual servoing steering law is presented that is used to align a camera view with a visual target. A full-color version of statistical pressure snakes is used to identify and track the target across a series of video frames. The nonlinear steering law provides camera-frame-centric speed commands to a velocity-based servo subsystem. To avoid saturating the subsystem, the commanded speeds are smoothly limited to remain within a finite range. An analytical error analysis is also provided, illustrating how the two control gains contribute to the stiffness of the control. The algorithm is demonstrated on a pan-and-tilt camera system. The control law is able to smoothly realign the camera to point at the target.
Assume a moving target is visible in the video signal. Statistical pressure snakes are used to track a target specified by a single color or a multitude of colors. These snakes define the target contour through a series of image-plane coordinate points. This report outlines how to compute certain target degrees of freedom. The image contour can be used to efficiently compute the area moments of the target, which in turn yield the target center of mass as well as the orientation of the target principal axes. If the target has a known shape, such as being rectangular or circular, then the dimensions of this shape can be estimated in units of image pixels. If the physical target dimensions are known a priori, then the measured target dimensions can be used to estimate the target depth.
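The area and centroid computation from a closed contour can be sketched with Green's theorem (the shoelace formula); second-order moments, which give the principal axes, follow the same contour-integral pattern. The function below is an illustrative sketch, not the report's implementation:

```python
def polygon_moments(pts):
    """Signed area and centroid of a closed polygon (e.g. a snake contour),
    given as a list of (x, y) vertices, via Green's theorem. The contour is
    implicitly closed from the last vertex back to the first."""
    area = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0    # edge contribution to the area integral
        area += cross
        cx += (x0 + x1) * cross      # first moment about y
        cy += (y0 + y1) * cross      # first moment about x
    area *= 0.5
    return area, cx / (6.0 * area), cy / (6.0 * area)
```

Because only the contour points are visited, the cost is linear in the number of snake points, which is what makes moment-based pose estimates cheap enough for per-frame use.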
Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side up or upside down.
Given a video image source, a statistical pressure snake is able to track a color target in real time. This report presents an algorithm that exploits the one-dimensional nature of the visual snake target outline. If the target resembles a four-sided polygon, then the four polygon sides are identified by mapping all image snake point coordinates into Hough space, where lines become points. After establishing that four dominant lines are present in the snake contour, the polygon corner points are estimated. The computational burden of this algorithm is of order N log N. The advantage of this method is that it can provide real-time target corner estimates, even if the corners themselves are occluded.
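The core Hough-space idea can be illustrated with a crude brute-force vote in Python: each contour point votes for every (theta, rho) line it could lie on, and the most-voted cells approximate the polygon's sides. This sketch is O(N times the number of angle bins), not the report's N log N formulation, and the bin sizes are arbitrary placeholders:

```python
import math
from collections import Counter

def dominant_lines(points, n_lines=4, n_theta=36, rho_step=2.0):
    """Brute-force Hough sketch: each point (x, y) votes for the quantized
    normal-form line parameters (theta, rho) with rho = x*cos(theta) +
    y*sin(theta); the most-voted cells approximate the dominant lines."""
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1
    return [cell for cell, _ in votes.most_common(n_lines)]
```

Once four dominant (theta, rho) cells are found, each pair of adjacent lines is intersected to estimate a corner, which is why the corners need not be visible in the contour itself.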
A new chemically-oriented mathematical model for the development step of the LIGA process is presented. The key assumption is that the developer can react with the polymeric resist material in order to increase the solubility of the latter, thereby partially overcoming the need to reduce the polymer size. The ease with which this reaction takes place is assumed to be determined by the number of side chain scissions that occur during the x-ray exposure phase of the process. The dynamics of the dissolution process are simulated by solving the reaction-diffusion equations for this three-component, two-phase system, the three species being the unreacted and reacted polymers and the solvent. The mass fluxes are described by the multicomponent diffusion (Stefan-Maxwell) equations, and the chemical potentials are assumed to be given by the Flory-Huggins theory. Sample calculations are used to determine the dependence of the dissolution rate on key system parameters such as the reaction rate constant, polymer size, solid-phase diffusivity, and Flory-Huggins interaction parameters. A simple photochemistry model is used to relate the reaction rate constant and the polymer size to the absorbed x-ray dose. The resulting formula for the dissolution rate as a function of dose and temperature is fit to an extensive experimental database in order to evaluate a set of unknown global parameters. The results suggest that reaction-assisted dissolution is very important at low doses and low temperatures, the solubility of the unreacted polymer being too small for it to be dissolved at an appreciable rate. However, at high doses or at higher temperatures, the solubility is such that the reaction is no longer needed, and dissolution can take place via the conventional route. These results provide an explanation for the observed dependences of both the dissolution rate and its activation energy on the absorbed dose.
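The numerical machinery behind such simulations can be illustrated with a much simpler scalar analogue: an explicit finite-difference solve of a 1D reaction-diffusion equation, du/dt = D d²u/dx² - k u, with a developer reservoir at one boundary. This Python sketch is illustrative only; the model described above solves coupled multicomponent Stefan-Maxwell equations with Flory-Huggins chemical potentials, not this scalar equation, and all parameter values are placeholders:

```python
def react_diffuse_1d(n=50, steps=200, D=0.1, k=0.05, dt=0.1, dx=1.0):
    """Explicit finite-difference sketch of du/dt = D*d2u/dx2 - k*u on a
    1D grid, with a fixed source u = 1 at x = 0 (developer reservoir) and
    a zero-flux condition at the far end. Stable since D*dt/dx^2 <= 0.5."""
    u = [0.0] * n
    for _ in range(steps):
        u[0] = 1.0                                     # reservoir boundary
        new = u[:]
        for i in range(1, n - 1):
            lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
            new[i] = u[i] + dt * (D * lap - k * u[i])  # diffusion minus consumption
        new[-1] = new[-2]                              # zero-flux far boundary
        u = new
    return u
```

The steady profile decays with distance from the reservoir; in the full model the analogous balance between transport and the solubility-enhancing reaction is what sets the dissolution front speed.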
This report describes the methodology, analysis and conclusions of a preliminary assessment carried out for activities and operations at Sandia National Laboratories Building 878, Manufacturing Science and Technology, Organization 14100. The goal of this assessment is to evaluate processes being carried out within the building to determine ways to reduce waste generation and resource use. The ultimate purpose of this assessment is to analyze and prioritize processes within Building 878 for more in-depth assessments and to identify projects that can be implemented immediately.
The Lubkin solution for two spheres pressed together and then subjected to a monotonically increasing axial couple is examined numerically. The Deresiewicz asymptotic solution is compared to the full solution and its utility is evaluated. Alternative approximations for the Lubkin solution are suggested and compared. One approximation is a Padé rational function that matches the analytic solution over all rotations. The other is an exponential approximation that reproduces the asymptotic values of the analytic solution at infinitesimal and infinite rotations. Finally, finite element solutions for the Lubkin problem are compared with the exact and approximate solutions.
A trace explosives detection system typically contains three subsystems: sample collection, preconcentration, and detection. Sample collection of trace explosives (vapor and particulate) through large volumes of airflow helps reduce sampling time while increasing the amount of dilute sample collected. Preconcentration of the collected sample before introduction into the detector improves the sensitivity of the detector because of the increase in sample concentration. By combining large-volume sample collection and preconcentration, an improvement in the detection of explosives is possible. Large-volume sampling and preconcentration are presented using a systems-level approach. In addition, the engineering of large-volume sampling and preconcentration for the trace detection of explosives is explained.
A two-year effort focused on applying ASCI technology developed for the analysis of weapons systems to the state-of-the-art accident analysis of a nuclear reactor system was proposed. The Sandia SIERRA parallel computing platform for ASCI codes includes high-fidelity thermal, fluids, and structural codes whose coupling through SIERRA can be specifically tailored to the particular problem at hand to analyze complex multiphysics problems. Presently, however, the suite lacks several physics modules unique to the analysis of nuclear reactors. The NRC MELCOR code, not presently part of SIERRA, was developed to analyze severe accidents in present-technology reactor systems. We attempted to: (1) evaluate the SIERRA code suite for its current applicability to the analysis of next generation nuclear reactors, and the feasibility of implementing MELCOR models into the SIERRA suite, (2) examine the possibility of augmenting ASCI codes or alternatives by coupling to the MELCOR code, or portions thereof, to address physics particular to nuclear reactor issues, especially those facing next generation reactor designs, and (3) apply the coupled code set to a demonstration problem involving a nuclear reactor system. We were successful in completing the first two in sufficient detail to determine that an extensive demonstration problem was not feasible at this time. In the future, completion of this research would demonstrate the feasibility of performing high fidelity and rapid analyses of safety and design issues needed to support the development of next generation power reactor systems.
Proposed for publication in IEEE Transactions on Antennas and Propagation.
A new finite-element time-domain (FETD) volumetric plane-wave excitation method for use with a total- and scattered-field decomposition (TSFD) is rigorously described. This method provides an alternative to the traditional Huygens surface approaches commonly used to impress the incident field into the total-field region. Although both the volumetric and Huygens surface formulations theoretically provide for zero leakage of the impressed wave into the scattered-field region, the volumetric method provides a simple path to numerically realize this. In practice, the level of leakage for the volumetric scheme is determined by available computer precision, as well as the residual of the matrix solution. In addition, the volumetric method exhibits nearly zero dispersion error with regard to the discrete incident field.
A linear elastic constitutive equation for modeling fiber-reinforced laminated composites via shell elements is specified. The effects of transverse shear are included using first-order shear deformation theory. The proposed model is written in a rate form for numerical evaluation in the Sandia quasi-statics code ADAGIO and explicit dynamics code PRESTO. The equation for the critical time step needed for explicit dynamics is listed assuming that a flat bilinear Mindlin shell element is used in the finite element representation. Details of the finite element implementation and usage are given. Finally, some of the verification examples that have been included in the ADAGIO regression test suite are presented.
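The report lists the exact critical time step equation for the flat bilinear Mindlin shell element; as a generic illustration only (not the report's equation), a CFL-type estimate divides the smallest element dimension by the plane-stress dilatational wave speed. All names and values below are assumptions for illustration:

```python
import math

def critical_time_step(L_min, E, nu, rho, safety=0.9):
    """CFL-type stable time step estimate for an explicit shell element.

    L_min      : smallest characteristic element length (m)
    E, nu, rho : Young's modulus (Pa), Poisson's ratio, density (kg/m^3)

    The wave speed below is the plane-stress dilatational speed commonly
    used for shells; the report's listed equation may differ in detail.
    """
    c = math.sqrt(E / (rho * (1.0 - nu**2)))  # plane-stress wave speed
    return safety * L_min / c

# Example: a 1 mm steel shell element
dt = critical_time_step(L_min=1.0e-3, E=200e9, nu=0.3, rho=7800.0)
```

For these illustrative steel properties the estimate is on the order of 0.2 microseconds, which is why explicit dynamics codes such as PRESTO are sensitive to the smallest element in the mesh.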
We report a novel packing mode specific to the cis unsaturated hydrocarbon chain in the title compound, a self-assembled layered double hydroxide-surfactant hybrid nanomaterial, and its influence on crystallite morphology and structure. The kink imposed by the cis double bond in oleate leads to partial overlap between chains on adjacent layers, with incomplete space filling, in contrast to the more usual (and more efficient) mono- and bilayer packings exhibited by the trans analogues. Incorporation of surfactant into the growing crystallite leads to a reversal of the usual LDH growth habit and results in crystallite shapes featuring ribbonlike sheets. The thermal decomposition behavior of the as-prepared organic/inorganic nanocomposites in air and N{sub 2} is described.
This document describes the main functionalities of the Amesos package, version 1.0. Amesos, available as part of Trilinos 4.0, provides an object-oriented interface to several serial and parallel sparse direct solver libraries for the solution of linear systems of equations A X = B, where A is a real, sparse, distributed matrix defined as an Epetra_RowMatrix object, and X and B are defined as Epetra_MultiVector objects. Amesos provides a common look-and-feel across several direct solvers, insulating the user from each package's details, such as matrix and vector formats and data distribution.
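Amesos itself is a C++ interface; as a language-neutral sketch of the underlying operation it wraps (a sparse direct LU solve of A X = B with a multi-column right-hand side), here is a minimal SciPy analogue. SciPy stands in for the solver library; none of the names below are the Amesos API:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse SPD matrix A (1D Poisson stencil) and a multi-column
# right-hand side B, analogous to a multivector with two columns.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = np.ones((n, 2))

lu = spla.splu(A)   # sparse LU factorization (the "direct solver" step)
X = lu.solve(B)     # back-substitute for all columns of B at once

residual = np.linalg.norm(A @ X - B)
```

The factor-once, solve-many pattern shown here is the main reason direct solvers are attractive when many right-hand sides share one matrix.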
The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries. The goal of the Trilinos Project is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multiphysics engineering and scientific applications. The emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all the abstract interfaces. This document introduces the use of Trilinos, version 4.0. The material presented includes, among other topics, the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel and domain decomposition preconditioners with ML, direct solution of linear systems with Amesos, and iterative solution of nonlinear systems with NOX. The tutorial is a self-contained introduction, intended to help computational scientists effectively apply the appropriate Trilinos package to their applications. Basic examples are presented that are suitable to be imitated. This document is a companion to the Trilinos User's Guide [20] and Trilinos Development Guides [21,22]. Please note that the documentation included in each of the Trilinos packages is of fundamental importance.
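As a sketch of the iterative side of the toolchain described above (an AztecOO-style Krylov solve accelerated by an IFPACK-style incomplete factorization preconditioner), again using SciPy as a stand-in rather than the Trilinos API itself:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix built as a Kronecker sum (a typical PDE test system)
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner (IFPACK analogue)
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

# Preconditioned conjugate gradient (AztecOO-style Krylov solve)
x, info = spla.cg(A, b, M=M)
```

The division of labor mirrors the package structure: one component supplies the operator and vectors, one supplies the preconditioner, and one runs the Krylov iteration against both through abstract interfaces.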
As part of a study of carbon-tritium co-deposition, we carried out an experiment on DIII-D involving a toroidally symmetric injection of {sup 13}CH{sub 4} at the top of a LSN discharge. A Monte Carlo code, DIVIMP-HC, which includes molecular breakup of hydrocarbons, was used to model the region near the puff. The interpretive analysis indicates a parallel flow in the SOL of M {parallel} {approx} 0.4 directed toward the inner divertor. The CH{sub 4} is ionized in the periphery of the SOL and so the particle confinement time, T{sub C}, is not high, only {approx} 5 ms, and about 4X lower than if the CH{sub 4} were ionized at the separatrix. For such a wall injection location, however, approximately 60-75% of the CH{sub 4} gets ionized to C{sup +}, C{sup 2+}, etc., and is efficiently transported along the SOL to the inner divertor, trapping hydrogen by co-deposition there.
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to problems for which multigrid methods work well (e.g., elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g., Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative packages [15]; however, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
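A minimal two-level sketch of the multigrid cycle that packages like ML implement: smooth, restrict the residual, solve a Galerkin coarse problem, interpolate the correction back, and smooth again. The prolongator below is hand-built geometric linear interpolation for 1D Poisson, not ML's smoothed-aggregation operator:

```python
import numpy as np

def poisson(n):
    """1D Poisson matrix: tridiag(-1, 2, -1) on n interior points."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0)
    np.fill_diagonal(A[1:], -1.0)
    np.fill_diagonal(A[:, 1:], -1.0)
    return A

def jacobi(A, x, b, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoother."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, P):
    """One V-cycle of a two-level method: smooth, coarse correct, smooth."""
    x = jacobi(A, x, b, 3)                # pre-smoothing
    r = b - A @ x                         # fine-grid residual
    Ac = P.T @ A @ P                      # Galerkin coarse operator
    ec = np.linalg.solve(Ac, P.T @ r)     # exact coarse solve
    x = x + P @ ec                        # coarse-grid correction
    return jacobi(A, x, b, 3)             # post-smoothing

n = 63
A = poisson(n)
b = np.ones(n)

# Linear interpolation from n//2 coarse points to the fine grid
P = np.zeros((n, n // 2))
for j in range(n // 2):
    i = 2 * j + 1
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5

x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, x, P)
```

Smoothing damps the oscillatory error components and the coarse correction removes the smooth ones, which is why the combination converges at a rate independent of n for elliptic problems; algebraic multigrid replaces the hand-built P with one constructed from A alone.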
There is a great need for robust, defect-free, highly selective molecular sieve (zeolite) thin film membranes for light gas molecule separations in hydrogen fuel production from CH{sub 4} or H{sub 2}O sources. In particular, we are interested in (1) separating and isolating H{sub 2} from H{sub 2}O and CH{sub 4}, CO, CO{sub 2}, O{sub 2}, and N{sub 2} gases; (2) water management in PEMs; and (3) replacing the expensive Pt catalysts needed for PEMs. Current hydrogen separation membranes are based on Pd alloys or on chemically and mechanically unstable organic polymer membranes. The use of molecular sieves brings a chemically and mechanically stable inorganic matrix to the membrane [1-3]. The crystalline frameworks have 'tunable' pores that are capable of size exclusion separations. The frameworks are made of inorganic oxides (e.g., silicates, aluminosilicates, and phosphates) that bring different charge and electrostatic attraction forces to the separation media. The resultant materials have high separation abilities plus inherent thermal stability above 600 C and chemical stability. Furthermore, the crystallographically defined (<1 {angstrom} deviation) pore sizes and shapes allow for size exclusion of very similarly sized molecules. In contrast, organic polymer membranes achieve separation by diffusion, not size exclusion. We envision the impact of positive results from this project in the near term with hydrocarbon fuels, and in the long term with biomass fuels. Zeolitic membranes offer an inherent chemical, thermal, and mechanical stability not found in conventional membrane materials. Our goal is to exploit those zeolitic qualities in membranes for the separation of light gases, and eventually to partner with industry to commercialize the membranes.
To date, we have successfully: (1) Demonstrated (through synthesis, characterization, and permeation testing) both the ability to synthesize defect-free zeolitic membranes and the ability to use them as size-selective gas separation membranes; these include aluminosilicates and silicates; (2) Built and operated our in-house light gas permeation unit, which we have amended to enable testing of H{sub 2}S gases, mixed gases, and high temperatures; we are initiating further modification by designing and building an upgraded unit that will allow for temperatures up to 500 C, steady-state vs. pressure-driven permeation, and mixed gas resolution through GC/MS analysis; (3) Shown in preliminary experiments high selectivity for H{sub 2} from binary and industrially relevant mixed gas streams under low operating pressures of 16 psig; (4) Synthesized membranes on commercially available oxide and composite disks (in addition to our successes in synthesizing zeolitic membranes on tubular supports [9]); and (5) Signed a non-disclosure agreement with industrial partner G. E. Dolbear & Associates, Inc., and established ongoing agreements with Pall Corporation for in-kind supply support and interest in scale-up for commercialization.
This report describes the findings of the effort initiated by the Arab Science and Technology Foundation and the Cooperative Monitoring Center at Sandia National Laboratories to identify, contact, and engage members of the Iraqi science and technology (S&T) community. The initiative is divided into three phases. The first phase, a survey of the Iraqi scientific community, shed light on the most significant current needs in the fields of science and technology in Iraq. Findings from the first phase will lay the groundwork for the second phase, which includes organizing a workshop to build international support for the initiative and to decide on an implementation mechanism. Phase three involves executing the outcomes of the report as established in the workshop. During Phase 1, the survey team conducted a series of trips to Iraq during which they had contact with nearly 200 scientists from all sections of the country, representing all major Iraqi S&T specialties. As a result of these contacts, the survey team obtained over 450 project ideas from Iraqi researchers. These projects were reviewed and analyzed to identify priorities and crucial needs. After refinement, the result is approximately 170 project ideas that have been categorized according to their suitability for (1) developing joint research projects with international partners, (2) engaging Iraqi scientists in solving local problems, and (3) developing new business opportunities. They have also been ranked as high, medium, or low priority.
This paper presents an automated tool for local, conformal refinement of all-hexahedral meshes based on the insertion of multi-directional twist planes into the spatial twist continuum. The refinement process is divided into independent refinement steps. In each step, an inserted twist plane modifies a single sheet or two parallel hex sheets. Six basic templates, chosen and oriented based on the number of nodes selected for refinement, replace original mesh elements. The contributions of this work are (1) the localized refinement of mesh regions defined by individual or groups of nodes, element edges, element faces or whole elements within an all-hexahedral mesh, (2) the simplification of template-based refinement into a general method and (3) the use of hex sheets for the management of template insertion in multi-directional refinement.
We present Mark-It, a marking user interface that reduced the time to decompose a set of CAD models exhibiting a range of decomposition problems by as much as fifty percent. Instead of performing about 50 mesh decomposition operations using a conventional UI, Mark-It allows users to perform the same operations by drawing 2D marks in the context of the 3D model. The motivation for this study was to test the potential of a marking user interface for the decomposition aspect of the meshing process. To evaluate Mark-It, we designed a user study that consisted of a brief tutorial of both the non-marking and marking UIs, performing the steps to decompose four models contributed to us by experienced meshers at Sandia National Laboratories, and a post-study debriefing to rate the speed, preference, and overall learnability of the two interfaces. Our primary contributions are a practical user interface design for speeding-up mesh decomposition and an evaluation that helps characterize the pros and cons of the new user interface.
In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.
For telemedicine to realize the vision of anywhere, anytime access to care, the question of how to create a fully interoperable technical infrastructure must be addressed. After briefly discussing how 'technical interoperability' compares with other types of interoperability being addressed in the telemedicine community today, this paper describes reasons for pursuing technical interoperability, presents a proposed framework for realizing technical interoperability, identifies key issues that will need to be addressed if technical interoperability is to be achieved, and suggests a course of action that the telemedicine community might follow to accomplish this goal.
This paper describes an assessment of a variety of battery technologies for high pulse power applications. Sandia National Laboratories (SNL) is performing the assessment activities in collaboration with NSWC-Dahlgren. After an initial study of specifications and manufacturers' data, the assessment team identified the following electrochemistries as promising for detailed evaluation: lead-acid (Pb-acid), nickel/metal hydride (Ni/MH), nickel/cadmium (Ni/Cd), and a recently released high power lithium-ion (Li-ion) technology. In the first three technology cases, test cells were obtained from at least two and in some instances several companies that specialize in the respective electrochemistry. In the case of the Li-ion technology, cells from a single company were obtained and are being tested. All cells were characterized in Sandia's battery test labs. After several characterization tests, the Pb-acid technology was identified as a backup technology for the demanding power levels of these tests. The other technologies showed varying degrees of promise. Following additional cell testing, the assessment team determined that the Ni/MH technology was suitable for scale-up and acquired 50-V Ni/MH modules from two suppliers for testing. Additional tests are underway to better characterize the Ni/Cd and the Li-ion technologies as well. This paper will present the testing methodology and results from these assessment activities.
An experimental program was conducted to study a proposed approach for oil reintroduction in the Strategic Petroleum Reserve (SPR). The goal was to assess whether useful oil is rendered unusable through formation of a stable oil-brine emulsion during reintroduction of degassed oil into the brine layer in storage caverns. An earlier report (O'Hern et al., 2003) documented the first stage of the program, in which simulant liquids were used to characterize the buoyant plume that is produced when a jet of crude oil is injected downward into brine. This report documents the final two test series. In the first, the plume hydrodynamics experiments were completed using SPR oil, brine, and sludge. In the second, oil reinjection into brine was run for approximately 6 hours, and sampling of oil, sludge, and brine was performed over the next 3 months so that the long-term effects of oil-sludge mixing could be assessed. For both series, the experiment consisted of a large transparent vessel that was a scale model of the proposed oil-injection process at the SPR. For the plume hydrodynamics experiments, an oil layer was floated on top of a brine layer in the first test series and on top of a sludge layer residing above the brine in the second test series. The oil was injected downward through a tube into the brine at a prescribed depth below the oil-brine or sludge-brine interface. Flow rates were determined by scaling to match the ratio of buoyancy to momentum between the experiment and the SPR. Initially, the momentum of the flow produces a downward jet of oil below the tube end. Subsequently, the oil breaks up into droplets due to shear forces, buoyancy dominates the flow, and a plume of oil droplets rises to the interface. The interface is deflected upward by the impinging oil-brine plume. Videos of this flow were recorded for scaled flow rates that bracket the equivalent pumping rates in an SPR cavern during injection of degassed oil.
Image-processing analyses were performed to quantify the penetration depth and width of the oil jet. The measured penetration depths were shallow, as predicted by penetration-depth models, in agreement with the assumption that the flow is buoyancy-dominated rather than momentum-dominated. The turbulent penetration depth model overpredicted the measured values. Both the oil-brine and oil-sludge-brine systems produced plumes with hydrodynamic characteristics similar to the simulant liquids previously examined, except that the penetration depth was 5-10% longer for the crude oil. An unexpected observation was that centimeter-size oil 'bubbles' (thin oil shells completely filled with brine) were produced in large quantities during oil injection. The mixing experiments also used layers of oil, sludge, and brine from the SPR. Oil was injected at a scaled flow rate corresponding to the nominal SPR oil injection rates. Injection was performed for about 6 hours and was stopped when it was evident that brine was being ingested by the oil withdrawal pump. Sampling probes located throughout the oil, sludge, and brine layers were used to withdraw samples before, during, and after the run. The data show that strong mixing caused the water content in the oil layer to increase sharply during oil injection but that the water content in the oil dropped back to less than 0.5% within 16 hours after injection was terminated. In contrast, the sediment content in the oil indicated that the sludge and oil were well mixed. The sediment settled slowly, and the oil had not returned to the baseline (as-received) sediment values after approximately 2200 hours (3 months). Ash content analysis indicated that the sediment measured during oil analysis was primarily organic.
Existing approaches in multiscale science and engineering have evolved from a range of ideas and solutions that are reflective of their original problem domains. As a result, research in multiscale science has followed widely diverse and disjoint paths, which presents a barrier to cross pollination of ideas and application of methods outside their application domains. The status of the research environment calls for an abstract mathematical framework that can provide a common language to formulate and analyze multiscale problems across a range of scientific and engineering disciplines. In such a framework, critical common issues arising in multiscale problems can be identified, explored and characterized in an abstract setting. This type of overarching approach would allow categorization and clarification of existing models and approximations in a landscape of seemingly disjoint, mutually exclusive and ad hoc methods. More importantly, such an approach can provide context for both the development of new techniques and their critical examination. As with any new mathematical framework, it is necessary to demonstrate its viability on problems of practical importance. At Sandia, lab-centric, prototype application problems in fluid mechanics, reacting flows, magnetohydrodynamics (MHD), shock hydrodynamics and materials science span an important subset of DOE Office of Science applications and form an ideal proving ground for new approaches in multiscale science.
ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with a constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the User's Guide [SHT04] and to the ML web site, http://software.sandia.gov/ml.
We have been engaged in a search for coordination catalysts for the copolymerization of polar monomers (such as vinyl chloride and vinyl acetate) with ethylene. We have been investigating complexes of late transition metals with heterocyclic ligands. In this report we describe the synthesis of a symmetrical bis-thiadiazole. We have characterized one of the intermediates using single crystal X-ray diffraction. Several unsuccessful approaches toward 1 are also described, which shed light on some of the unique chemistry of thiadiazoles.
The molecular velocity distribution of a gas with heat flow was analyzed using Bird's direct simulation Monte Carlo (DSMC) method, in which the gas is represented by large numbers of computational molecules. Chapman-Enskog behavior was obtained for inverse-power-law molecules under continuum nonequilibrium conditions. Under noncontinuum nonequilibrium conditions, the Sonine-polynomial coefficients were shown to differ systematically from their continuum values as the local Knudsen number is increased.
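A toy sketch of the representation DSMC rests on: the gas as a large sample of computational molecules whose distribution moments are estimated by averaging. The example below samples an equilibrium (Maxwellian) gas in reduced units and checks two moments; it is only the sampling/moment-estimation step, not Bird's collision algorithm, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
kT_over_m = 1.0          # temperature in reduced units (kT/m = 1)
N = 200_000              # number of computational molecules

# Each velocity component of an equilibrium (Maxwellian) gas is Gaussian
# with variance kT/m; a nonequilibrium gas with heat flow perturbs this.
v = rng.normal(0.0, np.sqrt(kT_over_m), size=(N, 3))

# Kinetic energy moment: equipartition gives <(1/2)v^2> = (3/2) kT/m = 1.5
mean_ke = 0.5 * np.mean(np.sum(v**2, axis=1))

# Heat-flux-type moment along x; it vanishes (within sampling noise)
# for an equilibrium gas and becomes nonzero when heat flows.
heat_flux = 0.5 * np.mean(np.sum(v**2, axis=1) * v[:, 0])
```

In an actual DSMC calculation the nonequilibrium distribution produced by the move/collide steps is probed with exactly this kind of moment averaging, which is how the Sonine-polynomial coefficients are extracted.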
As electronic and optical components reach the micro- and nanoscales, efficient assembly and packaging require the use of adhesive bonds. This work focuses on resolving several fundamental issues in the transition from macro- to micro- to nanobonding. A primary issue is that, as bondline thicknesses decrease, knowledge of the stability and dewetting dynamics of thin adhesive films is important to obtain robust, void-free adhesive bonds. While researchers have studied dewetting dynamics of thin films of model, non-polar polymers, little experimental work has been done regarding dewetting dynamics of thin adhesive films, which exhibit much more complex behaviors. In this work, the areas of dispensing small volumes of viscous materials, capillary fluid flow, surface energetics, and wetting have all been investigated. By resolving these adhesive-bonding issues, we are allowing significantly smaller devices to be designed and fabricated. Simultaneously, we are increasing the manufacturability and reliability of these devices.
This report describes criticality benchmark experiments containing rhodium that were conducted as part of a Department of Energy Nuclear Energy Research Initiative project. Rhodium is an important fission product absorber. A capability to perform critical experiments with low-enriched uranium fuel was established as part of the project. Ten critical experiments, some containing rhodium and others without, were conducted. The experiments were performed in such a way that the effects of the rhodium could be accurately isolated. The use of the experimental results to test neutronics codes is demonstrated by example for two Monte Carlo codes. These comparisons indicate that the codes predict the behavior of the rhodium in the critical systems within the experimental uncertainties. The results from this project, coupled with the results of follow-on experiments that investigate other fission products, can be used to quantify and reduce the conservatism of spent nuclear fuel safety analyses while still providing the necessary level of safety.
This report is a comprehensive review of the field of molecular enumeration, from early isomer counting theories to evolutionary algorithms that design molecules in silico. The core of the review is a detailed account of how molecules are counted, enumerated, and sampled. The practical applications of molecular enumeration are also reviewed for chemical information, structure elucidation, molecular design, and combinatorial library design purposes. This review is to appear as a chapter in Reviews in Computational Chemistry, volume 21, edited by Kenny B. Lipkowitz.
It would not be possible to confidently qualify weapon system performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities, the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS), routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of two constructions (fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies) and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system, from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about {+-}1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to {+-}2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item.
'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
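The standard way independent elemental uncertainties like those listed above are combined into a total is a root-sum-of-squares; a minimal sketch with illustrative numbers (not the report's actual values) for one Type K channel:

```python
import math

# Illustrative elemental uncertainties for one TC channel, each expressed
# in kelvin at a 350 K reading. The source names mirror the categories in
# the analysis, but every value here is an assumption for illustration.
sources_K = {
    "ANSI Type K wire calibration": 2.2,
    "TC mounting / attachment":     1.5,
    "extension wire":               0.8,
    "DAS voltage measurement":      0.6,
    "mV-to-temperature conversion": 0.3,
}

reading_K = 350.0
# Independent sources combine as the root-sum-of-squares (RSS)
total_K = math.sqrt(sum(u**2 for u in sources_K.values()))
percent_of_reading = 100.0 * total_K / reading_K
```

With these placeholder inputs the RSS total works out to well under 1% of the absolute-temperature reading, consistent in magnitude with the ~{+-}1% figure quoted for normal environments; the RSS form also makes clear that the largest single source (here, calibration or attachment) dominates the total.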
More than ten years ago, Sandia managers defined a set of traits and characteristics that were needed for success at Sandia. Today, the Sandia National Laboratories Success Profile Competencies continue to be powerful tools for employee and leadership development. The purpose of this report is to revisit the historical events that led to the creation and adaptation of the competencies and to position them for integration in future employee selection, development, and succession planning processes. This report contains an account of how the competencies were developed, testimonies of how they are used within the organization, and a description of how they will be foundational elements of new processes.
Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca{sub 10}(PO{sub 4}){sub 6}(OH){sub 2}, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal backfill material, including low water solubility (K{sub sp}>10{sup -40}), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 C, 700 C, 900 C, and 1100 C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the 500 C cattle-bone hydroxyapatite. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong for all of the apatite materials evaluated, regardless of apatite type. Sixty-day desorption experiments indicated desorption of uranium from each hydroxyapatite was negligible.
This report summarizes research into effects of electron gun control on piezoelectric polyvinylidene fluoride (PVDF) structures. The experimental apparatus specific to the electron gun control of this structure is detailed, and the equipment developed for the remote examination of the bimorph surface profile is outlined. Experiments conducted to determine the optimum electron beam characteristics for control are summarized. Clearer boundaries on the bimorph's control output capabilities were determined, as was the closed-loop response. Further controllability analysis of the bimorph is outlined, and the results are examined. In this research, the bimorph response was tested through a matrix of control inputs of varying current, frequency, and amplitude. Experiments also studied the response to electron gun actuation of a piezoelectric bimorph thin film divided into multiple spatial control regions. Parameter ranges that yielded predictable control under certain circumstances were determined. Research has shown that electron gun control can be used to make macrocontrol and nanocontrol adjustments for PVDF structures. The control response and hysteresis are more linear for a small range of energy levels. Current levels needed for optimum control are established, and the generalized controllability of a PVDF bimorph structure is shown.
Field-structured composites (FSCs) were produced by hosting micron-sized gold-coated nickel particles in a pre-polymer and allowing the mixture to cure in a magnetic field environment. The feasibility of controlling a composite's electrical conductivity using feedback control applied to the field coils was investigated. It was discovered that conductivity in FSCs is primarily determined by stresses in the polymer host matrix due to cure shrinkage. Thus, in cases where the structuring field was uniform and unidirectional so as to produce chainlike structures in the composite, no electrical conductivity was measured until well after the structuring field was turned off at the gel point. In situations where complex, rotating fields were used to generate complex, three-dimensional structures in a composite, very small, but measurable, conductivity was observed prior to the gel point. Responsive, sensitive prototype chemical sensors were developed based on this technology with initial tests showing very promising results.
Micromachines have the potential to significantly impact future weapon component designs as well as other defense, industrial, and consumer product applications. For both electroplated (LIGA) and surface micromachined (SMM) structural elements, the influence of processing on structure, and the resultant effects on material properties are not well understood. The behavior of dynamic interfaces in present as-fabricated microsystem materials is inadequate for most applications and the fundamental relationships between processing conditions and tribological behavior in these systems are not clearly defined. We intend to develop a basic understanding of deformation, fracture, and surface interactions responsible for friction and wear of microelectromechanical system (MEMS) materials. This will enable needed design flexibility for these devices, as well as strengthen our understanding of material behavior at the nanoscale. The goal of this project is to develop new capabilities for sub-microscale mechanical and tribological measurements, and to exercise these capabilities to investigate material behavior at this size scale.
Less toxic, storable, hypergolic propellants are desired to replace nitrogen tetroxide (NTO) and hydrazine in certain applications. Hydrogen peroxide is a very attractive replacement oxidizer, but finding acceptable replacement fuels is more challenging. The focus of this investigation is to find fuels that have short hypergolic ignition delays, high specific impulse, and desirable storage properties. The resulting hypergolic fuel/oxidizer combination would be highly desirable for virtually any high energy-density applications such as small but powerful gas generating systems, attitude control motors, or main propulsion. These systems would be implemented on platforms ranging from guided bombs to replacement of environmentally unfriendly existing systems to manned space vehicles.
For over a half-century, the soldiers and civilians deployed to conflict areas in UN peacekeeping operations have monitored ceasefires and peace agreements of many types with varying degrees of effectiveness. Though there has been a significant evolution of peacekeeping, especially in the 1990s, with many new monitoring functions, the UN has yet to incorporate monitoring technologies into its operations in a systematic fashion. Rather, the level of technology depends largely on the contributing nations and the individual field commanders. In most missions, sensor technology has not been used at all. As a result, the UN has not been able to fully benefit from the sensor technology revolution, which has greatly amplified effectiveness while costs have plummeted. This paper argues that monitoring technologies need not replace the human factor, which is essential for confidence building in conflict areas, but they can make peacekeepers more effective, more knowledgeable, and safer. Airborne, ground, and underground sensors can allow peacekeepers to do better monitoring over larger areas, in rugged terrain, at night (when most infractions occur), and in adverse weather conditions. Technology also allows new ways to share gathered information with the parties to create confidence and, hence, better pre-conditions for peace. In the future, sensors should become 'tools of the trade' to help the UN keep the peace in war-torn areas.
The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high resolution mapping of subsurface structures, this technology has the potential for major impact on the petroleum industry, which spends approximately $1 billion per year on hydraulic fracturing operations in the United States alone.
This report presents the result of an effort to re-implement the Parallel Virtual File System (PVFS) using Portals as the transport. This report provides short overviews of PVFS and Portals, and describes the design and implementation of PVFS over Portals. Finally, the results of performance testing of both stock PVFS and PVFS over Portals are presented.
Sandia National Laboratories has developed a portfolio of programs to address the critical skills needs of the DP labs, as identified by the 1999 Chiles Commission Report. The goals are to attract and retain the best and the brightest students and transition them into employees of Sandia and the DP Complex. The US Department of Energy/Defense Programs University Partnerships funded nine laboratory critical skills development programs in FY03. This report provides a qualitative and quantitative evaluation of these programs and their status.
The New Mexico Environment Department (NMED) requires a Corrective Measures Evaluation to evaluate potential remedial alternatives for contaminants of concern (COCs) in groundwater at Sandia National Laboratories New Mexico (SNL/NM) Technical Area (TA)-V. These COCs consist of trichloroethene, tetrachloroethene, and nitrate. This document presents the current conceptual model of groundwater flow and transport at TA-V that will provide the basis for a technically defensible evaluation. Characterization is defined by nine requirement areas that were identified in the NMED Compliance Order on Consent. These characterization requirement areas consist of geohydrologic characteristics that control the subsurface distribution and transport of contaminants. This conceptual model document summarizes the regional geohydrologic setting of SNL/NM TA-V. The document also presents a summary of site-specific geohydrologic data and integrates these data into the current conceptual model of flow and contaminant transport. This summary includes characterization of the local geologic framework; characterization of hydrologic conditions at TA-V, including recharge, hydraulics of vadose-zone and aquifer flow, and the aquifer flow field as it pertains to downgradient receptors. The summary also discusses characterization of contaminant transport in the subsurface, including discussion about source term inventory, release, and contaminant distribution and transport in the vadose zone and aquifer.
Lebow, Patrick S.; Dettmers, Dana L.; Hall, Kevin A.
This document, which is prepared as directed by the Compliance Order on Consent (COOC) issued by the New Mexico Environment Department, identifies and outlines a process to evaluate remedial alternatives to identify a corrective measure for the Sandia National Laboratories New Mexico Technical Area (TA)-V Groundwater. The COOC provides guidance for implementation of a Corrective Measures Evaluation (CME) for the TA-V Groundwater. This Work Plan documents an initial screening of remedial technologies and presents a list of possible remedial alternatives for those technologies that passed the screening. This Work Plan outlines the methods for evaluating these remedial alternatives and describes possible site-specific evaluation activities necessary to estimate remedy effectiveness and cost. These methods will be reported in the CME Report. This Work Plan outlines the CME Report, including key components and a description of the corrective measures process.
Tautges, Timothy J.; Ernst, Corey; Stimpson, Clint; Meyers, Ray J.; Merkley, Karl
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also being versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets.
Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
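The handle/set/tag data model described above can be illustrated with a small sketch. The following is a hypothetical, deliberately simplified mock-up in Python; the `MeshDB` class and all of its method names are our invention for illustration and are not MOAB's actual C++ API:

```python
# Illustrative sketch of a MOAB-style data model (hypothetical, not MOAB's API):
# entities are referenced by opaque integer handles, sets group entities and
# support parent/child links distinct from containment, and named tags attach
# data to any handle.

class MeshDB:
    def __init__(self):
        self._next_handle = 1
        self._entities = {}   # handle -> entity kind ("vertex", "hex", ...)
        self._sets = {}       # set handle -> set of member handles
        self._children = {}   # set handle -> list of child set handles
        self._tags = {}       # tag name -> {handle: value}

    def create_entity(self, kind):
        h = self._next_handle
        self._next_handle += 1
        self._entities[h] = kind
        return h              # callers hold handles, never pointers

    def create_set(self, members=()):
        h = self._next_handle
        self._next_handle += 1
        self._sets[h] = set(members)
        self._children[h] = []
        return h

    def add_child(self, parent, child):
        # parent/child is a relation distinct from set containment
        self._children[parent].append(child)

    def tag_set(self, name, handle, value):
        self._tags.setdefault(name, {})[handle] = value

    def tag_get(self, name, handle):
        return self._tags[name][handle]
```

For example, a boundary condition grouping as described in the text would be a set containing the affected vertices, tagged with a name such as "BOUNDARY_CONDITION".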
Finite element meshes are used to approximate the solution to some differential equation when no exact solution exists. A finite element mesh consists of many small (but finite, not infinitesimal or differential) regions of space that partition the problem domain, Ω. Each region, or element, or cell has an associated polynomial map, Φ, that converts the coordinates of any point, x = (x, y, z), in the element into another value, f(x), that is an approximate solution to the differential equation, as in Figure 1(a). This representation works quite well for axis-aligned regions of space, but when there are curved boundaries on the problem domain, Ω, it becomes algorithmically much more difficult to define Φ in terms of x. Rather, we define an archetypal element in a new coordinate space, r = (r, s, t), which has a simple, axis-aligned boundary (see Figure 1(b)) and place two maps onto our archetypal element:
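The sentence above breaks off after the colon in this excerpt. In standard finite element practice, the two maps placed on the archetypal (reference) element are a geometric map and a field map; this completion follows textbook convention and is not quoted from the report:

```latex
x = X(r) = \sum_i x_i \, N_i(r), \qquad
f = \Phi(r) = \sum_i c_i \, N_i(r)
```

where the N_i(r) are basis (shape) functions defined on the axis-aligned reference domain, the x_i are the physical coordinates of the element's nodes, and the c_i are the solution coefficients. Composing Φ with the inverse of the geometric map X recovers the approximate solution f(x) on the curved physical element.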
In Phase I of this project, reported in SAND97-1922, Sandia National Laboratories applied a systems approach to identifying innovative biomedical technologies with the potential to reduce U.S. health care delivery costs while maintaining care quality. The effort provided roadmaps for the development and integration of technology to meet perceived care delivery requirements and an economic analysis model for development of care pathway costs for two conditions: coronary artery disease (CAD) and benign prostatic hypertrophy (BPH). Phases II and III of this project, which are presented in this report, were directed at detailing the parameters of telemedicine that influence care delivery costs and quality. These results were used to identify and field test the communication, interoperability, and security capabilities needed for cost-effective, secure, and reliable health care via telemedicine.
A new family of framework titanosilicates, A₂TiSi₆O₁₅ (A = K, Rb, Cs) (space group Cc), has recently been synthesized using the hydrothermal method. This group of phases can potentially be utilized for storage of radioactive elements, particularly ¹³⁷Cs, due to its high stability under electron radiation and chemical leaching. Here, we report the syntheses and structures of two intermediate members in the series: KRbTiSi₆O₁₅ and RbCsTiSi₆O₁₅. Rietveld analysis of powder synchrotron X-ray diffraction data reveals that they adopt the same framework topology as the end-members, with no apparent Rb/K or Rb/Cs ordering. To study the energetics of the solid solution series, high-temperature drop-solution calorimetry using molten 2PbO·B₂O₃ as the solvent at 975 K has been performed for the end-members and intermediate phases. As the size of the alkali cation increases, the measured enthalpies of formation from the constituent oxides and from the elements (ΔHf,el) become more exothermic, suggesting that this framework structure favors the cations in the sequence Cs⁺, Rb⁺, and K⁺. This trend is consistent with the higher melting temperatures of A₂TiSi₆O₁₅ phases with increasing alkali cation size.
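The enthalpies of formation from the oxides follow from the drop-solution measurements via a standard thermochemical cycle. The generic textbook form is given below; this restatement (and its notation) is ours, not quoted from the paper:

```latex
\Delta H_{f,\mathrm{ox}}\bigl(\mathrm{A_2TiSi_6O_{15}}\bigr)
= \Delta H_{ds}(\mathrm{A_2O}) + \Delta H_{ds}(\mathrm{TiO_2})
+ 6\,\Delta H_{ds}(\mathrm{SiO_2})
- \Delta H_{ds}\bigl(\mathrm{A_2TiSi_6O_{15}}\bigr)
```

where each ΔH_ds is a measured drop-solution enthalpy in the molten 2PbO·B₂O₃ solvent at 975 K, and the stoichiometric coefficients match the compound's formula.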
Bulk and surface energies are calculated for endmembers of the isostructural rhombohedral carbonate mineral family, including Ca, Cd, Co, Fe, Mg, Mn, Ni, and Zn compositions. The calculations for the bulk agree with the densities, bond distances, bond angles, and lattice enthalpies reported in the literature. The calculated energies also correlate with measured dissolution rates: the lattice energies show a log-linear relationship to the macroscopic dissolution rates at circumneutral pH. Moreover, the energies of ion pairs translated along surface steps are calculated and found to predict experimentally observed microscopic step retreat velocities. Finally, pit formation excess energies decrease with increasing pit size, which is consistent with the nonlinear dissolution kinetics hypothesized for the initial stages of pit formation.
This paper develops a general framework for applying algebraic multigrid techniques to constrained systems of linear algebraic equations that arise in applications with discretized PDEs. We discuss constraint coarsening strategies for constructing multigrid coarse grid spaces and several classes of multigrid smoothers for these systems. The potential of these methods is investigated with their application to contact problems in solid mechanics. Published in 2004 by John Wiley & Sons, Ltd.
The effects of side wall movement on granular packings were investigated. The studies showed that the resultant structure of the pack did not depend strongly on the magnitude of the wall movement, as long as the wall was moved an equivalent distance. The main effect of wall movement was to drive the particle-wall and particle-particle contacts to the Coulomb criterion. This forced the packing in the high wall velocity case to obey the Janssen form, which took the Coulomb criterion as one of its main assumptions.
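For reference, the Janssen form mentioned above has a standard textbook statement for a column of radius R with wall friction coefficient μ and Janssen coefficient K (the notation here is ours, not the report's):

```latex
\sigma_{zz}(z) \;=\; \frac{\rho g R}{2 \mu K}\left(1 - e^{-2 \mu K z / R}\right)
```

so that the vertical stress saturates with depth z, rather than growing hydrostatically, once the wall contacts are fully mobilized at the Coulomb criterion.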
This report describes a new algorithm for the joint estimation of carrier phase, symbol timing and data in a Turbo coded phase shift keyed (PSK) digital communications system. Jointly estimating phase, timing and data can give processing gains of several dB over conventional processing, which consists of joint estimation of carrier phase and symbol timing followed by estimation of the Turbo-coded data. The new joint estimator allows delay and phase locked loops (DLL/PLL) to work at lower bit energies where Turbo codes are most useful. Performance results of software simulations and of a field test are given, as are details of a field programmable gate array (FPGA) implementation that is currently in design.
Containment of chemical wastes in near-surface and repository environments is accomplished by designing engineered barriers to fluid flow. Containment barrier technologies such as clay liners, soil/bentonite slurry walls, soil/plastic walls, artificially grouted sediments and soils, and colloidal gelling materials are intended to stop fluid transport and prevent plume migration. However, despite their effectiveness in the short-term, all of these barriers exhibit geochemical or geomechanical instability over the long-term resulting in degradation of the barrier and its ability to contain waste. No technologically practical or economically affordable technologies or methods exist at present for accomplishing total remediation, contaminant removal, or destruction-degradation in situ. A new type of containment barrier with a potentially broad range of environmental stability and longevity could result in significant cost-savings. This report documents a research program designed to establish the viability of a proposed new type of containment barrier derived from in situ precipitation of clays in the pore space of contaminated soils or sediments. The concept builds upon technologies that exist for colloidal or gel stabilization. Clays have the advantages of being geologically compatible with the near-surface environment and naturally sorptive for a range of contaminants, and further, the precipitation of clays could result in reduced permeability and hydraulic conductivity, and increased mechanical stability through cementation of soil particles. While limited success was achieved under certain controlled laboratory conditions, the results did not warrant continuation to the field stage for multiple reasons, and the research program was thus concluded with Phase 2.
Thermionic energy conversion in a miniature format shows potential as a viable, high efficiency, micro to macro-scale power source. A microminiature thermionic converter (MTC) with inter-electrode spacings on the order of microns has been prototyped and evaluated at Sandia. The remaining enabling technology is the development of low work function materials and processes that can be integrated into these converters to increase power production at modest temperatures (800 - 1300 K). The electrode materials are not well understood and the electrode thermionic properties are highly sensitive to manufacturing processes. Advanced theoretical, modeling, and fabrication capabilities are required to achieve optimum performance for MTC diodes. This report describes the modeling and fabrication efforts performed to develop micro dispenser cathodes for use in the MTC.
Li-ion cells are being developed for high-power applications in hybrid electric vehicles currently being designed for the FreedomCAR (Freedom Cooperative Automotive Research) program. These cells offer superior performance in terms of power and energy density over current cell chemistries. Cells using this chemistry are the basis of battery systems for both gasoline and fuel cell based hybrids. However, the safety of these cells needs to be understood and improved for eventual widespread commercial application in hybrid electric vehicles. The thermal behavior of commercial and prototype cells has been measured under varying conditions of cell composition, age and state-of-charge (SOC). The thermal runaway behavior of full cells has been measured along with the thermal properties of the cell components. We have also measured gas generation and gas composition over the temperature range corresponding to the thermal runaway regime. These studies have allowed characterization of cell thermal abuse tolerance and an understanding of the mechanisms that result in cell thermal runaway.
The Mixed Waste Landfill occupies 2.6 acres in the north-central portion of Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico. The landfill accepted low-level radioactive and mixed waste from March 1959 to December 1988. This report presents the Corrective Measures Study that has been conducted for the Mixed Waste Landfill. The purpose of the study was to identify, develop, and evaluate corrective measures alternatives and recommend the corrective measure(s) to be taken at the site. Based upon detailed evaluation and risk assessment using guidance provided by the U.S. Environmental Protection Agency and the New Mexico Environment Department, the U.S. Department of Energy and Sandia National Laboratories recommend that a vegetative soil cover be deployed as the preferred corrective measure for the Mixed Waste Landfill. The cover would be of sufficient thickness to store precipitation, minimize infiltration and deep percolation, support a healthy vegetative community, and perform with minimal maintenance by emulating the natural analogue ecosystem. There would be no intrusive remedial activities at the site and therefore no potential for exposure to the waste. This alternative poses minimal risk to site workers implementing institutional controls associated with long-term environmental monitoring as well as routine maintenance and surveillance of the site.
A model of malicious attacks against an infrastructure system is developed that uses a network representation of the system structure together with a Hidden Markov Model of an attack at a node of that system and a Markov Decision Process model of attacker strategy across the system as a whole. We use information systems as an illustration, but the analytic structure developed can also apply to attacks against physical facilities or other systems that provide services to customers. This structure provides an explicit mechanism to evaluate expected losses from malicious attacks, and to evaluate changes in those losses that would result from system hardening. Thus, we provide a basis for evaluating the benefits of system hardening. The model also allows investigation of the potential for the purchase of an insurance contract to cover the potential losses when safeguards are breached and the system fails.
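The attacker-strategy component of the model above is a Markov Decision Process. As a generic illustration of how an optimal expected payoff can be computed over such a model, here is a standard value-iteration sketch in Python; the toy transition and reward arrays in the usage example are our own, not the report's model:

```python
import numpy as np

# Generic value iteration for an MDP (illustrative; not the report's model).
# P[a, s, s'] is the probability of moving from state s to s' under action a;
# R[a, s] is the immediate reward for taking action a in state s.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # greedy over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

In the paper's setting, states could represent stages of system compromise and the value function the attacker's expected gain; the defender's expected loss (and the benefit of hardening) then follows from how hardening changes P and R.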
The work reported in this document involves a development effort to provide combat commanders and systems engineers with a capability to explore and optimize system concepts that include operational concepts as part of the design effort. An infrastructure and analytic framework has been designed and partially developed that fills a gap in systems engineering design for combat-related complex systems. The system consists of three major components: The first component consists of a design environment that permits the combat commander to perform 'what-if' types of analyses in which parts of a course of action (COA) can be automated by generic system constructs. The second component consists of suites of optimization tools designed to integrate into the analytical architecture to explore the massive design space of an integrated design and operational space. These optimization tools have been selected for their utility in requirements development and operational concept development. The third component involves the design of a modeling paradigm for the complex system that takes advantage of functional definitions and the coupled state space representations, generic measures of effectiveness and performance, and a number of modeling constructs to maximize the efficiency of computer simulations. The system architecture has been developed to allow for a future extension in which the operational concept development aspects can be performed in a co-evolutionary process to ensure the most robust designs can be gleaned from the design space(s).
Program transformation is a restricted form of software construction that can be amenable to formal verification. When successful, the nature of the evidence provided by such a verification is considered strong and can constitute a major component of an argument that a high-consequence or safety-critical system meets its dependability requirements. This article explores the application of novel higher-order strategic programming techniques to the development of a portion of a class loader for a restricted implementation of the Java Virtual Machine (JVM). The implementation is called the SSP and is intended for use in high-consequence safety-critical embedded systems. Verification of the strategic program using ACL2 is also discussed.
In many strategic systems, the choice combinator provides a powerful mechanism for controlling the application of rules and strategies to terms. The ability of the choice combinator to exercise control over rewriting is based on the premise that the success and failure of strategy application can be observed. In this paper we present a higher-order strategic framework with the ability to dynamically construct strategies containing the choice combinator. To this framework, a combinator called hide is introduced that prevents the successful application of a strategy from being observed by the choice combinator. We then explore the impact of this new combinator on a real-world problem involving a restricted implementation of the Java Virtual Machine.
This report assembles models for the response of a wire interacting with a conducting ground to an electromagnetic pulse excitation. The cases of an infinite wire above the ground as well as resting on the ground and buried beneath the ground are treated. The focus is on the characteristics and propagation of the transmission line mode. Approximations are used to simplify the description and formulas are obtained for the current. The semi-infinite case, where the short circuit current can be nearly twice that of the infinite line, is also examined.
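For orientation, the transmission line mode discussed above is characterized, in the standard telegrapher's idealization, by a propagation constant and an exponentially decaying current. The generic form below is a textbook statement in our notation, not the report's specific ground-coupled formulas:

```latex
\gamma(\omega) = \sqrt{\bigl(R' + j\omega L'\bigr)\bigl(G' + j\omega C'\bigr)},
\qquad
I(z,\omega) = I_0(\omega)\, e^{-\gamma(\omega) z}
```

where R′, L′, G′, and C′ are per-unit-length resistance, inductance, conductance, and capacitance. For a wire over or in a lossy ground, the ground's finite conductivity enters chiefly through frequency-dependent R′ and L′, which is what makes the approximations developed in the report necessary.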
Network-centric systems that depend on mobile wireless ad hoc networks for their information exchange require detailed analysis to support their development. In many cases, this critical analysis is best provided with high-fidelity system simulations that include the effects of network architectures and protocols. In this research, we developed a high-fidelity system simulation capability using an HLA federation. The HLA federation, consisting of the Umbra system simulator and OPNET Modeler network simulator, provides a means for the system simulator to both affect, and be affected by, events in the network simulator. Advances are also made in increasing the fidelity of the wireless communication channel and reducing simulation run-time with a dead reckoning capability. A simulation experiment is included to demonstrate the developed modeling and simulation capability.
We have researched several new focused ion beam (FIB) micro-fabrication techniques that offer control of feature shape and the ability to accurately define features onto nonplanar substrates. These FIB-based processes are considered useful for prototyping, reverse engineering, and small-lot manufacturing. Ion beam-based techniques have been developed for defining features in miniature, nonplanar substrates. We demonstrate helices in cylindrical substrates having diameters from 100 µm to 3 mm. Ion beam lathe processes sputter-define 10-µm-wide features in cylindrical substrates and tubes. For larger substrates, we combine focused ion beam milling with ultra-precision lathe turning techniques to accurately define 25-100 µm features over many meters of path length. In several cases, we combine the feature defining capability of focused ion beam bombardment with additive techniques such as evaporation, sputter deposition and electroplating in order to build geometrically-complex, functionally-simple devices. Damascene methods that fabricate wound metal microcoils have been developed for cylindrical substrates. Effects of focused ion milling on surface morphology are also highlighted in a study of ion-milled diamond.
A survey has been carried out to quantify the performance and life of over 700,000 valve-regulated lead-acid (VRLA) cells, which have been or are being used in stationary applications across the United States. The findings derived from this study have not identified any fundamental flaws in VRLA battery technology. There is evidence that some cell designs are more successful in float duty than others. A significant number of the VRLA cells covered by the survey were found to have provided satisfactory performance.
Though the Global Positioning System has revolutionized navigation in the modern age, it is limited in its capability for some applications because an unobstructed line of sight to a minimum of four satellites is required. One way of augmenting the system in small areas is by employing pseudolites to broadcast additional signals that can be used to improve the user's position solution. At the Navigation Systems Testing Laboratory (NSTL) at NASA's Johnson Space Center in Houston, TX, research has been underway on the use of pseudolites to perform precision relative navigation. Based on the findings of previous research done at the NSTL, the method used to process the pseudolite measurements is an extended Kalman filter of the double differenced carrier phase measurements. By employing simulations of the system, as well as processing previously collected data in a real time manner, sub-meter tracking of a moving receiver with carrier phase measurements in the extended Kalman filter appears to be possible.
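The double-differenced carrier phase used by the filter above has a standard textbook construction: differencing a satellite's phase between two receivers cancels the satellite clock error, and differencing those single differences between two satellites cancels the receiver clock errors. A minimal sketch (the function and variable names are ours, not the NSTL code):

```python
# Standard double difference of carrier-phase measurements (textbook form).
# Differencing across receivers cancels satellite clock error; differencing
# the resulting single differences across satellites cancels receiver clock
# error, leaving geometry plus integer ambiguity and residual noise.

def double_difference(phase, rx_a, rx_b, sat_i, sat_j):
    """phase maps (receiver, satellite) -> carrier-phase measurement."""
    sd_i = phase[(rx_a, sat_i)] - phase[(rx_b, sat_i)]   # single difference, sat i
    sd_j = phase[(rx_a, sat_j)] - phase[(rx_b, sat_j)]   # single difference, sat j
    return sd_i - sd_j
```

Because the clock terms cancel exactly, the double difference of measurements corrupted by receiver and satellite clock offsets equals the double difference of the underlying geometric ranges, which is what makes sub-meter relative tracking feasible.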
The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic; quantum computing, reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
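As a back-of-envelope illustration of the speed-of-light limitation included in the model, the time for a signal to cross a machine bounds the rate of any globally synchronized operation. The numbers and helper functions below are our own illustration, not the report's model:

```python
# Illustrative bound (our numbers, not the report's model): the speed of
# light sets a floor on one-way signal latency across a machine, which in
# turn caps the rate of globally synchronized operations.

C = 299_792_458.0   # speed of light in vacuum, m/s

def min_crossing_time(diameter_m):
    """Lower bound on one-way signal latency across a machine of given size."""
    return diameter_m / C

def max_global_sync_rate(diameter_m):
    """Upper bound on globally synchronized operations per second."""
    return 1.0 / min_crossing_time(diameter_m)
```

For example, a machine roughly 30 m across cannot complete a global synchronization faster than about 10 million times per second, regardless of transistor speed; this is the kind of geometric constraint that favors mesh-connected architectures, where most communication is local.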
This report summarizes research advances pursued with award funding issued by the DOE to Drexel University through the Presidential Early Career Award (PECASE) program. Professor Rich Cairncross was the recipient of this award in 1997. With it he pursued two related research topics under Sandia's guidance that address the outstanding issue of fluid-structural interactions of liquids with deformable solid materials, focusing mainly on the ubiquitous dynamic wetting problem. The project focus in the first four years was aimed at deriving a predictive numerical modeling approach for the motion of the dynamic contact line on a deformable substrate. A formulation of physical model equations was derived in the context of the Galerkin finite element method in an arbitrary Lagrangian/Eulerian (ALE) frame of reference. The formulation was successfully integrated in Sandia's Goma finite element code and tested on several technologically important thin-film coating problems. The model equations, the finite-element implementation, and results from several applications are given in this report. In the last year of the five-year project the same physical concepts were extended towards the problem of capillary imbibition in deformable porous media. A synopsis of this preliminary modeling and experimental effort is also discussed.
Science historian James Burke is well known for his stories about how technological innovations are intertwined and embedded in the culture of the time, for example, how the steam engine led to safety matches, imitation diamonds, and the landing on the moon. A lesson commonly drawn from his stories is that the path of science and technology (S&T) is nonlinear and unpredictable. Viewed another way, the lesson is that the solution to one problem can lead to solutions to other problems that are not obviously linked in advance, i.e., there is a ripple effect. The motto for Sandia's approach to research and development (R&D) is 'Science with the mission in mind.' In our view, our missions contain the problems that inspire our R&D, and the resulting solutions almost always have multiple benefits. As discussed below, Sandia's Laboratory Directed Research and Development (LDRD) Program is structured to bring problems relevant to our missions to the attention of researchers. LDRD projects are then selected on the basis of their programmatic merit as well as their technical merit. Considerable effort is made to communicate between investment areas to create the ripple effect. In recent years, attention to the ripple effect and to the performance of the LDRD Program in general has increased. Inside Sandia, LDRD funding is recognized as the most precious of research dollars because it is the sole source of discretionary research funding. Hence, there is great interest in maximizing its impact, especially through the ripple effect. Outside Sandia, there is increased scrutiny of the program's performance to be sure that it is not a 'sandbox' in which researchers play without relevance to national security needs. Let us therefore address the performance of the LDRD Program in fiscal year 2003 and then show how it is designed to maximize impact.
The ASCI Grid Services (initially called Distributed Resource Management) project was started under DisCom² when distant and distributed computing was identified as a technology critical to the success of the ASCI Program. The goals of the Grid Services project have been, and continue to be, to provide easy, consistent access to all the ASCI hardware and software resources across the nuclear weapons complex using computational grid technologies; to increase the usability of ASCI hardware and software resources by providing interfaces for resource monitoring, job submission, job monitoring, and job control; and to enable the effective use of high-end computing capability through complex-wide resource scheduling and brokering. To increase acceptance of the new technology, the goals included providing these services in both the unclassified and the classified user environments. This paper summarizes the many accomplishments and lessons learned over approximately five years of the ASCI Grid Services project. It also provides suggestions on how to renew or restart the effort for grid services capability when the need arises.
To establish mechanical material properties of cellular concrete mixes, a series of quasi-static compression and tension tests has been completed. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established failure criteria for the cellular concrete in terms of the stress invariants I₁ and J₂: √J₂ (MPa) = 297.2 − 278.7 exp(−0.000455 I₁ (MPa)) for the 90-pcf concrete, and √J₂ (MPa) = 211.4 − 204.2 exp(−0.000628 I₁ (MPa)) for the 60-pcf concrete.
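The fitted failure surfaces can be evaluated directly. The coefficients below are the ones reported in the abstract; the safe/unsafe check is an illustrative use, not part of the report.

```python
import math

# Empirical failure criterion from the abstract:
#     sqrt(J2) = A - B * exp(c * I1),   stresses in MPa,
# with (A, B, c) fitted per mix density (pcf).
COEFFS = {
    90: (297.2, 278.7, -0.000455),  # 90-pcf cellular concrete
    60: (211.4, 204.2, -0.000628),  # 60-pcf cellular concrete
}

def failure_sqrt_j2(i1_mpa: float, density_pcf: int) -> float:
    """sqrt(J2) value (MPa) on the failure surface at stress invariant I1."""
    a, b, c = COEFFS[density_pcf]
    return a - b * math.exp(c * i1_mpa)

def is_safe(i1_mpa: float, sqrt_j2_mpa: float, density_pcf: int) -> bool:
    """A stress state lies inside the surface if its sqrt(J2) is below the criterion."""
    return sqrt_j2_mpa < failure_sqrt_j2(i1_mpa, density_pcf)
```

For example, at I₁ = 0 the 90-pcf surface gives √J₂ = 297.2 − 278.7 = 18.5 MPa.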
For many decades, engineers and scientists have studied the effects of high power microwaves (HPM) on electronics. These studies usually focus on means of delivering energy to upset electronic equipment and ways to protect equipment from HPM. The motivation for these studies is to develop the knowledge necessary either to cause disruption or to protect electronics from disruption. Since electronic circuits must absorb sufficient energy to fail and the source used to deliver this energy is far away from the electronic circuit, the source must emit a large quantity of energy. In free space, for example, as the distance between the source and the target increases, the source energy must increase with the square of the distance. As a result, the HPM community has dedicated substantial resources to the development of higher energy sources. Recently, members of the HPM community suggested a new disruption mechanism that could potentially cause system disruptions at much lower energy levels. The new mechanism, based on nonlinear dynamics, requires an expanded theory of circuit operation. This report summarizes an investigation of nonlinear electronic circuit behavior as it applies to inductor-resistor-diode circuits (known as the Linsay circuit) and phase-locked loops. With the improvement in computing power and the need to model circuit behavior with greater precision, the nonlinear effects of circuits have become very important. In addition, every integrated circuit has as part of its design a protective circuit. These protective circuits use some variation of semiconductor junctions that can interact with parasitic components present in every real system. Hence, the protective circuit can behave as a Linsay circuit. Although the nonlinear behavior is understandable, it is difficult to model accurately. Many researchers have used classical diode models successfully to show nonlinear effects within predicted regions of operation.
However, these models do not accurately predict measured results. This study shows that models based on SPICE, although they exhibit chaotic behavior, do not properly reproduce circuit behavior without modifying diode parameters. This report describes the models and considerations used to model circuit behavior in the nonlinear range of operation. Further, it describes how a modified SPICE diode model improves the simulation results. We also studied the nonlinear behavior of a phase-locked loop. Phase-locked loops are fundamental building blocks of many major systems (aileron controls, seeker heads, etc.). We showed that an injected RF signal could drive the phase-locked loop into chaos. During these chaotic episodes, the frequency of the phase-locked loop takes excursions outside its normal range of operation. In addition to these excursions, the phase-locked loop and the system it is controlling require some time to return to normal operation. The phase-locked loop needs to be upset only often enough, and long enough, to keep it off balance.
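The injection mechanism can be illustrated with a minimal first-order phase-locked loop model. This is a sketch under assumed parameter values, not the report's circuit model: the phase error φ obeys dφ/dt = Δω − K·sin(φ) + A·sin(ωᵢt), where Δω is the frequency offset, K the loop gain, and A, ωᵢ the amplitude and frequency of the injected signal.

```python
import math

# Minimal first-order PLL phase-error model with an injected tone (illustrative
# values, not the report's; forward-Euler integration).
def simulate_pll(dw=0.5, K=1.0, A=2.0, wi=0.9, dt=1e-3, steps=100_000):
    """Return the phase-error history phi(t) for the driven loop."""
    phi, t, history = 0.0, 0.0, []
    for _ in range(steps):
        phi += (dw - K * math.sin(phi) + A * math.sin(wi * t)) * dt
        t += dt
        history.append(phi)
    return history
```

With A = 0 the phase error settles to a constant offset (the locked state, sin φ = Δω/K); with a sufficiently strong injected tone the phase error can slip through many cycles, which is the loss-of-lock excursion behavior described above.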
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared spectrometers from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with those of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and ease of use makes the new ACLS methods the preferred algorithms for multivariate spectral calibrations.
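As context for the CLS-based methods above, the classical least squares calibration and prediction steps can be sketched on synthetic data. The ACLS augmentation itself is described in the report and not reproduced here; all dimensions and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_components, n_samples = 100, 3, 20

# Synthetic pure-component spectra and training concentrations (illustrative).
K_true = rng.random((n_components, n_channels))
C_train = rng.random((n_samples, n_components))
A_train = C_train @ K_true + 1e-4 * rng.standard_normal((n_samples, n_channels))

# CLS calibration: estimate pure-component spectra K from known concentrations,
# solving A = C K in the least squares sense.
K_hat = np.linalg.lstsq(C_train, A_train, rcond=None)[0]

# CLS prediction: estimate the concentrations of a new spectrum from K_hat.
c_new = rng.random(n_components)
a_new = c_new @ K_true
c_pred = np.linalg.lstsq(K_hat.T, a_new, rcond=None)[0]
```

The explicit spectral model (the K matrix) is what ACLS augments with additional columns; that explicitness is also what makes calibration transfer and maintenance more natural than in factor-based methods like PLS.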
The goal of this Laboratory Directed Research & Development (LDRD) effort was to design, synthesize, and evaluate organic-inorganic nanocomposite membranes for solubility-based separations, such as the removal of higher hydrocarbons from air streams, using experiment and theory. We synthesized membranes by depositing alkylchlorosilanes on the nanoporous surfaces of alumina substrates, using techniques from the self-assembled monolayer literature to control the microstructure. We measured the permeability of these membranes to different gas species, in order to evaluate their performance in solubility-based separations. Membrane design goals were met by manipulating the pore size, alkyl group size, and alkyl surface density. We employed molecular dynamics simulation to gain further understanding of the relationship between membrane microstructure and separation performance.
The Maximum Permissible Exposure (MPE) is central to laser hazard analysis and is in general a function of the radiant wavelength. The selection of a laser for a particular application may allow for flexibility in the choice of radiant wavelength, which in turn allows a particular laser to be selected based on the MPE and the hazards associated with that wavelength. Calculations of the MPEs for various laser wavelength ranges are presented. Techniques for determining eye-safe viewing distances for both aided and unaided viewing, and for determining flight hazard distances, are presented as well.
A key factor in our ability to produce and predict the stability of metal-based macro- to nano-scale structures and devices is a fundamental understanding of the localized nature of corrosion. Corrosion processes where physical dimensions become critical in the degradation process include localized corrosion initiation in passivated metals, microgalvanic interactions in metal alloys, and localized corrosion in structurally complex materials like nanocrystalline metal films under atmospheric and inundated conditions. This project focuses on two areas of corrosion science where a fundamental understanding of processes occurring at critical dimensions is not currently available. Sandia will study the critical length scales necessary for passive film breakdown in the inundated aluminum (Al) system and the chemical processes and transport in ultra-thin water films relevant to the atmospheric corrosion of nanocrystalline tungsten (W) films. Techniques are required that provide spatial information without significantly perturbing or masking the underlying relationships. Al passive film breakdown is governed by the relationship between area of the film sampled and its defect structure. We will combine low current measurements with microelectrodes to study the size scale required to observe a single initiation event and record electrochemical breakdown events. The resulting quantitative measure of stability will be correlated with metal grain size, secondary phase size and distribution to understand which metal properties control stability at the macro- and nano-scale. Mechanisms of atmospheric corrosion on W are dependent on the physical dimensions and continuity of adsorbed water layers as well as the chemical reactions that take place in this layer. We will combine electrochemical and scanning probe microscopic techniques to monitor the chemistry and resulting material transport in these thin surface layers. 
A description of the length scales responsible for driving the corrosion of the nanocrystalline metal films will be developed. The techniques developed and information derived from this work will be used to understand and predict degradation processes in microelectronic and microsystem devices critical to Sandia's mission.
Chemical disinfection and inactivation of viruses is largely understudied, but is very important, especially in the case of highly infectious viruses. The purpose of this LDRD was to determine the efficacy of the decontamination formulations developed at Sandia National Laboratories against Bovine Coronavirus (BCV) as a surrogate for the coronavirus that causes Severe Acute Respiratory Syndrome (SARS) in humans. The outbreak of SARS in late 2002 resulted from a highly infectious virus that was able to survive and remain infectious for extended periods. For this study, preliminary testing with the MS-2 and T4 bacteriophages of Escherichia coli was conducted to develop virucidal methodology for verifying inactivation after treatment with the test formulations, following AOAC germicidal methodologies. After the determination of various experimental parameters (e.g., exposure time, concentration) of the formulations, final testing was conducted on BCV. All experiments were conducted with various organic challenges (horse serum, bovine feces, compost) for results that more accurately represent field-use conditions. MS-2 and T4 were slightly more resistant than BCV and required a 2-minute exposure, while BCV was completely inactivated after a 1-minute exposure. These results were also consistent for the testing conducted in the presence of the various organic challenges, indicating that the test formulations are highly effective for real-world application.
The conversion of nitrogen in char (char-N) to NO was studied both experimentally and computationally. In the experiments, pulverized coal char was produced from a U.S. high-volatile bituminous coal and burned in a dilute suspension at 1170 K, 1370 K and 1570 K, at an excess oxygen concentration of 8% (dry), with different levels of background NO. In some experiments, hydrogen bromide (HBr) was added to the vitiated air as a tool to alter the concentration of gas-phase radicals. During char combustion, low NO concentration and high temperature promoted the conversion of char-N to NO. HBr addition altered NO production in a way that depended on temperature. At 1170 K the presence of HBr increased NO production by 80%, whereas the addition of HBr decreased NO production at higher temperatures by 20%. To explain these results, three mechanistic descriptions of char-N evolution during combustion were evaluated with computational models that simulated (a) homogeneous chemistry in a plug-flow reactor with entrained particle combustion, and (b) homogeneous chemistry in the boundary layer surrounding a reacting particle. The observed effect of HBr on NO production could only be captured by a chemical mechanism that considered significant release of HCN from the char particle. Release of HCN also explained changes in NO production with temperature and NO concentration. Thus, the combination of experiments and simulations suggests that HCN evolution from the char during pulverized coal combustion plays an essential role in net NO production. Keywords: Coal; Char; Nitric oxide; Halogen.
Microelectromechanical systems (MEMS) comprise a new class of devices that include various forms of sensors and actuators. Recent studies have shown that microscale cantilever structures are able to detect a wide range of chemicals, biomolecules or even single bacterial cells. In this approach, cantilever deflection replaces optical fluorescence detection thereby eliminating complex chemical tagging steps that are difficult to achieve with chip-based architectures. A key challenge to utilizing this new detection scheme is the incorporation of functionalized MEMS structures within complex microfluidic channel architectures. The ability to accomplish this integration is currently limited by the processing approaches used to seal lids on pre-etched microfluidic channels. This report describes Sandia's first construction of MEMS instrumented microfluidic chips, which were fabricated by combining our leading capabilities in MEMS processing with our low-temperature photolithographic method for fabricating microfluidic channels. We have explored in-situ cantilevers and other similar passive MEMS devices as a new approach to directly sense fluid transport, and have successfully monitored local flow rates and viscosities within microfluidic channels. Actuated MEMS structures have also been incorporated into microfluidic channels, and the electrical requirements for actuation in liquids have been quantified with an elegant theory. Electrostatic actuation in water has been accomplished, and a novel technique for monitoring local electrical conductivities has been invented.
The lead probe neutron detector was originally designed by Spencer and Jacobs in 1965. The detector is based on lead activation due to the following neutron scattering reactions: ²⁰⁷Pb(n, n')²⁰⁷ᵐPb and ²⁰⁸Pb(n, 2n)²⁰⁷ᵐPb. Delayed gammas from the metastable state of ²⁰⁷ᵐPb are counted using a plastic scintillator. The half-life of ²⁰⁷ᵐPb is 0.8 seconds. In the work reported here, MCNP was used to optimize the efficiency of the lead probe by suitably modifying the original geometry. A prototype detector was then built and tested. A 'layer cake' design was investigated in which thin (< 5 mm) layers of lead were sandwiched between thicker (≈1-2 cm) layers of scintillator. An optimized 'layer cake' design had Figures of Merit (derived from the code) which were a factor of 3 greater than the original lead probe for DD neutrons, and a factor of 4 greater for DT neutrons, while containing 30% less lead. A smaller scale, 'proof of principle' prototype was built by Bechtel/Nevada to verify the code results. Its response to DD neutrons was measured using the DD dense plasma focus at Texas A&M and it conformed to the predicted performance. A voltage and discriminator sweep was performed to determine optimum sensitivity settings. It was determined that a calibration operating point could be obtained using a ¹³³Ba 'bolt' as is the case with the original lead probe.
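The role of the 0.8 s half-life in choosing a delayed-gamma counting window can be sketched directly. This assumes prompt activation and pure exponential decay, and is an illustration rather than the report's analysis.

```python
import math

T_HALF = 0.8                    # half-life of Pb-207m in seconds (from the abstract)
LAMBDA = math.log(2) / T_HALF   # decay constant, 1/s

def surviving_fraction(t: float) -> float:
    """Fraction of Pb-207m nuclei still excited a time t after activation."""
    return math.exp(-LAMBDA * t)

def counted_fraction(t_start: float, t_stop: float) -> float:
    """Fraction of all delayed gammas emitted inside a counting window.
    Shows why the window must open promptly: waiting one half-life
    before counting forfeits half of the available signal."""
    return surviving_fraction(t_start) - surviving_fraction(t_stop)
```

For example, a window opening at t = 0 and closing at 8 s (ten half-lives) captures essentially all of the delayed gammas, while a window that opens 0.8 s late captures at most half.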
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled 'Obstacle Detection for Autonomous Navigation'. The principal goal of this project was to develop a mathematical framework for obstacle detection. The framework provides a basis for solutions to many complex obstacle detection problems critical to successful autonomous navigation. Another goal of this project was to characterize sensing requirements in terms of physical characteristics of obstacles, vehicles, and terrain. For example, a specific vehicle traveling at a specific velocity over a specific terrain requires a sensor with a certain range of detection, resolution, field-of-view, and sufficient sensitivity to specific obstacle characteristics. In some cases, combinations of sensors were required to distinguish between different hazardous obstacles and benign terrain. In our framework, the problem was posed as a multidimensional, multiple-hypothesis pattern recognition problem. Features were extracted from selected sensors that allow hazardous obstacles to be distinguished from benign terrain and other types of obstacles. Another unique thrust of this project was to characterize different terrain classes with respect to both positive (e.g., rocks, trees, fences) and negative (e.g., holes, ditches, drop-offs) obstacles. The density of various hazards per square kilometer was statistically quantified for different terrain categories (e.g., high desert, ponderosa forest, and prairie). This quantification reflects the scale (size) and mobility of different types of vehicles. The tradeoffs among obstacle detection, position location, path planning, and vehicle mobility capabilities were also characterized.
This report addresses the development of automated video-screening technology to assist security forces in protecting our homeland against terrorist threats. A threat of specific interest to this project is the covert placement and subsequent remote detonation of bombs (e.g., briefcase bombs) inside crowded public facilities. Unlike existing video motion detection systems, the video-screening technology described in this report is capable of detecting changes in the static background of an otherwise dynamic environment - environments where motion and human activities are persistent. Our goal was to quickly detect changes in the background - even under conditions when the background is visible to the camera less than 5% of the time. Instead of subtracting the background to detect movement or changes in a scene, we subtracted the dynamic scene variations to produce an estimate of the static background. Subsequent comparisons of static background estimates are used to detect changes in the background. Detected changes can be used to alert security forces to the presence and location of potential threats. The results of this research are summarized in two Microsoft PowerPoint presentations included with this report.
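As a point of contrast with the report's method, a simple per-pixel temporal median illustrates the idea of estimating the static background rather than subtracting it. Note the hedge: the median variant shown here requires the background to be visible in a majority of frames, whereas the report targets scenes where it is visible less than 5% of the time and therefore needs a more robust estimator.

```python
import numpy as np

def estimate_background(frames: np.ndarray) -> np.ndarray:
    """Per-pixel temporal median of a (n_frames, height, width) stack.
    Transient foreground objects vanish if each pixel shows the true
    background in more than half of the frames."""
    return np.median(frames, axis=0)

def changed_pixels(bg_old: np.ndarray, bg_new: np.ndarray, threshold=25.0):
    """Boolean mask of pixels whose background estimate changed appreciably
    between two estimation epochs (threshold in gray levels, illustrative)."""
    return np.abs(bg_new.astype(float) - bg_old.astype(float)) > threshold
```

Comparing successive background estimates, rather than successive frames, is what allows a newly deposited static object (e.g., a briefcase) to be detected amid constant motion.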
A significant barrier to the deployment of distributed energy resources (DER) onto the power grid is uncertainty on the part of utility engineers regarding impacts of DER on their distribution systems. Because of the many possible combinations of DER and local power system characteristics, these impacts can most effectively be studied by computer simulation. The goal of this LDRD project was to develop and experimentally validate models of transient and steady state source behavior for incorporation into utility distribution analysis tools. Development of these models had not been prioritized either by the distributed-generation industry or by the inverter industry. A functioning model of a selected inverter-based DER was developed in collaboration with both the manufacturer and industrial power systems analysts. The model was written in the PSCAD simulation language, a variant of the ElectroMagnetic Transients Program (EMTP), a code that is widely used and accepted by utilities. A stakeholder team was formed and a methodology was established to address the problem. A list of detailed DER/utility interaction concerns was developed and prioritized. The list indicated that the scope of the problem significantly exceeded resources available for this LDRD project. As this work progresses under separate funding, the model will be refined and experimentally validated. It will then be incorporated in utility distribution analysis tools and used to study a variety of DER issues. The key next step will be design of the validation experiments.
AlGaN/GaN test structures were fabricated with an etched constriction. A nitrogen plasma treatment was used to remove the disordered layer, including natural oxides on the AlGaN surface, before the growth of the silicon nitride passivation film on several of the test structures. A pulsed voltage input, with a 200 ns pulse width, and a four-point measurement were used in a 50 Ω environment to determine the room temperature velocity-field characteristic of the structures. The samples performed similarly at low fields, giving a low-field mobility of 545 cm² V⁻¹ s⁻¹. The surface-treated sample performed slightly better at higher fields than the untreated sample. The highest velocity measured was 1.25 × 10⁷ cm s⁻¹ at a field of 26 kV cm⁻¹.
We demonstrate the presence of a resonant interaction between a pair of coupled quantum wires, which are realized in the ultra-high mobility two-dimensional electron gas of a GaAs/AlGaAs quantum well. Measuring the conductance of one wire, as the width of the other is varied, we observe a resonant peak in its conductance that is correlated with the point at which the swept wire pinches off. We discuss this behavior in terms of recent theoretical predictions concerning local spin-moment formation in quantum wires.
Drainage of water from the region between an advancing probe tip and a flat sample is reconsidered under the assumption that the tip and sample surfaces are both coated by a thin water "interphase" (of width approximately a few nanometers) whose viscosity is much higher than that of the bulk liquid. A formula derived by solving the Navier-Stokes equations allows one to extract an interphase viscosity of ≈59 kPa·s (or ≈6.6 × 10⁷ times the viscosity of bulk water at 25°C) from interfacial force microscope measurements with both tip and sample functionalized hydrophilic by OH-terminated tri(ethylene glycol) undecylthiol self-assembled monolayers.
The use of triaxial magnetic fields to create a variety of isotropic and anisotropic magnetic particle/polymer composites with significantly enhanced magnetic susceptibilities was analyzed. It was shown that a rich variety of structures can be created because both the field amplitudes and frequencies can be varied. It was found that the susceptibility anisotropy of these composites can be controlled over a wide range by judicious adjustment of the relative field amplitudes. The results show that with coherent particle motions, magnetostatic energies that are quite close to the ground state can be achieved.
The geologic model implicit in the original site characterization report for the Bayou Choctaw Strategic Petroleum Reserve Site near Baton Rouge, Louisiana, has been converted to a numerical, computer-based three-dimensional model. The original site characterization model was successfully converted with minimal modifications and use of new information. The geometries of the salt diapir, selected adjacent sedimentary horizons, and a number of faults have been modeled. Models of a partial set of the several storage caverns that have been solution-mined within the salt mass are also included. Collectively, the converted model appears to be a relatively realistic representation of the geology of the Bayou Choctaw site as known from existing data. A small number of geometric inconsistencies and other problems inherent in 2-D vs. 3-D modeling have been noted. Most of the major inconsistencies involve faults inferred from drill hole data only. Modern computer software allows visualization of the resulting site model and its component submodels with a degree of detail and flexibility that was not possible with conventional, two-dimensional and paper-based geologic maps and cross sections. The enhanced visualizations may be of particular value in conveying geologic concepts involved in the Bayou Choctaw Strategic Petroleum Reserve site to a lay audience. A Microsoft Windows™ PC-based viewer and user-manipulable model files illustrating selected features of the converted model are included in this report.
These Technical Safety Requirements (TSRs) identify the operational conditions, boundaries, and administrative controls for the safe operation of the Auxiliary Hot Cell Facility (AHCF) at Sandia National Laboratories, in compliance with 10 CFR 830, 'Nuclear Safety Management.' The bases for the TSRs are established in the AHCF Documented Safety Analysis (DSA), which was issued in compliance with 10 CFR 830, Subpart B, 'Safety Basis Requirements.' The AHCF Limiting Conditions of Operation (LCOs) apply only to the ventilation system, the high efficiency particulate air (HEPA) filters, and the inventory. Surveillance Requirements (SRs) apply to the ventilation system, HEPA filters, and associated monitoring equipment; to certain passive design features; and to the inventory. No Safety Limits are necessary, because the AHCF is a Hazard Category 3 nuclear facility.
This report describes an LDRD-supported experimental-theoretical collaboration on the enhanced low-dose-rate sensitivity (ELDRS) problem. The experimental work led to a method for elimination of ELDRS, and the theoretical work led to a suite of bimolecular mechanisms that explain ELDRS and are in good agreement with various ELDRS experiments. The model shows that the radiation effects are linear in the limit of very low dose rates. In this limit, the regime of most concern, the model provides a good estimate of the worst-case effects of low-dose-rate ionizing radiation.
This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.
An increase in photocurrent has been observed at silicon electrodes coated with nanostructured porous silica films as compared to bare, unmodified silicon. Ultimately, to utilize this effect in devices such as sensors or microchip power supplies, the physical phenomena behind this observation need to be well characterized. To this end, Electrochemical Impedance Spectroscopy (EIS) was used to characterize the effect of surfactant-templated mesoporous silica films deposited onto silicon electrodes on the electrical properties of the electrode space-charge region in an aqueous electrolyte solution, as the electrical properties of this space-charge region are responsible for the photobehavior of semiconductor devices. A significant shift in apparent flat-band potential was observed for electrodes modified with the silica film when compared to bare electrodes; the reliability of this data is suspect, however, due to contributions from surface states to the overall capacitance of the system. To assist in the interpretation of this EIS data, a series of measurements at Pt electrodes was performed with the hope of decoupling electrode and film contributions from the EIS spectra. Surprisingly, the frequency-dependent impedance data for Pt electrodes coated with a surfactant-templated film was nearly identical to that observed for bare Pt electrodes, indicating that the mesoporous film had little effect on the transport of small electrolyte ions to the electrode surface. Pore-blocking agents (tetraalkylammonium salts) were not observed to inhibit this transport process. However, untemplated (non-porous) silica films dramatically increased film resistance, indicating that our EIS data for the Pt electrodes is reliable. Overall, our preliminary conclusion is that a shift in electrical properties in the space-charge region induced by the presence of a porous silica film is responsible for the increase in observed photocurrent.
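One standard route from EIS-derived space-charge capacitance to an apparent flat-band potential is a Mott-Schottky analysis. The sketch below uses synthetic, noise-free data; all material values and the "true" flat-band potential are illustrative assumptions, not the report's measurements.

```python
import numpy as np

# Mott-Schottky relation for an n-type semiconductor/electrolyte junction:
#     1/C^2 = (2 / (e * eps * eps0 * Nd)) * (V - Vfb - kT/e)
# so a linear fit of 1/C^2 vs V yields Vfb from the x-intercept.
e, kT_e = 1.602e-19, 0.0257            # elementary charge (C), kT/e at 25 C (V)
eps, eps0, Nd = 11.7, 8.854e-12, 1e21  # Si permittivity, vacuum perm. (F/m), donors/m^3
Vfb_true = -0.4                        # assumed "true" flat-band potential (V)

V = np.linspace(0.0, 1.0, 30)          # applied potentials (V)
slope = 2.0 / (e * eps * eps0 * Nd)
inv_C2 = slope * (V - Vfb_true - kT_e)  # synthetic, noise-free 1/C^2 data

# Linear fit; the x-intercept equals Vfb + kT/e.
m, b = np.polyfit(V, inv_C2, 1)
Vfb_est = -b / m - kT_e
```

In practice, as the abstract notes, surface-state contributions to the measured capacitance can make the extracted flat-band potential only "apparent," which is exactly the reliability concern raised above.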
The waters of the Pecos River in New Mexico must be delivered to three primary users: (1) the Pecos River Compact: each year a percentage of water from natural river flow must be delivered to Texas; (2) agriculture: the Carlsbad Irrigation District has a storage and diversion right and the Fort Sumner Irrigation District has a direct flow diversion right; and (3) the Endangered Species Act: an as yet unspecified amount of water is to support Pecos Bluntnose Shiner Minnow habitat within and along the Pecos River. Currently, the United States Department of the Interior Bureau of Reclamation, the New Mexico Interstate Stream Commission, and the United States Department of the Interior Fish and Wildlife Service are studying the Pecos Bluntnose Shiner Minnow habitat preference. Preliminary work by Fish and Wildlife personnel in the critical habitat suggests that water depth and water velocity are key parameters defining minnow habitat preference. However, river flows that provide adequate preferred habitat to support this species have yet to be determined. Because there is a limited amount of water in the Pecos River and its reservoirs, it is critical to allocate water efficiently such that habitat is maintained while honoring commitments to agriculture and to the Pecos River Compact. This study identifies the relationship between Pecos River flow rates in cubic feet per second (cfs) and water depth and water velocity.
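As background on how discharge, depth, and velocity relate in an open channel, one textbook relation is Manning's equation. To be clear, the study's flow-habitat relationships are empirical for the Pecos River; the sketch below, with an assumed roughness coefficient and a wide rectangular channel approximation, is not the study's method.

```python
# Manning's equation (US customary units), wide-channel approximation where
# hydraulic radius ~ depth. The roughness n = 0.035 is an assumed value
# typical of natural channels, not a Pecos River measurement.
def manning_velocity(depth_ft: float, slope: float, n: float = 0.035) -> float:
    """Mean velocity (ft/s): V = (1.486/n) * R^(2/3) * S^(1/2), with R ~ depth."""
    return (1.486 / n) * depth_ft ** (2.0 / 3.0) * slope ** 0.5

def discharge_cfs(depth_ft: float, width_ft: float, slope: float,
                  n: float = 0.035) -> float:
    """Discharge Q = V * A (cfs) for a wide rectangular channel."""
    return manning_velocity(depth_ft, slope, n) * depth_ft * width_ft
```

The qualitative point is the one the study exploits: for a fixed channel, both depth and velocity rise monotonically with discharge, so habitat criteria stated in depth and velocity translate into minimum flow rates.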
We have developed infrastructure, utilities and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to 56% reduction of communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.
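The communication-volume advantage of hypergraph partitioning comes from the metric it optimizes: the connectivity-minus-one (lambda-1) metric, which counts, for each hyperedge, the number of distinct parts its vertices span beyond the first. A minimal sketch of evaluating that metric for a given partition (illustrative code, not the Zoltan implementation):

```python
def comm_volume(hyperedges, part):
    """Connectivity-1 (lambda-1) metric of a partition: for each
    hyperedge, count the distinct parts its vertices touch, minus one,
    and sum over all hyperedges. This exactly models the communication
    volume of a parallel sparse matrix-vector product, which graph
    partitioning's edge-cut metric only approximates.

    hyperedges: iterable of vertex lists; part: maps vertex -> part id.
    """
    total = 0
    for edge in hyperedges:
        parts_touched = {part[v] for v in edge}
        total += len(parts_touched) - 1
    return total
```

A hyperedge contained entirely in one part contributes nothing; one spanning three parts contributes two, matching the two messages its data must travel.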
A laser safety and hazard analysis was performed for the airborne AURA (Big Sky Laser Technology) lidar system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for the Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for the Safe Use of Lasers Outdoors. The AURA lidar system is installed in the instrument pod of a Proteus airframe and is used to perform laser interaction experiments and tests at various national test sites. The targets are located at various distances or ranges from the airborne platform. In order to protect personnel who may be in the target area and may be subjected to exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength, calculate the Nominal Ocular Hazard Distance (NOHD), and determine the maximum 'eye-safe' dwell times for various operational altitudes and conditions. It was also necessary to calculate the appropriate minimum Optical Density (ODmin) of the laser safety eyewear used by authorized personnel who may receive hazardous exposures during ground-based operations of the airborne AURA laser system (system alignment and calibration).
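The NOHD and minimum optical density calculations follow standard closed forms from ANSI Z136.1: the range at which a diverging beam's irradiance falls to the MPE, and OD = log10 of the worst-case exposure over the MPE. A hedged sketch (the numerical inputs below are illustrative, not AURA system parameters):

```python
import math

def nohd(power_w, divergence_rad, beam_diam_m, mpe_w_per_m2):
    """Nominal Ocular Hazard Distance (m) for a circular CW beam,
    using the standard ANSI Z136.1 small-source form:
        NOHD = (sqrt(4*Phi / (pi*MPE)) - a) / phi
    where Phi is beam power (W), a the emergent beam diameter (m),
    phi the full-angle divergence (rad), MPE in W/m^2.
    All inputs must use these consistent SI units."""
    return (math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_m2))
            - beam_diam_m) / divergence_rad

def min_optical_density(worst_case_exposure, mpe):
    """Minimum eyewear optical density: OD_min = log10(H0 / MPE),
    with the worst-case exposure H0 and MPE in the same units."""
    return math.log10(worst_case_exposure / mpe)
```

For pulsed systems like a lidar, the MPE itself must first be evaluated per wavelength and pulse regime from the Z136.1 tables; the same range and OD formulas then apply with energy-based units.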
The Advanced Concepts Group (ACG) at Sandia National Laboratories is exploring the use of Red Teaming to help intelligence analysts with two key processes: determining what a piece or pieces of information might imply, and deciding what other pieces of information need to be found to support or refute hypotheses about what actions a suspected terrorist organization might be pursuing. In support of this effort, the ACG hosted a terrorism red-gaming event in Albuquerque on July 22-24, 2003. The game involved two 'red teams' playing the roles of two terrorist cells - one focused on implementing an RDD attack on the DC subway system and one focused on a bio attack against the same target - and two 'black teams' playing the roles of the intelligence collection system and of intelligence analysts trying to decide what plans the red teams might be pursuing. This exercise successfully engaged human experts to seed a proposed compute engine with detailed operational plans for hypothetical terrorist scenarios.
The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II and explains how to install and execute the software.