Analysis of TOF-SIMS spectral series and spectral image series using AXSIA
Abstract not provided.
Proposed for publication in IEEE Transactions on Geoscience and Remote Sensing.
Proposed for publication in the Journal of Crystal Growth.
This paper presents an automated tool for local, conformal refinement of all-hexahedral meshes based on the insertion of multi-directional twist planes into the spatial twist continuum. The refinement process is divided into independent refinement steps. In each step, an inserted twist plane modifies a single sheet or two parallel hex sheets. Six basic templates, chosen and oriented based on the number of nodes selected for refinement, replace original mesh elements. The contributions of this work are (1) the localized refinement of mesh regions defined by individual or groups of nodes, element edges, element faces or whole elements within an all-hexahedral mesh, (2) the simplification of template-based refinement into a general method and (3) the use of hex sheets for the management of template insertion in multi-directional refinement.
We present Mark-It, a marking user interface that reduced the time to decompose a set of CAD models exhibiting a range of decomposition problems by as much as fifty percent. Instead of performing about 50 mesh decomposition operations using a conventional UI, Mark-It allows users to perform the same operations by drawing 2D marks in the context of the 3D model. The motivation for this study was to test the potential of a marking user interface for the decomposition aspect of the meshing process. To evaluate Mark-It, we designed a user study that consisted of a brief tutorial of both the non-marking and marking UIs, performing the steps to decompose four models contributed to us by experienced meshers at Sandia National Laboratories, and a post-study debriefing to rate the speed, preference, and overall learnability of the two interfaces. Our primary contributions are a practical user interface design for speeding up mesh decomposition and an evaluation that helps characterize the pros and cons of the new user interface.
In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.
For telemedicine to realize the vision of anywhere, anytime access to care, the question of how to create a fully interoperable technical infrastructure must be addressed. After briefly discussing how 'technical interoperability' compares with other types of interoperability being addressed in the telemedicine community today, this paper describes reasons for pursuing technical interoperability, presents a proposed framework for realizing technical interoperability, identifies key issues that will need to be addressed if technical interoperability is to be achieved, and suggests a course of action that the telemedicine community might follow to accomplish this goal.
This paper describes an assessment of a variety of battery technologies for high pulse power applications. Sandia National Laboratories (SNL) is performing the assessment activities in collaboration with NSWC-Dahlgren. After an initial study of specifications and manufacturers' data, the assessment team identified the following electrochemistries as promising for detailed evaluation: lead-acid (Pb-acid), nickel/metal hydride (Ni/MH), nickel/cadmium (Ni/Cd), and a recently released high-power lithium-ion (Li-ion) technology. For the first three technologies, test cells were obtained from at least two, and in some instances several, companies that specialize in the respective electrochemistry. For the Li-ion technology, cells from a single company were obtained and are being tested. All cells were characterized in Sandia's battery test labs. After several characterization tests, the Pb-acid technology was identified as a backup technology for the demanding power levels of these tests. The other technologies showed varying degrees of promise. Following additional cell testing, the assessment team determined that the Ni/MH technology was suitable for scale-up and acquired 50-V Ni/MH modules from two suppliers for testing. Additional tests are underway to better characterize the Ni/Cd and Li-ion technologies as well. This paper will present the testing methodology and results from these assessment activities.
Proposed for publication in IJNME.
An experimental program was conducted to study a proposed approach for oil reintroduction in the Strategic Petroleum Reserve (SPR). The goal was to assess whether useful oil is rendered unusable through formation of a stable oil-brine emulsion during reintroduction of degassed oil into the brine layer in storage caverns. An earlier report (O'Hern et al., 2003) documented the first stage of the program, in which simulant liquids were used to characterize the buoyant plume that is produced when a jet of crude oil is injected downward into brine. This report documents the final two test series. In the first, the plume hydrodynamics experiments were completed using SPR oil, brine, and sludge. In the second, oil reinjection into brine was run for approximately 6 hours, and sampling of oil, sludge, and brine was performed over the next 3 months so that the long-term effects of oil-sludge mixing could be assessed. For both series, the experiment consisted of a large transparent vessel that is a scale model of the proposed oil-injection process at the SPR. For the plume hydrodynamics experiments, an oil layer was floated on top of a brine layer in the first test series and on top of a sludge layer residing above the brine in the second test series. The oil was injected downward through a tube into the brine at a prescribed depth below the oil-brine or sludge-brine interface. Flow rates were determined by scaling to match the ratio of buoyancy to momentum between the experiment and the SPR. Initially, the momentum of the flow produces a downward jet of oil below the tube end. Subsequently, the oil breaks up into droplets due to shear forces, buoyancy dominates the flow, and a plume of oil droplets rises to the interface. The interface was deflected upward by the impinging oil-brine plume. Videos of this flow were recorded for scaled flow rates that bracket the equivalent pumping rates in an SPR cavern during injection of degassed oil. 
Image-processing analyses were performed to quantify the penetration depth and width of the oil jet. The measured penetration depths were shallow, as predicted by penetration-depth models, in agreement with the assumption that the flow is buoyancy-dominated, rather than momentum-dominated. The turbulent penetration depth model overpredicted the measured values. Both the oil-brine and oil-sludge-brine systems produced plumes with hydrodynamic characteristics similar to the simulant liquids previously examined, except that the penetration depth was 5-10% longer for the crude oil. An unexpected observation was that centimeter-size oil 'bubbles' (thin oil shells completely filled with brine) were produced in large quantities during oil injection. The mixing experiments also used layers of oil, sludge, and brine from the SPR. Oil was injected at a scaled flow rate corresponding to the nominal SPR oil injection rates. Injection was performed for about 6 hours and was stopped when it was evident that brine was being ingested by the oil withdrawal pump. Sampling probes located throughout the oil, sludge, and brine layers were used to withdraw samples before, during, and after the run. The data show that strong mixing caused the water content in the oil layer to increase sharply during oil injection but that the water content in the oil dropped back to less than 0.5% within 16 hours after injection was terminated. On the other hand, the sediment content in the oil indicated that the sludge and oil appeared to be well mixed. The sediment settled slowly but the oil had not returned to the baseline, as-received, sediment values after approximately 2200 hours (3 months). Ash content analysis indicated that the sediment measured during oil analysis was primarily organic.
Proposed for publication in CrossTalk.
Existing approaches in multiscale science and engineering have evolved from a range of ideas and solutions that are reflective of their original problem domains. As a result, research in multiscale science has followed widely diverse and disjoint paths, which presents a barrier to cross pollination of ideas and application of methods outside their application domains. The status of the research environment calls for an abstract mathematical framework that can provide a common language to formulate and analyze multiscale problems across a range of scientific and engineering disciplines. In such a framework, critical common issues arising in multiscale problems can be identified, explored and characterized in an abstract setting. This type of overarching approach would allow categorization and clarification of existing models and approximations in a landscape of seemingly disjoint, mutually exclusive and ad hoc methods. More importantly, such an approach can provide context for both the development of new techniques and their critical examination. As with any new mathematical framework, it is necessary to demonstrate its viability on problems of practical importance. At Sandia, lab-centric, prototype application problems in fluid mechanics, reacting flows, magnetohydrodynamics (MHD), shock hydrodynamics and materials science span an important subset of DOE Office of Science applications and form an ideal proving ground for new approaches in multiscale science.
Proposed for publication in Langmuir.
ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the Users Guide [SHT04], and to the ML web site, http://software.sandia.gov/ml.
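For readers unfamiliar with multigrid, the core idea that ML implements can be illustrated with a generic two-grid cycle for the 1D Poisson equation: damp high-frequency error with a cheap smoother, then remove the remaining smooth error with a coarse-grid correction. This Python sketch is purely illustrative of the method and is unrelated to ML's actual C/C++ interface; all names here are made up for the example.

```python
import numpy as np

def poisson_matrix(n):
    # 1D Poisson operator (Dirichlet BCs) on n interior points, h = 1/(n+1)
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    # weighted Jacobi smoothing: damps the high-frequency error components
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def two_grid(A, u, f):
    # one two-grid cycle: smooth, correct on a coarser grid, smooth again
    n = A.shape[0]            # n must be odd so the coarse grid nests
    u = jacobi(A, u, f, sweeps=3)
    r = f - A @ u
    nc = (n - 1) // 2
    # full-weighting restriction of the residual to the coarse grid
    rc = np.array([0.25 * r[2*i] + 0.5 * r[2*i + 1] + 0.25 * r[2*i + 2]
                   for i in range(nc)])
    ec = np.linalg.solve(poisson_matrix(nc), rc)  # exact coarse solve
    # linear interpolation of the coarse correction back to the fine grid
    e = np.zeros(n)
    for i in range(nc):
        e[2*i] += 0.5 * ec[i]
        e[2*i + 1] += ec[i]
        e[2*i + 2] += 0.5 * ec[i]
    return jacobi(A, u + e, f, sweeps=3)
```

A full multigrid method such as ML's applies this correction recursively (and, in the algebraic case, constructs the coarse operators and transfer operators from the matrix itself rather than from a grid hierarchy).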
Proposed for publication in Tetrahedron Letters.
We have been engaged in a search for coordination catalysts for the copolymerization of polar monomers (such as vinyl chloride and vinyl acetate) with ethylene. We have been investigating complexes of late transition metals with heterocyclic ligands. In this report we describe the synthesis of a symmetrical bis-thiadiazole. We have characterized one of the intermediates using single crystal X-ray diffraction. Several unsuccessful approaches toward 1 are also described, which shed light on some of the unique chemistry of thiadiazoles.
Proposed for publication in IEEE Transactions on Antennas and Propagation.
Proposed for publication in Journal of Computational Physics.
Proposed for publication in the Journal of the American Ceramic Society.
A new family of framework titanosilicates, A₂TiSi₆O₁₅ (A = K, Rb, Cs) (space group Cc), has recently been synthesized using the hydrothermal method. This group of phases can potentially be utilized for storage of radioactive elements, particularly ¹³⁷Cs, due to its high stability under electron radiation and chemical leaching. Here, we report the syntheses and structures of two intermediate members in the series: KRbTiSi₆O₁₅ and RbCsTiSi₆O₁₅. Rietveld analysis of powder synchrotron X-ray diffraction data reveals that they adopt the same framework topology as the end-members, with no apparent Rb/K or Rb/Cs ordering. To study energetics of the solid solution series, high-temperature drop-solution calorimetry using molten 2PbO·B₂O₃ as the solvent at 975 K has been performed for the end-members and intermediate phases. As the size of the alkali cation increases, the measured enthalpies of formation from the constituent oxides and from the elements (ΔH_f,el) become more exothermic, suggesting that this framework structure favors the cation in the sequence Cs⁺, Rb⁺, and K⁺. This trend is consistent with the higher melting temperatures of A₂TiSi₆O₁₅ phases with increasing alkali cation size.
Proposed for publication in Physics of Plasmas.
As electronic and optical components reach the micro- and nanoscales, efficient assembly and packaging require the use of adhesive bonds. This work focuses on resolving several fundamental issues in the transition from macro- to micro- to nanobonding. A primary issue is that, as bondline thicknesses decrease, knowledge of the stability and dewetting dynamics of thin adhesive films is important to obtain robust, void-free adhesive bonds. While researchers have studied dewetting dynamics of thin films of model, non-polar polymers, little experimental work has been done regarding dewetting dynamics of thin adhesive films, which exhibit much more complex behaviors. In this work, the areas of dispensing small volumes of viscous materials, capillary fluid flow, surface energetics, and wetting have all been investigated. By resolving these adhesive-bonding issues, we are allowing significantly smaller devices to be designed and fabricated. Simultaneously, we are increasing the manufacturability and reliability of these devices.
This report describes criticality benchmark experiments containing rhodium that were conducted as part of a Department of Energy Nuclear Energy Research Initiative project. Rhodium is an important fission product absorber. A capability to perform critical experiments with low-enriched uranium fuel was established as part of the project. Ten critical experiments, some containing rhodium and others without, were conducted. The experiments were performed in such a way that the effects of the rhodium could be accurately isolated. The use of the experimental results to test neutronics codes is demonstrated by example for two Monte Carlo codes. These comparisons indicate that the codes predict the behavior of the rhodium in the critical systems within the experimental uncertainties. The results from this project, coupled with the results of follow-on experiments that investigate other fission products, can be used to quantify and reduce the conservatism of spent nuclear fuel safety analyses while still providing the necessary level of safety.
This report is a comprehensive review of the field of molecular enumeration, from early isomer counting theories to evolutionary algorithms that design molecules in silico. The core of the review is a detailed account of how molecules are counted, enumerated, and sampled. The practical applications of molecular enumeration are also reviewed for chemical information, structure elucidation, molecular design, and combinatorial library design purposes. This review is to appear as a chapter in Reviews in Computational Chemistry, volume 21, edited by Kenny B. Lipkowitz.
It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various constructions (fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies) and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system, from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high-temperature or high-heat-flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item.
'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
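As an illustration of how independent uncertainty components of this kind are commonly combined, the sketch below uses a root-sum-square combination, which is standard practice for independent sources; the component values are hypothetical placeholders, not the report's measured uncertainties.

```python
import math

def rss(components):
    """Combine independent fractional uncertainty components by root-sum-square."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical fractional uncertainties (of reading), for illustration only:
# TC wire calibration, extension wire, DAS voltage accuracy, mV-to-T conversion
components = [0.0075, 0.002, 0.001, 0.003]
total = rss(components)  # combined fractional uncertainty of the chain
```

With these illustrative inputs the combined value comes out a little under 1% of reading, i.e., the same order as the report's quoted totals for normal environments; correlated sources would instead have to be summed before squaring.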
More than ten years ago, Sandia managers defined a set of traits and characteristics that were needed for success at Sandia. Today, the Sandia National Laboratories Success Profile Competencies continue to be powerful tools for employee and leadership development. The purpose of this report is to revisit the historical events that led to the creation and adaptation of the competencies and to position them for integration in future employee selection, development, and succession planning processes. This report contains an account of how the competencies were developed, testimonies of how they are used within the organization, and a description of how they will be foundational elements of new processes.
Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca₁₀(PO₄)₆(OH)₂, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal backfill material, including low water solubility (K_sp > 10⁻⁴⁰), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 °C, 700 °C, 900 °C, and 1100 °C. The analysis indicated that the synthetic hydroxyapatite was similar in morphology to the cattle-bone hydroxyapatite prepared at 500 °C. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong for all apatite materials evaluated, regardless of apatite type. Sixty-day desorption experiments indicated that desorption of uranium from each hydroxyapatite was negligible.
This report summarizes research into effects of electron gun control on piezoelectric polyvinylidene fluoride (PVDF) structures. The experimental apparatus specific to the electron gun control of this structure is detailed, and the equipment developed for the remote examination of the bimorph surface profile is outlined. Experiments conducted to determine the optimum electron beam characteristics for control are summarized. Clearer boundaries on the bimorph's control output capabilities were determined, as was the closed-loop response. Further controllability analysis of the bimorph is outlined, and the results are examined. In this research, the bimorph response was tested through a matrix of control inputs of varying current, frequency, and amplitude. Experiments also studied the response to electron gun actuation of piezoelectric bimorph thin film covered with multiple spatial regions of control. Parameter ranges that yielded predictable control under certain circumstances were determined. Research has shown that electron gun control can be used to make macrocontrol and nanocontrol adjustments for PVDF structures. The control response and hysteresis are more linear for a small range of energy levels. Current levels needed for optimum control are established, and the generalized controllability of a PVDF bimorph structure is shown.
Field-structured composites (FSCs) were produced by hosting micron-sized gold-coated nickel particles in a pre-polymer and allowing the mixture to cure in a magnetic field environment. The feasibility of controlling a composite's electrical conductivity using feedback control applied to the field coils was investigated. It was discovered that conductivity in FSCs is primarily determined by stresses in the polymer host matrix due to cure shrinkage. Thus, in cases where the structuring field was uniform and unidirectional so as to produce chainlike structures in the composite, no electrical conductivity was measured until well after the structuring field was turned off at the gel point. In situations where complex, rotating fields were used to generate complex, three-dimensional structures in a composite, very small, but measurable, conductivity was observed prior to the gel point. Responsive, sensitive prototype chemical sensors were developed based on this technology with initial tests showing very promising results.
Micromachines have the potential to significantly impact future weapon component designs as well as other defense, industrial, and consumer product applications. For both electroplated (LIGA) and surface micromachined (SMM) structural elements, the influence of processing on structure, and the resultant effects on material properties are not well understood. The behavior of dynamic interfaces in present as-fabricated microsystem materials is inadequate for most applications and the fundamental relationships between processing conditions and tribological behavior in these systems are not clearly defined. We intend to develop a basic understanding of deformation, fracture, and surface interactions responsible for friction and wear of microelectromechanical system (MEMS) materials. This will enable needed design flexibility for these devices, as well as strengthen our understanding of material behavior at the nanoscale. The goal of this project is to develop new capabilities for sub-microscale mechanical and tribological measurements, and to exercise these capabilities to investigate material behavior at this size scale.
Less toxic, storable, hypergolic propellants are desired to replace nitrogen tetroxide (NTO) and hydrazine in certain applications. Hydrogen peroxide is a very attractive replacement oxidizer, but finding acceptable replacement fuels is more challenging. The focus of this investigation is to find fuels that have short hypergolic ignition delays, high specific impulse, and desirable storage properties. The resulting hypergolic fuel/oxidizer combination would be highly desirable for virtually any high energy-density applications such as small but powerful gas generating systems, attitude control motors, or main propulsion. These systems would be implemented on platforms ranging from guided bombs to replacement of environmentally unfriendly existing systems to manned space vehicles.
For over a half-century, the soldiers and civilians deployed to conflict areas in UN peacekeeping operations have monitored ceasefires and peace agreements of many types with varying degrees of effectiveness. Though there has been a significant evolution of peacekeeping, especially in the 1990s, with many new monitoring functions, the UN has yet to incorporate monitoring technologies into its operations in a systematic fashion. Rather, the level of technology depends largely on the contributing nations and the individual field commanders. In most missions, sensor technology has not been used at all. As a result, the UN has not been able to fully benefit from the sensor technology revolution, which has seen effectiveness greatly amplified and costs plummet. This paper argues that monitoring technologies need not replace the human factor, which is essential for confidence building in conflict areas, but they can make peacekeepers more effective, more knowledgeable, and safer. Airborne, ground, and underground sensors can allow peacekeepers to do better monitoring over larger areas, in rugged terrain, at night (when most infractions occur), and in adverse weather conditions. Technology also allows new ways to share gathered information with the parties to create confidence and, hence, better pre-conditions for peace. In the future, sensors should become 'tools of the trade' to help the UN keep the peace in war-torn areas.
The objective of the autonomous micro-explosive subsurface tracing system is to image the location and geometry of hydraulically induced fractures in subsurface petroleum reservoirs. This system is based on the insertion of a swarm of autonomous micro-explosive packages during the fracturing process, with subsequent triggering of the energetic material to create an array of micro-seismic sources that can be detected and analyzed using existing seismic receiver arrays and analysis software. The project included investigations of energetic mixtures, triggering systems, package size and shape, and seismic output. Given the current absence of any technology capable of such high-resolution mapping of subsurface structures, this technology has the potential for major impact on the petroleum industry, which spends approximately $1 billion per year on hydraulic fracturing operations in the United States alone.
This report presents the result of an effort to re-implement the Parallel Virtual File System (PVFS) using Portals as the transport. This report provides short overviews of PVFS and Portals, and describes the design and implementation of PVFS over Portals. Finally, the results of performance testing of both stock PVFS and PVFS over Portals are presented.
Sandia National Laboratories has developed a portfolio of programs to address the critical skills needs of the DP labs, as identified by the 1999 Chiles Commission Report. The goals are to attract and retain the best and the brightest students and transition them into employees of Sandia and the broader DP Complex. The US Department of Energy/Defense Programs University Partnerships funded nine laboratory critical skills development programs in FY03. This report provides a qualitative and quantitative evaluation of these programs and their status.
The New Mexico Environment Department (NMED) requires a Corrective Measures Evaluation to evaluate potential remedial alternatives for contaminants of concern (COCs) in groundwater at Sandia National Laboratories New Mexico (SNL/NM) Technical Area (TA)-V. These COCs consist of trichloroethene, tetrachloroethene, and nitrate. This document presents the current conceptual model of groundwater flow and transport at TA-V that will provide the basis for a technically defensible evaluation. Characterization is defined by nine requirement areas that were identified in the NMED Compliance Order on Consent. These characterization requirement areas consist of geohydrologic characteristics that control the subsurface distribution and transport of contaminants. This conceptual model document summarizes the regional geohydrologic setting of SNL/NM TA-V. The document also presents a summary of site-specific geohydrologic data and integrates these data into the current conceptual model of flow and contaminant transport. This summary includes characterization of the local geologic framework and characterization of hydrologic conditions at TA-V, including recharge, hydraulics of vadose-zone and aquifer flow, and the aquifer flow field as it pertains to downgradient receptors. The summary also discusses characterization of contaminant transport in the subsurface, including discussion of source term inventory, release, and contaminant distribution and transport in the vadose zone and aquifer.
This document, which is prepared as directed by the Compliance Order on Consent (COOC) issued by the New Mexico Environment Department, identifies and outlines a process to evaluate remedial alternatives to identify a corrective measure for the Sandia National Laboratories New Mexico Technical Area (TA)-V Groundwater. The COOC provides guidance for implementation of a Corrective Measures Evaluation (CME) for the TA-V Groundwater. This Work Plan documents an initial screening of remedial technologies and presents a list of possible remedial alternatives for those technologies that passed the screening. This Work Plan outlines the methods for evaluating these remedial alternatives and describes possible site-specific evaluation activities necessary to estimate remedy effectiveness and cost. These methods will be reported in the CME Report. This Work Plan outlines the CME Report, including key components and a description of the corrective measures process.
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed-graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. 
Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
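The handle/set/tag data model described above can be sketched in miniature. This is an illustrative Python mock-up, not MOAB's actual (C++) API; all class and method names here are hypothetical:

```python
# Minimal sketch of the handle/set/tag data model described above.
# Names (Mesh, create_vertex, tag_set, ...) are illustrative, not MOAB's real API.

class Mesh:
    def __init__(self):
        self._entities = {}      # handle -> entity data
        self._next = 1
        self._tags = {}          # (tag name, handle) -> value
        self._set_contents = {}  # set handle -> contained handles
        self._children = {}      # set handle -> child set handles

    def create_vertex(self, xyz):
        h = self._next; self._next += 1    # opaque handle, not a pointer
        self._entities[h] = ("vertex", xyz)
        return h

    def create_set(self, members=()):
        h = self._next; self._next += 1
        self._entities[h] = ("set", None)
        self._set_contents[h] = list(members)
        self._children[h] = []
        return h

    def add_child(self, parent, child):
        # parent/child links form a directed graph distinct from set containment
        self._children[parent].append(child)

    def tag_set(self, name, handle, value):
        self._tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self._tags[(name, handle)]

mesh = Mesh()
v = [mesh.create_vertex((float(i), 0.0, 0.0)) for i in range(3)]
surface = mesh.create_set(v)              # a set grouping mesh entities
volume = mesh.create_set([surface])       # sets may contain other sets
mesh.add_child(volume, surface)           # topological relation as a graph edge
mesh.tag_set("DIRICHLET_BC", surface, 1)  # a tag attaches metadata to the set
```

The design point mirrored here is that applications hold opaque handles, so the underlying storage of an entity can change without invalidating references to it.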
Finite element meshes are used to approximate the solution to some differential equation when no exact solution exists. A finite element mesh consists of many small (but finite, not infinitesimal or differential) regions of space that partition the problem domain, Ω. Each region, or element (or cell), has an associated polynomial map, Φ, that converts the coordinates of any point, x = (x y z), in the element into another value, f(x), that is an approximate solution to the differential equation, as in Figure 1(a). This representation works quite well for axis-aligned regions of space, but when there are curved boundaries on the problem domain, Ω, it becomes algorithmically much more difficult to define Φ in terms of x. Instead, we define an archetypal element in a new coordinate space, r = (r s t), which has a simple, axis-aligned boundary (see Figure 1(b)) and place two maps onto our archetypal element:
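In the standard isoparametric construction, the two maps placed on the archetypal element are a geometric map from reference coordinates to physical coordinates and a field map from reference coordinates to solution values (a hedged reconstruction, since the enumeration itself is cut off in this excerpt):

```latex
\mathbf{x} = X(\mathbf{r}), \qquad f = F(\mathbf{r}),
\qquad\text{so that}\qquad
f(\mathbf{x}) = F\!\bigl(X^{-1}(\mathbf{x})\bigr),
```

where both X and F are typically built from the same polynomial basis defined on the simple, axis-aligned reference element.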
In Phase I of this project, reported in SAND97-1922, Sandia National Laboratories applied a systems approach to identifying innovative biomedical technologies with the potential to reduce U.S. health care delivery costs while maintaining care quality. The effort provided roadmaps for the development and integration of technology to meet perceived care delivery requirements and an economic analysis model for development of care pathway costs for two conditions: coronary artery disease (CAD) and benign prostatic hypertrophy (BPH). Phases II and III of this project, which are presented in this report, were directed at detailing the parameters of telemedicine that influence care delivery costs and quality. These results were used to identify and field test the communication, interoperability, and security capabilities needed for cost-effective, secure, and reliable health care via telemedicine.
Abstract not provided.
Proposed for publication in Macromolecules.
Abstract not provided.
Langmuir
Bulk and surface energies are calculated for endmembers of the isostructural rhombohedral carbonate mineral family, including Ca, Cd, Co, Fe, Mg, Mn, Ni, and Zn compositions. The calculations for the bulk agree with the densities, bond distances, bond angles, and lattice enthalpies reported in the literature. The calculated energies also correlate with measured dissolution rates: the lattice energies show a log-linear relationship to the macroscopic dissolution rates at circumneutral pH. Moreover, the energies of ion pairs translated along surface steps are calculated and found to predict experimentally observed microscopic step retreat velocities. Finally, pit formation excess energies decrease with increasing pit size, which is consistent with the nonlinear dissolution kinetics hypothesized for the initial stages of pit formation.
Numerical Linear Algebra with Applications
This paper develops a general framework for applying algebraic multigrid techniques to constrained systems of linear algebraic equations that arise in applications with discretized PDEs. We discuss constraint coarsening strategies for constructing multigrid coarse grid spaces and several classes of multigrid smoothers for these systems. The potential of these methods is investigated with their application to contact problems in solid mechanics. Published in 2004 by John Wiley & Sons, Ltd.
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics
The effects of side wall movement on granular packings were investigated. The studies showed that the resultant structure of the pack did not depend strongly on the magnitude of the wall movement, as long as the packing was moved for an equivalent distance. The main effect of wall movement was to drive the particle-wall and particle-particle contacts to the Coulomb criterion. This forced the packing in the high wall velocity case to obey the Janssen form, which took the Coulomb criterion as one of its main assumptions.
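The Janssen form mentioned above predicts that, once wall contacts reach the Coulomb criterion, the vertical pressure in a granular column saturates with depth rather than growing hydrostatically. A minimal sketch using the standard Janssen expression (all parameter values here are illustrative, not the paper's):

```python
import math

def janssen_pressure(z, rho=1500.0, g=9.81, R=0.05, mu=0.5, K=0.8):
    """Vertical pressure at depth z in a granular column (standard Janssen form).

    rho: bulk density [kg/m^3]; R: column radius [m];
    mu: wall friction coefficient (Coulomb criterion at the wall);
    K: ratio of horizontal to vertical stress. All values illustrative.
    """
    lam = 2.0 * mu * K / R
    return (rho * g / lam) * (1.0 - math.exp(-lam * z))

# Pressure saturates with depth instead of growing hydrostatically:
deep = janssen_pressure(1.0)
very_deep = janssen_pressure(10.0)
assert deep < 1500.0 * 9.81 * 1.0           # below the hydrostatic value rho*g*z
assert abs(very_deep - deep) < 0.05 * deep  # near saturation
```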
The conversion of nitrogen in char (char-N) to NO was studied both experimentally and computationally. In the experiments, pulverized coal char was produced from a U.S. high-volatile bituminous coal and burned in a dilute suspension at 1170 K, 1370 K and 1570 K, at an excess oxygen concentration of 8% (dry), with different levels of background NO. In some experiments, hydrogen bromide (HBr) was added to the vitiated air as a tool to alter the concentration of gas-phase radicals. During char combustion, low NO concentration and high temperature promoted the conversion of char-N to NO. HBr addition altered NO production in a way that depended on temperature. At 1170 K the presence of HBr increased NO production by 80%, whereas the addition of HBr decreased NO production at higher temperatures by 20%. To explain these results, three mechanistic descriptions of char-N evolution during combustion were evaluated with computational models that simulated (a) homogeneous chemistry in a plug-flow reactor with entrained particle combustion, and (b) homogeneous chemistry in the boundary layer surrounding a reacting particle. The observed effect of HBr on NO production could only be captured by a chemical mechanism that considered significant release of HCN from the char particle. Release of HCN also explained changes in NO production with temperature and NO concentration. Thus, the combination of experiments and simulations suggests that HCN evolution from the char during pulverized coal combustion plays an essential role in net NO production. Keywords: Coal; Char; Nitric oxide; Halogen.
Proposed for publication in Physics of Plasmas.
Abstract not provided.
Proposed for publication in the International Journal of Plasticity.
Abstract not provided.
Proposed for publication in Semiconductor Science and Technology.
AlGaN/GaN test structures were fabricated with an etched constriction. A nitrogen plasma treatment was used to remove the disordered layer, including natural oxides on the AlGaN surface, before the growth of the silicon nitride passivation film on several of the test structures. A pulsed voltage input, with a 200 ns pulse width, and a four-point measurement were used in a 50 Ω environment to determine the room temperature velocity-field characteristic of the structures. The samples performed similarly over low fields, giving a low-field mobility of 545 cm² V⁻¹ s⁻¹. The surface treated sample performed slightly better at higher fields than the untreated sample. The highest velocity measured was 1.25 × 10⁷ cm s⁻¹ at a field of 26 kV cm⁻¹.
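A quick sanity check of the numbers quoted above shows the characteristic is sublinear at high field: the measured peak velocity falls below a constant-mobility extrapolation v = μE:

```python
# Sanity check of the numbers quoted above: at high field the measured velocity
# falls below the low-field-mobility extrapolation v = mu * E, i.e. the
# velocity-field characteristic is sublinear (approaching saturation).
mu = 545.0            # cm^2 V^-1 s^-1 (low-field mobility from the text)
E = 26e3              # V/cm (26 kV/cm)
v_measured = 1.25e7   # cm/s (highest velocity from the text)

v_linear = mu * E     # what a constant-mobility model would predict
assert v_linear > v_measured
```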
Proposed for publication in Semiconductor Science and Technology.
We demonstrate the presence of a resonant interaction between a pair of coupled quantum wires, which are realized in the ultra-high mobility two-dimensional electron gas of a GaAs/AlGaAs quantum well. Measuring the conductance of one wire, as the width of the other is varied, we observe a resonant peak in its conductance that is correlated with the point at which the swept wire pinches off. We discuss this behavior in terms of recent theoretical predictions concerning local spin-moment formation in quantum wires.
This report describes a new algorithm for the joint estimation of carrier phase, symbol timing and data in a Turbo coded phase shift keyed (PSK) digital communications system. Jointly estimating phase, timing and data can give processing gains of several dB over conventional processing, which consists of joint estimation of carrier phase and symbol timing followed by estimation of the Turbo-coded data. The new joint estimator allows delay and phase locked loops (DLL/PLL) to work at lower bit energies where Turbo codes are most useful. Performance results of software simulations and of a field test are given, as are details of a field programmable gate array (FPGA) implementation that is currently in design.
Containment of chemical wastes in near-surface and repository environments is accomplished by designing engineered barriers to fluid flow. Containment barrier technologies such as clay liners, soil/bentonite slurry walls, soil/plastic walls, artificially grouted sediments and soils, and colloidal gelling materials are intended to stop fluid transport and prevent plume migration. However, despite their effectiveness in the short-term, all of these barriers exhibit geochemical or geomechanical instability over the long-term resulting in degradation of the barrier and its ability to contain waste. No technologically practical or economically affordable technologies or methods exist at present for accomplishing total remediation, contaminant removal, or destruction-degradation in situ. A new type of containment barrier with a potentially broad range of environmental stability and longevity could result in significant cost-savings. This report documents a research program designed to establish the viability of a proposed new type of containment barrier derived from in situ precipitation of clays in the pore space of contaminated soils or sediments. The concept builds upon technologies that exist for colloidal or gel stabilization. Clays have the advantages of being geologically compatible with the near-surface environment and naturally sorptive for a range of contaminants, and further, the precipitation of clays could result in reduced permeability and hydraulic conductivity, and increased mechanical stability through cementation of soil particles. While limited success was achieved under certain controlled laboratory conditions, the results did not warrant continuation to the field stage for multiple reasons, and the research program was thus concluded with Phase 2.
Thermionic energy conversion in a miniature format shows potential as a viable, high efficiency, micro to macro-scale power source. A microminiature thermionic converter (MTC) with inter-electrode spacings on the order of microns has been prototyped and evaluated at Sandia. The remaining enabling technology is the development of low work function materials and processes that can be integrated into these converters to increase power production at modest temperatures (800 - 1300 K). The electrode materials are not well understood and the electrode thermionic properties are highly sensitive to manufacturing processes. Advanced theoretical, modeling, and fabrication capabilities are required to achieve optimum performance for MTC diodes. This report describes the modeling and fabrication efforts performed to develop micro dispenser cathodes for use in the MTC.
Li-ion cells are being developed for high-power applications in hybrid electric vehicles currently being designed for the FreedomCAR (Freedom Cooperative Automotive Research) program. These cells offer superior performance in terms of power and energy density over current cell chemistries. Cells using this chemistry are the basis of battery systems for both gasoline and fuel cell based hybrids. However, the safety of these cells needs to be understood and improved for eventual widespread commercial application in hybrid electric vehicles. The thermal behavior of commercial and prototype cells has been measured under varying conditions of cell composition, age and state-of-charge (SOC). The thermal runaway behavior of full cells has been measured along with the thermal properties of the cell components. We have also measured gas generation and gas composition over the temperature range corresponding to the thermal runaway regime. These studies have allowed characterization of cell thermal abuse tolerance and an understanding of the mechanisms that result in cell thermal runaway.
The Mixed Waste Landfill occupies 2.6 acres in the north-central portion of Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico. The landfill accepted low-level radioactive and mixed waste from March 1959 to December 1988. This report represents the Corrective Measures Study that has been conducted for the Mixed Waste Landfill. The purpose of the study was to identify, develop, and evaluate corrective measures alternatives and recommend the corrective measure(s) to be taken at the site. Based upon detailed evaluation and risk assessment using guidance provided by the U.S. Environmental Protection Agency and the New Mexico Environment Department, the U.S. Department of Energy and Sandia National Laboratories recommend that a vegetative soil cover be deployed as the preferred corrective measure for the Mixed Waste Landfill. The cover would be of sufficient thickness to store precipitation, minimize infiltration and deep percolation, support a healthy vegetative community, and perform with minimal maintenance by emulating the natural analogue ecosystem. There would be no intrusive remedial activities at the site and therefore no potential for exposure to the waste. This alternative poses minimal risk to site workers implementing institutional controls associated with long-term environmental monitoring as well as routine maintenance and surveillance of the site.
A model of malicious attacks against an infrastructure system is developed that uses a network representation of the system structure together with a Hidden Markov Model of an attack at a node of that system and a Markov Decision Process model of attacker strategy across the system as a whole. We use information systems as an illustration, but the analytic structure developed can also apply to attacks against physical facilities or other systems that provide services to customers. This structure provides an explicit mechanism to evaluate expected losses from malicious attacks, and to evaluate changes in those losses that would result from system hardening. Thus, we provide a basis for evaluating the benefits of system hardening. The model also allows investigation of the potential for the purchase of an insurance contract to cover the potential losses when safeguards are breached and the system fails.
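As an illustration of the kind of expected-loss evaluation the model supports, here is a toy Markov-chain sketch; the states, transition probabilities, and loss values are hypothetical and much simpler than the paper's combined HMM/MDP structure:

```python
# Toy illustration of evaluating expected losses from malicious attacks as a
# Markov chain over system states; all numbers are hypothetical, not the
# report's. States: 0 = secure, 1 = node compromised, 2 = failed (absorbing).
P = [
    [0.95, 0.05, 0.00],
    [0.30, 0.50, 0.20],
    [0.00, 0.00, 1.00],
]
loss = [0.0, 10.0, 100.0]   # per-step loss incurred in each state

def expected_loss(P, loss, start=0, horizon=50):
    """Expected cumulative loss over a finite horizon, by forward recursion."""
    dist = [0.0] * len(loss)
    dist[start] = 1.0
    total = 0.0
    for _ in range(horizon):
        total += sum(p * c for p, c in zip(dist, loss))
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return total

baseline = expected_loss(P, loss)
P_hardened = [row[:] for row in P]
P_hardened[0] = [0.99, 0.01, 0.00]   # hardening reduces attack success at a node
hardened = expected_loss(P_hardened, loss)
assert hardened < baseline           # the benefit of hardening, as in the abstract
```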
The work reported in this document involves a development effort to provide combat commanders and systems engineers with a capability to explore and optimize system concepts that include operational concepts as part of the design effort. An infrastructure and analytic framework has been designed and partially developed that meets a gap in systems engineering design for combat related complex systems. The system consists of three major components: The first component consists of a design environment that permits the combat commander to perform 'what-if' types of analyses in which parts of a course of action (COA) can be automated by generic system constructs. The second component consists of suites of optimization tools designed to integrate into the analytical architecture to explore the massive design space of an integrated design and operational space. These optimization tools have been selected for their utility in requirements development and operational concept development. The third component involves the design of a modeling paradigm for the complex system that takes advantage of functional definitions and the coupled state space representations, generic measures of effectiveness and performance, and a number of modeling constructs to maximize the efficiency of computer simulations. The system architecture has been developed to allow for a future extension in which the operational concept development aspects can be performed in a co-evolutionary process to ensure the most robust designs may be gleaned from the design space(s).
Program transformation is a restricted form of software construction that can be amenable to formal verification. When successful, the nature of the evidence provided by such a verification is considered strong and can constitute a major component of an argument that a high-consequence or safety-critical system meets its dependability requirements. This article explores the application of novel higher-order strategic programming techniques to the development of a portion of a class loader for a restricted implementation of the Java Virtual Machine (JVM). The implementation is called the SSP and is intended for use in high-consequence safety-critical embedded systems. Verification of the strategic program using ACL2 is also discussed.
In many strategic systems, the choice combinator provides a powerful mechanism for controlling the application of rules and strategies to terms. The ability of the choice combinator to exercise control over rewriting is based on the premise that the success and failure of strategy application can be observed. In this paper we present a higher-order strategic framework with the ability to dynamically construct strategies containing the choice combinator. To this framework, a combinator called hide is introduced that prevents the successful application of a strategy from being observed by the choice combinator. We then explore the impact of this new combinator on a real-world problem involving a restricted implementation of the Java Virtual Machine.
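One simple reading of the choice and hide combinators can be sketched with strategies modeled as plain functions that raise an exception on failure. The encoding is illustrative only, and the treatment of hide (discarding the hidden strategy's result) is an assumption, not necessarily the paper's semantics:

```python
# Sketch of the choice/hide combinators: strategies are term -> term functions
# that raise Fail on failure. Illustrative encoding, not the paper's framework.

class Fail(Exception):
    pass

def choice(s1, s2):
    """Apply s1; only if s1's failure is observable, fall back to s2."""
    def s(term):
        try:
            return s1(term)
        except Fail:
            return s2(term)
    return s

def hide(s1):
    """Prevent s1's success from being observed: choice always falls through."""
    def s(term):
        try:
            s1(term)   # attempted, but its success is not reported
        except Fail:
            pass
        raise Fail()   # the enclosing choice sees only failure
    return s

inc = lambda t: t + 1
double = lambda t: t * 2

assert choice(inc, double)(3) == 4        # inc succeeds, double is skipped
assert choice(hide(inc), double)(3) == 6  # inc's success is hidden from choice
```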
This report assembles models for the response of a wire interacting with a conducting ground to an electromagnetic pulse excitation. The cases of an infinite wire above the ground as well as resting on the ground and buried beneath the ground are treated. The focus is on the characteristics and propagation of the transmission line mode. Approximations are used to simplify the description and formulas are obtained for the current. The semi-infinite case, where the short circuit current can be nearly twice that of the infinite line, is also examined.
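As a companion to the transmission-line-mode discussion, the classical characteristic impedance of a thin wire of radius a at height h above a perfectly conducting ground plane is Z₀ = (η₀/2π)·cosh⁻¹(h/a). This is the lossless textbook formula, not the report's lossy-ground results, and the geometry below is illustrative:

```python
import math

def wire_over_ground_Z0(h, a):
    """Characteristic impedance [ohms] of a thin wire of radius a at height h
    above a perfect ground plane (standard image-theory result)."""
    eta0 = 376.73  # ohms, impedance of free space
    return (eta0 / (2.0 * math.pi)) * math.acosh(h / a)

Z = wire_over_ground_Z0(h=1.0, a=0.001)   # 1 m high, 1 mm radius (illustrative)
assert 400.0 < Z < 500.0                  # roughly 456 ohms
```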
Network-centric systems that depend on mobile wireless ad hoc networks for their information exchange require detailed analysis to support their development. In many cases, this critical analysis is best provided with high-fidelity system simulations that include the effects of network architectures and protocols. In this research, we developed a high-fidelity system simulation capability using an HLA federation. The HLA federation, consisting of the Umbra system simulator and OPNET Modeler network simulator, provides a means for the system simulator to both affect, and be affected by, events in the network simulator. Advances are also made in increasing the fidelity of the wireless communication channel and reducing simulation run-time with a dead reckoning capability. A simulation experiment is included to demonstrate the developed modeling and simulation capability.
We have researched several new focused ion beam (FIB) micro-fabrication techniques that offer control of feature shape and the ability to accurately define features onto nonplanar substrates. These FIB-based processes are considered useful for prototyping, reverse engineering, and small-lot manufacturing. Ion beam-based techniques have been developed for defining features in miniature, nonplanar substrates. We demonstrate helices in cylindrical substrates having diameters from 100 µm to 3 mm. Ion beam lathe processes sputter-define 10-µm-wide features in cylindrical substrates and tubes. For larger substrates, we combine focused ion beam milling with ultra-precision lathe turning techniques to accurately define 25-100 µm features over many meters of path length. In several cases, we combine the feature defining capability of focused ion beam bombardment with additive techniques such as evaporation, sputter deposition and electroplating in order to build geometrically-complex, functionally-simple devices. Damascene methods that fabricate bound, metal microcoils have been developed for cylindrical substrates. Effects of focused ion milling on surface morphology are also highlighted in a study of ion-milled diamond.
A survey has been carried out to quantify the performance and life of over 700,000 valve-regulated lead-acid (VRLA) cells, which have been or are being used in stationary applications across the United States. The findings derived from this study have not identified any fundamental flaws of VRLA battery technology. There is evidence that some cell designs are more successful in float duty than others. A significant number of the VRLA cells covered by the survey were found to have provided satisfactory performance.
The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancements to microprocessor functionality and to the power-efficiency of both the processor and memory system. The technology developed in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space; quantum computing, reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
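Two of the physical limits invoked above are easy to make concrete. The machine parameters below (a 1 MW power budget, a 100 m machine room, 300 K operation) are illustrative assumptions, not the report's:

```python
import math

# Back-of-envelope versions of two limits the report invokes; the machine
# parameters are illustrative, not the report's.
k_B = 1.380649e-23   # J/K
c = 2.998e8          # m/s

# Thermodynamic (Landauer) limit: minimum energy to erase one bit at T = 300 K.
E_bit = k_B * 300.0 * math.log(2)   # ~2.9e-21 J

# A 1 MW machine dissipating only the Landauer energy per bit operation:
ops_per_s = 1e6 / E_bit             # upper bound on irreversible bit ops/s
assert ops_per_s > 1e26             # far beyond Exaflops (1e18)

# Speed-of-light limit: one-way signal latency across a 100 m machine room.
latency = 100.0 / c
assert latency > 3e-7               # ~333 ns, many clock periods at GHz rates
```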
This report summarizes research advances pursued with award funding issued by the DOE to Drexel University through the Presidential Early Career Award (PECASE) program. Professor Rich Cairncross was the recipient of this award in 1997. With it he pursued two related research topics under Sandia's guidance that address the outstanding issue of fluid-structural interactions of liquids with deformable solid materials, focusing mainly on the ubiquitous dynamic wetting problem. The project focus in the first four years was aimed at deriving a predictive numerical modeling approach for the motion of the dynamic contact line on a deformable substrate. A formulation of physical model equations was derived in the context of the Galerkin finite element method in an arbitrary Lagrangian/Eulerian (ALE) frame of reference. The formulation was successfully integrated in Sandia's Goma finite element code and tested on several technologically important thin-film coating problems. The model equations, the finite-element implementation, and results from several applications are given in this report. In the last year of the five-year project the same physical concepts were extended towards the problem of capillary imbibition in deformable porous media. A synopsis of this preliminary modeling and experimental effort is also discussed.
Science historian James Burke is well known for his stories about how technological innovations are intertwined and embedded in the culture of the time, for example, how the steam engine led to safety matches, imitation diamonds, and the landing on the moon.1 A lesson commonly drawn from his stories is that the path of science and technology (S&T) is nonlinear and unpredictable. Viewed another way, the lesson is that the solution to one problem can lead to solutions to other problems that are not obviously linked in advance, i.e., there is a ripple effect. The motto for Sandia's approach to research and development (R&D) is 'Science with the mission in mind.' In our view, our missions contain the problems that inspire our R&D, and the resulting solutions almost always have multiple benefits. As discussed below, Sandia's Laboratory Directed Research and Development (LDRD) Program is structured to bring problems relevant to our missions to the attention of researchers. LDRD projects are then selected on the basis of their programmatic merit as well as their technical merit. Considerable effort is made to communicate between investment areas to create the ripple effect. In recent years, attention to the ripple effect and to the performance of the LDRD Program, in general, has increased. Inside Sandia, as it is the sole source of discretionary research funding, LDRD funding is recognized as being the most precious of research dollars. Hence, there is great interest in maximizing its impact, especially through the ripple effect. Outside Sandia, there is increased scrutiny of the program's performance to be sure that it is not a 'sandbox' in which researchers play without relevance to national security needs. Let us therefore address the performance of the LDRD Program in fiscal year 2003 and then show how it is designed to maximize impact.
The ASCI Grid Services (initially called Distributed Resource Management) project was started under DisCom² when distant and distributed computing was identified as a technology critical to the success of the ASCI Program. The goal of the Grid Services project has been, and continues to be, to provide easy, consistent access to all ASCI hardware and software resources across the nuclear weapons complex using computational grid technologies; to increase the usability of ASCI hardware and software resources by providing interfaces for resource monitoring, job submission, job monitoring, and job control; and to enable the effective use of high-end computing capability through complex-wide resource scheduling and brokering. To increase acceptance of the new technology, the goal included providing these services in both the unclassified and the classified user environments. This paper summarizes the many accomplishments and lessons learned over approximately five years of the ASCI Grid Services project. It also provides suggestions on how to renew or restart the effort for grid services capability when the situation is right for that need.
To establish mechanical material properties of cellular concrete mixes, a series of quasi-static compression and tension tests was completed. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Results from the uniaxial and triaxial compression tests established failure criteria for the cellular concrete in terms of the stress invariants I₁ and J₂ (all stresses in MPa):

√J₂ = 297.2 − 278.7 exp(−0.000455 I₁) for the 90-pcf concrete
√J₂ = 211.4 − 204.2 exp(−0.000628 I₁) for the 60-pcf concrete
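The two fitted failure surfaces quoted above can be evaluated directly; this sketch only restates the fits given in the text and checks their limiting values:

```python
import math

# The two fitted failure surfaces from the text (all stresses in MPa).
def sqrtJ2_90pcf(I1):
    return 297.2 - 278.7 * math.exp(-0.000455 * I1)

def sqrtJ2_60pcf(I1):
    return 211.4 - 204.2 * math.exp(-0.000628 * I1)

# At zero mean stress the strength is the difference of the two constants,
# and at large I1 each surface approaches its asymptote.
assert abs(sqrtJ2_90pcf(0.0) - 18.5) < 1e-6
assert abs(sqrtJ2_60pcf(0.0) - 7.2) < 1e-6
assert sqrtJ2_90pcf(1e6) > 297.0                   # approaches 297.2 MPa
assert sqrtJ2_90pcf(100.0) > sqrtJ2_60pcf(100.0)   # denser mix is stronger
```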
For many decades, engineers and scientists have studied the effects of high power microwaves (HPM) on electronics. These studies usually focus on means of delivering energy to upset electronic equipment and ways to protect equipment from HPM. The motivation for these studies is to develop the knowledge necessary either to cause disruption or to protect electronics from disruption. Since electronic circuits must absorb sufficient energy to fail and the source used to deliver this energy is far away from the electronic circuit, the source must emit a large quantity of energy. In free space, for example, as the distance between the source and the target increases, the source energy must increase with the square of the distance. The HPM community has dedicated substantial resources to the development of higher energy sources as a result. Recently, members of the HPM community suggested a new disruption mechanism that could potentially cause system disruptions at much lower energy levels. The new mechanism, based on nonlinear dynamics, requires an expanded theory of circuit operation. This report summarizes an investigation of electronic circuit nonlinear behavior as it applies to inductor-resistor-diode circuits (known as the Linsay circuit) and phase-locked loops. With the improvement in computing power and the need to model circuit behavior with greater precision, the nonlinear effects of circuits have become very important. In addition, every integrated circuit has as part of its design a protective circuit. These protective circuits use some variation of semiconductor junctions that can interact with parasitic components, present in every real system. Hence, the protective circuit can behave as a Linsay circuit. Although the nonlinear behavior is understandable, it is difficult to model accurately. Many researchers have used classical diode models successfully to show nonlinear effects within predicted regions of operation.
However, these models do not accurately predict measured results. This study shows that models based on SPICE, although they exhibit chaotic behavior, do not properly reproduce circuit behavior without modifying diode parameters. This report describes the models and considerations used to model circuit behavior in the nonlinear range of operation. Further, it describes how a modified SPICE diode model improves the simulation results. We also studied the nonlinear behavior of a phase-locked loop. Phase-locked loops are fundamental building blocks of many larger systems (ailerons, seeker heads, etc.). We showed that an injected RF signal could drive the phase-locked loop into chaos. During these chaotic episodes, the frequency of the phase-locked loop takes excursions outside its normal range of operation. In addition, after these excursions the phase-locked loop and the system it is controlling require some time to return to normal operation. The phase-locked loop only needs to be upset long enough to keep it off balance.
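The nonlinearity underlying both the Linsay circuit and the SPICE diode model discussed above is the exponential junction characteristic. A minimal sketch of that characteristic (parameter values illustrative, not the study's modified fits):

```python
import math

# SPICE-style (Shockley) diode current, the nonlinearity at the heart of the
# Linsay circuit. Parameter values are illustrative, not the study's fits.
def diode_current(v, i_s=1e-14, n=1.0, v_t=0.02585):
    """i = Is * (exp(v / (n*Vt)) - 1); n is the emission coefficient,
    Vt the thermal voltage at room temperature."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# Strongly nonlinear: a 60 mV increase multiplies the current roughly 10x (n = 1),
# which is why small parameter changes reshape the chaotic dynamics so much.
i1 = diode_current(0.60)
i2 = diode_current(0.66)
assert 8.0 < i2 / i1 < 13.0
```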
A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good or better in prediction ability as the commonly used partial least squares (PLS) method. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally out-performed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
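The classical least squares step that the hybrid and augmented methods build on can be sketched on synthetic data; this is a generic CLS illustration, not the report's algorithms:

```python
import numpy as np

# Minimal classical least squares (CLS) calibration sketch on synthetic data;
# the report's hybrid and ACLS algorithms extend this basic step.
rng = np.random.default_rng(0)

K_true = rng.random((2, 50))     # 2 pure components x 50 spectral channels
C_cal = rng.random((20, 2))      # known concentrations for 20 calibration samples
A_cal = C_cal @ K_true + 1e-4 * rng.standard_normal((20, 50))  # spectra + noise

# Calibration: estimate pure-component spectra K from known concentrations.
K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

# Prediction: estimate concentrations of a new sample from its spectrum.
c_true = np.array([0.3, 0.7])
a_new = c_true @ K_true
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
assert np.allclose(c_hat, c_true, atol=1e-2)
```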
The goal of this Laboratory Directed Research & Development (LDRD) effort was to design, synthesize, and evaluate organic-inorganic nanocomposite membranes for solubility-based separations, such as the removal of higher hydrocarbons from air streams, using experiment and theory. We synthesized membranes by depositing alkylchlorosilanes on the nanoporous surfaces of alumina substrates, using techniques from the self-assembled monolayer literature to control the microstructure. We measured the permeability of these membranes to different gas species, in order to evaluate their performance in solubility-based separations. Membrane design goals were met by manipulating the pore size, alkyl group size, and alkyl surface density. We employed molecular dynamics simulation to gain further understanding of the relationship between membrane microstructure and separation performance.
The Maximum Permissible Exposure (MPE) is central to laser hazard analysis and is in general a function of the radiant wavelength. The selection of a laser for a particular application may allow for flexibility in the selection of the radiant wavelength. This flexibility would allow the selection of a particular laser based on the MPE and the hazards associated with that radiant wavelength. Calculations of the MPEs for various laser wavelength ranges are presented. Techniques for determining eye-safe viewing distances for both aided and unaided viewing and the determination of flight hazard distances are presented as well.
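An eye-safe distance calculation of the kind described here can follow the standard ANSI Z136.1 small-source relation, which finds the range at which beam irradiance falls to the MPE. The laser parameters below are hypothetical examples, not values from this report.

```python
import math

def nohd(power_w, divergence_rad, aperture_m, mpe_w_per_m2):
    """Nominal Ocular Hazard Distance for a circular, low-divergence beam:
    the range at which the expanding beam's irradiance equals the MPE."""
    return (math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_m2)) - aperture_m) / divergence_rad

# Hypothetical example: 5 mW visible laser, 1 mrad divergence, 2 mm exit
# aperture, MPE = 25.4 W/m^2 (visible, 0.25 s aversion response).
d = nohd(5e-3, 1e-3, 2e-3, 25.4)
print(round(d, 1))  # distance in meters
```

Beyond this distance, unaided viewing of the direct beam is below the MPE; aided viewing (binoculars, telescopes) requires scaling the collected power by the optics' gain.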
A key factor in our ability to produce and predict the stability of metal-based macro- to nano-scale structures and devices is a fundamental understanding of the localized nature of corrosion. Corrosion processes where physical dimensions become critical in the degradation process include localized corrosion initiation in passivated metals, microgalvanic interactions in metal alloys, and localized corrosion in structurally complex materials like nanocrystalline metal films under atmospheric and inundated conditions. This project focuses on two areas of corrosion science where a fundamental understanding of processes occurring at critical dimensions is not currently available. Sandia will study the critical length scales necessary for passive film breakdown in the inundated aluminum (Al) system and the chemical processes and transport in ultra-thin water films relevant to the atmospheric corrosion of nanocrystalline tungsten (W) films. Techniques are required that provide spatial information without significantly perturbing or masking the underlying relationships. Al passive film breakdown is governed by the relationship between area of the film sampled and its defect structure. We will combine low current measurements with microelectrodes to study the size scale required to observe a single initiation event and record electrochemical breakdown events. The resulting quantitative measure of stability will be correlated with metal grain size, secondary phase size and distribution to understand which metal properties control stability at the macro- and nano-scale. Mechanisms of atmospheric corrosion on W are dependent on the physical dimensions and continuity of adsorbed water layers as well as the chemical reactions that take place in this layer. We will combine electrochemical and scanning probe microscopic techniques to monitor the chemistry and resulting material transport in these thin surface layers. 
A description of the length scales responsible for driving the corrosion of the nanocrystalline metal films will be developed. The techniques developed and information derived from this work will be used to understand and predict degradation processes in microelectronic and microsystem devices critical to Sandia's mission.
Microelectromechanical systems (MEMS) comprise a new class of devices that include various forms of sensors and actuators. Recent studies have shown that microscale cantilever structures are able to detect a wide range of chemicals, biomolecules or even single bacterial cells. In this approach, cantilever deflection replaces optical fluorescence detection thereby eliminating complex chemical tagging steps that are difficult to achieve with chip-based architectures. A key challenge to utilizing this new detection scheme is the incorporation of functionalized MEMS structures within complex microfluidic channel architectures. The ability to accomplish this integration is currently limited by the processing approaches used to seal lids on pre-etched microfluidic channels. This report describes Sandia's first construction of MEMS instrumented microfluidic chips, which were fabricated by combining our leading capabilities in MEMS processing with our low-temperature photolithographic method for fabricating microfluidic channels. We have explored in-situ cantilevers and other similar passive MEMS devices as a new approach to directly sense fluid transport, and have successfully monitored local flow rates and viscosities within microfluidic channels. Actuated MEMS structures have also been incorporated into microfluidic channels, and the electrical requirements for actuation in liquids have been quantified with an elegant theory. Electrostatic actuation in water has been accomplished, and a novel technique for monitoring local electrical conductivities has been invented.
The lead probe neutron detector was originally designed by Spencer and Jacobs in 1965. The detector is based on lead activation due to the following neutron scattering reactions: 207Pb(n, n')207mPb and 208Pb(n, 2n)207mPb. Delayed gammas from the metastable state 207mPb are counted using a plastic scintillator. The half-life of 207mPb is 0.8 seconds. In the work reported here, MCNP was used to optimize the efficiency of the lead probe by suitably modifying the original geometry. A prototype detector was then built and tested. A 'layer cake' design was investigated in which thin (< 5 mm) layers of lead were sandwiched between thicker (~1-2 cm) layers of scintillator. An optimized 'layer cake' design had Figures of Merit (derived from the code) which were a factor of 3 greater than the original lead probe for DD neutrons, and a factor of 4 greater for DT neutrons, while containing 30% less lead. A smaller scale, 'proof of principle' prototype was built by Bechtel/Nevada to verify the code results. Its response to DD neutrons was measured using the DD dense plasma focus at Texas A&M and it conformed to the predicted performance. A voltage and discriminator sweep was performed to determine optimum sensitivity settings. It was determined that a calibration operating point could be obtained using a 133Ba 'bolt' as is the case with the original lead probe.
This report addresses the development of automated video-screening technology to assist security forces in protecting our homeland against terrorist threats. A threat of specific interest to this project is the covert placement and subsequent remote detonation of bombs (e.g., briefcase bombs) inside crowded public facilities. Unlike existing video motion detection systems, the video-screening technology described in this report is capable of detecting changes in the static background of an otherwise dynamic environment - environments where motion and human activities are persistent. Our goal was to quickly detect changes in the background - even under conditions when the background is visible to the camera less than 5% of the time. Instead of subtracting the background to detect movement or changes in a scene, we subtracted the dynamic scene variations to produce an estimate of the static background. Subsequent comparisons of static background estimates are used to detect changes in the background. Detected changes can be used to alert security forces to the presence and location of potential threats. The results of this research are summarized in two Microsoft PowerPoint presentations included with this report.
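As a toy illustration of estimating a static background by suppressing dynamic scene content, a per-pixel temporal median works whenever the background is visible in most frames. The report's method handles much lower background visibility (under 5%), so this sketch, with invented data, only conveys the general idea of estimating the background rather than the motion.

```python
import numpy as np

rng = np.random.default_rng(1)
background = np.full((8, 8), 50.0)          # true static background intensity
frames = np.tile(background, (50, 1, 1))    # 50 frames of the scene

# Simulate persistent foreground activity covering ~20% of pixels per frame.
for f in frames:
    mask = rng.random((8, 8)) < 0.2
    f[mask] = rng.uniform(100.0, 200.0, mask.sum())

# Per-pixel temporal median suppresses the dynamic content, leaving an
# estimate of the static background; comparing successive estimates would
# then reveal genuine background changes (e.g., a newly placed object).
estimate = np.median(frames, axis=0)
changed = np.abs(estimate - background) > 1.0
print(changed.sum())
```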
A significant barrier to the deployment of distributed energy resources (DER) onto the power grid is uncertainty on the part of utility engineers regarding impacts of DER on their distribution systems. Because of the many possible combinations of DER and local power system characteristics, these impacts can most effectively be studied by computer simulation. The goal of this LDRD project was to develop and experimentally validate models of transient and steady state source behavior for incorporation into utility distribution analysis tools. Development of these models had not been prioritized either by the distributed-generation industry or by the inverter industry. A functioning model of a selected inverter-based DER was developed in collaboration with both the manufacturer and industrial power systems analysts. The model was written in the PSCAD simulation language, a variant of the ElectroMagnetic Transients Program (EMTP), a code that is widely used and accepted by utilities. A stakeholder team was formed and a methodology was established to address the problem. A list of detailed DER/utility interaction concerns was developed and prioritized. The list indicated that the scope of the problem significantly exceeded resources available for this LDRD project. As this work progresses under separate funding, the model will be refined and experimentally validated. It will then be incorporated in utility distribution analysis tools and used to study a variety of DER issues. The key next step will be design of the validation experiments.
AIAA Journal of Guidance, Control and Dynamics
Abstract not provided.
Langmuir
Drainage of water from the region between an advancing probe tip and a flat sample is reconsidered under the assumption that the tip and sample surfaces are both coated by a thin water "interphase" (of width approximately a few nanometers) whose viscosity is much higher than that of the bulk liquid. A formula derived by solving the Navier-Stokes equations allows one to extract an interphase viscosity of ∼59 kPa·s (∼6.6 × 10⁷ times the viscosity of bulk water at 25°C) from interfacial force microscope measurements with both tip and sample functionalized hydrophilic by OH-terminated tri(ethylene glycol) undecylthiol self-assembled monolayers.
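For orientation, the classical sphere-plane lubrication (Reynolds) drag relation shows how a viscosity can be inverted from a measured drainage force. The paper's two-layer interphase formula differs from this textbook baseline, and all numbers below are hypothetical.

```python
import math

def squeeze_film_force(eta, R, v, h):
    """Classical sphere-plane squeeze-film drag: F = 6*pi*eta*R^2*v/h.
    eta: viscosity (Pa.s), R: tip radius (m), v: approach speed (m/s),
    h: tip-sample gap (m). The paper's modified two-layer derivation
    replaces this with an interphase-dominated expression."""
    return 6.0 * math.pi * eta * R**2 * v / h

# Inverting for viscosity from a hypothetical measured drainage force:
R, v, h = 50e-9, 10e-9, 2e-9    # tip radius, approach speed, gap
F = 2.8e-9                      # measured force (N), illustrative only
eta = F * h / (6.0 * math.pi * R**2 * v)
print(f"{eta:.1e} Pa.s")
```

With these invented inputs the inferred viscosity is of order 10 kPa·s, i.e., many orders of magnitude above bulk water, the same qualitative conclusion the measurements support.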
IEEE Transactions on Geoscience and Remote Sensing
Coherent cross-track synthetic aperture radar (SAR) stereo is shown to produce high-resolution three-dimensional maps of the Earth's surface. This mode utilizes image pairs with common synthetic apertures but different squint angles, allowing automated stereo correspondence and disparity estimation using complex correlation calculations. This paper presents two Ku-band, coherent cross-track stereo collects over rolling and rugged terrain. The first collect generates a digital elevation map (DEM) with 1-m posts over rolling terrain using complex SAR imagery with a spatial resolution of 0.125 m and a stereo convergence angle of 13.8°. The second collect produces multiple DEMs with 3-m posts over rugged terrain utilizing complex SAR imagery with spatial resolutions better than 0.5 m and stereo convergence angles greater than 40°. The resulting DEMs are compared to ground-truth DEMs, and relative height root-mean-square (RMS) error, linear error at 90-percent confidence (LE90), and maximum height error are reported.
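The complex correlation underlying coherent stereo correspondence can be illustrated on toy 1-D complex signals: the disparity is the lag that maximizes the magnitude of the complex correlation coefficient. The signals and search range here are invented for illustration.

```python
import numpy as np

def complex_corr(a, b):
    """Magnitude of the complex correlation coefficient of two patches."""
    return abs(np.vdot(a, b)) / np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)

rng = np.random.default_rng(2)
master = rng.standard_normal(64) + 1j * rng.standard_normal(64)
shift = 5                                  # true disparity in samples
slave = np.roll(master, shift)

# Disparity estimation: slide over candidate lags, pick the coherence peak.
lags = list(range(-10, 11))
scores = [complex_corr(master, np.roll(slave, -k)) for k in lags]
best = lags[int(np.argmax(scores))]
print(best)  # → 5
```

In the actual stereo processing, such peaks are found per image patch and the resulting disparity field is converted to terrain height using the collection geometry.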
Abstract not provided.
Abstract not provided.
Proposed for publication in Evolutionary Computation.
We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
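The filter's dominance rule on (objective value, constraint violation) pairs can be sketched as below. This is a minimal illustration of the filter concept, not the paper's full algorithm; the class and function names are invented.

```python
def dominates(p, q):
    """p dominates q if p is no worse in both objective f and constraint
    violation h, and strictly better in at least one."""
    (f1, h1), (f2, h2) = p, q
    return f1 <= f2 and h1 <= h2 and (f1 < f2 or h1 < h2)

class Filter:
    """Maintains a mutually non-dominated set of (f, h) pairs."""
    def __init__(self):
        self.entries = []

    def accept(self, point):
        if any(dominates(e, point) for e in self.entries):
            return False                      # rejected: dominated by the filter
        # Keep only entries the new point does not dominate, then add it.
        self.entries = [e for e in self.entries if not dominates(point, e)]
        self.entries.append(point)
        return True

flt = Filter()
print(flt.accept((3.0, 0.5)))   # → True  (first entry)
print(flt.accept((4.0, 1.0)))   # → False (worse f and worse h)
print(flt.accept((2.0, 0.2)))   # → True  (dominates and replaces the first)
print(flt.entries)              # → [(2.0, 0.2)]
```

In an FEA, mutation offsets generate candidate points and the filter decides acceptance, imposing the partial order on the solution set that the convergence analysis exploits.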
The geologic model implicit in the original site characterization report for the Bayou Choctaw Strategic Petroleum Reserve Site near Baton Rouge, Louisiana, has been converted to a numerical, computer-based three-dimensional model. The original site characterization model was successfully converted with minimal modifications and use of new information. The geometries of the salt diapir, selected adjacent sedimentary horizons, and a number of faults have been modeled. Models of a partial set of the several storage caverns that have been solution-mined within the salt mass are also included. Collectively, the converted model appears to be a relatively realistic representation of the geology of the Bayou Choctaw site as known from existing data. A small number of geometric inconsistencies and other problems inherent in 2-D vs. 3-D modeling have been noted. Most of the major inconsistencies involve faults inferred from drill hole data only. Modern computer software allows visualization of the resulting site model and its component submodels with a degree of detail and flexibility that was not possible with conventional, two-dimensional and paper-based geologic maps and cross sections. The enhanced visualizations may be of particular value in conveying geologic concepts involved in the Bayou Choctaw Strategic Petroleum Reserve site to a lay audience. A Microsoft Windows™ PC-based viewer and user-manipulable model files illustrating selected features of the converted model are included in this report.
These Technical Safety Requirements (TSRs) identify the operational conditions, boundaries, and administrative controls for the safe operation of the Auxiliary Hot Cell Facility (AHCF) at Sandia National Laboratories, in compliance with 10 CFR 830, 'Nuclear Safety Management.' The bases for the TSRs are established in the AHCF Documented Safety Analysis (DSA), which was issued in compliance with 10 CFR 830, Subpart B, 'Safety Basis Requirements.' The AHCF Limiting Conditions of Operation (LCOs) apply only to the ventilation system, the high efficiency particulate air (HEPA) filters, and the inventory. Surveillance Requirements (SRs) apply to the ventilation system, HEPA filters, and associated monitoring equipment; to certain passive design features; and to the inventory. No Safety Limits are necessary, because the AHCF is a Hazard Category 3 nuclear facility.
This report describes an LDRD-supported experimental-theoretical collaboration on the enhanced low-dose-rate sensitivity (ELDRS) problem. The experimental work led to a method for elimination of ELDRS, and the theoretical work led to a suite of bimolecular mechanisms that explains ELDRS and is in good agreement with various ELDRS experiments. The model shows that the radiation effects are linear in the limit of very low dose rates. In this limit, the regime of most concern, the model provides a good estimate of the worst-case effects of low-dose-rate ionizing radiation.
This document describes the 2003 SNL ASCI Software Quality Engineering (SQE) assessment of twenty ASCI application code teams and the results of that assessment. The purpose of this assessment was to determine code team compliance with the Sandia National Laboratories ASCI Applications Software Quality Engineering Practices, Version 2.0 as part of an overall program assessment.
An increase in photocurrent has been observed at silicon electrodes coated with nanostructured porous silica films as compared to bare, unmodified silicon. Ultimately, to utilize this effect in devices such as sensors or microchip power supplies, the physical phenomena behind this observation need to be well characterized. To this end, Electrochemical Impedance Spectroscopy (EIS) was used to characterize the effect of surfactant-templated mesoporous silica films deposited onto silicon electrodes on the electrical properties of the electrode space-charge region in an aqueous electrolyte solution, as the electrical properties of this space-charge region are responsible for the photobehavior of semiconductor devices. A significant shift in apparent flat-band potential was observed for electrodes modified with the silica film when compared to bare electrodes; the reliability of this data is suspect, however, due to contributions from surface states to the overall capacitance of the system. To assist in the interpretation of this EIS data, a series of measurements at Pt electrodes was performed with the hope of decoupling electrode and film contributions from the EIS spectra. Surprisingly, the frequency-dependent impedance data for Pt electrodes coated with a surfactant-templated film was nearly identical to that observed for bare Pt electrodes, indicating that the mesoporous film had little effect on the transport of small electrolyte ions to the electrode surface. Pore-blocking agents (tetraalkylammonium salts) were not observed to inhibit this transport process. However, untemplated (non-porous) silica films dramatically increased film resistance, indicating that our EIS data for the Pt electrodes is reliable. Overall, our preliminary conclusion is that a shift in electrical properties in the space-charge region induced by the presence of a porous silica film is responsible for the increase in observed photocurrent.
The waters of the Pecos River in New Mexico must be delivered to three primary users: (1) The Pecos River Compact: each year a percentage of water from natural river flow must be delivered to Texas; (2) Agriculture: Carlsbad Irrigation District has a storage and diversion right and Fort Sumner Irrigation District has a direct flow diversion right; and, (3) Endangered Species Act: an as yet unspecified amount of water is to support Pecos Bluntnose Shiner Minnow habitat within and along the Pecos River. Currently, the United States Department of Interior Bureau of Reclamation, the New Mexico Interstate Stream Commission, and the United States Department of the Interior Fish and Wildlife Service are studying the Pecos Bluntnose Shiner Minnow habitat preference. Preliminary work by Fish and Wildlife personnel in the critical habitat suggests that water depth and water velocity are key parameters defining minnow habitat preference. However, river flows that provide adequate preferred habitat to support this species have yet to be determined. Because there is a limited amount of water in the Pecos River and its reservoirs, it is critical to allocate water efficiently such that habitat is maintained, while honoring commitments to agriculture and to the Pecos River Compact. This study identifies the relationship between Pecos River flow rates in cubic feet per second (cfs) and water depth and water velocity.
We have developed infrastructure, utilities and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework, (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to 56% reduction of communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.
A laser safety and hazard analysis was performed for the airborne AURA (Big Sky Laser Technology) lidar system based on the 2000 version of the American National Standard Institute's (ANSI) Standard Z136.1, for the Safe Use of Lasers and the 2000 version of the ANSI Standard Z136.6, for the Safe Use of Lasers Outdoors. The AURA lidar system is installed in the instrument pod of a Proteus airframe and is used to perform laser interaction experiments and tests at various national test sites. The targets are located at various distances or ranges from the airborne platform. In order to protect personnel, who may be in the target area and may be subjected to exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength, calculate the Nominal Ocular Hazard Distance (NOHD), and determine the maximum 'eye-safe' dwell times for various operational altitudes and conditions. It was also necessary to calculate the appropriate minimum Optical Density (ODmin) of the laser safety eyewear used by authorized personnel who may receive hazardous exposures during ground base operations of the airborne AURA laser system (system alignment and calibration).
The Advanced Concepts Group (ACG) at Sandia National Laboratories is exploring the use of Red Teaming to help intelligence analysts with two key processes: determining what a piece or pieces of information might imply and deciding what other pieces of information need to be found to support or refute hypotheses about what actions a suspected terrorist organization might be pursuing. In support of this effort, the ACG hosted a terrorism red gaming event in Albuquerque on July 22-24, 2003. The game involved two 'red teams' playing the roles of two terrorist cells - one focused on implementing an RDD attack on the DC subway system and one focused on a bio attack against the same target - and two 'black teams' playing the role of the intelligence collection system and of intelligence analysts trying to decide what plans the red teams might be pursuing. This exercise successfully engaged human experts to seed a proposed compute engine with detailed operational plans for hypothetical terrorist scenarios.
The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.
This report describes work done in FY2003 under Advanced and Exploratory Studies funding for Advanced Weapons Controllers. The contemporary requirements and envisioned missions for nuclear weapons are changing from the class of missions originally envisioned during development of the current stockpile. Technology available today in electronics, computing, and software provides capabilities not practical or even possible 20 years ago. This exploratory work looks at how Weapon Electrical Systems can be improved to accommodate new missions and new technologies while maintaining or improving existing standards in nuclear safety and reliability.
A concurrent computational and experimental investigation of thermal transport is performed with the goal of improving understanding of, and predictive capability for, thermal transport in microdevices. The computational component involves Monte Carlo simulation of phonon transport. In these simulations, all acoustic modes are included and their properties are drawn from a realistic dispersion relation. Phonon-phonon and phonon-boundary scattering events are treated independently. A new set of phonon-phonon scattering coefficients is proposed that reflects the elimination of assumptions present in earlier analytical work. The experimental component involves steady-state measurement of thermal conductivity on silicon films as thin as 340 nm over a range of temperatures. Agreement between experiment and simulation on single-crystal silicon thin films is excellent. Agreement for polycrystalline films is promising, but significant work remains to be done before predictions can be made confidently. Knowledge gained from these efforts was used to construct improved semiclassical models with the goal of representing microscale effects in existing macroscale codes in a computationally efficient manner.
The work discussed in this report was supported by a Campus Fellowship LDRD. The report contains three papers that were published by the fellowship recipient and these papers form the bulk of his dissertation. They are reproduced here to satisfy LDRD reporting requirements.
As MEMS transducers are scaled up in size, the threshold is quickly crossed to where magnetoquasistatic (MQS) transducers are superior for force production compared to electroquasistatic (EQS) transducers. Considerable progress has been made increasing the force output of MEMS EQS transducers, but progress with MEMS MQS transducers has been more modest. A key reason for this has been the difficulty implementing efficient lithographically-fabricated magnetic coil structures. The contribution of this study is a planar multilayer polyphase coil architecture which provides for the lithographic implementation of efficient stator windings suitable for linear magnetic machines. A millimeter-scale linear actuator with complex stator windings was fabricated using this architecture. The stators of the actuator were fabricated using a BCB/Cu process, which does not require replanarization of the wafer between layers. The prototype stator was limited to thin copper layers (3 µm) due to the use of evaporated metal at the time of fabrication. Two layers of metal were implemented in the prototype, but the winding architecture naturally supports additional metal layer pairs. It was found in laboratory tests that the windings can support very high current densities of 4 × 10⁹ A/m² without damage. Force production normal to the stator was calculated to be 0.54 N/A. For thin stators such as this one, force production increases approximately linearly with the thickness of the windings and a six-layer stator fabricated using a newly implemented electroplated BCB/Cu process (six layers of 15 µm thick metal) is projected to produce approximately 8.8 N/A.
A laser safety hazard evaluation and pertinent output measurements were performed (June 2003 through August 2003) on several VITAL-2 (Variable Intensity Tactical Aiming Light) infrared lasers associated with the Proforce M-4 system used in force-on-force exercises. The VITAL-2 contains two diode lasers presenting an 'extended source' viewing hazard out to a range on the order of 1.3 meters before reverting to a 'small source' viewing hazard. The laser hazard evaluation was performed in concert with ANSI Std. Z136.1-2000 for the safe use of lasers and ANSI Std. Z136.6-2000 for the safe use of lasers outdoors. The results of the laser hazard analysis indicate that this tactical aiming IR laser presents a Class 1 laser hazard to personnel in the area of use. Field measurements performed on 71 units confirmed that the radiant outputs were at all times below the Allowable Emission Limit and that the irradiance of the laser spot was at all locations below the Maximum Exposure Limit. This system is eye-safe, and it may be used under current SNL policy in force-on-force exercises. The VITAL-2 Variable Intensity Tactical Aiming Light does not present a laser hazard greater than Class 1 for viewing aided with binoculars.
Proposed for publication in Journal of Physics D.
Alloying element loss from the weld pool during laser spot welding of stainless steel was investigated experimentally and theoretically. The experimental work involved determination of work-piece weight loss and metal vapor composition for various welding conditions. The transient temperature and velocity fields in the weld pool were numerically simulated. The vaporization rates of the alloying elements were modeled using the computed temperature profiles. The fusion zone geometry could be predicted from the transient heat transfer and fluid flow model for various welding conditions. The laser power and the pulse duration were the most important variables in determining the transient temperature profiles. The velocity of the liquid metal in the weld pool increased with time during heating and convection played an increasingly important role in the heat transfer. The peak temperature and velocity increased significantly with laser power density and pulse duration. At very high power densities, the computed temperatures were higher than the boiling point of 304 stainless steel. As a result, evaporation of alloying elements was caused by both the total pressure and the concentration gradients. The calculations showed that the vaporization occurred mainly from a small region under the laser beam where the temperatures were very high. The computed vapor loss was found to be lower than the measured mass loss because of the ejection of tiny metal droplets owing to the recoil force exerted by the metal vapors. The ejection of metal droplets has been predicted by computations and verified by experiments.
Detailed experiments involving extensive high resolution transmission electron microscopy (TEM) revealed significant microstructural differences between Cu sulfides formed at low and high relative humidity (RH). It was known from prior experiments that the sulfide grows linearly with time at low RH up to a sulfide thickness approaching or exceeding one micron, while the sulfide initially grows linearly with time at high RH then becomes sub-linear at a sulfide thickness less than about 0.2 microns, with the sulfidation rate eventually approaching zero. TEM measurements of the Cu2S morphology revealed that the Cu2S formed at low RH has large sized grains (75 to greater than 150 nm) that are columnar in structure with sharp, abrupt grain boundaries. In contrast, the Cu2S formed at high RH has small equiaxed grains of 20 to 50 nm in size. Importantly, the small grains formed at high RH have highly disordered grain boundaries with a high concentration of nano-voids. Two-dimensional diffusion modeling was performed to determine whether the existence of localized source terms at the Cu/Cu2S interface could be responsible for the suppression of Cu sulfidation at long times at high RH. The models indicated that the existence of static localized source terms would not predict the complete suppression of growth that was observed. Instead, the models suggest that the diffusion of Cu through Cu2S becomes restricted during Cu2S formation at high RH. The leading speculation is that the extensive voiding that exists at grain boundaries in this material greatly reduces the flux of Cu between grains, leading to a reduction in the rate of sulfide film formation. These experiments provide an approach for adding microstructural information to Cu sulfidation rate computer models. 
In addition to the microstructural studies, new micro-patterned test structures were developed in this LDRD to offer insight into the point defect structure of Cu2S and to permit measurement of surface reaction rates during Cu sulfidation. The surface reaction rate was measured by creating micropatterned Cu lines of widths ranging from 5 microns to 100 microns. When sulfidized, the edges of the Cu lines show greater sulfidation than the center, an effect known as microloading. Measurement of the sulfidation profile enables an estimate of the ratio of the diffusivity of H2S in the gas phase to the surface reaction rate constant, k. Our measurements indicated that the gas phase diffusivity exceeds k by a factor of more than 10 but less than 100. This is consistent with computer simulations of the sulfidation process. Other electrical test structures were developed to measure the electrical conductivity of Cu2S that forms on Cu. This information can be used to determine relative vacancy concentrations in the Cu2S layer as a function of RH. The test structures involved micropatterned Cu disks and thin films, and the initial measurements showed that the electrical approach is feasible for point defect studies in Cu2S.
Proposed for publication in Aerospace Testing International.
Abstract not provided.
Journal of Crystal Growth
In the epitaxial lateral overgrowth of GaN, mass transport and the effects of crystal-growth kinetics lead to a wide range of observed feature growth rates depending on the dimensions of the masked and exposed regions. Based on a simple model, scaling relationships are derived that reveal the dynamic similarity of growth behavior across pattern designs. A time-like quantity is introduced that takes into account the varying transport effects, and provides a dimensionless time basis for analyzing crystal growth kinetics in this system. Illustrations of these scaling relationships are given through comparison with experiment. Published by Elsevier B.V.
Presented here are principles by which Sandia National Laboratories conducts its partnering activities with private industry.
Applied Mechanics Reviews
Views on the state of the art in verification and validation (V&V) in computational physics are discussed. These views are described in a framework in which predictive capability relies on V&V, as well as on other factors that affect predictive capability. Research topics addressed include the development of improved procedures for using the phenomena identification and ranking table (PIRT) to prioritize V&V activities, and the method of manufactured solutions for code verification. Also addressed are the development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures.
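As an illustration of code verification by the method of manufactured solutions (an illustrative example, not one taken from the paper), one picks an exact solution, derives the corresponding source term analytically, and checks that the discrete operator converges at its theoretical order:

```python
import math

def mms_error(h, x=1.0):
    """Error of a central-difference approximation to u''(x) for the
    manufactured solution u(x) = sin(x), whose exact second derivative
    is -sin(x)."""
    approx = (math.sin(x + h) - 2.0 * math.sin(x) + math.sin(x - h)) / h**2
    return abs(approx - (-math.sin(x)))

# For a second-order scheme, halving h should cut the error by ~4x;
# observing this ratio verifies the code's order of accuracy.
ratio = mms_error(0.1) / mms_error(0.05)
```

The observed ratio near 4 confirms second-order accuracy; a ratio far from 4 would signal a coding error, which is the point of the technique.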
Water Resources Research
Estimates of mass transfer timescales from 316 solute transport experiments reported in 35 publications are compared to the pore-water velocities and residence times, as well as the experimental durations. New tracer experiments were also conducted in columns of different lengths so that the velocity and the advective residence time could be varied independently. In both the experiments reported in the literature and the new experiments, the estimated mass transfer timescale (inverse of the mass-transfer rate coefficient) is better correlated to residence time and the experimental duration than to velocity. Of the measures considered, the experimental duration multiplied by 1 + β (where β is the capacity coefficient, defined as the ratio of masses in the immobile and mobile domains at equilibrium) best predicted the estimated mass transfer timescale. This relation is consistent with other work showing that aquifer and soil material commonly produce multiple timescales of mass transfer.
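The reported predictor can be stated as a one-line relation; the duration and capacity coefficient below are illustrative values, not data from any of the 316 experiments:

```python
def predicted_mass_transfer_timescale(t_exp, beta):
    """Predicted mass transfer timescale (inverse of the mass-transfer
    rate coefficient) as the experimental duration t_exp multiplied by
    1 + beta, where beta is the capacity coefficient (ratio of immobile
    to mobile mass at equilibrium)."""
    return t_exp * (1.0 + beta)

# e.g. a hypothetical 10-day column experiment with beta = 0.5
tau = predicted_mass_transfer_timescale(10.0, 0.5)
```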
International Journal of Computational Geometry and Applications
Given a finite set of points in Euclidean space, we can ask what is the minimum number of times a piecewise-linear path must change direction in order to pass through all of them. We prove some new upper and lower bounds for the rectilinear version of this problem in which all motion is orthogonal to the coordinate axes. We also consider the more general case of arbitrary directions.
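The quantity being bounded is the number of direction changes along an axis-parallel path; a small hypothetical helper makes the count concrete for a given candidate path:

```python
def count_turns(path):
    """Count direction changes (turns) in a rectilinear path given as a
    list of (x, y) vertices with axis-parallel segments; the minimum of
    this count over all paths covering a point set is the quantity
    bounded in the rectilinear version of the problem."""
    def direction(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        assert (dx == 0) != (dy == 0), "segments must be axis-parallel"
        return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))
    dirs = [direction(p, q) for p, q in zip(path, path[1:])]
    return sum(1 for a, b in zip(dirs, dirs[1:]) if a != b)

# A back-and-forth (boustrophedon) tour of a 3x3 grid uses 4 turns.
turns = count_turns([(0, 0), (2, 0), (2, 1), (0, 1), (0, 2), (2, 2)])
```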
Journal of Manufacturing Processes
Deep X-ray lithography based techniques such as LIGA (German acronym representing Lithographie, Galvanoformung, and Abformung) are currently being used to fabricate net-shape components for microelectromechanical systems (MEMS). Unlike other microfabrication techniques, LIGA lends itself to a broad range of materials, including metals, alloys, polymers, ceramics, and composites. Currently, Ni and Ni alloys are the materials of choice for LIGA microsystems. While Ni alloys may meet the structural requirements for MEMS, their tribological (friction and wear) behavior poses great challenges for the reliable operation of LIGA-fabricated MEMS. Typical sidewall morphologies of LIGA-fabricated parts are described, and their role in the tribological behavior of MEMS is discussed. The adaptation of commercial plasma-enhanced chemical vapor deposition to coat the sidewalls of LIGA-fabricated parts with diamond-like nanocomposite is described.
Physics of Plasmas
The propagation of a 30 kA, 3.5 MeV electron beam focused into gas- and plasma-filled cells is discussed. Such cells are used in X-ray radiography, in which electron beams produced by pulsed-power accelerators are focused onto a high-atomic-number target to generate bremsstrahlung radiation. The effectiveness of beam focusing using neutral gas, partially ionized gas, and fully ionized (plasma-filled) cells was investigated using numerical simulation. In an optimized gas cell, it was observed that an initial plasma density approaching 10^16 cm^-3 was sufficient to prevent significant net currents and the subsequent beam sweep.
Proceedings of the Hawaii International Conference on System Sciences
Our national security, economic prosperity, and national well-being are dependent upon a set of highly interdependent critical infrastructures. Examples of these infrastructures include the national electrical grid, oil and natural gas systems, telecommunication and information networks, transportation networks, water systems, and banking and financial systems. Given the importance of their reliable and secure operations, understanding the behavior of these infrastructures - particularly when stressed or under attack - is crucial. Models and simulations can provide considerable insight into the complex nature of their behaviors and operational characteristics. These models and simulations must include interdependencies among infrastructures if they are to provide accurate representations of infrastructure characteristics and operations. A number of modeling and simulation approaches under development today directly address interdependencies and offer considerable insight into the operational and behavioral characteristics of critical infrastructures.
Journal of the IEST
Acoustic testing using commercial sound system components is becoming more popular as a cost-effective way of generating the required environment both in and out of a reverberant chamber. This paper presents the development of such a sound system, which uses a state-of-the-art random vibration controller to perform closed-loop control in the reverberant chamber at Sandia National Laboratories. Test data are presented that demonstrate the narrow-band controllability, performance, and some limitations of commercial sound generation equipment in a reverberant chamber.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
We present two ways in which dynamic self-assembly can be used to perform computation, via stochastic protein networks and self-assembling software. We describe our protein-emulating agent-based simulation infrastructure, which is used for both types of computations, and the few agent properties sufficient for dynamic self-assembly. Examples of protein-network-based computation and self-assembling software are presented. We describe some novel capabilities that are enabled by the inherently dynamic nature of the self-assembling executable code. © Springer-Verlag 2004.
AEU - International Journal of Electronics and Communications
Finite difference equations are derived for the simulation of dielectric waveguides using an Hz-Ez formulation defined on a nonuniform triangular grid. The resulting equations may be solved as a banded eigenproblem for waveguide structures of arbitrary shape composed of regions of piecewise constant isotropic dielectric, and all transverse fields then computed from the solutions. Benchmark comparisons are presented for problems with analytic solutions, as well as a sample calculation of the propagation loss of a hollow Bragg fiber.
International Symposium on Combustion, Abstracts of Works-in-Progress Posters
The structure of laminar inverse diffusion flames (IDF) of methane and ethylene in air was studied using a cylindrical co-flowing burner. IDF are similar to normal diffusion flames, except that the relative positions of the fuel and oxidizer are reversed. Radiation from soot surrounding the IDF masked the reaction zone in visible images. As a result, flame heights determined from visible images were overestimated. The height of the reaction zone as indicated by OH LIF was a more relevant measure of height. The concentration and position of PAH and soot were observed using LIF and laser-induced incandescence (LII). PAH LIF and soot LII indicated that PAH and soot are present on the fuel side of the flame, and that soot is located closer to the reaction zone than PAH. Ethylene flames produced significantly higher PAH LIF and soot LII signals than methane flames, which is consistent with the sooting propensity of ethylene. This is an abstract of a paper presented at the 30th International Symposium on Combustion (Chicago, IL, 7/25-30/2004).
American Society of Mechanical Engineers, Aerospace Division (Publication) AD
This paper describes an array of in-plane piezoelectric actuator segments laminated onto a corner-supported substrate to create a thin bimorph for reflector applications. An electric field distribution over the actuator segments causes the segments to expand or contract, thereby effecting plate deflection. To achieve a desired bimorph shape, the shape is first expressed as a two-dimensional series expansion. Then, using coefficients from the series expansion, an inverse problem is solved that determines the electric field distribution realizing the desired plate shape. A static example is presented where the desired deflection shape is a paraboloid. Copyright © 2004 by ASME.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
An important component of ubiquitous computing is the ability to quickly sense the dynamic environment to achieve context awareness in real time. To pervasively capture detailed information about movements, we present a decentralized algorithm for feature extraction within a wireless sensor network. By approaching this problem in a distributed manner, we are able to work within the real constraints of wireless battery power and its effects on processing and network communications. We describe a hardware platform developed for low-power ubiquitous wireless sensing and a distributed feature extraction methodology that provides the user with more information about events while reducing power consumption. We demonstrate how collaboration between sensor nodes can provide a means of organizing large networks into information-based clusters. © Springer-Verlag 2004.
Journal of Chemometrics
Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to reduce substantially the computational burden required for NNLS problems having large numbers of observation vectors. Copyright © 2005 John Wiley & Sons, Ltd.
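The paper's combinatorial rearrangement of the active-set method is not reproduced here; as a minimal stand-in, a cyclic coordinate-descent sketch shows the non-negativity-constrained subproblem that every such solver addresses (pure Python, small dense matrices only):

```python
def nnls_coordinate_descent(A, b, iters=200):
    """Minimize ||A x - b||^2 subject to x >= 0 by cyclic coordinate
    descent with clamping at zero (a simple stand-in for active-set
    NNLS). A is a list of rows; returns the solution vector x."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            # residual excluding column j's current contribution
            r = [b[i] - sum(A[i][k] * x[k] for k in range(n) if k != j)
                 for i in range(m)]
            denom = sum(c * c for c in col)
            if denom > 0.0:
                # unconstrained 1-D minimizer, clamped to be non-negative
                x[j] = max(0.0, sum(col[i] * r[i] for i in range(m)) / denom)
    return x

# Unconstrained least squares would give x = (3, -2); the
# non-negativity constraint clamps the second coefficient to zero.
x = nnls_coordinate_descent([[1.0, 0.0], [0.0, 1.0]], [3.0, -2.0])
```

Active-set methods reach the same solution by explicitly tracking which variables are clamped, which is the bookkeeping the paper's combinatorial reasoning reorganizes for many right-hand sides.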
European Physical Journal D
We report the cooling of nitric oxide molecules in a single collision between an argon atom and an NO molecule at collision energies of 5.65 ± 0.36 kJ/mol and 14.7 ± 0.9 kJ/mol in a crossed molecular beam apparatus. We have produced in significant numbers (∼10^8 molecules cm^-3 per quantum state) translationally cold NO(^2Π_1/2, v′ = 0, j′ = 7.5) molecules in a specific quantum state with an upper-limit laboratory-frame rms velocity of 14.8 ± 1.1 m/s, corresponding to a temperature of 406 ± 28 mK. The translational cooling results from the kinematic collapse of the velocity distribution of the NO molecules after collision. Increasing the collision energy by increasing the velocity of the argon atoms, as we do here, does shift the scattering angle at which the cold molecules appear, but does not result in an experimentally measurable change in the velocity spread of the cold NO. This is entirely consistent with our analysis of the kinematics of the scattering, which predicts that the velocity spread will actually decrease with increasing argon atom velocity. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2004.
Collection of Technical Papers - 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference
Abstract not provided.
Proceedings of the ACM/IEEE SC 2004 Conference: Bridging Communities
It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will produce progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. The paper reviews some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 zettaflops), and also shows the basic nature of these extreme problems. The paper reviews work by others showing that the theoretical maximum supercomputer power is very high indeed, but explains how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards, at 10^16-10^19 FLOPS (100 petaflops to 10 exaflops, depending on architecture), but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper explores the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach is to quantify the maximum performance permitted by the laws of physics for extension of current technology and then to find a design that approaches this limit closely. We study a "multi-architecture" for supercomputers that combines a microprocessor with other "advanced" concepts and find that it can reach these limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the "advanced" features would provide a boost to the limits of performance.
International Symposium on Combustion, Abstracts of Works-in-Progress Posters
Many practical combustion devices and uncontrolled fires involve high Reynolds number nonpremixed turbulent flames that feature non-equilibrium finite-rate chemistry effects, e.g., local flame extinction and reignition, where enhanced transport of mass and heat away from the flame due to rapid turbulent mixing exceeds the local burning rate. Probability density function methods have shown promise in predicting piloted nonpremixed CH4-air flames over a range of Reynolds numbers and varying degrees of flame extinction and reignition. A study was carried out to quantify and characterize the kinetics of localized extinction and reignition in the Sandia flames D, E, and F, for which detailed velocity and scalar data exist. PDF methods in large eddy simulation were used to predict the filtered mass density function (FMDF). A simple idealized mixing simulation of a nonpremixed turbulent fuel jet in an air co-flow was performed. Mixing statistics from the Monte Carlo-based FMDF solution of the chemical species scalar were compared to those from a more traditional Eulerian mixing simulation using gradient transport-based subgrid closure models. The FMDF solution will be performed with the Euclidean minimum spanning tree mixing model that uses the phenomenological connection between physical space and state space for mixing events. This is an abstract of a paper presented at the 30th International Symposium on Combustion (Chicago, IL, 7/25-30/2004).
ACS National Meeting Book of Abstracts
The synthesis, characterization, and separations capability of defect-free, thin-film zeolite membranes were presented. The one-micron thick sodium-aluminosilicate films of Silicalite-1 and ZSM-5 were synthesized by hydrothermal methods on either disk- or tube-supports. Techniques for growing membranes on both Al2O3 substrates as well as oxide-coated stainless steel substrates were presented. The resulting defect-free zeolite films had high flux rates at room temperature (∼10^-7 mole/(Pa·sec·m^2)) and showed selective separations (3-7) between pure gases of H2 and CH4, O2, N2, CO2, CO, SF6. Results from mixed gas studies showed similar flux rates as pure gases with enhanced selectivity (15-50) for H2. The selectivity through both Silicalite-1 and ZSM-5 membranes was compared and contrasted for several gas mixtures. Data comparisons for defect-free and "defect-filled" membranes were also discussed. Under operation, the flow through these membranes quickly reached its maximum value and was stable over long periods of time. Results from experiments at high temperatures, ≤ 300°C, were compared with the data obtained at room temperature. This is an abstract of a paper presented at the 228th ACS National Meeting (Philadelphia, PA, 8/22-26/2004).
American Society of Mechanical Engineers, Heat Transfer Division, (Publication) HTD
A new constitutive model relating proton conductivity to water content in a polymer electrolyte membrane is presented. Our constitutive model is based on Faraday's law and the Nernst-Einstein equation; it depends on the molar volumes of dry membrane and water but otherwise requires no adjustable parameters. We derive our constitutive model in two different ways. Predictions of proton conductivity as a function of membrane water content computed from our constitutive model are compared with those from a representative correlation and other models, as well as with experimental data from the literature and data obtained in our laboratory using a 4-point probe. Copyright © 2004 by ASME.
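The Nernst-Einstein relation at the core of such a model can be sketched numerically. The carrier concentration, diffusivity, and temperature below are illustrative values (not the paper's data), and the paper's full constitutive model additionally involves the molar volumes of dry membrane and water:

```python
# Nernst-Einstein relation: sigma = z^2 F^2 c D / (R T)
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def nernst_einstein_conductivity(c, D, z=1, T=303.0):
    """Ionic conductivity (S/m) from carrier charge number z,
    concentration c (mol/m^3), and diffusivity D (m^2/s) at
    temperature T (K), via the Nernst-Einstein equation."""
    return z**2 * F**2 * c * D / (R * T)

# Hypothetical proton concentration and diffusivity for a hydrated membrane
sigma = nernst_einstein_conductivity(c=1200.0, D=5e-10)
```

Note that conductivity is linear in both c and D, which is why a water-content model reduces to tracking how hydration changes those two quantities.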
American Society of Mechanical Engineers, Micro-Electro Mechanical Systems Division, (Publications) MEMS
A coupled-physics analysis code has been developed to simulate the electrical, thermal, and mechanical responses of surface micromachined (SMM) actuators. Our objective is to optimize the design and performance of these microactuators. Since many new designs of these electro-thermal actuators have shuttles or platforms between beams, calculating the local Joule heating requires a multi-dimensional electrostatics analysis. Moreover, the electrical solution is strongly coupled to the temperature distribution since the electrical resistivity is temperature dependent. Thus, it is essential to perform a more comprehensive simulation that solves the coupled electrostatics, thermal, and mechanical equations. Results of the coupled-physics analyses will be presented. Copyright © 2004 by ASME.
American Society of Mechanical Engineers, Heat Transfer Division, (Publication) HTD
The process of removing liquid water droplets in polymer electrolyte fuel cells (PEFC) is examined using a simple analytical model and two-dimensional simulations. Specifically, the stability of a droplet adhering to the wall of the cathode flow channel is examined as a function of the geometry of the flow channel, the applied pressure gradient, and the wetting properties. The result is a prediction of the critical droplet size as a function of the difference between the advancing and receding contact angles, or contact angle hysteresis. The analytical model is shown to qualitatively predict this stability limit when compared to two-dimensional simulation results. The simulations are performed using both Arbitrary Lagrangian Eulerian (ALE) methods and level set methods. The ALE and level set predictions are shown to be in good agreement. Copyright © 2004 by ASME.
American Society of Mechanical Engineers, Micro-Electro Mechanical Systems Division, (Publications) MEMS
Surface micromachined structures with high aspect ratios are often utilized as sensor platforms in microelectromechanical systems (MEMS) devices. These structures generally fail by stiction, or adhesion to the underlying substrate, during operation or related initial processing. Such failures represent a major disadvantage in mass production of MEMS devices with highly compliant structures. Fortunately, most stiction failures can be prevented or repaired in a number of ways. Passive approaches implemented during fabrication or release include: (1) utilizing special low adhesion coatings and (2) processing with low surface energy rinse agents. These methods, however, increase both the processing time and cost and are not entirely effective. Active approaches, such as illuminating stiction-failed microstructures with pulsed laser irradiation, have proven to be very effective for stiction repair [1-5]. A more recent and promising method, introduced by Gupta et al. [6], utilized laser-induced stress waves to repair stiction-failed microstructures. This approach represents a logical extension of the laser spallation technique for debonding thin films from substrates [7-9]. The method transmits stress waves into MEMS structures by laser-irradiating the back side of the substrate opposite the stiction-failed structures. This paper presents an experimental study that compares the stress wave repair method with the thermomechanical repair method on identical arrays of stiction-failed cantilevers. Copyright © 2004 by ASME.
Applications of X-Rays in Mechanical Engineering 2004
X-ray radiography has long been recognized as a valuable tool for detecting internal features and flaws. Recent developments in microfabrication and composite materials have extended inspection requirements to the resolution limits of conventional radiography. Our work has been directed toward pushing both detection and measurement capabilities to a smaller scale. Until recently, we have used conventional contact radiography, optimized to resolve small features. With the recent purchase of a nano-focus (sub-micron) x-ray source, we are now investigating projection radiography, phase contrast imaging and micro-computed tomography (μ-CT). Projection radiography produces a magnified image that is limited in spatial resolution mainly by the source size, not by film grain size or detector pixel size. Under certain conditions phase contrast can increase the ability to resolve small features such as cracks, especially in materials with low absorption contrast. Micro-computed tomography can provide three-dimensional measurements on a micron scale and has been shown to provide better sensitivity than simple radiographs. We have included applications of these techniques to small-scale measurements not easily made by mechanical or optical means. Examples include void detection in meso-scale nickel MEMS parts, measurement of edge profiles in thick gold lithography masks, and characterization of the distribution of phases in composite materials. Our work, so far, has been limited to film. Copyright © 2004 by ASME.
2004 SEG Annual Meeting
Three-dimensional seismic wave propagation within a heterogeneous isotropic poroelastic medium is simulated with an explicit, time-domain, finite-difference algorithm. A system of thirteen coupled first-order partial differential equations is solved for the velocity vector components, stress tensor components, and pressure associated with the solid and fluid constituents of the composite medium. A massively parallel computational implementation, utilizing a spatial domain decomposition strategy, allows investigation of large-scale earth models and/or broadband wave propagation within reasonable execution times.
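A one-dimensional acoustic analogue (two coupled equations rather than thirteen) sketches the flavor of an explicit staggered-grid velocity-stress update; the grid size, pulse shape, and CFL number below are illustrative choices, not the paper's:

```python
import math

def wave_1d(n=200, steps=300, c=1.0, dx=1.0, cfl=0.5):
    """Explicit leapfrog velocity-stress update for the 1-D acoustic
    wave equation: velocities live at cell centers, stresses at cell
    faces, and each field is advanced from the spatial gradient of
    the other."""
    dt = cfl * dx / c
    v = [0.0] * n            # particle velocity at cell centers
    s = [0.0] * (n + 1)      # stress at cell faces (ends held at zero)
    for i in range(n):       # initial Gaussian velocity pulse
        v[i] = math.exp(-((i - n // 2) / 5.0) ** 2)
    for _ in range(steps):
        for i in range(1, n):        # stress from velocity gradient
            s[i] += dt * c * c * (v[i] - v[i - 1]) / dx
        for i in range(n):           # velocity from stress gradient
            v[i] += dt * (s[i + 1] - s[i]) / dx
    return v

v = wave_1d()
```

The parallel implementation described in the abstract distributes such a grid across processors, with each subdomain exchanging a halo of boundary values per time step.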
4OR
In combinatorial optimization, one is frequently faced with linear programming (LP) problems with exponentially many constraints, which can be solved either using separation or what we call compact optimization. The former technique relies on a separation algorithm, which, given a fractional solution, tries to produce a violated valid inequality. Compact optimization relies on describing the feasible region of the LP by a polynomial number of constraints, in a higher dimensional space. A commonly held belief is that compact optimization does not perform as well as separation in practice. In this paper, we report on an application in which compact optimization does in fact largely outperform separation. The problem arises in structural proteomics, and concerns the comparison of 3-dimensional protein folds. Our computational results show that compact optimization achieves an improvement of up to two orders of magnitude over separation. We discuss some reasons why compact optimization works in this case but not, e.g., for the LP relaxation of the TSP. © Springer-Verlag 2004.
This report summarizes a series of structural calculations that examine the effects of raising the Waste Isolation Pilot Plant repository horizon 2.43 meters upward from the original design level. These calculations allow evaluation of various features incorporated in conceptual models used for performance assessment. Material presented in this report supports the regulatory compliance re-certification, and therefore begins by replicating the calculations used in the initial compliance certification application. Calculations are then repeated for grid changes appropriate for the new horizon raised to Clay Seam G. Results are presented in three main areas: (1) disposal room porosity, (2) disturbed rock zone characteristics, and (3) anhydrite marker bed failure. No change to the porosity surface for the compliance re-certification application is necessary to account for raising the repository horizon, because the new porosity surface is essentially identical. The disturbed rock zone evolution and devolution are charted in terms of a stress invariant criterion over the regulatory period. This model shows that the damage zone does not extend upward to MB 138, but does reach MB 139 below the repository. Damaged salt would be expected to heal in nominally 100 years. The anhydrite marker beds sustain states of stress that promote failure, and substantial marker bed deformation into the room assures that fractured anhydrite will persist in the proximity of the disposal rooms.
This report documents the results obtained during a one-year Laboratory Directed Research and Development (LDRD) initiative aimed at investigating coupled structural acoustic interactions by means of algorithm development and experiment. Finite element acoustic formulations have been developed based on fluid velocity potential and fluid displacement. Domain decomposition and diagonal scaling preconditioners were investigated for parallel implementation. A formulation that includes fluid viscosity and that can simulate both pressure and shear waves in fluid was developed. An acoustic wave tube was built, tested, and shown to be an effective means of testing acoustic loading on simple test structures. The tube is capable of creating a semi-infinite acoustic field due to nonreflecting acoustic termination at one end. In addition, a micro-torsional disk was created and tested for the purposes of investigating acoustic shear wave damping in microstructures, and the slip boundary conditions that occur along the wet interface when the Knudsen number becomes sufficiently large.
Motivated by observations about job runtimes on the CPlant system, we use a trace-driven microsimulator to begin characterizing the performance of different classes of allocation algorithms on jobs with different communication patterns in space-shared parallel systems with mesh topology. We show that relative performance varies considerably with communication pattern. The Paging strategy using the Hilbert space-filling curve and the Best Fit heuristic performed best across several communication patterns.
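The Paging strategy's use of the Hilbert space-filling curve rests on the curve's locality: consecutive 1-D indices map to adjacent mesh cells, so processors allocated from a contiguous run of indices form a compact region. A standard index-to-coordinate sketch (not CPlant code) illustrates this property:

```python
def hilbert_d2xy(n, d):
    """Map a 1-D Hilbert-curve index d to (x, y) coordinates on an
    n x n grid (n a power of two), using the classic bitwise
    rotate-and-offset construction."""
    x = y = 0
    s = 1
    t = d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Traversal order of an 8x8 mesh: every cell visited once, and each
# step moves to an orthogonally adjacent cell.
order = [hilbert_d2xy(8, d) for d in range(64)]
```

That adjacency is precisely why curve-ordered allocation tends to keep a job's processors close together, reducing communication distance relative to naive row-major ordering.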
We have made progress in developing a new statistical mechanics approach to designing self-organizing systems that is unique to SNL. The primary application target for this ongoing research has been the development of new kinds of nanoscale components and hardware systems. However, this research also enables an out-of-the-box connection to the field of software development. With appropriate modification, the collective behavior physics ideas for enabling simple hardware components to self-organize may also provide design methods for a new class of software modules. Our current physics simulations suggest that populations of these special software components would be able to self-assemble into a variety of much larger and more complex software systems. If successful, this would provide a radical (disruptive technology) path to developing complex, high-reliability software unlike any known today. This high-risk, high-payoff opportunity does not fit well into existing SNL funding categories, as it is well outside the mainstreams of both conventional software development practices and the nanoscience research area that spawned it. This LDRD effort was aimed at developing and extending the capabilities of self-organizing/assembling software systems, and at demonstrating the unique capabilities and advantages of this radical new approach to software development.
Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.
An estimate of the distribution of fatigue ranges or extreme loads for wind turbines may be obtained by separating the problem into two uncoupled parts: (1) a turbine-specific portion, independent of the site, and (2) a site-specific description of environmental variables. We consider contextually appropriate probability models to describe the turbine-specific response for extreme loads or fatigue. The site-specific portion is described by a joint probability distribution of a vector of environmental variables, which characterize the wind process at the hub-height of the wind turbine. Several approaches are considered for combining the two portions to obtain an estimate of the extreme load, e.g., 50-year loads, or of fatigue damage. We assess the efficacy of these models in obtaining accurate estimates of the turbine response, including various levels of epistemic uncertainty.
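The two-part decomposition amounts to integrating a turbine-specific response model over the site-specific environmental distribution, which can be sketched as Monte Carlo sampling. The linear load model and Rayleigh wind-speed distribution below are hypothetical stand-ins, not the models assessed in the paper:

```python
import math
import random

def long_term_mean_load(a=2.0, sigma=6.0, n=200_000, seed=1):
    """Combine a turbine-specific response model (here a hypothetical
    linear load L = a * V) with a site-specific wind-speed distribution
    (Rayleigh with scale sigma) by Monte Carlo sampling over the
    environmental variable V."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # inverse-CDF Rayleigh draw; 1 - U lies in (0, 1], avoiding log(0)
        v = sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        total += a * v
    return total / n

# For a Rayleigh distribution, E[V] = sigma * sqrt(pi/2), so the
# estimate should approach a * sigma * sqrt(pi/2).
est = long_term_mean_load()
```

The same sampling loop, with an exceedance indicator in place of the mean, yields long-term load exceedance probabilities such as those behind a 50-year load estimate.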
The quantitative analysis of ammonia binding sites in the Davison (Type 3A) zeolite desiccant using solid-state {sup 15}N MAS NMR spectroscopy is reported. By utilizing {sup 15}N-enriched ammonia ({sup 15}NH{sub 3}) gas, the different adsorption/binding sites within the zeolite were investigated as a function of NH{sub 3} loading. Using {sup 15}N MAS NMR, multiple sites were resolved that have distinct cross-polarization dynamics and chemical shift behavior. These differences in the {sup 15}N NMR were used to characterize the adsorption environments in both the pure 3A zeolite and the silicone-molded forms of the desiccant.
This SAND report provides the technical progress through October 2003 of the Sandia-led project, 'Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling,' funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes.
In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply the new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus, a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the complex carbon fixation behavior of Synechococcus at different levels of resolution. Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex and more heterogeneous and that require coupling to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort.
More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org.
Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g., weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of experimental work evaluating mass transfer processes from solid phase energetics to soil pore water. The experimental work is used as a basis to formulate a mass transfer numerical model, which has been incorporated into the porous media simulation code T2TNT. This report documents the results of the Phase III experimental effort, which evaluated the impacts of surface deposits versus buried deposits, energetic material particle size, and low order detonation debris. Next year, the energetic material mass transfer model will be refined and a 2-d screening model will be developed for initial site-specific applications. A technology development roadmap was created to show how specific R&D efforts are linked to technology and products for key customers.
A Micro Electro Mechanical System (MEMS) typically consists of micron-scale parts that move through a gas at atmospheric or reduced pressure. In this situation, the gas-molecule mean free path is comparable to the geometric features of the microsystem, so the gas flow is noncontinuum. When mean-free-path effects cannot be neglected, the Boltzmann equation must be used to describe the gas flow. Solution of the Boltzmann equation is difficult even for the simplest case because of its sevenfold dimensionality (one temporal dimension, three spatial dimensions, and three velocity dimensions) and because of the integral nature of the collision term. The Direct Simulation Monte Carlo (DSMC) method is the method of choice to simulate high-speed noncontinuum flows. However, since DSMC uses computational molecules to represent the gas, the inherent statistical noise must be minimized by sampling large numbers of molecules. Since typical microsystem velocities are low (< 1 m/s) compared to molecular velocities ({approx}400 m/s), the number of molecular samples required to achieve 1% precision can exceed 10{sup 10} per cell. The Discrete Velocity Gas (DVG) method, an approach motivated by radiation transport, provides another way to simulate noncontinuum gas flows. Unlike DSMC, the DVG method restricts molecular velocities to have only certain discrete values. The transport of the number density of a velocity state is governed by a discrete Boltzmann equation that has one temporal dimension and three spatial dimensions and a polynomial collision term. Specification and implementation of DVG models are discussed, and DVG models are applied to Couette flow and to Fourier flow. While the DVG results for these benchmark problems are qualitatively correct, the errors in the shear stress and the heat flux can be order-unity even for DVG models with 88 velocity states.
It is concluded that the DVG method, as described herein, is not sufficiently accurate to simulate the low-speed gas flows that occur in microsystems.
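The sample-count estimate quoted above follows from the standard scaling of statistical noise in particle methods (this is the textbook relation, not a calculation reproduced from the report): the relative error of a sampled mean velocity is roughly the thermal-to-flow speed ratio divided by the square root of the number of samples.

```python
# Relative error of a DSMC-sampled mean velocity scales as (c / u) / sqrt(N),
# so the sample count for a target precision eps is N = (c / (eps * u))**2.
c = 400.0    # typical molecular thermal speed [m/s]
u = 1.0      # typical microsystem flow speed [m/s]
eps = 0.01   # target 1% relative precision
N = (c / (eps * u)) ** 2
print(f"samples per cell needed: {N:.1e}")
```

With these representative numbers the estimate lands at order 10{sup 9}-10{sup 10} samples per cell, consistent with the abstract's statement that the requirement can exceed 10{sup 10}.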
CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted, due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.
This report is the latest in a continuing series that highlights the recent technical accomplishments associated with the work being performed within the Materials and Process Sciences Center. Our research and development activities primarily address the materials-engineering needs of Sandia's Nuclear-Weapons (NW) program. In addition, we have significant efforts that support programs managed by the other laboratory business units. Our wide range of activities occurs within six thematic areas: Materials Aging and Reliability, Scientifically Engineered Materials, Materials Processing, Materials Characterization, Materials for Microsystems, and Materials Modeling and Simulation. We believe these highlights collectively demonstrate the importance that a strong materials-science base has on the ultimate success of the NW program and the overall DOE technology portfolio.
The catalytic combustion of natural gas has been the topic of much research over the past decade. Interest in this technology results from a desire to decrease or eliminate the emissions of harmful nitrogen oxides (NOX) from gas turbine power plants. A low-pressure drop catalyst support, such as a ceramic monolith, is ideal for this high-temperature, high-flow application. A drawback to the traditional honeycomb monoliths under these operating conditions is poor mass transfer to the catalyst surface in the straight-through channels. 'Robocasting' is a unique process developed at Sandia National Laboratories that can be used to manufacture ceramic monoliths with alternative 3-dimensional geometries, providing tortuous pathways to increase mass transfer while maintaining low pressure drops. This report details the mass transfer effects for novel 3-dimensional robocast monoliths, traditional honeycomb-type monoliths, and ceramic foams. The mass transfer limit is experimentally determined using the probe reaction of CO oxidation over a Pt / {gamma}-Al{sub 2}O{sub 3} catalyst, and the pressure drop is measured for each monolith sample. Conversion versus temperature data is analyzed quantitatively using well-known dimensionless mass transfer parameters. The results show that, relative to the honeycomb monolith support, considerable improvement in mass transfer efficiency is observed for robocast samples synthesized using an FCC-like geometry of alternating rods. Also, there is clearly a trade-off between enhanced mass transfer and increased pressure drop, which can be optimized depending on the particular demands of a given application.
This document introduces the use of Trilinos, version 3.1. Trilinos has been written to support, in a rigorous manner, the solver needs of the engineering and scientific applications at Sandia National Laboratories. The aim of this manuscript is to present the basic features of some of the Trilinos packages. The presented material includes the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel methods with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. With the help of several examples, some of the most important classes and methods are detailed for the inexperienced user. Most examples are commented extensively throughout the text; further comments can be found in the source of each example. This document is a companion to the Trilinos User's Guide and Trilinos Development Guides. The documentation included in each of the Trilinos packages is also of fundamental importance.
We report our conclusions in support of the FY 2003 Science and Technology Milestone ST03-3.5. The goal of the milestone was to develop a research plan for expanding Sandia's capabilities in materials modeling and simulation. From inquiries and discussions with technical staff during FY 2003, we conclude that it is premature to formulate the envisioned coordinated research plan. The more appropriate goal is to develop a set of computational tools for making scale transitions and to accumulate experience with applying these tools to real test cases, so as to enable us to attack each new problem with higher confidence of success.
Simulation-based life-cycle-engineering and the ASCI program have resulted in models of unprecedented size and fidelity. The validation of these models requires high-resolution, multi-parameter diagnostics. Within the thermal-fluids disciplines, the need for detailed, high-fidelity measurements exceeds the limits of current engineering sciences capabilities and severely tests the state of the art. The focus of this LDRD is the development and application of filtered Rayleigh scattering (FRS) for high-resolution, nonintrusive measurement of gas-phase velocity and temperature. With FRS, the flow is laser-illuminated and Rayleigh scattering from naturally occurring sources is detected through a molecular filter. The filtered transmission may be interpreted to yield point or planar measurements of three-component velocities and/or thermodynamic state. Different experimental configurations may be employed to obtain compromises between spatial resolution, time resolution, and the quantity of simultaneously measured flow variables. In this report, we present the results of a three-year LDRD-funded effort to develop FRS combustion thermometry and Aerosciences velocity measurement systems. The working principles and details of our FRS opto-electronic system are presented in detail. For combustion thermometry we present 2-D, spatially correlated FRS results from nonsooting premixed and diffusion flames and from a sooting premixed flame. The FRS-measured temperatures are accurate to within {+-}50 K (3%) in a premixed CH4-air flame and within {+-}100 K for a vortex-strained diluted CH4-air diffusion flame where the FRS technique is severely tested by large variation in scattering cross section. In the diffusion flame work, FRS has been combined with Raman imaging of the CH4 fuel molecule to correct for the local light scattering properties of the combustion gases. 
To our knowledge, this is the first extension of FRS to nonpremixed combustion and the first use of joint FRS-Raman imaging. FRS has been applied to a sooting C2H4-air flame and combined with LII to assess the upper sooting limit where FRS may be utilized. The results from this sooting flame show that FRS has potential for quantitative temperature imaging for soot volume fractions of order 0.1 ppm. FRS velocity measurements have been performed in a Mach 3.7 overexpanded nitrogen jet. The FRS results are in good agreement with expected velocities as predicted by inviscid analysis of the jet flowfield. We have constructed a second FRS opto-electronic system for measurements at Sandia's hypersonic wind tunnel. The details of this second FRS system are provided here. This second system is currently being used for velocity characterization of this production hypersonic facility.
Molecular analysis of cancer, at the genomic level, could lead to individualized patient diagnostics and treatments. The developments to follow will signal a significant paradigm shift in the clinical management of human cancer. Despite our initial hopes, however, it seems that simple analysis of microarray data cannot elucidate clinically significant gene functions and mechanisms. Extracting biological information from microarray data requires a complicated path involving multidisciplinary teams of biomedical researchers, computer scientists, mathematicians, statisticians, and computational linguists. The integration of the diverse outputs of each team is the limiting factor in the progress to discover candidate genes and pathways associated with the molecular biology of cancer. Specifically, one must deal with sets of significant genes identified by each method and extract whatever useful information may be found by comparing these different gene lists. Here we present our experience with such comparisons, and share methods developed in the analysis of an infant leukemia cohort studied on Affymetrix HG-U95A arrays. In particular, spatial gene clustering, hyper-dimensional projections, and computational linguistics were used to compare different gene lists. In spatial gene clustering, different gene lists are grouped together and visualized on a three-dimensional expression map, where genes with similar expressions are co-located. In another approach, projections from gene expression space onto a sphere clarify how groups of genes can jointly have more predictive power than groups of individually selected genes. Finally, online literature is automatically rearranged to present information about genes common to multiple groups, or to contrast the differences between the lists. The combination of these methods has improved our understanding of infant leukemia. 
While the complicated reality of the biology dashed our initial, optimistic hopes for simple answers from microarrays, we have made progress by combining very different analytic approaches.
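The core bookkeeping problem described above, comparing significant-gene lists produced by different analysis teams, can be sketched with simple set operations. The gene symbols below are hypothetical placeholders, not results from the infant leukemia cohort.

```python
# Hypothetical significant-gene lists from two analysis pipelines
list_a = {"MLL", "HOXA9", "FLT3", "MEIS1", "CD44"}
list_b = {"FLT3", "MEIS1", "BCL2", "CD44", "RUNX1"}

# Genes found by both methods, and an overall overlap measure (Jaccard index)
common = list_a & list_b
only_a = list_a - list_b
jaccard = len(common) / len(list_a | list_b)

print(sorted(common))   # candidates supported by multiple methods
print(sorted(only_a))   # method-specific candidates worth scrutiny
print(f"Jaccard overlap: {jaccard:.2f}")
```

In practice the interesting science lies in the method-specific lists as much as in the intersection, which is why the visualization and literature-mining steps described above are applied to both.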
A mine dog evaluation project initiated by the Geneva International Center for Humanitarian Demining is evaluating the capability and reliability of mine detection dogs. The performance of field-operational mine detection dogs will be measured in test minefields in Afghanistan containing actual, but unfused landmines. Repeated performance testing over two years through various seasonal weather conditions will provide data simulating near real world conditions. Soil samples will be obtained adjacent to the buried targets repeatedly over the course of the test. Chemical analysis results from these soil samples will be used to evaluate correlations between mine dog detection performance and seasonal weather conditions. This report documents the analytical chemical methods and results from the fifth batch of soils received. This batch contained samples from Kharga, Afghanistan collected in June 2003.
IEEE International Conference on Intelligent Robots and Systems
Statistical active contour models (aka statistical pressure snakes) have attractive properties for use in mobile manipulation platforms as both a method for use in visual servoing and as a natural component of a human-computer interface. Unfortunately, the constantly changing illumination expected in outdoor environments presents problems for statistical pressure snakes and for their image gradient-based predecessors. This paper introduces a new color-based variant of statistical pressure snakes that gives superior performance under dynamic lighting conditions and improves upon the previously published results of attempts to incorporate color imagery into active deformable models.
2003 IEEE Power Engineering Society General Meeting, Conference Proceedings
The term "Stationary Battery" tends to conjure up many interpretations among power engineers, depending on one's perspective on battery energy storage. The primary application that immediately comes to mind is standby or UPS use, but that is not the only application for stationary batteries. Changes are currently underway in which large stationary batteries are being used in grid-tied cycling applications for Distributed Energy Resource systems. The current IEEE standards developed for standby applications do not apply to these new stationary applications, yet many engineers are unaware of the not-so-subtle differences between standby-battery and cycling-battery O&M requirements. The purpose of this paper is to introduce engineers who will be using stationary batteries in cycling applications to the differences among the battery system management standards currently in use in the IEEE, so as to preclude the improper application of standards intended for standby applications to cycling applications.
2003 IEEE Power Engineering Society General Meeting, Conference Proceedings
As the need for stored electrical energy has grown, the lead-acid battery has been the primary storage component until very recently. Although improvements in lead-acid technology have been made over the years, short life expectancy and poor component reliability have driven energy storage customers in search of longer life and higher reliability storage technologies. New technology batteries have been developed as well as other non-battery storage devices that are meeting the needs for higher energy densities and more reliability. This paper discusses these emerging energy storage technologies and how they are being used in modern energy storage requirements.
Journal of Elasticity
The deformation of an infinite bar subjected to a self-equilibrated load distribution is investigated using the peridynamic formulation of elasticity theory. The peridynamic theory differs from the classical theory and other nonlocal theories in that it does not involve spatial derivatives of the displacement field. The bar problem is formulated as a linear Fredholm integral equation and solved using Fourier transform methods. The solution is shown to exhibit, in general, features that are not found in the classical result. Among these are decaying oscillations in the displacement field and progressively weakening discontinuities that propagate outside of the loading region. These features, when present, are guaranteed to decay provided that the wave speeds are real. This leads to a one-dimensional version of St. Venant's principle for peridynamic materials that ensures the increasing smoothness of the displacement field remotely from the loading region. The peridynamic result converges to the classical result in the limit of short-range forces. An example gives the solution to the concentrated load problem, and hence provides the Green's function for general loading problems.
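For context (this is the standard linear peridynamic formulation in one dimension, following Silling's theory rather than reproducing this paper's derivation), the static bar problem replaces spatial derivatives with an integral over a micromodulus kernel $C$, and Fourier transformation reduces it to algebra:

```latex
0 = \int_{-\infty}^{\infty} C(x'-x)\,\bigl[u(x') - u(x)\bigr]\,dx' + b(x),
\qquad
\widehat{u}(k) = \frac{\widehat{b}(k)}{M(k)},
\quad
M(k) = \int_{-\infty}^{\infty} C(\xi)\,\bigl(1 - \cos k\xi\bigr)\,d\xi ,
```

where $u$ is the displacement, $b$ the self-equilibrated body force, and $C$ an even kernel. The oscillations and propagating weak discontinuities noted above arise from the structure of $M(k)$, which, unlike the classical $E k^2$, need not be monotone in $k$; the classical result is recovered as the range of $C$ shrinks to zero.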
International Solar Energy Conference
The integration approaches utilized in the various stages of the Advanced Dish Development System (ADDS) project are presented and described, along with insights gained from integration of the ADDS. The ADDS project focuses on developing a product that meets the needs of the remote power market and on identifying key technology development needs; this focus has resulted in a system that is closer to commercialization. Persistence in solving problems, a willingness to break things, and hands-on involvement by design engineers are the key factors leading to rapid improvement of the project.
Proceedings of SPIE - The International Society for Optical Engineering
Modern high-performance Synthetic Aperture Radar (SAR) systems have evolved into highly versatile, robust, and reliable tactical sensors, offering images and information not available from other sensor systems. For example, real-time images are routinely formed by the Sandia-designed General Atomics (AN/APY-8) Lynx SAR yielding 4-inch resolution at 25 km range (representing better than arc-second resolutions) in clouds, smoke, and rain. Sandia's Real-Time Visualization (RTV) program operates an Interferometric SAR (IFSAR) system that forms three-dimensional (3-D) topographic maps in near real-time with National Imagery and Mapping Agency (NIMA) Digital Terrain Elevation Data (DTED) level 4 performance (3-meter post spacing with 0.8-meter height accuracy) or better. When exported to 3-D rendering software, this data allows remarkable interactive fly-through experiences. Coherent Change Detection (CCD) allows detecting tire tracks on dirt roads, footprints, and other minor, otherwise indiscernible ground disturbances long after their originators have left the scene. Ground Moving Target Indicator (GMTI) radar modes allow detecting and tracking moving vehicles. A Sandia program known as "MiniSAR" is developing technologies that are expected to culminate in a fully functioning, high-performance, real-time SAR that weighs less than 20 lbs. The purpose of this paper is to provide an overview of recent technology developments, as well as current on-going research and development efforts at Sandia National Laboratories.
Digest of Technical Papers-IEEE International Pulsed Power Conference
Sandia National Laboratories' Z machine provides a unique capability to a number of National Nuclear Security Administration (NNSA) and basic science communities, and routinely produces x-ray power more than 5 times, and energy 50 times, greater than any other non-pulsed power laboratory device. To address an increasing demand and widening range of research interests, Sandia's Z refurbishment (ZR) program intends to increase Z utilization by providing the capability to double the number of shots per year, improve the overall precision for better reproducibility and enhanced data quality, and increase delivered current to provide additional performance capability. Reliability and operations analysis has been included from the onset of the ZR program to maximize performance and operations capacity. Preliminary analysis using a system-level reliability model highlighted Z failure modes requiring reliability improvement to help meet the increased ZR requirements. Preliminary results from a Z and ZR operations simulation model identify, from an overall operations perspective that includes penalty costs and personnel resources, the scheduled maintenance activities and unscheduled repairs most in need of reduced time requirements and reduced rates of occurrence.
Proceedings of the International Conference on Radioactive Waste Management and Environmental Remediation, ICEM
A Vadose Zone Monitoring System (VZMS) was used for the long-term performance assessment of a corrective action management unit (CAMU) containment cell at Sandia National Laboratories, New Mexico. A cost saving of approximately $200 million was realized by utilization of the CAMU versus off-site waste disposition. The VZMS permits the analysis of volatile organic compound (VOC) concentrations in the soil gas directly underlying the containment cell. The configuration of the VZMS allowed for changes in the requirements for selected monitoring components, monitoring frequency, and level of sensitivity.
Proceedings of SPIE - The International Society for Optical Engineering
Coherent stereo pairs from cross-track synthetic aperture radar (SAR) collects allow fully automated correlation matching using magnitude and phase data. Yet automated feature matching (correspondence) becomes more difficult when imaging rugged terrain utilizing large stereo crossing angle geometries, because high-relief features can undergo significant spatial distortions. These distortions sometimes cause traditional, shift-only correlation matching to fail. This paper presents a possible solution addressing this difficulty. Changing the complex correlation maximization search from shift-only to shift-and-scaling using the downhill simplex method results in higher correlation. This is shown on eight coherent spotlight-mode cross-track stereo pairs with stereo crossing angles averaging 93.7° collected over terrain with slopes greater than 20°. The resulting digital elevation maps (DEMs) are compared to ground truth. Using the shift-scaling correlation approach to calculate disparity, height errors decrease and the number of reliable DEM posts increases.
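The shift-and-scaling search can be sketched in one dimension with the downhill simplex (Nelder-Mead) method: jointly optimize a shift and a scale that maximize the normalized correlation between a reference chip and a geometrically distorted chip. This toy example uses synthetic real-valued signals rather than complex SAR data, and all signal parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic 1-D "image rows": a reference chip and a shifted, scaled copy of it
x = np.linspace(-10, 10, 401)
g = np.exp(-x**2)                                    # reference chip
true_shift, true_scale = 1.3, 1.15
f = np.exp(-((x - true_shift) / true_scale) ** 2)    # distorted chip

def neg_corr(p):
    """Negative normalized correlation under a trial (shift, scale) warp."""
    shift, scale = p
    gw = np.interp((x - shift) / scale, x, g, left=0.0, right=0.0)
    num = np.dot(f - f.mean(), gw - gw.mean())
    den = np.linalg.norm(f - f.mean()) * np.linalg.norm(gw - gw.mean())
    return -num / den

# Downhill simplex search over (shift, scale), starting from shift-only's view
res = minimize(neg_corr, x0=[0.0, 1.0], method="Nelder-Mead")
shift_hat, scale_hat = res.x
print(f"estimated shift={shift_hat:.3f}, scale={scale_hat:.3f}")
```

In the paper's setting the correlation is computed on complex (magnitude and phase) image chips and the scaling models terrain-induced foreshortening, but the optimization structure is the same: a derivative-free simplex search over the warp parameters.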
2003 IEEE Power Engineering Society General Meeting, Conference Proceedings
The existing IEEE stationary battery maintenance and testing standards fall into two basic categories: those associated with grid-tied standby applications and those associated with stand-alone photovoltaic cycling applications. These applications differ in several significant ways which in turn influence their associated standards. A review of the factors influencing the maintenance and testing of stationary battery systems provides the reasons for the differences between these standards and some of the hazards of using a standard inappropriate to the application. This review also provides a background on why these standards will need to be supplemented in the future to support emerging requirements of other applications, such as grid-tied cycling and photovoltaic hybrid applications.
33rd AIAA Fluid Dynamics Conference and Exhibit
The Detached Eddy Simulation (DES) and steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling approaches are examined for the incompressible flow over a square cross-section cylinder at a Reynolds number of 21,400. A compressible flow code is used which employs a second-order Roe upwind spatial discretization. Efforts are made to assess the numerical accuracy of the DES predictions with regards to statistical convergence, iterative convergence, and temporal and spatial discretization error. Three-dimensional DES simulations compared well with two-dimensional DES simulations, suggesting that the dominant vortex shedding mechanism is effectively two-dimensional. The two-dimensional simulations are validated via comparison to experimental data for mean and RMS velocities as well as Reynolds stress in the cylinder wake. The steady-state RANS models significantly overpredict the size of the recirculation zone, thus underpredicting the drag coefficient relative to the experimental value. The DES model is found to give good agreement with the experimental velocity data in the wake, drag coefficient, and recirculation zone length.
Proceedings of SPIE - The International Society for Optical Engineering
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
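The root-sum-of-squares combination described above reduces to a one-line calculation once the per-source tolerances and sensitivity weights are known. The sketch below is a generic illustration; the tolerances and weights are made-up numbers, not the values tabulated in the paper.

```python
import numpy as np

# Hypothetical error sources for one goniometer configuration:
# per-source tolerances (e.g., from mechanical drawings) and the
# configuration-dependent sensitivity weights from the error analysis.
tol = np.array([5.0, 3.0, 8.0, 2.0])      # source tolerances [arcsec]
weight = np.array([1.0, 0.7, 0.5, 1.2])   # sensitivity weighting factors

# Root-sum-of-squares error budget: sqrt(sum((w_i * tol_i)^2))
rss = np.sqrt(np.sum((weight * tol) ** 2))
print(f"RSS pointing error budget: {rss:.2f} arcsec")
```

A fully correctable error source simply gets weight zero after zeroing, a partially correctable one a reduced weight, and an uncorrectable one its full sensitivity.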
Transactions - Geothermal Resources Council
The implementation of GeoPowering the West (GPW), a communication and outreach component of the Department of Energy (DOE) effort to bring geothermal heat and power to homes and businesses across the West, is discussed. GPW helps to overcome financial risks, environmental misconceptions, and transactional costs; it creates public awareness and defines the benefits of geothermal development. GPW complements the research and development activities conducted by the department and its national laboratories, and will continue to provide technical assistance to states that are considering implementing renewable energy policies.
IEEE Transactions on Nuclear Science
This paper defines a process for selecting dosimetry-quality cross sections. The recommended cross-section evaluation depends on screening high-quality evaluations with quantified uncertainties, down-selecting based on comparison to experiments in standard neutron fields, and consistency checking in reference neutron fields. This procedure is illustrated for the 23Na(n,γ)24Na reaction.
IEEE Transactions on Nuclear Science
Thermoluminescent dosimeters (TLDs), particularly CaF2:Mn, are often used as photon dosimeters in mixed (n/γ) field environments. In these mixed field environments, it is desirable to separate the photon response of a dosimeter from the neutron response. For passive dosimeters that measure an integral response, such as TLDs, the separation of the two components must be performed by postexperiment analysis because the TLD reading system cannot distinguish between photon- and neutron-produced response. Using a model of an aluminum-equilibrated TLD-400 (CaF2:Mn) chip, a systematic effort has been made to analytically determine the various components that contribute to the neutron response of a TLD reading. The calculations were performed for five measured reactor neutron spectra and one theoretical thermal neutron spectrum. The five measured reactor spectra all have experimental values for aluminum-equilibrated TLD-400 chips. Calculations were used to determine the percentage of the total TLD response produced by neutron interactions in the TLD and aluminum equilibrator. These calculations will aid the Sandia National Laboratories-Radiation Metrology Laboratory (SNL-RML) in the interpretation of the uncertainty for TLD dosimetry measurements in the mixed field environments produced by SNL reactor facilities.
Proceedings of SPIE - The International Society for Optical Engineering
A UV generation system consisting of a quasi-monolithic nonplanar-ring-oscillator image-rotating OPO, called the RISTRA OPO, is presented. High beam quality and the absence of mirror adjustments due to the monolithic design make this OPO well-suited for demanding applications such as satellite deployment. Initial tests of self-seeding using low-quality flattop beams with poor spatial overlap between the OPO's cavity mode and the spatial mode of the injected signal pulse showed pump depletion of 63%.
Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings
The decoding of received error control encoded bit streams is fairly straightforward when the channel encoding algorithms are efficient and known. But if the encoding scheme is unknown or part of the data is missing, how would one design a viable decoder for the received transmission? Communication engineers may not frequently encounter this situation, but for computational biologists this is an immediate challenge as we attempt to decipher and understand the vast amount of sequence data produced by genome sequencing projects. Assuming the systematic parity check block code model of protein translation initiation, this work presents an approach for determining the generator matrix given a set of potential codewords. The resulting generators and corresponding parity matrices are applied to valid and invalid Escherichia coli K-12 MG1655 messenger RNA leader sequences. The generators constructed using strict subsets of the 16S ribosomal RNA performed better than those constructed using the (5,2) block code model in earlier work.
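The systematic parity-check construction underlying this approach can be illustrated with a toy binary code. The (5,2) structure and the particular parity portion P below are arbitrary illustrative choices, not the generator matrices derived in the paper.

```python
import numpy as np

# Toy systematic (5,2) binary block code: G = [I | P], H = [P^T | I].
# The parity portion P is an arbitrary illustrative choice.
P = np.array([[1, 0, 1],
              [1, 1, 0]])
G = np.hstack([np.eye(2, dtype=int), P])    # generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix

def is_codeword(c):
    """c is a valid codeword iff H @ c = 0 over GF(2)."""
    return not np.any((H @ c) % 2)

# Every codeword generated from the four 2-bit messages passes the check;
# a corrupted word generally fails it.
messages = [(0, 0), (0, 1), (1, 0), (1, 1)]
codewords = [np.array(m) @ G % 2 for m in messages]
```

Given a set of potential codewords, the inverse problem the paper addresses is recovering a G (equivalently, an H) under which the valid sequences satisfy the parity check while invalid ones do not.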
Proposed for publication in Nuclear Fusion.
The DIII-D research program is developing the scientific basis for advanced tokamak (AT) modes of operation in order to enhance the attractiveness of the tokamak as an energy producing system. Since the last International Atomic Energy Agency (IAEA) meeting, we have made significant progress in developing the building blocks needed for AT operation: (1) we have doubled the magnetohydrodynamic (MHD) stable tokamak operating space through rotational stabilization of the resistive wall mode; (2) using this rotational stabilization, we have achieved β_N H_89 ≥ 10 for 4τ_E, limited by the neoclassical tearing mode (NTM); (3) using real-time feedback of the electron cyclotron current drive (ECCD) location, we have stabilized the (m, n) = (3, 2) NTM and then increased β_T by 60%; (4) we have produced ECCD stabilization of the (2, 1) NTM in initial experiments; (5) we have made the first integrated AT demonstration discharges with current profile control using ECCD; (6) ECCD and electron cyclotron heating (ECH) have been used to control the pressure profile in high performance plasmas; and (7) we have demonstrated stationary tokamak operation for 6.5 s (36τ_E) at the same fusion gain parameter β_N H_89/q_95^2 as ITER but at much higher q_95 = 4.2.
We have developed general improvements applicable to conventional and AT operating modes: (1) we have an existence proof of a mode of tokamak operation, quiescent H-mode, which has no pulsed edge-localized-mode (ELM) heat load to the divertor and which can run for long periods of time (3.8 s or 25τ_E) with constant density and constant radiated power; (2) we have demonstrated real-time disruption detection and mitigation for vertical disruption events using high pressure gas jet injection of noble gases; (3) we have found that the heat and particle fluxes to the inner strike points of balanced, double-null divertors are much smaller than to the outer strike points. We have made detailed investigations of the edge pedestal and scrape-off layer (SOL): (1) atomic physics and plasma physics both play significant roles in setting the width of the edge density barrier in H-mode; (2) ELM heat flux conducted to the divertor decreases as density increases; (3) intermittent, bursty transport contributes to cross-field particle transport in the SOL of H-mode and, especially, L-mode plasmas.
The Disturbed Rock Zone constitutes an important geomechanical element of the Waste Isolation Pilot Plant. The science and engineering underpinning the disturbed rock zone provide the basis for evaluating ongoing operational issues and their impact on performance assessment. Contemporary treatment of the disturbed rock zone applied to the evaluation of the panel closure system and to a new mining horizon improves the level of detail and quantitative elements associated with a damaged zone surrounding the repository openings. Technical advancement has been realized by virtue of ongoing experimental investigations and international collaboration. The initial portion of this document discusses the disturbed rock zone relative to operational issues pertaining to re-certification of the repository. The remaining sections summarize and document theoretical and experimental advances that quantify characteristics of the disturbed rock zone as applied to nuclear waste repositories in salt.
This report describes the complete revision of a deuterium equation of state (EOS) model published in 1972. It uses the same general approach as the 1972 EOS, i.e., the so-called 'chemical model,' but incorporates a number of theoretical advances that have taken place during the past thirty years. Three phases are included: a molecular solid, an atomic solid, and a fluid phase consisting of both molecular and atomic species. Ionization and the insulator-metal transition are also included. The most important improvements are in the liquid perturbation theory, the treatment of molecular vibrations and rotations, and the ionization equilibrium and mixture models. In addition, new experimental data and theoretical calculations are used to calibrate certain model parameters, notably the zero-Kelvin isotherms for the molecular and atomic solids, and the quantum corrections to the liquid phase. The report gives a general overview of the model, followed by detailed discussions of the most important theoretical issues and extensive comparisons with the many experimental data that have been obtained during the last thirty years. Questions about the validity of the chemical model are also considered. Implications for modeling the 'giant planets' are also discussed.
This report summarizes the development of new biocompatible self-assembly procedures enabling the immobilization of genetically engineered cells in a compact, self-sustaining, remotely addressable sensor platform. We used evaporation induced self-assembly (EISA) to immobilize cells within periodic silica nanostructures, characterized by unimodal pore sizes and pore connectivity, that can be patterned using ink-jet printing or photo patterning. We constructed cell lines for the expression of fluorescent proteins and induced reporter protein expression in immobilized cells. We investigated the role of the abiotic/biotic interface during cell-mediated self-assembly of synthetic materials.
This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time allowing the model of a user to build on itself based on observations of the user's behavior.
Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.
On October 22-24, 2003, about 40 experts involved in various aspects of homeland security from the United States and four other Pacific-region countries met in Kihei, Hawaii, to engage in a freewheeling discussion and brainstorm (a 'fest') on the role that technology could play in winning the war on terrorism in the Pacific region. The result of this exercise is a concise and relatively thorough definition of the terrorism problem in the Pacific region, emphasizing the issues unique to island nations in the Pacific setting, along with an action plan for developing working demonstrators of advanced technological solutions to these issues. In this approach, the participants were asked to view the problem and their potential solutions from multiple perspectives, and then to identify barriers (especially social and policy barriers) to any proposed technological solution. The final step was to create a roadmap for further action. This roadmap includes plans to: (1) create a conceptual monitoring and tracking system for people and things moving around the region that would be 'scale free', and develop a simple concept demonstrator; (2) pursue the development of a system to improve local terrorism context information, perhaps through the creation of an information clearinghouse for Pacific law enforcement; (3) explore the implementation of a Hawaii-based pilot system to explore hypothetical terrorist scenarios and the development of fusion and analysis tools to work with these data (Sandia); and (4) share information concerning the numerous ongoing activities at various organizations on the understanding and modeling of terrorist behavior.
The goal of this LDRD was to investigate III-antimonide/nitride based materials for unique semiconductor properties and applications. Previous to this study, lack of basic information concerning these alloys restricted their use in semiconductor devices. Long wavelength emission on GaAs substrates is of critical importance to telecommunication applications for cost reduction and integration into microsystems. Currently InGaAsN, on a GaAs substrate, is being commercially pursued for the important 1.3 micrometer dispersion minima of silica-glass optical fiber; due, in large part, to previous research at Sandia National Laboratories. However, InGaAsN has not shown great promise for 1.55 micrometer emission which is the low-loss window of single mode optical fiber used in transatlantic fiber. Other important applications for the antimonide/nitride based materials include the base junction of an HBT to reduce the operating voltage which is important for wireless communication links, and for improving the efficiency of a multijunction solar cell. We have undertaken the first comprehensive theoretical, experimental and device study of this material with promising results. Theoretical modeling has identified GaAsSbN to be a similar or potentially superior candidate to InGaAsN for long wavelength emission on GaAs. We have confirmed these predictions by producing emission out to 1.66 micrometers and have achieved edge emitting and VCSEL electroluminescence at 1.3 micrometers. We have also done the first study of the transport properties of this material including mobility, electron/hole mass, and exciton reduced mass. This study has increased the understanding of the III-antimonide/nitride materials enough to warrant consideration for all of the target device applications.
This report describes the research accomplishments achieved under the LDRD Project 'Radiation Hardened Optoelectronic Components for Space-Based Applications.' The aim of this LDRD has been to investigate the radiation hardness of vertical-cavity surface-emitting lasers (VCSELs) and photodiodes by looking at both the effects of total dose and of single-event upsets on the electrical and optical characteristics of VCSELs and photodiodes. These investigations were intended to provide guidance for the eventual integration of radiation hardened VCSELs and photodiodes with rad-hard driver and receiver electronics from an external vendor for space applications. During this one-year project, we have fabricated GaAs-based VCSELs and photodiodes, investigated ionization-induced transient effects due to high-energy protons, and measured the degradation of performance from both high-energy protons and neutrons.
This one-year feasibility study was aimed at developing finite element modeling capabilities for simulating nano-scale tests. This work focused on methods to model: (1) the adhesion of a particle to a substrate, and (2) the delamination of a thin film from a substrate. Adhesion was modeled as a normal attractive force that depends on the distance between opposing material surfaces while delamination simulations used a cohesive zone model. Both of these surface interaction models had been implemented in a beta version of the three-dimensional, transient dynamics, PRESTO finite element code, and the present study verified that implementation. Numerous illustrative calculations have been performed using these models, and when possible comparisons were made with existing solutions. These capabilities are now available in PRESTO version 1.07.
All ceramics and powder metals, including the ceramic components that Sandia uses in critical weapons components such as PZT voltage bars and current stacks, multi-layer ceramic MET's, alumina/molybdenum and alumina cermets, and ZnO varistors, are manufactured by sintering. Sintering is a critical, possibly the most important, processing step during the manufacture of ceramics. The microstructural evolution, macroscopic shrinkage, and shape distortions during sintering control the engineering performance of the resulting ceramic component. Yet modeling and prediction of sintering behavior is in its infancy, lagging far behind other manufacturing models, such as powder synthesis and powder compaction models, and behind models that predict engineering properties and reliability. In this project, we developed a mesoscale model capable of simulating microstructural evolution during sintering and providing constitutive equations for macroscale simulation of shrinkage and distortion during sintering, and we developed a macroscale sintering simulation capability in JAS3D. The mesoscale model can simulate microstructural evolution in a complex powder compact of hundreds or even thousands of particles of arbitrary shape and size by (1) curvature-driven grain growth, (2) pore migration and coalescence by surface diffusion, and (3) vacancy formation, grain boundary diffusion, and annihilation. This model was validated by comparing simulation predictions to analytical predictions for simple geometries. The model was then used to simulate sintering in complex powder compacts. Sintering stresses and material viscous moduli were obtained from the simulations. These constitutive equations were then used in macroscopic FEM simulations of shrinkage and shape change.
The continuum theory of sintering embodied in the constitutive description of Skorohod and Olevsky was combined with results from microstructure evolution simulations to model shrinkage and deformation during sintering. The continuum portion is based on a finite element formulation that allows 3D components to be modeled using SNL's nonlinear large-deformation finite element code, JAS3D. This tool provides a capability to model sintering of complex three-dimensional components. The model was verified by comparison to simulation results published in the literature. The model was validated using experimental results from various laboratory experiments performed by Garino. In addition, the mesoscale simulations were used to study anisotropic shrinkage in aligned, elongated powder compacts. Anisotropic shrinkage occurred in all compacts with aligned, elongated particles. However, the direction of higher shrinkage was in some cases along the direction of elongation and in other cases in the perpendicular direction, depending on the details of the powder compact. In compacts of simple-packed, mono-sized, elongated particles, shrinkage was higher in the direction of elongation. In compacts of close-packed, mono-sized, elongated particles and of elongated particles with a size and shape distribution, the shrinkage was lower in the direction of elongation. We also explored the concept of a sintering stress tensor, rather than the traditional sintering stress scalar, for the case of anisotropic shrinkage. A thermodynamic treatment of this is presented, along with a method to calculate the sintering stress tensor. A user-friendly code that can simulate microstructural evolution during sintering in 2D and in 3D was developed. This code can run on most UNIX platforms and has a Motif-based GUI.
The microstructural evolution is shown as the code is running, and many of the microstructural features, such as grain size, pore size, and the average grain boundary length (in 2D) and area (in 3D), are measured and recorded as a function of time. The overall density as a function of time is also recorded.
The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.
In this paper, the effect of viscous wave motion on a micro rotational resonator is discussed. This work shows the inadequacy of existing theory in representing energy losses due to shear motion in air. Existing theory predicts Newtonian losses with little slip at the interface. Experiments, however, showed a smaller Newtonian-loss effect and elevated levels of slip for small gaps; measured damping values were much less than expected. Novel closed-form solutions for the response of components are presented. The stiffness of the resonator is derived using Castigliano's theorem, and viscous fluid motion above and below the resonator is derived using a wave approach. Analytical results are compared with experimental results to determine the utility of existing theory. It was found that existing macro and molecular theory is inadequate to describe the measured responses.
Proposed for publication in Reliability Engineering and System Safety.
Abstract not provided.
This report summarizes the Mentoring Program at Sandia National Laboratories (SNL), which has been an on-going success since its inception in 1995. The Mentoring Program provides a mechanism to develop a workforce able to respond to changing requirements and complex customer needs. The program objectives are to enhance employee contributions through increased knowledge of SNL culture, strategies, and programmatic direction. Mentoring is a proven mechanism for attracting new employees, retaining employees, and developing leadership. It helps to prevent the loss of corporate knowledge from attrition and retirement, and it increases the rate and level of contributions of new managers and employees, also spurring cross-organizational teaming. The Mentoring Program is structured as a one-year partnership between an experienced staff member or leader and a less experienced one. Mentors and mentees are paired according to mutual objectives and interests. Support is provided to the matched pairs from their management as well as division program coordinators in both New Mexico and California locations. In addition, bi-monthly large-group training sessions are held.
Large-scale finite element analysis often requires the iterative solution of equations with many unknowns. Preconditioners based on domain decomposition concepts have proven effective at accelerating the convergence of iterative methods like conjugate gradients for such problems. A study of two new domain decomposition preconditioners is presented here. The first is based on a substructuring approach and can be viewed as a primal counterpart of the dual-primal variant of the finite element tearing and interconnecting method called FETI-DP. The second uses an algebraic approach to construct a coarse problem for a classic overlapping Schwarz method. The numerical properties of both preconditioners are shown to scale well with problem size. Although developed primarily for structural mechanics applications, the preconditioners are also useful for other problem types. Detailed descriptions of the two preconditioners along with numerical results are included.
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high speed, Wide Area Network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high delay, large bandwidth environments. This report evaluates alternative TCP congestion algorithms and compares them with the currently used congestion algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and High-Speed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The network configurations used were back-to-back with no delay, back-to-back with a 30ms delay, and two-to-one with a 30ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine if an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
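The difference between the standard and Scalable TCP congestion-avoidance rules can be sketched as a per-ACK window update. The constants below follow Kelly's Scalable TCP proposal (additive increase of 0.01 per ACK); treat this as an illustrative simplification of the algorithms evaluated, not the report's test code.

```python
def standard_tcp(cwnd, acks):
    """Standard AIMD congestion avoidance: window grows by 1/cwnd per
    ACK, i.e. roughly one segment per round-trip time."""
    for _ in range(acks):
        cwnd += 1.0 / cwnd
    return cwnd

def scalable_tcp(cwnd, acks, a=0.01):
    """Scalable TCP: fixed additive increase per ACK, so the per-RTT
    growth is proportional to the window itself (exponential recovery
    on long fat pipes)."""
    for _ in range(acks):
        cwnd += a
    return cwnd

# Starting from cwnd = 100 segments and receiving 10,000 ACKs:
std = standard_tcp(100.0, 10_000)  # grows roughly like sqrt(cwnd0^2 + 2*acks)
scl = scalable_tcp(100.0, 10_000)  # grows by a fixed 0.01 per ACK
```

On a high-delay, large-bandwidth path, this fixed per-ACK increment is what lets Scalable TCP refill a large window far faster than the standard algorithm after a loss.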
As part of the Testing Evaluation and Qualification Project, which was contracted by Organization 9336, this paper compares three cubicle-class switches from various vendors to assess how well they would perform in the unclassified networks at Sandia National Laboratories. The switches tested were the SMC TigerSwitch 6709L2, the Cisco Catalyst 2950G-12, and the Extreme Summit 5i. Each switch was evaluated by testing performance, functionality, interoperability, security, and total cost of ownership. The results of this report show the SMC TigerSwitch as being the best choice for cubicle use because of its high performance and very low cost. The Cisco Catalyst is also rated highly for cubicle use and in some cases may be preferred over the SMC TigerSwitch. The Extreme Summit 5i is not recommended for cubicle use due to its size and extremely loud fans but is a full featured, high performance switch that would work very well for access layer switching.
The Unique Signal is a key constituent of Enhanced Nuclear Detonation Safety (ENDS). Although the Unique Signal approach is well prescribed and mathematically assured, there are numerous unsolved mathematical problems that could help assess the risk of deviations from the ideal approach. Some of the mathematics-based results shown in this report are: 1. The risk that two patterns with poor characteristics (easily generated by inadvertent processes) could be combined through exclusive-or mixing to generate an actual Unique Signal pattern has been investigated and found to be minimal (not significant when compared to the incompatibility metric of actual Unique Signal patterns used in nuclear weapons). 2. The risk of generating actual Unique Signal patterns with linear feedback shift registers is minimal, but the patterns in use are not as invulnerable to inadvertent generation by dependent processes as previously thought. 3. New methods of testing pair-wise incompatibility threats have resulted in no significant problems found for the set of Unique Signal patterns currently used. Any new patterns introduced would have to be carefully assessed for compatibility with existing patterns, since some new patterns under consideration were found to be deficient when associated with other patterns in use. 4. Markov models were shown to correspond to some of the engineered properties of Unique Signal sequences. This gives new support for the original design objectives. 5. Potential dependence among events (caused by a variety of communication protocols) has been studied. New evidence has been derived of the risk associated with combined communication of multiple events, and of the improvement in abnormal-environment safety that can be achieved through separate-event communication.
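The exclusive-or mixing examined in item 1 can be sketched as follows. The bit patterns and the Hamming-distance "incompatibility" measure below are illustrative stand-ins, not actual Unique Signal patterns or the report's metric.

```python
def xor_mix(a, b):
    """Bitwise exclusive-or combination of two equal-length patterns."""
    return [x ^ y for x, y in zip(a, b)]

def hamming(a, b):
    """Hamming distance, a simple stand-in for an incompatibility metric."""
    return sum(x != y for x, y in zip(a, b))

# Illustrative 8-bit patterns (not actual Unique Signal patterns).
p1 = [1, 0, 1, 1, 0, 0, 1, 0]
p2 = [0, 0, 1, 0, 1, 0, 0, 1]
mixed = xor_mix(p1, p2)

# The question studied is whether two patterns with poor characteristics
# can combine into something close (under a measure like this) to a
# protected pattern; 'target' here is another illustrative placeholder.
target = [1, 1, 1, 1, 0, 0, 0, 0]
distance = hamming(mixed, target)
```

Because XOR is its own inverse, mixing the combined pattern with either input recovers the other, which is why inadvertent mixing of easily generated patterns is the threat worth quantifying.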
To model the telecommunications infrastructure and its role and robustness to shocks, we must characterize the business and engineering of telecommunications systems in the year 2003 and beyond. By analogy to environmental systems modeling, we seek to develop a 'conceptual model' for telecommunications. Here, the conceptual model is a list of high-level assumptions consistent with the economic and engineering architectures of telecommunications suppliers and customers, both today and in the near future. We describe the present engineering architectures of the most popular service offerings, and describe the supplier markets in some detail. We also develop a characterization of the customer base for telecommunications services and project its likely response to disruptions in service, base-lining such conjectures against observed behaviors during 9/11.