This report summarizes the development of sensor particles for remote detection of trace chemical analytes over broad areas, e.g., residual trinitrotoluene (TNT) from buried landmines or other unexploded ordnance (UXO). We also describe the potential of the sensor particle approach for the detection of chemical warfare (CW) agents. The primary goal of this work has been the development of sensor particles that incorporate sample preconcentration, analyte molecular recognition, chemical signal amplification, and fluorescence signal transduction within a ''grain of sand''. Two approaches for particle-based chemical-to-fluorescence signal transduction are described: (1) enzyme-amplified immunoassays using biocompatible inorganic encapsulants, and (2) oxidative quenching of a unique fluorescent polymer by TNT.
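As background on the second transduction approach (standard photophysics, not a result reported here): the sensitivity of quenching-based detection is commonly characterized by the Stern-Volmer relation,

    \frac{I_0}{I} = 1 + K_{\mathrm{SV}}\,[\mathrm{TNT}],

where I_0 and I are the polymer fluorescence intensities in the absence and presence of the analyte and K_SV is the Stern-Volmer quenching constant; a large K_SV for the polymer/TNT pair is what makes trace-level transduction feasible.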
This demonstration project is a collaboration among DOE, Sandia National Laboratories, the University of Texas at El Paso (UTEP), the International Boundary and Water Commission (IBWC), and the US Army Corps of Engineers (USACE). Sandia deployed and demonstrated a field measurement technology that enables the determination of the erosion and transport potential of sediments in the Rio Grande. The technology deployed was the Mobile High Shear Stress Flume. This unique device was developed by Sandia's Carlsbad Programs for the USACE and has been used extensively in collaborative efforts on nearshore and river systems throughout the United States. Because surface water quantity and quality, along with human health, are important parts of the National Border Technology Program, technologies that aid in characterizing, managing, and protecting this valuable resource from possible contamination sources are imperative.
Photonic crystals are periodically engineered ''materials'' which are the photonic analogues of electronic crystals. Much like electronic crystals, photonic crystal materials can have a variety of crystal symmetries, such as simple-cubic, close-packed, wurtzite, and diamond-like crystals. These structures were first proposed in the late 1980s. However, due mainly to fabrication difficulties, working photonic crystals at near-infrared and visible wavelengths are only just emerging. In this article, we review the construction of two- and three-dimensional photonic crystals of different symmetries at infrared and optical wavelengths using advanced semiconductor processing. We further demonstrate that this process lends itself to the creation of line defects (linear waveguides) and point defects (micro-cavities), which are the most basic building blocks for optical signal processing, filtering, and routing.
Monitoring the condition of critical structures is vital not only for assuring occupant safety and security during naturally occurring and malevolent events, but also for determining the fatigue rate under normal aging conditions and allowing for efficient upgrades. This project evaluated the feasibility of applying integrated, remotely monitored micro-sensor systems to assess the structural performance of critical infrastructure. These measurement systems will provide forensic data on structural integrity, health, response, and overall structural performance in load environments such as aging, earthquake, severe wind, and blast attacks. We have investigated the development of ''self-powered'' sensor tags that can be used to monitor the state of health of a structure and can be embedded in that structure without compromising its integrity. A sensor system powered by converting structural stresses into electrical power via piezoelectric transducers has been demonstrated, including work toward integrating that sensor with a novel radio frequency (RF) tagging technology as a means of remotely reading the data from the sensor.
Perhaps the most basic barrier to the widespread deployment of remote manipulators is that they are very difficult to use. Remote manual operations are fatiguing and tedious, while fully autonomous systems are seldom able to function in changing and unstructured environments. An alternative approach to these extremes is to exploit computer control while leaving the operator in the loop to take advantage of the operator's perceptual and decision-making capabilities. This report describes research that is enabling gradual introduction of computer control and decision making into operator-supervised robotic manipulation systems, and its integration on a commercially available, manually controlled mobile manipulator.
This document highlights the activities of the DISCOM (Distance Computing and Communication) team at the SC2000 Supercomputing conference in Dallas, Texas. This conference is sponsored by the IEEE and ACM. Sandia's participation in the conference has now spanned a decade. For the last five years, Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory have come together at the conference under the rubric of the DOE's Accelerated Strategic Computing Initiative (ASCI) program to demonstrate ASCI's emerging capabilities in computational science and the laboratories' combined expertise in high-performance computer science and communication networking. DISCOM2 uses this forum to demonstrate and focus its communication and networking developments within the program. At SC2000, DISCOM demonstrated an infrastructure that included a pre-standard implementation of 10 Gigabit Ethernet, the first gigabyte-per-second IP network data transfer application, and VPN technology that enabled a remote demonstration of Distributed Resource Management tools. Additionally, a national OC48 POS network was constructed to support applications running between the show floor and home facilities. This network created the opportunity to test PSE's Parallel File Transfer Protocol (PFTP) across a network with speeds and distances similar to those of the then-proposed DISCOM WAN. SCinet at SC2000 showcased wireless networking, and the networking team had the opportunity to explore this emerging technology in the booth. We also supported the production networking needs of the convention exhibit floor. This paper documents these accomplishments, discusses the details of their implementation, and describes how these demonstrations support DISCOM's overall strategies in high-performance computing and networking.
This report utilizes the results of the Solar Two project, as well as continuing technology development, to update the technical and economic status of molten-salt power towers. The report starts with an overview of power tower technology, including the progression from Solar One to the Solar Two project. This discussion is followed by a review of the Solar Two project--what was planned, what actually occurred, what was learned, and what was accomplished. The third section presents preliminary information regarding the likely configuration of the next molten-salt power tower plant. This section draws on Solar Two experience as well as results of continuing power tower development efforts conducted jointly by industry and Sandia National Laboratories. The fourth section details the expected performance and cost goals for the first commercial molten-salt power tower plant and includes a comparison of the commercial performance goals to the actual performance at Solar One and Solar Two. The final section summarizes the successes of Solar Two and the current technology development activities. The data collected from the Solar Two project suggest that the electricity cost goals established for power towers are reasonable and can be achieved with some simple design improvements.
This report describes the current state of the art in Autonomous Land Vehicle (ALV) systems and technology. Five functional technology areas are identified and addressed. For each, a brief, subjective preface is first provided which envisions the necessary technology for the deployment of an operational ALV system. Subsequently, a detailed literature review is provided to support and elaborate these views. It is further established how these five technology areas fit together as a functioning whole. The essential conclusion of this report is that the necessary sensors, algorithms, and methods to develop and demonstrate an operationally viable all-terrain ALV already exist and could be readily deployed. A second conclusion is that the successful development of an operational ALV system will rely on an effective approach to systems engineering. In particular, a precise description of mission requirements and a clear definition of component functionality are essential.
The µChemLab™ Laboratory Directed Research and Development (LDRD) Grand Challenge project began in October 1996 and ended in September 2000. The technical managers of the µChemLab™ project and the LDRD office, with the support of a consultant, conducted a competitive technical and market demand intelligence (CTI/MDI) analysis of the µChemLab™. The managers used this knowledge to make project decisions and course adjustments. CTI/MDI positively impacted the project's technology development, uncovered potential technology partnerships, and supported eventual industry partner contacts. CTI/MDI analysis is now seen as due diligence, and the µChemLab™ project is now the model for other Sandia LDRD Grand Challenge undertakings. This document describes the CTI/MDI analysis and captures the more important ''lessons learned'' of this Grand Challenge project, as reported by the project's management team.
We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
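To give a feel for the structure of such a transport/reaction calculation, the sketch below solves a one-dimensional reaction-diffusion equation with a threshold-triggered release term so that a calcium-like wave propagates across the domain. It is a minimal illustration only, not the Sandia codes or the egg model described above; the geometry, kinetics, and parameter values are placeholder assumptions.

    # Illustrative 1-D reaction-diffusion sketch of a propagating calcium wave.
    # NOT the Sandia model described above; kinetics and parameters are
    # placeholder assumptions chosen only to show the computational structure.
    import numpy as np

    nx, L = 400, 1.0            # grid points, domain length (arbitrary units)
    dx = L / (nx - 1)
    D = 1.0e-3                  # effective diffusion coefficient (assumed)
    dt = 0.2 * dx**2 / D        # explicit stability limit
    steps = 2000

    c = np.zeros(nx)            # normalized calcium concentration
    c[:20] = 1.0                # initial release near one end

    def release(c, threshold=0.3, rate=5.0):
        """Threshold-triggered release term (placeholder bistable kinetics)."""
        return rate * c * (1.0 - c) * (c - threshold)

    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        c += dt * (D * lap + release(c))

    # The wave-front position over time is the kind of quantity one would
    # compare against spatially and temporally resolved experimental data.
    front = np.argmax(c < 0.5) * dx
    print(f"approximate wave-front position: {front:.3f}")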
Novel technologies are often born before the application arenas that can provide the financial support for their development and maturation have been identified. After creating new technologies, innovators rush to identify some previously difficult-to-meet product or process challenge. In this regard, microsystems technology is following a path that many other electronic technologies have previously faced. From this perspective, the development of a robust technology follows a three-stage approach. First there is the ''That idea will never work'' stage, which is hurdled only by proving the concept. Next is the ''Why use such a novel (unproven) technology instead of a conventional one?'' stage. This stage is overcome when a particularly important device cannot be made economically--or at all--through the existing technological base. This initial incorporation forces at least limited use of the new technology, which in turn provides the revenues and the user base to mature and sustain the technology. Finally there is the ''Sure that technology (e.g., microsystems) is good for that product (e.g., accelerometers and pressure sensors), but the problems are too severe for any other application'' stage, which is overcome only with the across-the-board application of the new technology. With an established user base, champions for the technology become willing to apply the new technology as a potential solution to other problems. This results in the widespread diffusion of the previously shunned technology, making the formerly disruptive technology the new standard. Like many technologies in the microelectronics industry before it, microsystems technology is now traversing this well-worn path. This paper examines the evolution of microsystems technology from the perspective of Sandia National Laboratories' development of a sacrificial surface micromachining technology and the associated infrastructure.
Electromagnetic induction (EMI) by a magnetic dipole located above a dipping interface is of relevance to the petroleum well-logging industry. The problem is fully three-dimensional (3-D) when formulated as above, but reduces to an analytically tractable one-dimensional (1-D) problem when cast as a small tilted coil above a horizontal interface. The two problems are related by a simple coordinate rotation. An examination of the induced eddy currents and the electric charge accumulation at the interface helps to explain the inductive and polarization effects commonly observed in induction logs from dipping geological formations. The equivalence between the 1-D and 3-D formulations of the problem enables the validation of a previously published finite element solver for 3-D controlled-source EMI.
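To make the stated equivalence concrete (our notation, not necessarily that of the report): for a dip angle \alpha, coordinates in the interface-aligned frame (x', y', z') follow from the tool-frame coordinates (x, y, z) via

    \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} =
    \begin{pmatrix}
      \cos\alpha & 0 & -\sin\alpha \\
      0          & 1 & 0           \\
      \sin\alpha & 0 & \cos\alpha
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \end{pmatrix},

and the dipole moment transforms the same way, so a vertical dipole above a dipping interface (the 3-D view) is equivalent to a coil tilted by \alpha above a horizontal interface (the 1-D view).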
This report presents a discussion of directory-enabled policy-based networking with an emphasis on its role as the foundation for securely scalable enterprise networks. A directory service provides the object-oriented logical environment for interactive cyber-policy implementation. Cyber-policy implementation includes security, network management, operational process, and quality of service policies. The leading network-technology vendors have invested in these technologies for secure universal connectivity that traverses Internet, extranet, and intranet boundaries. Industry standards are established that provide the fundamental guidelines for directory deployment scalable to global networks. The integration of policy-based networking with directory-service technologies provides for intelligent management of the enterprise network environment as an end-to-end system of related clients, services, and resources. This architecture allows logical policies to protect data, manage security, and provision critical network services, permitting a proactive defense-in-depth cyber-security posture. Enterprise networking requires consideration of support for multiple computing platforms, sites, and business-operation models. An industry-standards-based approach, combined with principled systems engineering in the deployment of these technologies, allows these issues to be successfully addressed. This discussion is focused on a directory-based policy architecture for the heterogeneous enterprise network-computing environment and does not propose specific vendor solutions. This document is written to present practical design methodology and to provide an understanding of the risks, complexities, and, most importantly, the benefits of directory-enabled policy-based networking.
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flows from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they collide with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled either by a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
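For readers unfamiliar with the method, the sketch below shows the collision-selection step at the core of DSMC (Bird's no-time-counter scheme) for a single cell. It is written in Python rather than Icarus's Fortran, assumes equal-mass particles and a constant hard-sphere cross section, and uses placeholder gas constants; it is an illustration of the technique, not Icarus itself.

    # Minimal sketch of a DSMC no-time-counter (NTC) collision step for one
    # cell. Illustrative only: equal-mass particles, constant hard-sphere
    # cross section, placeholder constants. Not the Icarus implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    def collide_cell(v, weight, cell_volume, dt, sigma, sigma_g_max):
        """Binary collisions among particle velocities v (N x 3 array, m/s)."""
        n = len(v)
        if n < 2:
            return sigma_g_max
        # NTC estimate of the number of candidate pairs to test this step.
        n_cand = int(0.5 * n * (n - 1) * weight * sigma_g_max * dt / cell_volume)
        for _ in range(n_cand):
            i, j = rng.choice(n, size=2, replace=False)
            g = np.linalg.norm(v[i] - v[j])            # relative speed
            sigma_g_max = max(sigma_g_max, sigma * g)  # track running maximum
            if rng.random() < sigma * g / sigma_g_max: # accept-reject
                # Isotropic (hard-sphere) scattering, equal masses assumed.
                cos_t = 2.0 * rng.random() - 1.0
                sin_t = np.sqrt(1.0 - cos_t**2)
                phi = 2.0 * np.pi * rng.random()
                g_new = g * np.array([cos_t, sin_t * np.cos(phi),
                                      sin_t * np.sin(phi)])
                vcm = 0.5 * (v[i] + v[j])
                v[i], v[j] = vcm + 0.5 * g_new, vcm - 0.5 * g_new
        return sigma_g_max

    # Example: 500 simulation particles with Maxwellian velocities in one cell.
    v = rng.normal(scale=300.0, size=(500, 3))          # m/s, placeholder
    collide_cell(v, weight=1e12, cell_volume=1e-9, dt=1e-6,
                 sigma=3e-19, sigma_g_max=3e-19 * 1000.0)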
Encapsulation is a common process used in manufacturing most non-nuclear components, including firing sets, neutron generators, trajectory sensing signal generators (TSSGs), arming, fusing and firing devices (AF&Fs), radars, programmers, connectors, and batteries. Encapsulation is used to contain high voltage, to mitigate stress and vibration, and to protect against moisture. The purpose of the ASCI Encapsulation project is to develop a simulation capability that will aid in the encapsulation design process, especially for neutron generators. The introduction of an encapsulant poses many problems because of the need to balance ease of processing against the properties necessary to achieve the design benefits, such as tailored encapsulant properties, an optimized cure schedule, and reduced failure rates. Encapsulants can fail through fracture or delamination as a result of cure shrinkage, thermally induced residual stresses, voids or incomplete component embedding, and particle gradients. Manufacturing design requirements include (1) maintaining a uniform composition of particles in order to maintain the desired coefficient of thermal expansion (CTE) and density, (2) mitigating void formation during mold fill, (3) mitigating cure and thermally induced stresses during cure and cool down, and (4) eliminating delamination and fracture due to cure shrinkage and thermal strains. The first two require modeling of the fluid phase, and it is proposed to use the finite element code GOMA to accomplish this. The latter two require modeling of the solid state; however, ideally the effects of particle distribution would be included in the calculations, and thus initial conditions would be set from GOMA predictions. These models, once verified and validated, will be transitioned into the SIERRA framework and the ARIA code. This will facilitate exchange of data with the solid mechanics calculations in SIERRA/ADAGIO.
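A simple check on requirement (1) (a textbook rule-of-mixtures estimate, not taken from this report) shows why particle segregation matters: to first order the effective CTE of the filled encapsulant scales as

    \alpha_{\mathrm{eff}} \approx \phi\,\alpha_{\mathrm{filler}} + (1 - \phi)\,\alpha_{\mathrm{resin}},

so any local drift in the filler volume fraction \phi caused by settling or flow segregation during mold fill directly shifts the local CTE and density, which is why the fluid-phase (GOMA) predictions are needed as initial conditions for the solid-state stress calculations.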
The Segmented Rail Phased Induction Motor (SERAPHIM) has been proposed as a propulsion method for urban maglev transit, advanced monorail, and other forms of high speed ground transportation. In this report we describe the technology, consider different designs, and examine its strengths and weaknesses.
A probabilistic, risk-based performance-assessment methodology is being developed to assist designers, regulators, and involved stakeholders in the selection, design, and monitoring of long-term covers for contaminated subsurface sites. This report presents an example of the risk-based performance-assessment method using a repository site in Monticello, Utah. At the Monticello site, a long-term cover system is being used to isolate long-lived uranium mill tailings from the biosphere. Computer models were developed to simulate relevant features, events, and processes that include water flux through the cover, source-term release, vadose-zone transport, saturated-zone transport, gas transport, and exposure pathways. The component models were then integrated into a total-system performance-assessment model, and uncertainty distributions of important input parameters were constructed and sampled in a stochastic Monte Carlo analysis. Multiple realizations were simulated using the integrated model to produce cumulative distribution functions of the performance metrics, which were used to assess cover performance for both present-day and long-term future conditions. Performance metrics for this study included the water percolation reaching the uranium mill tailings, radon flux at the surface, groundwater concentrations, and dose. Results of this study can be used to identify engineering and environmental parameters (e.g., liner properties, long-term precipitation, distribution coefficients) that require additional data to reduce uncertainty in the calculations and improve confidence in the model predictions. These results can also be used to evaluate alternative engineering designs and to identify parameters most important to long-term performance.
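The sketch below illustrates the stochastic sampling step described above: uncertain inputs are sampled, propagated through a (here trivially simplified) surrogate performance model, and summarized as a cumulative distribution of one metric. The distributions, the surrogate, and all parameter values are illustrative assumptions, not those of the Monticello assessment.

    # Illustrative Monte Carlo sketch: sample uncertain inputs, propagate
    # them through a toy surrogate model, and build an empirical CDF of a
    # performance metric. All distributions and values are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n_real = 5000                                        # number of realizations

    # Sampled input parameters (placeholder distributions).
    precip = rng.normal(380.0, 60.0, n_real)             # annual precipitation, mm
    k_cover = rng.lognormal(np.log(1e-7), 1.0, n_real)   # cover conductivity, m/s
    kd = rng.triangular(0.5, 2.0, 10.0, n_real)          # distribution coeff., mL/g

    # Toy surrogate for percolation flux reaching the tailings (mm/yr).
    perc = 0.02 * precip * (k_cover / 1e-7) ** 0.5

    # Empirical CDF of the performance metric and its 95th percentile.
    perc_sorted = np.sort(perc)
    cdf = np.arange(1, n_real + 1) / n_real
    p95 = np.interp(0.95, cdf, perc_sorted)
    print(f"median percolation: {np.median(perc):.2f} mm/yr, 95th pct: {p95:.2f}")

    # Crude rank-correlation screen: which uncertain inputs drive the metric.
    def rank(x):
        r = np.empty_like(x)
        r[np.argsort(x)] = np.arange(len(x))
        return r

    for name, x in [("precip", precip), ("k_cover", k_cover), ("kd", kd)]:
        rc = np.corrcoef(rank(x), rank(perc))[0, 1]
        print(f"rank correlation with {name}: {rc:+.2f}")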
In this paper, we discuss several specific threats directed at the routing data of an ad hoc network. We address security issues that arise from wrapping authentication mechanisms around ad hoc routing data. We show that this bolt-on approach to security may make certain attacks more difficult, but still leaves the network routing data vulnerable. We also show that under a certain adversarial model, most existing routing protocols cannot be secured with the aid of digital signatures.
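A concrete way to see the limitation of the bolt-on approach: a signature can only cover fields that are not legitimately modified in flight, so mutable routing fields (hop counts, accumulated node lists) remain open to manipulation by any forwarding node. The sketch below illustrates this point with hypothetical field names, using an HMAC for brevity in place of the digital signatures discussed above.

    # Why bolt-on authentication leaves routing data exposed: the originator
    # signs only the immutable fields, so mutable fields (hop count here)
    # stay unprotected. Field names are hypothetical; HMAC stands in for
    # the public-key signatures discussed in the paper.
    import hmac, hashlib, json

    KEY = b"shared-network-key"            # placeholder key material

    def sign_update(update):
        immutable = {k: update[k] for k in ("origin", "destination", "seq")}
        msg = json.dumps(immutable, sort_keys=True).encode()
        update["sig"] = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
        return update

    def verify_update(update):
        immutable = {k: update[k] for k in ("origin", "destination", "seq")}
        msg = json.dumps(immutable, sort_keys=True).encode()
        return hmac.compare_digest(
            update["sig"], hmac.new(KEY, msg, hashlib.sha256).hexdigest())

    update = sign_update({"origin": "A", "destination": "D", "seq": 7, "hops": 4})

    # A malicious intermediate node shrinks the advertised hop count to
    # attract traffic; the signature still verifies because 'hops' was
    # never covered by it.
    update["hops"] = 1
    print(verify_update(update))           # True -- the attack goes undetected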
The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in degradation of the overall network. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services with a requirement for expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure, resulting in the ability to identify, prioritize, and police critical application flows.
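As a small host-side illustration of the ''identify and prioritize'' step (not the structure implemented in this report, and with a placeholder code point and address): an application can mark its own traffic so that downstream QoS machinery can classify and police the flow.

    # Host-side illustration of flow identification and prioritization via
    # DSCP marking. Not the architecture from this report; the chosen code
    # point (EF, 46) and the destination address are placeholders.
    import socket

    EF_DSCP = 46                      # Expedited Forwarding code point
    TOS = EF_DSCP << 2                # DSCP occupies the upper six bits of TOS

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

    # A router configured with a matching classifier can now place these
    # datagrams in a priority queue and police the flow to a contracted rate.
    sock.sendto(b"latency-critical payload", ("192.0.2.10", 5004))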