Internet Protocol version 4 (IPv4) has been a mainstay of both the Internet and corporate networks for delivering network packets to their destinations. However, the rapid proliferation of network appliances, the evolution of corporate networks, and the expanding Internet have begun to stress the limitations of the protocol. Internet Protocol version 6 (IPv6) is the replacement protocol that overcomes the constraints of IPv4. As IPv6 emerges as the Internet network protocol, Sandia National Laboratories (SNL) needs to prepare for its eventual deployment in international, national, customer, and local networks. Additionally, the United States Office of Management and Budget has mandated that IPv6 be deployed in government network backbones by 2008. This paper explores the readiness of the SNL network backbone to support IPv6 and the issues that must be addressed before a deployment begins, and it recommends next steps for complying with government mandates. The paper describes a joint effort of the SNL ASC WAN project team and members of the System Analysis & Trouble Resolution, Communication & Network Systems, and Network System Design & Implementation Departments.
We developed techniques to design higher-efficiency diffractive optical elements (DOEs) with large numerical apertures (NA) for quantum computing and quantum information processing. Large-NA optics subtend large solid angles and thus have high collection efficiencies. Qubits in ion-trap architectures are commonly addressed and read out by lasers [1], so large-scale ion-trap quantum computing [2] will require highly parallel optical interconnects. Qubit readout in these systems requires detecting fluorescence from the nearly isotropic radiation pattern of single ions, so efficient readout requires optical interconnects with high numerical aperture. Diffractive optical element fabrication is relatively mature and utilizes lithography to produce arrays compatible with large-scale ion-trap quantum computer architectures. The primary challenge for DOEs is the loss in diffraction efficiency caused by the large deflection angles required, which lead to extremely small feature sizes in the outer zones of the DOE. If the period of the diffractive element is between λ (the free-space wavelength) and 10λ, the element functions in the vector regime. DOEs in this regime, particularly between 1.5λ and 4λ, couple significantly into unwanted diffraction orders, reducing the performance of the lens. Furthermore, the optimal depth of zones with periods in the vector regime differs from the overall depth of the DOE. We will present results indicating the unique behaviors around the 1.5λ and 4λ periods and methods to improve DOE performance.
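For orientation (an illustrative aside, not a result from the paper): the first-order grating equation shows why high NA forces a DOE into the vector regime. The local grating period Λ needed to deflect light by an angle θ satisfies

\[ \sin\theta = \frac{\lambda}{\Lambda} \quad\Longrightarrow\quad \Lambda_{\min} = \frac{\lambda}{\mathrm{NA}}, \]

so a lens with NA = 0.5 requires outer-zone periods of only 2λ, squarely inside the problematic 1.5λ to 4λ band discussed above.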
We present the design and initial fabrication of a wavelength-agile, high-speed modulator that enables a long-term vision for the THz Scannerless Range Imaging (SRI) sensor. This modulator takes the place of the currently utilized SRI micro-channel plate, which is limited to photocathode-sensitive wavelengths (primarily in the visible and near-IR regimes). The new component is an active Resonant Subwavelength Grating (RSG). An RSG functions as an extremely narrow wavelength and angular band reflector, or mode selector. Theoretical studies predict that the infinite, laterally-extended RSG can reflect 100% of the resonant light while transmitting the balance of the other wavelengths. Previous experimental realization of these remarkable predictions has been hampered primarily by fabrication challenges. Even so, we have demonstrated large-area (1.0 mm) passive RSG reflectivity as high as 100.2%, normalized to deposited gold. In this work, we transform the passive RSG design into an active laser-line modulator.
Small platinum clusters have been prepared in zeolite hosts through ion exchange and controlled calcination/reduction processes. To enable electrochemical application, the pores of the Pt-zeolite were filled with electrically conductive carbon via infiltration with carbon precursors, polymerization, and pyrolysis. The zeolite host was then removed by acid washing to leave a Pt/C electrocatalyst possessing quasi-zeolitic porosity and Pt clusters of well-controlled size. The electrocatalysts were characterized by TEM, XRD, EXAFS, nitrogen adsorption, and electrochemical techniques. Depending on the synthesis conditions, average Pt cluster sizes in the Pt/C catalysts ranged from 1.3 to 2.0 nm. The presence of ordered porosity/structure in the catalysts was evident in TEM images as lattice fringes, and in XRD as a low-angle diffraction peak with d-spacing similar to that of the parent zeolite. The catalysts possess micro- and meso-porosity, with pore size distributions that depend upon synthesis variables. Finally, electroactive surface areas as high as 112 m² gPt⁻¹ have been achieved in Pt/C electrocatalysts, which show oxygen reduction performance comparable to that of standard industrial catalysts.
We have numerically compared the performance of various designs for the core refractive-index (RI) and rare-earth-dopant distributions of large-mode-area fibers for use in bend-loss-filtered, high-power amplifiers. We first established quantitative targets for the key parameters that determine fiber-amplifier performance, including effective LP01 modal area (Aeff, both straight and coiled), bend sensitivity (for handling and packaging), high-order mode discrimination, mode-field displacement upon coiling, and index contrast (manufacturability). We compared design families based on various power-law and hybrid profiles for the RI and evaluated confined rare-earth doping for hybrid profiles. Step-index fibers with straight-fiber Aeff values > 1000 μm² exhibit large decreases in Aeff and transverse mode-field displacements upon coiling, in agreement with recent calculations of Hadley et al. [Proc. of SPIE, Vol. 6102, 61021S (2006)] and Fini [Opt. Exp. 14, 69 (2006)]. Triangular-profile fibers substantially mitigate these effects, but suffer from excessive bend sensitivity at Aeff values of interest. Square-law (parabolic) profile fibers are free of modal distortion but are hampered by high bend sensitivity (although to a lesser degree than triangular profiles) and exhibit the largest mode displacements. We find that hybrid (combined power-law) profiles provide some decoupling of these tradeoffs and allow all design goals to be achieved simultaneously. We present optimized fiber designs based on this analysis.
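For reference, the power-law core profiles compared here can be written in the standard graded-index form (our notation, not necessarily the authors'):

\[ n(r) = n_1 \sqrt{1 - 2\Delta\,(r/a)^{\alpha}}, \qquad r \le a, \]

where a is the core radius and Δ the relative index contrast; α = 1 gives the triangular profile, α = 2 the square-law (parabolic) profile, and α → ∞ the step-index limit, with the hybrid profiles combining two such power laws.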
This is a pretty pretentious title, but I promised that I would write something autobiographical in this space, and I will do that. However, first I want to thank everyone who contributed to the Festschrift. When Stephen Klippenstein told me that he and Craig Taatjes had gotten approval for it (against my advice), I envisioned a situation where the issue had only two papers, both co-authored by Stephen. Luckily that turned out not to be the case. At the time of this writing there are forty-three manuscripts at various stages of review. I am extremely grateful to Craig and Stephen, to the editors of the journal, and to all the authors for the tribute. It is far and away the most flattering thing that anyone has ever done for me in my career.
This paper introduces approaches that combine micro/nanomolding, or nanoimprinting, techniques with proximity optical phase mask lithographic methods to form three-dimensional (3D) nanostructures in thick, transparent layers of photopolymers. The results demonstrate three strategies of this type, where molded relief structures in these photopolymers represent (i) fine (<1 μm) features that serve as the phase masks for their own exposure, (ii) coarse features (>1 μm) that are used with phase masks to provide access to large structure dimensions, and (iii) fine structures that are used together with phase masks to achieve large, multilevel phase modulations. Several examples are provided, together with optical modeling of the fabrication process and the transmission properties of certain of the fabricated structures. These approaches provide capabilities in 3D fabrication that complement those of other techniques, with potential applications in photonics, microfluidics, drug delivery, and other areas.
Since sensitivity to contamination is one of the verities of solid state joining, there is a need for assessing contamination of the part(s) to be joined, preferably nondestructively and while it can still be remedied. Because the surfaces joined in pinch welds are inaccessible and thus pose a greater challenge, most of the discussion concerns the search for the origin and effect of contamination on pinch welding and ways to detect and mitigate it. An example of contamination, and the investigation and remediation of such a system, is presented. Suggestions are made for techniques for nondestructive evaluation of surface contamination for other solid state welds as well as for pinch welds. Surfaces that have good visual access are amenable to inspection by diffuse reflection infrared Fourier transform (DRIFT) spectroscopy. Although other techniques are useful for specific classes of contaminants (such as hydrocarbons), DRIFT can be used for most classes of contaminants. Surfaces such as the interior of open tubes or stems that are to be pinch welded can be inspected using infrared reflection spectroscopy. It remains to be demonstrated whether this tool can detect graphite-based contamination, which has been seen in stems. For tubes with one closed end, the technique that should be investigated is emission infrared spectroscopy.
This paper considers the fundamentals of what happens in a solid when it is impacted by a medium-energy gallium ion. The study of the ion/sample interaction at the nanometer scale is applicable to most focused ion beam (FIB)–based work even if the FIB/sample interaction is only a step in the process, for example, micromachining or microelectronics device processing. Whereas the objective in other articles in this issue is to use the FIB tool to characterize a material or to machine a device or transmission electron microscopy sample, the goal in this article is for the FIB/sample interaction itself to become the product. To that end, the FIB/sample interaction is considered in three categories according to geometry: below, at, and above the surface. First, the FIB ions can penetrate the top atom layer(s) and interact below the surface. Ion implantation and ion damage on flat surfaces have been comprehensively examined; however, FIB applications require further investigation of high doses in three-dimensional profiles. Second, the ions can interact at the surface, where a morphological instability can lead to ripples and surface self-organization, which can depend on boundary conditions for site-specific and compound FIB processing. Third, the FIB may interact above the surface (and/or produce secondary particles that interact above the surface). Such ion beam–assisted deposition, FIB–CVD (chemical vapor deposition), enables elaborately complex three-dimensional structures when the FIB is paired with a gas injection system. Finally, at the nanometer scale, these three regimes—below, at, and above the surface—must be understood together if they are to be judiciously controlled with the FIB.
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
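As a minimal sketch of the flavor of these computations (illustrative only; the variable names and data are hypothetical, and the report's algorithms handle the harder general cases):

```python
import numpy as np

# Hypothetical interval-valued measurements, one [lo, hi] pair per observation.
data = np.array([[1.2, 1.5], [0.9, 1.4], [1.1, 1.1], [1.3, 1.8]])
lo, hi = data[:, 0], data[:, 1]

# The sample mean of interval data is itself an interval. Because the mean
# is monotone in each observation, its sharp bounds come from averaging the
# endpoints separately; the same argument applies to the median.
mean_interval = (lo.mean(), hi.mean())            # (1.125, 1.45)
median_interval = (np.median(lo), np.median(hi))  # (1.15, 1.45)
print(mean_interval, median_interval)
```

Non-monotone statistics such as the variance are harder: their sharp bounds can require combinatorial search over the intervals and are NP-hard to compute in general, which is part of the computability tradeoff the report summarizes.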
This paper analyzes three simulation architectures in the context of modeling scalability to address System of Systems (SoS) and complex system problems. The paper first provides an overview of the SoS problem domain and reviews past work analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration, as well as coupling and hierarchical decomposition, as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next, it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of unmanned aerial vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.
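For readers unfamiliar with Axiomatic Design, the coupling concept invoked here comes from Suh's design equation relating functional requirements (FRs) to design parameters (DPs):

\[ \{FR\} = [A]\,\{DP\}, \]

where a diagonal design matrix [A] indicates an uncoupled design, a triangular matrix a decoupled design, and a fully populated matrix a coupled design whose feasibility is correspondingly harder to assure.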
Weak link (WL)/strong link (SL) systems constitute important parts of the overall operational design of high consequence systems, with the SL system designed to permit operation only under intended conditions and the WL system designed to prevent unintended operation under accident conditions. Degradation of the system under accident conditions into a state in which the WLs have not deactivated the system and the SLs have failed, in the sense that they are in a configuration that could permit operation, is referred to as loss of assured safety. The probability of such degradation conditional on a specific set of accident conditions is referred to as the probability of loss of assured safety (PLOAS). Previous work developed computational procedures for calculating PLOAS under fire conditions for a system involving multiple WLs and SLs, under the assumption that a link fails instantly when it reaches its failure temperature. Extensions of these procedures are obtained for systems in which there is a temperature-dependent delay between the time at which a link reaches its failure temperature and the time at which it actually fails.
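As an illustrative special case (our notation): for a single WL and a single SL with independent random failure times T_W and T_S under the instantaneous-failure assumption, PLOAS is simply the probability that the SL fails before the WL,

\[ \mathrm{PLOAS} = \Pr(T_S < T_W) = \int_0^{\infty} \bigl[1 - F_W(t)\bigr]\,dF_S(t), \]

where F_W and F_S are the corresponding failure-time distribution functions; the extensions developed here generalize such expressions to multiple links and to a temperature-dependent delay between reaching the failure temperature and actual failure.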
This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.
As engineering challenges grow in the ever-shrinking world of nano-design, methods of making dynamic measurements of nano-materials and systems become more important. The Doppler electron velocimeter (DEV) is a new measurement concept motivated by the increasing importance of nano-dynamics. Nano-dynamics is defined in this context as any phenomenon that causes a dynamically changing phase in an electron beam; it includes traditional mechanical motion as well as other phenomena such as changing magnetic and electric fields. At present the DEV is only a theoretical device. This article highlights the importance of pursuing nano-dynamics and makes the case that the electron microscope and its associated optics are a viable test bed for developing this new measurement tool.
Several groups of plastic molded CD4011s were electrically tested as part of an Army dormant storage program. These parts had been in storage in missile containers for 4.5 years, and were electrically tested annually. Eight of the parts (out of 1200) failed the electrical tests and were subsequently analyzed to determine the cause of the failures. The root cause was found to be corrosion of the unpassivated Al bondpads. No significant attack of the passivated Al traces was found. Seven of the eight failures occurred in parts stored on a pre-position ship (the Jeb Stuart), suggesting a link between the external environment and observed corrosion.
As the capabilities of numerical simulations increase, decision makers are increasingly relying upon simulations rather than experiments to assess risks across a wide variety of accident scenarios, including fires. There are still, however, many aspects of fires that are either not well understood or are difficult to treat from first principles because of the computational expense. For a simulation to be truly predictive and to provide decision makers with information that can be reliably used for risk assessment, the remaining physical processes must be studied and suitable models developed for their effects. The model for the fuel evaporation rate in a liquid fuel pool fire is significant because, in well-ventilated fires, the evaporation rate largely controls the total heat release rate of the fire. This report outlines a set of experiments that will provide data for the development and validation of models for fuel regression rates in liquid hydrocarbon fuel fires. The experiments will be performed on fires in the fully turbulent scale range (> 1 m diameter) and with a number of hydrocarbon fuels ranging from lightly sooting to heavily sooting. The importance of spectral absorption in the liquid fuels and the vapor dome above the pool will be investigated, and the total heat flux to the pool surface will be measured. The importance of convection within the liquid fuel will be assessed by restricting large-scale liquid motion in some tests. These data sets will provide a sound, experimentally proven basis for assessing how much of the liquid fuel behavior must be modeled to enable a predictive simulation of a fuel fire, given the coupling between evaporation of fuel from the pool and the heat release from the fire that drives that evaporation.
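The coupling noted above can be summarized by a standard pool-fire energy balance (shown here for orientation; the notation is ours):

\[ \dot{m}'' = \frac{\dot{q}''_{\mathrm{net}}}{\Delta H_v + c_p\,(T_b - T_0)}, \qquad \dot{Q} = \dot{m}''\,A_{\mathrm{pool}}\,\Delta H_c, \]

where the net heat flux to the pool surface \dot{q}''_{\mathrm{net}} drives the fuel evaporation rate \dot{m}'' through the effective heat of gasification (latent heat plus sensible heating from the ambient temperature T_0 to the boiling point T_b), and the evaporation rate in turn sets the total heat release rate \dot{Q} through the heat of combustion \Delta H_c.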
Narasimhan Consulting Services, Inc. (NCS), under a contract with Sandia National Laboratories (SNL), designed and operated pilot-scale evaluations of adsorption and coagulation/filtration treatment technologies aimed at meeting the recently revised arsenic maximum contaminant level (MCL) for drinking water. The standard of 10 µg/L (10 ppb) is effective as of January 2006. The pilot demonstration is a project of the Arsenic Water Technology Partnership program, a partnership between the American Water Works Association Research Foundation (AwwaRF), SNL, and WERC (A Consortium for Environmental Education and Technology Development). The pilot evaluation was conducted at Well No. 30 of the City of Weatherford, OK, which supplies drinking water to a population of more than 10,400. Well water contained arsenic in the range of 16 to 29 ppb during the study. Four commercially available adsorption media were evaluated side by side for a period of three months. Both adsorption and coagulation/filtration effectively reduced arsenic from Well No. 30. A preliminary economic analysis indicated that adsorption using an iron oxide medium was more cost effective than the coagulation/filtration technology.
This tutorial is aimed at guiding a user through the process of performing a cable SGEMP simulation. The tutorial starts with processing a differential photon spectrum obtained from a Monte Carlo code such as ITS into a discrete (multi-group) spectrum used in CEPXS and CEPTRE. Guidance is given in the creation of a finite element mesh of the cable geometry. The set-up of a CEPTRE simulation is detailed. Users are instructed in evaluating the quality of the CEPTRE radiation transport results. The post-processing of CEPTRE results using Exostrip is detailed. Finally, an EMPHASIS/CABANA simulation is detailed, including interpretation of the output.
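As a minimal sketch of the first step (collapsing a differential spectrum onto group bins; the spectrum and bin structure are hypothetical, and this is not the actual ITS or CEPXS file format):

```python
import numpy as np

# Hypothetical differential photon spectrum, e.g. as tallied by a Monte
# Carlo code such as ITS: dN/dE (photons/MeV) on a fine energy grid.
E = np.linspace(0.01, 2.0, 400)        # MeV
dN_dE = 1.0e12 * np.exp(-E / 0.3)      # illustrative spectral shape only

# Multi-group collapse: integrate dN/dE over each group's energy interval.
edges = np.linspace(0.01, 2.0, 21)     # 20 equal-width groups
group_counts = np.array([
    np.trapz(dN_dE[(E >= a) & (E <= b)], E[(E >= a) & (E <= b)])
    for a, b in zip(edges[:-1], edges[1:])
])
print(group_counts)                    # one integrated photon count per group
```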
A standard approach to cross-language information retrieval (CLIR) uses Latent Semantic Analysis (LSA) in conjunction with a multilingual parallel aligned corpus. This approach has been shown to be successful in identifying similar documents across languages - or more precisely, retrieving the most similar document in one language to a query in another language. However, the approach has severe drawbacks when applied to a related task, that of clustering documents 'language-independently', so that documents about similar topics end up closest to one another in the semantic space regardless of their language. The problem is that documents are generally more similar to other documents in the same language than they are to documents in a different language, but on the same topic. As a result, when using multilingual LSA, documents will in practice cluster by language, not by topic. We propose a novel application of PARAFAC2 (which is a variant of PARAFAC, a multi-way generalization of the singular value decomposition [SVD]) to overcome this problem. Instead of forming a single multilingual term-by-document matrix which, under LSA, is subjected to SVD, we form an irregular three-way array, each slice of which is a separate term-by-document matrix for a single language in the parallel corpus. The goal is to compute an SVD for each language such that V (the matrix of right singular vectors) is the same across all languages. Effectively, PARAFAC2 imposes the constraint, not present in standard LSA, that the 'concepts' in all documents in the parallel corpus are the same regardless of language. Intuitively, this constraint makes sense, since the whole purpose of using a parallel corpus is that exactly the same concepts are expressed in the translations. We tested this approach by comparing the performance of PARAFAC2 with standard LSA in solving a particular CLIR problem. From our results, we conclude that PARAFAC2 offers a very promising alternative to LSA not only for multilingual document clustering, but also for solving other problems in cross-language information retrieval.
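A minimal sketch of the proposed decomposition, assuming TensorLy's parafac2 implementation and a random stand-in for the parallel corpus (the real experiments use actual multilingual term-by-document matrices):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac2
from sklearn.cluster import KMeans

# One term-by-document matrix per language. Column j of every slice is a
# translation of the same underlying document; vocabulary sizes (row
# counts) differ freely across languages.
rng = np.random.default_rng(0)
n_docs, rank = 200, 10
slices = [tl.tensor(rng.random((n_terms, n_docs)))
          for n_terms in (500, 650, 480)]

# PARAFAC2 factors each slice as X_i ~ B_i diag(a_i) C^T with the document
# factor C shared across slices, enforcing the constraint that the latent
# 'concepts' are identical in every language.
weights, (A, B, C), projections = parafac2(slices, rank=rank, n_iter_max=100)

# Cluster documents in the shared concept space, language-independently.
labels = KMeans(n_clusters=5, n_init=10).fit_predict(tl.to_numpy(C))
```

Under standard LSA, by contrast, the stacked multilingual matrix receives a single SVD and nothing forces the per-language subspaces to align, which is why documents tend to cluster by language rather than topic.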
This guide is intended to enable researchers working with seismic data, but lacking backgrounds in computer science and programming, to develop seismic algorithms using the MATLAB-based MatSeis software. Specifically, it presents a series of step-by-step instructions for writing four specific functions of increasing complexity, while simultaneously explaining the notation, syntax, and general program design of the functions being written. The ultimate goal is that the user can use this guide as a jumping-off point from which he or she can write new functions that are compatible with, and expand the capabilities of, the current MatSeis software, which has been developed as part of the Ground-based Nuclear Explosion Monitoring Research and Engineering (GNEMRE) program at Sandia National Laboratories.
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably structured hierarchical system, including systems with feedback.
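For concreteness (our notation): a polynomial chaos expansion represents an uncertain response Y in terms of orthogonal polynomials Ψ_k of standardized random variables ξ,

\[ Y(\xi) \approx \sum_{k=0}^{P} y_k\,\Psi_k(\xi), \qquad y_k = \frac{\langle Y, \Psi_k\rangle}{\langle \Psi_k, \Psi_k\rangle}, \]

so the deterministic coefficients y_k carry the uncertainty description and can be updated within the Bayesian network as new evidence about the stockpile arrives.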
To meet Sandia's engineering challenges, it is crucial that we shorten the product realization process. The challenge of the Reliable Robust Warhead (RRW) activity is to produce exceptionally high-quality designs and to respond to changes quickly. Computer-aided design models are an important element in realizing these objectives. Advances in the use of three-dimensional geometric models on the RRW activity have resulted in business advantage. This approach is directly applicable to other programs within the Laboratories. This paper describes the RRW approach and rationale. Keys to this approach are defined operational states that indicate a pathway toward greater model-based realization and a responsive infrastructure.
This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and to add capabilities that use that information to model the impact on the sustainment supply chain and operational availability.