The outline of this report is: (1) structures of hexagonal Er metal, ErH{sub 2} fluorite, and molybdenum; (2) texture issues and processing effects; (3) the idea of pole figure integration; and (4) promising neutron diffraction work. The summary of this report is: (1) ErD{sub 2} and ErT{sub 2} film microstructures are strongly affected by processing conditions; (2) both x-ray and neutron diffraction are being pursued to help diagnose structure/property issues in ErT{sub 2} films and their correlation to He retention/release; (3) texture presents a great challenge for the determination of site occupancy; and (4) the pole-figure-integration work shows promise for addressing texture issues in ErD{sub 2} and ErT{sub 2} films.
Hydrogen energy may provide the means to an environmentally friendly future. One of the problems related to its application for transportation is 'on-board' storage. Hydrogen storage in solids has long been recognized as one of the most practical approaches for this purpose. The H-capacity in interstitial hydrides of most metals and alloys is limited to below 2.5% by weight, which is unsatisfactory for on-board transportation applications. Magnesium hydride is an exception, with a hydrogen capacity of {approx}8.2 wt.%; however, its operating temperature, above 350 C, is too high for practical use. Sodium alanate (NaAlH{sub 4}) absorbs hydrogen up to 5.6 wt.% theoretically; however, its reaction kinetics and partial reversibility do not completely meet the targets for transportation applications. Recently Chen et al. [1] reported that (Li{sub 3}N + 2H{sub 2} {leftrightarrow} LiNH{sub 2} + 2LiH) provides a storage material with a possible high capacity, up to 11.5 wt.%, although this material is still too stable to meet the operating pressure/temperature requirement. Here we report a new approach to destabilize the lithium imide system by partial substitution of lithium by magnesium in the (LiNH{sub 2} + LiH {leftrightarrow} Li{sub 2}NH + H{sub 2}) system with minimal capacity loss. This Mg-substituted material can reversibly absorb 5.2 wt.% hydrogen at a pressure of 30 bar at 200 C, making it a very promising material for on-board hydrogen storage applications. It is interesting to observe that the starting material (2LiNH{sub 2} + MgH{sub 2}) converts to (Mg(NH{sub 2}){sub 2} + 2LiH) after a desorption/re-absorption cycle.
Biosecurity must be implemented without impeding biomedical and bioscience research. Existing security literature and regulatory requirements do not present a comprehensive approach or clear model for biosecurity, nor do they fully recognize the operational issues within laboratory environments. To help address these issues, the concept of Biosecurity Levels should be developed. Biosecurity Levels would provide increasing levels of security protection depending on the attractiveness of the pathogens to adversaries. Pathogens and toxins would be placed in a Biosecurity Level based on their security risk. Specifically, the security risk would be a function of an agent's weaponization potential and the consequences of its use. To demonstrate the concept, examples of security risk assessments for several human, animal, and plant pathogens will be presented. Higher security than that currently mandated by federal regulations would be applied to those very few agents that represent true weapons threats, and lower levels would be applied to the remainder.
This paper describes the analyses and the experimental mechanics program to support the National Aeronautics and Space Administration (NASA) investigation of the Shuttle Columbia accident. A synergism of the analysis and experimental effort is required to ensure that the final analysis is valid - the experimental program provides both the material behavior and a basis for validation, while the analysis is required to ensure the experimental effort provides behavior in the correct loading regime. Preliminary scoping calculations of foam impact onto the Shuttle Columbia's wing leading edge determined whether enough energy was available to damage the leading edge panel. These analyses also determined the strain-rate regimes for various materials to provide the material test conditions. Experimental testing of the reinforced carbon-carbon wing panels then proceeded to provide the material behavior in a variety of configurations and strain rates for flown or conditioned samples of the material. After determination of the important failure mechanisms of the material, validation experiments were designed to provide a basis of comparison for the analytical effort. Using this basis, the final analyses were used for test configuration, instrumentation location, and calibration definition in support of full-scale testing of the panels in June 2003. These tests subsequently confirmed the accident cause.
Photocatalytic porphyrins are used to reduce metal complexes from aqueous solution and, further, to control the deposition of metals onto porphyrin nanotubes and surfactant assembly templates to produce metal composite nanostructures and nanodevices. For example, surfactant templates lead to spherical platinum dendrites and foam-like nanomaterials composed of dendritic platinum nanosheets. Porphyrin nanotubes are reported for the first time, and photocatalytic porphyrin nanotubes are shown to reduce metal complexes and deposit the metal selectively onto the inner or outer surface of the tubes, leading to nanotube-metal composite structures capable of hydrogen evolution and to other nanodevices.
This report describes the purpose and results of the two-year, Sandia-sponsored Laboratory Directed Research and Development (LDRD) project entitled 'Understanding Communication in Counterterrorism Crisis Management.' The purpose of this project was to facilitate the capture of key communications among team members in simulated training exercises, and to learn how to improve communication in that domain. The first section of this document details the scenario development aspects of the simulation. The second section covers the new communication technologies that were developed and incorporated into the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of decision support tools. The third section provides an overview of the features of the simulation and highlights its communication aspects. The fourth section describes the Team Communication Study processes and methodologies. The fifth section discusses future directions and areas in which to apply the new technologies and study results obtained as a result of this LDRD.
The summary of this report is: (1) Optimizing synthesis parameters leads to enhanced catalyst surface areas - Nonlinear relationship between activity and surface area; (2) Catalyst development performed under a staged protocol; (3) Catalytic materials with desired properties have been identified - Meet stage requirements, Performance can be tuned by altering component concentrations, Optimization still necessary at low temperatures; (4) Better activity and tolerance to SO{sub 2} - V{sub 2}O{sub 5}-based materials ruled out because of durability issues; and (5) Future work will focus on improving overall low temperature activity.
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, the development of efficient reduced-scaling algorithms for massively parallel computers has not been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays but replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains using a Linux cluster with an InfiniBand interconnect.
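To make the idea of a sparse data representation with generalized contraction routines concrete, the sketch below is a minimal single-process, two-index analogue (in Python, with illustrative block sizes and names): dense blocks are stored in a dictionary keyed by block indices, and the contraction simply skips block pairs that are absent, which is where the reduced scaling comes from. It is not the authors' distributed implementation.

import numpy as np

# A block-sparse 2-index tensor: only nonzero blocks are stored,
# keyed by their (row-block, col-block) indices.
class BlockSparseMatrix:
    def __init__(self, nblocks, block_size):
        self.nblocks = nblocks
        self.bs = block_size
        self.blocks = {}          # {(I, J): dense ndarray}

    def set_block(self, I, J, data):
        self.blocks[(I, J)] = np.asarray(data, dtype=float)

    def contract(self, other):
        """C[I,K] = sum_J A[I,J] @ B[J,K], skipping absent (zero) blocks."""
        result = BlockSparseMatrix(self.nblocks, self.bs)
        for (I, J), a in self.blocks.items():
            for K in range(other.nblocks):
                b = other.blocks.get((J, K))
                if b is None:          # block is zero -> no work, no storage
                    continue
                c = result.blocks.setdefault((I, K), np.zeros((self.bs, self.bs)))
                c += a @ b
        return result

# Example: two banded (locally coupled) tensors, as might arise when
# correlation amplitudes are restricted to spatially close orbital domains.
nb, bs = 6, 4
A, B = BlockSparseMatrix(nb, bs), BlockSparseMatrix(nb, bs)
for I in range(nb):
    for J in range(max(0, I - 1), min(nb, I + 2)):   # keep only near-diagonal blocks
        A.set_block(I, J, np.random.rand(bs, bs))
        B.set_block(I, J, np.random.rand(bs, bs))
C = A.contract(B)
print(len(C.blocks), "nonzero blocks instead of", nb * nb)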
The generalized momentum balance (GMB) methods, explored chiefly by Shabana and his co-workers, treat slap or collision in linear structures as sequences of impulses, thereby maintaining the linearity of the structures throughout. Further, such linear analysis is facilitated by modal representation of the structures. These methods are discussed here and extended. Simulations on a simple two-rod problem demonstrate how this modal impulse approximation affects the system both directly after each impulse and over the entire collision. Furthermore, these simulations illustrate how the GMB results differ from the exact solution and how mitigation of these artifacts is achieved. Another modal method discussed in this paper is the idea of imposing piecewise constant forces over short, yet finite, time intervals during contact. The derivation of this method is substantially different from that of the GMB method, yet the numerical results show similar behavior, adding credence to both models. Finally, a novel method combining these two approaches is introduced. The new method produces physically reasonable results that are numerically very close to the exact solution of the collision of two rods. This approach avoids most of the nonphysical numerical artifacts of interpenetration or chatter present in the first two methods.
The purpose of modal testing is usually to provide an estimate of a linear structural dynamics model. Typical uses of the experimental modal model are (1) to compare it with a finite element model for model validation or updating; (2) to verify a plant model for a control system; or (3) to develop an experimentally based model to understand structural dynamic responses. Since these are some common end uses, for this article the main goal is to focus on excitation methods to obtain an adequate estimate of a linear structural dynamics model. The purpose of the modal test should also provide the requirements that will drive the rigor of the testing, analysis, and the amount of instrumentation. Sometimes, only the natural frequencies are required. The next level is to obtain relative mode shapes with the frequencies to correlate with a finite element model. More rigor is required to get accurate critical damping ratios if energy dissipation is important. At the highest level, a full experimental model may require the natural frequencies, damping, modal mass, scaled shapes, and, perhaps, other terms to account for out-of-band modes. There is usually a requirement on the uncertainty of the modal parameters, whether it is specifically called out or underlying. These requirements drive the meaning of the word 'adequate' in the phrase 'adequate linear estimate' for the structural dynamics model. The most popular tools for exciting structures in modal tests are shakers and impact hammers. The emphasis here will be on shakers. There have been many papers over the years that mention some of the advantages and issues associated with shaker testing. One study that is focused on getting good data with shakers is that of Peterson. Although impact hammers may seem very convenient, in many cases, shakers offer advantages in obtaining a linear model. The best choice of excitation device is somewhat dependent on the test article and logistical considerations. These considerations will be addressed in this article to help the test team make a choice between impact hammer and various shaker options. After the choice is made, there are still challenges to obtaining data for an adequate linear estimate of the desired structural dynamics model. The structural dynamics model may be a modal model with the desired quantities of natural frequencies, viscous damping ratios, and mode shapes with modal masses, or it may be the frequency response functions (FRFs), or their transforms, which may be constructed from the modal model. In any case, the fidelity of the linear model depends to a large extent on the validity of the experimental data, which are generally gathered in the form of FRFs. With the goal of obtaining an 'adequate linear estimate' for a model of the structural dynamic system under test, consider several common challenges that must be overcome in the excitation setup to gather adequate data.
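Since the fidelity of the linear model rests on the measured FRFs, a small illustrative sketch of the standard H1 FRF estimate from a broadband excitation and response record is given below; the sampling rate, synthetic single-mode system, and noise level are assumptions, not data from any test discussed here.

import numpy as np
from scipy import signal

fs = 2048.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)
x = np.random.randn(t.size)       # broadband random shaker input (synthetic)

# Synthetic single-degree-of-freedom "structure": 40 Hz mode, 2% damping
wn, zeta = 2 * np.pi * 40.0, 0.02
sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
_, y, _ = signal.lsim(sys, x, t)  # response "measurement"
y += 0.05 * np.random.randn(t.size)   # add measurement noise

# H1 estimator: H1(f) = Sxy(f) / Sxx(f), averaged over blocks
nper = 4096
f, Sxy = signal.csd(x, y, fs=fs, nperseg=nper)
_, Sxx = signal.welch(x, fs=fs, nperseg=nper)
H1 = Sxy / Sxx

# Coherence indicates where the linear estimate is trustworthy
_, coh = signal.coherence(x, y, fs=fs, nperseg=nper)
peak = f[np.argmax(np.abs(H1))]
print(f"FRF peak near {peak:.1f} Hz, coherence there {coh[np.argmax(np.abs(H1))]:.2f}")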
A laser hazard analysis and safety assessment was performed for each of the various laser diode candidates associated with the High Resolution Pulse Scanner, based on the ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers. A theoretical laser hazard analysis model for this system was derived, and an Excel{reg_sign} spreadsheet model was developed to answer the 'what if' questions associated with the various modes of operation of the candidate diode lasers.
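The spreadsheet itself is not reproduced here, but the sketch below illustrates the style of 'what if' calculation involved, using the standard small-source continuous-wave nominal ocular hazard distance relation; the power, divergence, aperture, and MPE values are placeholders rather than parameters of the actual candidate diodes.

import math

def nohd_cw(power_w, mpe_w_per_m2, beam_div_rad, exit_diam_m):
    """Small-source CW nominal ocular hazard distance (ANSI Z136.1 style).

    Distance at which the beam irradiance falls to the MPE:
        NOHD = ( sqrt(4*P / (pi*MPE)) - a ) / phi
    where P is power, a is the exit beam diameter, phi is the divergence.
    """
    d = (math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_m2)) - exit_diam_m) / beam_div_rad
    return max(d, 0.0)   # negative result means the beam is below MPE at the aperture

# Placeholder "what if" cases for hypothetical diode candidates
cases = {
    "diode A (5 mW, 1 mrad)":  dict(power_w=5e-3,  beam_div_rad=1e-3),
    "diode B (50 mW, 2 mrad)": dict(power_w=50e-3, beam_div_rad=2e-3),
}
mpe = 10.0        # W/m^2, placeholder MPE for the assumed wavelength/exposure time
for name, c in cases.items():
    print(name, "->", round(nohd_cw(mpe_w_per_m2=mpe, exit_diam_m=2e-3, **c), 1), "m")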
A Self Organizing Map (SOM) approach was used to analyze physiological data taken from a group of subjects participating in a cooperative video shooting game. The ultimate aim was to discover signatures of group cooperation, conflict, leadership, and performance. Such information could be fed back to participants in a meaningful way, and ultimately increase group performance in national security applications, where the consequences of a poor group decision can be devastating. Results demonstrated that a SOM can be a useful tool in revealing individual and group signatures from physiological data, and could ultimately be used to heighten group performance.
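A minimal sketch of the self-organizing map training used for this kind of clustering is shown below; the features (heart rate, skin conductance, respiration), map size, and data are synthetic assumptions rather than the study's actual physiological channels.

import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular SOM on rows of `data`; returns the weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    w = rng.random((rows, cols, dim))
    # (row, col) coordinates of every map node, for neighborhood distances
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5   # shrinking neighborhood radius
        for x in rng.permutation(data):
            # best-matching unit = node whose weights are closest to x
            d2 = np.sum((w - x) ** 2, axis=-1)
            bmu = np.unravel_index(np.argmin(d2), (rows, cols))
            # Gaussian neighborhood around the BMU on the map lattice
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            w += lr * g[..., None] * (x - w)
    return w

# Synthetic "physiological" features: e.g. heart rate, skin conductance, respiration
rng = np.random.default_rng(1)
subjects_calm = rng.normal([65, 2.0, 14], [5, 0.3, 2], size=(200, 3))
subjects_stress = rng.normal([95, 6.0, 22], [8, 0.8, 3], size=(200, 3))
data = np.vstack([subjects_calm, subjects_stress])
data = (data - data.mean(0)) / data.std(0)    # normalize each feature

weights = train_som(data)
bmu_of_first = np.unravel_index(np.argmin(np.sum((weights - data[0])**2, -1)), weights.shape[:2])
print("first sample maps to SOM node", bmu_of_first)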
Deposition in next-step devices such as ITER will pose diagnostic challenges. Codeposition of hydrogen with carbon needs to be characterized and understood in the initial hydrogen phase in order to mitigate tritium retention and qualify carbon plasma facing components for DT operations. Plasma facing diagnostic mirrors will experience deposition that is expected to rapidly degrade their reflectivity, posing a challenge to diagnostic design. Some eroded particles will collect as dust on interior surfaces, and the quantity of dust will be strictly regulated for safety reasons; however, diagnostics of in-vessel dust are lacking. We report results from two diagnostics that relate to these issues. Measurements of deposition on NSTX with 4 Hz time resolution have been made using a quartz microbalance in a configuration that mimics that of a typical diagnostic mirror. Deposition was often observed immediately following the discharge, suggesting that diagnostic shutters should be closed as soon as possible after the time period of interest. Material loss was observed following a few discharges. A novel diagnostic to detect dust particles on remote surfaces was commissioned on NSTX.
The Estancia Basin lies about 30 miles to the east of Albuquerque, NM. It is a closed basin in terms of surface water and is somewhat isolated in terms of groundwater. Historically, the primary natural outlet for both surface water and groundwater has been evaporation from the salt lakes in the southeastern portion of the basin. There are no significant watercourses that flow into this basin, and groundwater recharge is minimal. During the 20th Century, agriculture grew to become the major user of groundwater in the basin. Significant declines in groundwater levels have accompanied this agricultural use. Domestic and municipal use of the basin groundwater is increasing as Albuquerque's population continues to spill eastward into the basin, but this use is projected to be less than 1% of agricultural use well into the 21st Century. This Water Budget model keeps track of the water balance within the basin. The model considers the amount of water entering and leaving the basin. Since there is no significant surface water component within this basin, the balance of water in the groundwater aquifer constitutes the primary component of this balance. Inflow is based on assumptions for recharge made by earlier researchers. Outflow from the basin is the summation of the depletion from all basin water uses. The model user can control future water use within the basin via slider bars that set values for population growth, water system per-capita use, agricultural acreage, and the types of agricultural diversion. The user can also adjust recharge and natural discharge within the limits of uncertainty for those parameters. The model runs for 100 years, beginning in 1940 and ending in 2040. During the first 55 years, model results can be compared to historical data and estimates of groundwater use. The last 45 years are predictive. The model was calibrated to match New Mexico Office of State Engineer (NMOSE) estimates of aquifer storage during the historical period by making adjustments to recharge and outflow that were within the parameters' uncertainties. Although results of this calibrated model imply that there may be more water remaining in the aquifer than the Estancia Water Plan estimates, this answer is only one possible result within a range of answers based on large parameter uncertainties.
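A minimal sketch of the bookkeeping behind such a water-budget model follows; the recharge, storage, and use numbers are purely illustrative, not the calibrated Estancia values.

# Simple annual groundwater balance: storage(t+1) = storage(t) + recharge - discharge - pumping
def run_budget(storage_acft, recharge_acft, natural_discharge_acft,
               ag_acres, ag_duty_acft_per_acre,
               population, gpcd, growth_rate, years):
    """March the aquifer storage forward one year at a time (all volumes in acre-feet)."""
    history = []
    for year in years:
        municipal = population * gpcd * 365.0 / 325851.0   # gallons/yr -> acre-feet/yr
        agricultural = ag_acres * ag_duty_acft_per_acre
        storage_acft += recharge_acft - natural_discharge_acft - agricultural - municipal
        history.append((year, storage_acft, agricultural, municipal))
        population *= (1.0 + growth_rate)
    return history

# Placeholder inputs (the real model exposes these as user-adjustable slider bars)
results = run_budget(storage_acft=6.0e6, recharge_acft=25_000, natural_discharge_acft=5_000,
                     ag_acres=30_000, ag_duty_acft_per_acre=2.0,
                     population=20_000, gpcd=150, growth_rate=0.02,
                     years=range(1940, 2041))
year, storage, ag, muni = results[-1]
print(f"{year}: storage {storage:,.0f} ac-ft; ag use {ag:,.0f}; municipal use {muni:,.0f}")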
This paper investigates the performance of tensor methods for solving small- and large-scale systems of nonlinear equations where the Jacobian matrix at the root is ill-conditioned or singular. This condition occurs on many classes of problems, such as identifying or approaching turning points in path following problems. The singular case has been studied more than the highly ill-conditioned case, for both Newton and tensor methods. It is known that Newton-based methods do not work well with singular problems because they converge linearly to the solution and, in some cases, with poor accuracy. On the other hand, direct tensor methods have performed well on singular problems and have superlinear convergence on such problems under certain conditions. This behavior originates from the use of a special, restricted form of the second-order term included in the local tensor model that provides information lacking in a (nearly) singular Jacobian. With several implementations available for large-scale problems, tensor methods now are capable of solving larger problems. We compare the performance of tensor methods and Newton-based methods for both small- and large-scale problems over a range of conditionings, from well-conditioned to ill-conditioned to singular. Previous studies with tensor methods only concerned the ends of this spectrum. Our results show that tensor methods are increasingly superior to Newton-based methods as the problem grows more ill-conditioned.
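The behavior described above is easy to see in a small sketch: plain Newton iteration converges quadratically to a regular root but only linearly to a root where the Jacobian (here a scalar derivative) is singular. The test functions are illustrative; the tensor method itself is not reproduced.

import numpy as np

def newton(f, fprime, x0, iters=12):
    """Plain Newton iteration; returns the error after each step."""
    x, errs = x0, []
    for _ in range(iters):
        x = x - f(x) / fprime(x)
        errs.append(abs(x))          # the root of both test problems is x* = 0
    return errs

# Regular root: f(x) = x + x^2, f'(0) = 1  -> quadratic convergence
reg = newton(lambda x: x + x**2, lambda x: 1 + 2*x, x0=0.5)
# Singular root: f(x) = x^2, f'(0) = 0     -> error is roughly halved each step (linear)
sing = newton(lambda x: x**2, lambda x: 2*x, x0=0.5)

for k in range(6):
    print(f"step {k+1}:  regular {reg[k]:.2e}   singular {sing[k]:.2e}")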
Tonopah Test Range (TTR) in Nevada and Kauai Test Facility (KTF) in Hawaii are government-owned, contractor-operated facilities operated by Sandia Corporation, a subsidiary of Lockheed Martin Corporation. The U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA), through the Sandia Site Office (SSO) in Albuquerque, NM, manages TTR's and KTF's operations. Sandia Corporation conducts operations at TTR in support of DOE/NNSA's Weapons Ordnance Program and has operated the site since 1957. Westinghouse Government Services subcontracts to Sandia Corporation in administering most of the environmental programs at TTR. Sandia Corporation operates KTF as a rocket preparation, launching, and tracking facility. This Annual Site Environmental Report (ASER) summarizes data and the compliance status of the environmental protection and monitoring program at TTR and KTF through Calendar Year (CY) 2003. The environmental regulations applicable at these sites include state and federal regulations governing air emissions, wastewater effluent, waste management, terrestrial surveillance, and Environmental Restoration (ER) cleanup activities. Sandia Corporation is responsible only for those environmental program activities related to its operations. The DOE/NNSA Nevada Site Office (NSO) retains responsibility for the cleanup and management of ER TTR sites. Currently, there are no ER sites at KTF. Environmental monitoring and surveillance programs are required by DOE Order 450.1, Environmental Protection Program (DOE 2003), and DOE Order 231.1 Chg 2, Environment, Safety, and Health Reporting (DOE 1996).
Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned, contractor-operated facility owned by the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) and managed by the Sandia Site Office (SSO), Albuquerque, New Mexico. Sandia Corporation, a wholly-owned subsidiary of Lockheed Martin Corporation, operates SNL/NM. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2003. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 450.1, ''Environmental Protection Program'' (DOE 2003a) and DOE Order 231.1 Chg.2, ''Environment, Safety, and Health Reporting'' (DOE 1996).
Efficient and environmentally sound methods of producing hydrogen are of great importance to the US as it progresses toward the H{sub 2} economy. Current studies are investigating the use of high temperature systems driven by nuclear and/or solar energy to drive thermochemical cycles for H{sub 2} production. These processes are advantageous since they do not produce the greenhouse gas emissions associated with hydrogen production from electrolysis or hydrocarbon reformation. Double-substituted perovskites, A{sub 1-x}Sr{sub x}Co{sub 1-y}B{sub y}O{sub 3-{delta}} (A = Y, La; B = Fe, Ni, Cr, Mn), were synthesized for use as ceramic high-temperature oxygen separation membranes. The materials have promising oxygen sorption properties and were structurally robust under varying temperatures and atmospheres. Post-TGA powder diffraction patterns revealed no structural changes after the temperature and gas treatments, demonstrating the robustness of the material. The most promising material was the La{sub 0.1}Sr{sub 0.9}Co{sub 1-x}Mn{sub x}O{sub 3-{delta}} perovskite. The oxygen sorption properties increased with increasing Mn doping.
This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.
Modeling the response of buried reinforced concrete structures subjected to close-in detonations of conventional high explosives poses a challenge for a number of reasons. Foremost, there is the potential for coupled interaction between the blast and structure. Coupling enters the problem whenever the structure deformation affects the stress state in the neighboring soil, which in turn, affects the loading on the structure. Additional challenges for numerical modeling include handling disparate degrees of material deformation encountered in the structure and surrounding soil, modeling the structure details (e.g., modeling the concrete with embedded reinforcement, jointed connections, etc.), providing adequate mesh resolution, and characterizing the soil response under blast loading. There are numerous numerical approaches for modeling this class of problem (e.g., coupled finite element/smooth particle hydrodynamics, arbitrary Lagrange-Eulerian methods, etc.). The focus of this work will be the use of a coupled Euler-Lagrange (CEL) solution approach. In particular, the development and application of a CEL capability within the Zapotec code is described. Zapotec links two production codes, CTH and Pronto3D. CTH, an Eulerian shock physics code, performs the Eulerian portion of the calculation, while Pronto3D, an explicit finite element code, performs the Lagrangian portion. The two codes are run concurrently with the appropriate portions of a problem solved on their respective computational domains. Zapotec handles the coupling between the two domains. The application of the CEL methodology within Zapotec for modeling coupled blast/structure interaction will be investigated by a series of benchmark calculations. These benchmarks rely on data from the Conventional Weapons Effects Backfill (CONWEB) test series. In these tests, a 15.4-lb pipe-encased C-4 charge was detonated in soil at a 5-foot standoff from a buried test structure. The test structure was composed of a reinforced concrete slab bolted to a reaction structure. Both the slab thickness and soil media were varied in the test series. The wealth of data obtained from these tests along with the variations in experimental setups provide ample opportunity to assess the robustness of the Zapotec CEL methodology.
We explore the stability of Random Boolean Networks as a model of biological interaction networks. We introduce the surface-to-volume ratio as a measure of the stability of the network. The surface is defined as the set of states within a basin of attraction that map outside the basin under a single bit-flip operation. The volume is defined as the total number of states in the basin. We report the development of an object-oriented Boolean network analysis code (Attract) to investigate the structure of stable vs. unstable networks. We find two distinct types of stable networks. The first type is the nearly trivial stable network with a few basins of attraction. The second type contains many basins. We conclude that stable networks of the second type are extremely rare.
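A minimal sketch of the kind of exhaustive analysis the Attract code performs is shown below for a small random Boolean network: enumerate the state space, assign every state to a basin of attraction, and compute each basin's surface-to-volume ratio under single bit-flips. The network size, wiring, and update rules are illustrative.

import random

random.seed(3)
N, K = 8, 2                                   # small network: 2^8 = 256 states

# Random wiring and random Boolean functions (truth tables) for each node
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2**K)] for _ in range(N)]

def step(state):
    """Synchronous update: each node reads its K inputs and applies its truth table."""
    new = 0
    for i in range(N):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | ((state >> j) & 1)
        new |= tables[i][idx] << i
    return new

# Label every state with the attractor (basin) its trajectory eventually reaches
basin = [-1] * (2**N)
n_basins = 0
for s in range(2**N):
    path = []
    while basin[s] == -1 and s not in path:
        path.append(s)
        s = step(s)
    label = basin[s] if basin[s] != -1 else n_basins
    if basin[s] == -1:
        n_basins += 1
    for p in path:
        basin[p] = label

# Surface = states in the basin that a single bit-flip sends into a different basin
for b in range(n_basins):
    states = [s for s in range(2**N) if basin[s] == b]
    surface = sum(any(basin[s ^ (1 << k)] != b for k in range(N)) for s in states)
    print(f"basin {b}: volume {len(states)}, surface {surface}, S/V {surface/len(states):.2f}")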
Historically, TCP/IP has been the protocol suite used to transfer data throughout the Advanced Simulation and Computing (ASC) community. However, TCP was developed many years ago for an environment very different from the ASC Wide Area Network (WAN) of today. Numerous publications hint at better performance if modifications were made to the TCP algorithms or a different protocol were used to transfer data across a high bandwidth, high delay WAN. Since Sandia National Laboratories wants to maximize ASC WAN performance to support the Thor's Hammer supercomputer, there is strong interest in evaluating modifications to the TCP protocol and in evaluating alternatives to TCP, such as SCTP, to determine whether they provide improved performance. Therefore, the goal of this project is to test, evaluate, compare, and report on protocol technologies that enhance the performance of the ASC WAN.
Response of removable epoxy foam (REF) to high heat fluxes is described using a decomposition chemistry model [1] in conjunction with a finite element heat conduction code [2] that supports chemical kinetics and dynamic radiation enclosures. The chemistry model [1] describes the temporal transformation of virgin foam into carbonaceous residue by considering breakdown of the foam polymer structure, desorption of gases not associated with the foam polymer, mass transport of decomposition products from the reaction site to the bulk gas, and phase equilibrium. The finite element foam response model considers the spatial behavior of the foam by using measured and predicted thermophysical properties in combination with the decomposition chemistry model. Foam elements are removed from the computational domain when the condensed mass fractions of the foam elements are close to zero. Element removal, referred to as element death, creates a space within the metal confinement causing radiation to be the dominant mode of heat transfer between the surface of the remaining foam elements and the interior walls of the confining metal skin. Predictions were compared to front locations extrapolated from radiographs of foam cylinders enclosed in metal containers that were heated with quartz lamps [3,4]. The effects of the maximum temperature of the metal container, density of the foam, the foam orientation, venting of the decomposition products, pressurization of the metal container, and the presence or absence of embedded components are discussed.
In a recent paper, Starr and Segalman demonstrated that any Masing model can be represented as a parallel-series Iwan model. A preponderance of the constitutive models that have been suggested for simulating mechanical joints are Masing models, and the purpose of this discussion is to demonstrate how the Iwan representation of those models can yield insight into their character. In particular, this approach can facilitate a critical comparison among numerous plausible constitutive models. It is explicitly shown that three-parameter models such as Smallwood's (Ramberg-Osgood) calculate parameters in such a manner that macro-slip is not an independent parameter, yet the model admits macro-slip. The introduction of a fourth parameter is therefore required. It is shown that when a macro-slip force is specified for the Smallwood model the result is a special case of the Segalman four-parameter model. Both of these models admit a slope discontinuity at the inception of macro-slip. A five-parameter model that has the beneficial features of Segalman's four-parameter model is proposed. This model manifests a force-displacement curve having a continuous first derivative.
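For concreteness, the sketch below implements a discrete parallel-series Iwan model (a parallel bank of elastic-perfectly-plastic Jenkins elements) and drives it with a cyclic displacement history to produce hysteresis and, at large amplitude, macro-slip; the element count, stiffnesses, and slip strengths are illustrative and do not correspond to any of the calibrated models discussed above.

import numpy as np

class IwanModel:
    """Parallel-series Iwan model: N Jenkins (spring + Coulomb slider) elements in parallel."""
    def __init__(self, stiffness, slip_forces):
        self.k = np.asarray(stiffness, dtype=float)       # spring stiffness of each element
        self.fy = np.asarray(slip_forces, dtype=float)    # slider strength of each element
        self.slider = np.zeros_like(self.k)               # accumulated slider displacement

    def force(self, u):
        """Total joint force for imposed displacement u (updates internal slider states)."""
        f_el = self.k * (u - self.slider)                 # trial elastic force in each element
        slipping = np.abs(f_el) > self.fy
        # sliders that exceed their strength slip until the element force equals +/- fy
        self.slider[slipping] = u - np.sign(f_el[slipping]) * self.fy[slipping] / self.k[slipping]
        f_el = self.k * (u - self.slider)
        return f_el.sum()

# Illustrative joint: 50 elements with uniformly distributed slip strengths
n = 50
joint = IwanModel(stiffness=np.full(n, 1.0e4 / n),
                  slip_forces=np.linspace(0.5, 25.0, n) / n)

# Cyclic displacement history -> hysteresis loop; macro-slip once every slider has yielded
u_hist = 0.02 * np.sin(np.linspace(0, 4 * np.pi, 400))
f_hist = np.array([joint.force(u) for u in u_hist])
print("peak joint force:", f_hist.max())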
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. In contrast with traditional event detection systems that trigger an image capture, this enables a new class of sensors that use a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research that can aid in this work.
This report documents state-of-the-art methods, tools, and data for the conduct of a fire Probabilistic Risk Assessment (PRA) for a commercial nuclear power plant (NPP) application. The methods have been developed under the Fire Risk Re-quantification Study. This study was conducted as a joint activity between EPRI and the U. S. NRC Office of Nuclear Regulatory Research (RES) under the terms of an EPRI/RES Memorandum of Understanding [RS.1] and an accompanying Fire Research Addendum [RS.2]. Industry participants supported demonstration analyses and provided peer review of this methodology. The documented methods are intended to support future applications of Fire PRA, including risk-informed regulatory applications. The documented method reflects state-of-the-art fire risk analysis approaches. The primary objective of the Fire Risk Study was to consolidate recent research and development activities into a single state-of-the-art fire PRA analysis methodology. Methodological issues raised in past fire risk analyses, including the Individual Plant Examination of External Events (IPEEE) fire analyses, have been addressed to the extent allowed by the current state-of-the-art and the overall project scope. Methodological debates were resolved through a consensus process between experts representing both EPRI and RES. The consensus process included a provision whereby each major party (EPRI and RES) could maintain differing technical positions if consensus could not be reached. No cases were encountered where this provision was invoked. While the primary objective of the project was to consolidate existing state-of-the-art methods, in many areas, the newly documented methods represent a significant advancement over previously documented methods. In several areas, this project has, in fact, developed new methods and approaches. Such advances typically relate to areas of past methodological debate.
In flight tests, certain finned bodies of revolution firing lateral jets experience slower spin rates than expected. The primary cause of the reduced spin rate is the interaction between the lateral jets and the freestream air flowing past the body. This interaction produces vortices that interact with the fins (Vortex-Fin Interaction, or VFI), altering the pressure distribution over the fins and creating a torque that counteracts the desired spin (counter torque). The current task is to develop an automated procedure for analyzing the pressures measured at an array of points on the fin surfaces of a body tested in a production-scale wind tunnel to determine the VFI-induced roll torque and compare it to the roll torque measured experimentally with an aerodynamic balance. Basic pressure, force, and torque relationships were applied to finite elements defined by the pressure measurement locations and integrated across the fin surface. The integrated fin pressures will help assess the distinct contributions of the individual fins to the counter torque and aid in correlating the counter torque with the positions and strengths of the vortices. The methodology produced comparisons of the effects of VFI for varying flow conditions such as freestream Mach number and dynamic pressure. The results show that for some cases the calculated counter torque agreed with the measured counter torque; however, the results were less consistent at higher freestream Mach numbers and dynamic pressures.
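A minimal sketch of the pressure-integration step is shown below: each tap is assigned a tributary area element, the element force is the pressure difference times the area, and the roll torque is the sum of element forces times their moment arms about the body axis. The tap layout, pressures, and geometry are placeholders, not the wind-tunnel data.

import numpy as np

def fin_roll_torque(span_stations, chord_stations, delta_p):
    """Integrate differential pressure over a rectangular grid of taps on one fin.

    span_stations  : radial distance of each tap row from the body axis (m)
    chord_stations : chordwise location of each tap column (m)
    delta_p        : (n_span, n_chord) array of pressure differences across the fin (Pa)
    Returns the roll torque contribution of the fin about the body axis (N*m).
    """
    dp = np.asarray(delta_p, dtype=float)
    # Tributary widths for each tap (simple midpoint areas; trapezoidal rule would also work)
    dr = np.gradient(np.asarray(span_stations, dtype=float))
    dc = np.gradient(np.asarray(chord_stations, dtype=float))
    area = np.outer(dr, dc)                   # element areas
    force = dp * area                         # normal force on each element
    arm = np.asarray(span_stations, dtype=float)[:, None]   # moment arm = radial distance
    return np.sum(force * arm)

# Placeholder tap grid: 5 spanwise rows x 4 chordwise columns on one fin
r = np.linspace(0.05, 0.25, 5)            # m from body axis
c = np.linspace(0.00, 0.12, 4)            # m along the chord
dp = 200.0 * np.exp(-((r[:, None] - 0.18) ** 2) / 0.004)  # vortex-induced pressure footprint
torque_one_fin = fin_roll_torque(r, c, np.broadcast_to(dp, (5, 4)))
total_counter_torque = 4 * torque_one_fin  # assume the same footprint on all four fins
print(f"estimated counter torque: {total_counter_torque:.2f} N*m")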
The analytical model for the depth of correlation (measurement depth) of a microscopic particle image velocimetry (micro-PIV) experiment derived by Olsen and Adrian (Exp. Fluids, 29, pp. S166-S174, 2000) has been modified to be applicable to experiments using high numerical aperture optics. A series of measurements is presented that experimentally quantifies the depth of correlation of micro-PIV velocity measurements employing high numerical aperture and magnification optics. These measurements demonstrate that the modified analytical model is quite accurate in estimating the depth of correlation in micro-PIV measurements using this class of optics. Additionally, it was found that the Gaussian particle approximation made in this model does not significantly affect the model's performance. It is also demonstrated that this modified analytical model readily predicts the depth of correlation when viewing into a medium with an index of refraction different from that of the immersion medium.
Solid-state {sup 1}H NMR relaxometry studies were conducted on a hydroxy-terminated polybutadiene (HTPB) based polyurethane elastomer thermo-oxidatively aged at 80 C. The {sup 1}H T{sub 1}, T{sub 2}, and T{sub 1{rho}} relaxation times of samples thermally aged for various periods of time were determined as a function of NMR measurement temperature. The response of each measurement was calculated from a best-fit linear function of the relaxation time vs. aging time. It was found that the T{sub 2,H} and T{sub 1{rho},H} relaxation times exhibited the largest response to thermal degradation, whereas T{sub 1,H} showed minimal change. All of the NMR relaxation measurements on solid samples showed significantly less sensitivity to thermal aging than the T{sub 2,H} relaxation times of solvent-swollen samples.
The microstructure and mechanical properties of niobium-modified lead zirconate titanate (PNZT) 95/5 ceramics, where 95/5 refers to the ratio of lead zirconate to lead titanate, were evaluated as a function of lead (Pb) stoichiometry. Chemically prepared PNZT 95/5 is produced at Sandia National Laboratories by the Ceramics and Glass Processing Department (14154) for use as voltage elements in ferroelectric neutron generator power supplies. PNZT 95/5 was prepared according to the nominal formulation of Pb{sub 0.991+x}(Zr{sub 0.955}Ti{sub 0.045}){sub 0.982}Nb{sub 0.018}O{sub 3+x}, where x (-0.0274 {approx}< x {approx}< 0.0297) refers to the mole fraction of Pb and O that deviated from the stoichiometric value. The Pb concentrations were determined from calcined powders; no adjustments were made to Pb compositions due to weight loss during sintering. The microstructure (second phases, fracture mode, and grain size) varied appreciably with Pb stoichiometry, whereas the mechanical properties (hardness, fracture toughness, strength, and Weibull parameters) exhibited modest variation. Specimens deficient in Pb, 2.74% (x = -0.0274) and 2.15% (x = -0.0215), had a high area fraction of a zirconia (ZrO{sub 2}) second phase on the order of 0.02. As the Pb content in solid solution increased, the ZrO{sub 2} content decreased; no ZrO{sub 2} was observed for the specimen containing 2.97% excess Pb (x = 0.0297). Over the range of Pb stoichiometry, most specimens fractured predominantly transgranularly; however, the 2.97% Pb excess PNZT 95/5 fractured predominantly intergranularly. No systematic changes in hardness or Weibull modulus were observed as a function of Pb content. Fracture toughness decreased slightly from 1.8 MPa{center_dot}m{sup 1/2} for Pb-deficient specimens to 1.6 MPa{center_dot}m{sup 1/2} for specimens with excess Pb. Although there are microstructural differences with changes in Pb content, the mechanical properties did not vary substantially. However, the average failure stress and fracture toughness for PNZT 95/5 containing 2.97% excess Pb decreased slightly. It is expected that additional increases in Pb content would result in further mechanical property degradation. The decrease in mechanical properties for the 2.97% Pb excess ceramics could be the result of a weaker PbO-rich grain boundary phase present in the material. If better mechanical properties are desired, it is recommended that PNZT 95/5 ceramics be processed by a method whereby any excess Pb is depleted from the final sintered ceramic so that near-stoichiometric values of Pb concentration are reached. Otherwise, a PbO-rich grain boundary phase may exist in the ceramic and could potentially be detrimental to the mechanical properties of PNZT 95/5 ceramics.
This report represents the completion of a Laboratory-Directed Research and Development (LDRD) program to develop and fabricate geometric test structures for the measurement of transport properties in bulk GaN and AlGaN/GaN heterostructures. A large part of this study was spent examining fabrication issues related to the test structures used in these measurements, because GaN processing is still in its infancy. One such issue had to do with surface passivation. Test samples without surface passivation often failed at electric fields below 50 kV/cm due to surface breakdown. A silicon nitride passivation layer of approximately 200 nm was used to reduce the effects of surface states and premature surface breakdown. Another issue was finding quality contacts for the material, especially in the case of the AlGaN/GaN heterostructure samples. Poor contact performance in the heterostructures plagued the test structures with lower than expected velocities due to carrier injection from the contacts themselves. Using a titanium-rich ohmic contact reduced the contact resistance and stopped the carrier injection. The final test structures had an etched constriction with varying lengths and widths (8x2, 10x3, 12x3, 12x4, 15x5, and 16x4 {micro}m) and massive contacts. A pulsed voltage input and a four-point measurement in a 50 {Omega} environment were used to determine the current through and the voltage dropped across the constriction. From these measurements, the drift velocity as a function of the applied electric field was calculated, and thus the velocity-field characteristics in n-type bulk GaN and AlGaN/GaN test structures were determined. These measurements show an apparent saturation velocity near 2.5x10{sup 7} cm/s at 180 kV/cm for the bulk GaN samples and 3.1x10{sup 7} cm/s at a field of 140 kV/cm for the AlGaN heterostructure samples. These experimental drift velocities mark the highest velocities measured in these materials to date and confirm the predictions of previous theoretical models using ensemble Monte Carlo simulations.
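The conversion from a pulsed four-point measurement to one point on the velocity-field curve is a short calculation; a sketch with placeholder values (assumed carrier concentration and constriction cross-section, not the measured device parameters) is given below.

# Drift velocity and field from one pulsed four-point measurement (placeholder values)
q = 1.602e-19          # C, elementary charge

def velocity_field_point(current_a, voltage_v, n_cm3, width_um, thick_um, length_um):
    """v_d = I / (q * n * A); E = V / L for a rectangular constriction."""
    area_cm2 = (width_um * 1e-4) * (thick_um * 1e-4)        # um -> cm
    v_drift = current_a / (q * n_cm3 * area_cm2)             # cm/s
    e_field = voltage_v / (length_um * 1e-4)                 # V/cm
    return e_field, v_drift

# Hypothetical 12x3 um constriction, 0.2 um conducting layer, doping 1e17 cm^-3;
# values chosen so the result lands near the reported bulk GaN saturation velocity.
e, v = velocity_field_point(current_a=2.4e-3, voltage_v=216.0,
                            n_cm3=1e17, width_um=3.0, thick_um=0.2, length_um=12.0)
print(f"E = {e/1e3:.0f} kV/cm, v_d = {v:.2e} cm/s")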
The ability to precisely place nanomaterials at predetermined locations is necessary for realizing applications using these new materials. Using an organic template, we demonstrate directed growth of zinc oxide (ZnO) nanorods on silver films from aqueous solution. Spatial organization of ZnO nanorods in prescribed arbitrary patterns was achieved, with unprecedented control in selectivity, crystal orientation, and nucleation density. Surprisingly, we found that carboxylate end groups of {omega}-alkanethiol molecules strongly inhibit ZnO nucleation. The mechanism for this observed selectivity is discussed.
The first viscous compressible three-dimensional BiGlobal linear instability analysis of leading-edge boundary layer flow has been performed. Results have been obtained by independent application of asymptotic analysis and numerical solution of the appropriate partial-differential eigenvalue problem. It has been shown that the classification of three-dimensional linear instabilities of the related incompressible flow [13] into symmetric and antisymmetric mode expansions in the chordwise coordinate persists in the compressible, subsonic flow regime at sufficiently large Reynolds numbers.
Techniques for mitigating the adsorption of {sup 137}Cs and {sup 60}Co on metal surfaces (e.g., RAM packages) exposed to contaminated water (e.g., spent-fuel pools) have been developed and experimentally verified. The techniques are also effective in removing some of the {sup 60}Co and {sup 137}Cs that may have been adsorbed on the surfaces after removal from the contaminated water. The principle of the {sup 137}Cs mitigation technique is based upon ion-exchange processes. In contrast, {sup 60}Co contamination primarily resides in minute particles of crud that become lodged on cask surfaces. Crud is an insoluble Fe-Ni-Cr oxide that forms colloidal-sized particles as reactor cooling systems corrode. Because of the similarity between Ni{sup 2+} and Co{sup 2+}, crud is able to scavenge and retain traces of cobalt as it forms. A number of organic compounds have a high specificity for combining with nickel and cobalt. Ongoing research is investigating the effectiveness of the chemical complexing agent EDTA with regard to its ability to dissolve the host phase (crud), thereby liberating the entrained {sup 60}Co into solution where it can be rinsed away.
The National Spent Nuclear Fuel Program, located at the Idaho National Laboratory (INL), coordinates and integrates national efforts in management and disposal of US Department of Energy (DOE)-owned spent nuclear fuel. These management functions include development of standardized systems for long-term disposal in the proposed Yucca Mountain repository. Nuclear criticality control measures are needed in these systems to avoid restrictive fissile loading limits because of the enrichment and total quantity of fissile material in some types of the DOE spent nuclear fuel. This need is being addressed by development of corrosion-resistant, neutron-absorbing structural alloys for nuclear criticality control. This paper outlines results of a metallurgical development program that is investigating the alloying of gadolinium into a nickel-chromium-molybdenum alloy matrix. Gadolinium has been chosen as the neutron absorption alloying element due to its high thermal neutron absorption cross section and low solubility in the expected repository environment. The nickel-chromium-molybdenum alloy family was chosen for its known corrosion performance, mechanical properties, and weldability. The workflow of this program includes chemical composition definition, primary and secondary melting studies, ingot conversion processes, properties testing, and national consensus codes and standards work. The microstructural investigation of these alloys shows that the gadolinium addition is present in the alloy as a gadolinium-rich second phase. The mechanical strength values are similar to those expected for commercial Ni-Cr-Mo alloys. The alloys have been corrosion tested with acceptable results. The initial results of weldability tests have also been acceptable. Neutronic testing in a moderated critical array has generated favorable results. An American Society for Testing and Materials material specification has been issued for the alloy and a Code Case has been submitted to the American Society of Mechanical Engineers for code qualification.
This report describes work carried out under a Sandia National Laboratories Excellence in Engineering Fellowship in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Our research group (at UIUC) is developing an intelligent robot and attempting to teach it language. While there are many aspects of this research, for the purposes of this report the most important are the following ideas. Language is primarily based on semantics, not syntax. To truly learn meaning, the language engine must be part of an embodied intelligent system, one capable of using associative learning to form concepts from the perception of experiences in the world, and further capable of manipulating those concepts symbolically. In the work described here, we explore the use of hidden Markov models (HMMs) in this capacity. HMMs are capable of automatically learning and extracting the underlying structure of continuous-valued inputs and representing that structure in the states of the model. These states can then be treated as symbolic representations of the inputs. We describe a composite model consisting of a cascade of HMMs that can be embedded in a small mobile robot and used to learn correlations among sensory inputs to create symbolic concepts. These symbols can then be manipulated linguistically and used for decision making. This is the project final report for the University Collaboration LDRD project, 'A Robotic Framework for Semantic Concept Learning'.
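A minimal sketch of the underlying idea, treating the hidden states of a learned HMM as symbols for a continuous sensor stream, is given below using the hmmlearn package; the two-channel 'sensor' data and three regimes are synthetic, and the robot's actual cascade architecture is not reproduced.

import numpy as np
from hmmlearn import hmm

# Synthetic 2-D "sensor" stream: the robot alternates between three regimes
# (e.g., moving forward, turning, stopped), each with its own sensor statistics.
rng = np.random.default_rng(0)
segments = []
for regime_mean in ([1.0, 0.0], [0.2, 0.8], [0.0, 0.0]) * 10:
    segments.append(rng.normal(regime_mean, 0.1, size=(50, 2)))
X = np.vstack(segments)

# Fit a Gaussian HMM; its hidden states become the learned "symbols"
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X)

# The Viterbi state sequence is a symbolic re-description of the continuous input,
# which downstream components can then manipulate linguistically / for decision making.
symbols = model.predict(X)
print("learned symbol sequence (first segment):", symbols[:10])
print("learned state means:\n", np.round(model.means_, 2))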
Finding the central sets, such as center and median sets, of a network topology is a fundamental step in the design and analysis of complex distributed systems. This paper presents distributed synchronous algorithms for finding central sets in general tree structures. Our algorithms are distinguished from previous work in that they take only qualitative information, thus reducing the constants hidden in the asymptotic notation, and all vertices of the topology know the central sets upon their termination.
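For concreteness, the sketch below gives a small centralized version of the classical leaf-pruning characterization of a tree's center (the one or two vertices of minimum eccentricity); the distributed synchronous algorithms in the paper compute such central sets using only local message exchange, which is not reproduced here.

from collections import defaultdict

def tree_center(edges):
    """Return the center (1 or 2 vertices) of a tree by repeatedly pruning leaves."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(adj)
    leaves = [v for v in remaining if len(adj[v]) <= 1]
    while len(remaining) > 2:
        new_leaves = []
        for leaf in leaves:
            remaining.discard(leaf)
            for nbr in adj[leaf]:
                adj[nbr].discard(leaf)
                if nbr in remaining and len(adj[nbr]) == 1:
                    new_leaves.append(nbr)
        leaves = new_leaves
    return remaining

# Example tree: the center is vertex 3 (minimum eccentricity)
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 6), (6, 7)]
print(tree_center(edges))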
Extremely short collision mean free paths and near-singular elastic and inelastic differential cross sections (DCS) make analog Monte Carlo simulation an impractical tool for charged particle transport. The widely used alternative, the condensed history method, while efficient, also suffers from several limitations arising from the use of precomputed smooth distributions for sampling. There is much interest in developing computationally efficient algorithms that implement the correct transport mechanics. Here we present a nonanalog transport-based method that incorporates the correct transport mechanics and is computationally efficient for implementation in single event Monte Carlo codes. Our method systematically preserves important physics and is mathematically rigorous. It builds on higher order Fokker-Planck and Boltzmann Fokker-Planck representations of the scattering and energy-loss process, and we accordingly refer to it as a Generalized Boltzmann Fokker-Planck (GBFP) approach. We postulate the existence of nonanalog single collision scattering and energy-loss distributions (differential cross sections) and impose the constraint that the first few momentum transfer and energy loss moments be identical to corresponding analog values. This is effected through a decomposition or hybridizing scheme wherein the singular forward peaked, small energy-transfer collisions are isolated and de-singularized using different moment-preserving strategies, while the large angle, large energy-transfer collisions are described by the exact (analog) DCS or approximated to a high degree of accuracy. The inclusion of the latter component allows the higher angle and energy-loss moments to be accurately captured. This procedure yields a regularized transport model characterized by longer mean free paths and smoother scattering and energy transfer kernels than analog. In practice, acceptable accuracy is achieved with two rigorously preserved moments, but accuracy can be systematically increased to analog level by preserving successively higher moments with almost no change to the algorithm. Details of specific moment-preserving strategies will be described and results presented for dose in heterogeneous media due to a pencil beam and a line source of monoenergetic electrons. Error and runtimes of our nonanalog formulations will be contrasted against condensed history implementations.
The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We present a continuation of a theoretical study presented by Zhang and Sejnowski, where the influence of neuronal tuning properties on encoding accuracy is analyzed using information theory. When a finite stimulus space is considered, we show that the encoding accuracy improves with narrow tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.
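A small numerical sketch of the kind of analysis involved is given below: for a population of Poisson neurons with Gaussian tuning curves tiling a finite one-dimensional stimulus range, the per-neuron Fisher information is f'(x)^2/f(x), and summing it over the population shows the dependence of encoding accuracy on tuning width. The population parameters are illustrative.

import numpy as np

def total_fisher_info(x, centers, width, peak_rate=30.0, baseline=0.5):
    """Sum of Poisson Fisher information f'(x)^2 / f(x) over a population of
    Gaussian tuning curves with the given centers and common width."""
    f = baseline + peak_rate * np.exp(-(x - centers) ** 2 / (2 * width ** 2))
    fprime = -(x - centers) / width ** 2 * (f - baseline)
    return np.sum(fprime ** 2 / f)

# Finite 1-D stimulus range [0, 10], tiled by 40 neurons; probe at the middle
centers = np.linspace(0.0, 10.0, 40)
for width in (0.2, 0.5, 1.0, 2.0, 4.0):
    J = total_fisher_info(5.0, centers, width)
    print(f"tuning width {width:4.1f}:  total Fisher information {J:8.1f}")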
Human behavior is a function of an iterative interaction between the stimulus environment and past experience. It is not simply a matter of the current stimulus environment activating the appropriate experience or rule from memory (e.g., if it is dark and I hear a strange noise outside, then I turn on the outside lights and investigate). Rather, it is a dynamic process that takes into account not only things one would generally do in a given situation, but things that have recently become known (e.g., there have recently been coyotes seen in the area and one is known to be rabid), as well as other immediate environmental characteristics (e.g., it is snowing outside, I know my dog is outside, I know the police are already outside, etc.). All of these factors combine to inform me of the most appropriate behavior for the situation. If it were the case that humans had a rule for every possible contingency, the amount of storage that would be required to enable us to fluidly deal with most situations we encounter would rapidly become biologically untenable. We can all deal with contingencies like the one above with fairly little effort, but if it isn't based on rules, what is it based on? The assertion of the Cognitive Systems program at Sandia for the past 5 years is that at the heart of this ability to effectively navigate the world is an ability to discriminate between different contexts (i.e., Dynamic Context Discrimination, or DCD). While this assertion in and of itself might not seem earthshaking, it is compelling that this ability and its components show up in a wide variety of paradigms across different subdisciplines in psychology. We begin by outlining, at a high functional level, the basic ideas of DCD. We then provide evidence from several different literatures and paradigms that support our assertion that DCD is a core aspect of cognitive functioning. Finally, we discuss DCD and the computational model that we have developed as an instantiation of DCD in more detail. Before commencing with our overview of DCD, we should note that DCD is not necessarily a theory in the classic sense. Rather, it is a description of cognitive functioning that seeks to unify highly similar findings across a wide variety of literatures. Further, we believe that such convergence warrants a central place in efforts to computationally emulate human cognition. That is, DCD is a general principle of cognition. It is also important to note that while we are drawing parallels across many literatures, these are functional parallels and are not necessarily structural ones. That is, we are not saying that the same neural pathways are involved in these phenomena. We are only saying that the different neural pathways that are responsible for the appearance of these various phenomena follow the same functional rules - the mechanisms are the same even if the physical parts are distinct. Furthermore, DCD is not a causal mechanism - it is an emergent property of the way the brain is constructed. DCD is the result of neurophysiology (cf. John, 2002, 2003). Finally, it is important to note that we are not proposing a generic learning mechanism such that one biological algorithm can account for all situation interpretation. Rather, we are pointing out that there are strikingly similar empirical results across a wide variety of disciplines that can be understood, in part, by similar cognitive processes. 
It is entirely possible, even assumed in some cases (i.e., primary language acquisition), that these more generic cognitive processes are complemented and constrained by various limits that may or may not be biological in nature (cf. Bates & Elman, 1996; Elman, in press).
Dual-frequency reactors employ source rf power supplies to generate plasma and bias supplies to extract ions. There is debate over the choices of source and bias frequencies. Higher frequencies facilitate plasma generation, but their shorter wavelengths may cause spatial variations in plasma properties. Electrical nonlinearity of plasma sheaths causes harmonic generation and mixing of the source and bias frequencies. These processes, and the resulting spectrum of frequencies, are as much dependent on the electrical characteristics of matching networks and on chamber geometry as on plasma sheath properties. We investigated such electrical effects in a 300-mm Applied Materials plasma reactor. Data were taken for a 13.56-MHz bias frequency (chuck) and for source frequencies from 30 to 160 MHz (upper electrode). An rf-magnetic-field probe (B-dot loop) was used to measure the radial variation of fields inside the plasma. We will describe the results of this work.
A laser hazard analysis and safety assessment was performed for the LH-40 IR Laser Rangefinder based on the 2000 version of the American National Standards Institute's Standard Z136.1, for the Safe Use of Lasers, and Z136.6, for the Safe Use of Lasers Outdoors. The LH-40 IR Laser is central to the Long Range Reconnaissance and Observation System (LORROS). The LORROS is being evaluated by the Department 4149 Group to determine its capability as a long-range assessment tool. The manufacturer lists the laser rangefinder as 'eye safe' (a Class 1 laser as classified under the CDRH Compliance Guide for Laser Products and the 21 CFR 1040 Laser Product Performance Standard). It was necessary that SNL validate this claim prior to its use involving the general public. A formal laser hazard analysis is presented for the typical mode of operation.
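For readers unfamiliar with the ANSI Z136.1 procedure, the central quantity in such an assessment is the Nominal Ocular Hazard Distance (NOHD): the range beyond which the beam irradiance or radiant exposure falls below the Maximum Permissible Exposure (MPE). The following is a minimal sketch of the standard small-source, far-field formula in Python; the numerical parameters are hypothetical placeholders for illustration, not the LH-40 specification or the values used in the formal analysis.

import math

def nohd_cm(q_per_pulse_J, mpe_J_per_cm2, divergence_rad, aperture_diam_cm):
    # Small-source, far-field NOHD: (sqrt(4*Q / (pi*MPE)) - a) / phi,
    # clipped at zero when the emergent beam is already below the MPE.
    term = math.sqrt(4.0 * q_per_pulse_J / (math.pi * mpe_J_per_cm2))
    return max(0.0, (term - aperture_diam_cm) / divergence_rad)

# Hypothetical inputs: 10 mJ/pulse, MPE of 1 J/cm^2 (typical of ~1.5-um
# 'retina-safe' wavelengths), 0.5 mrad divergence, 5 cm exit aperture.
print(nohd_cm(10e-3, 1.0, 0.5e-3, 5.0), 'cm')

With these illustrative numbers the NOHD is zero, i.e., the beam is below the MPE at the exit aperture, which is the kind of result one would expect for a laser marketed as eye safe.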
This report surveys the needs associated with environmental monitoring and long-term environmental stewardship. Emerging sensor technologies are reviewed to identify compatible technologies for various environmental monitoring applications. The contaminants that are considered in this report are grouped into the following categories: (1) metals, (2) radioisotopes, (3) volatile organic compounds, and (4) biological contaminants. Regulatory drivers are evaluated for different applications (e.g., drinking water, storm water, pretreatment, and air emissions), and sensor requirements are derived from these regulatory metrics. Sensor capabilities are then summarized according to contaminant type, and the applicability of the different sensors to various environmental monitoring applications is discussed.
This report describes both a general methodology and some specific examples of passive radio receivers. A passive radio receiver uses no direct electrical power but makes sole use of the power available in the radio spectrum. These radio receivers are suitable as low data-rate receivers or passive alerting devices for standard, high power radio receivers. Some zero-power radio architectures exhibit significant improvements in range with the addition of very low power amplifiers or signal processing electronics. These ultra-low power radios are also discussed and compared to the purely zero-power approaches.
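To give a sense of the power budget available to a zero-power receiver, the sketch below applies the standard Friis free-space transmission equation to estimate the RF power intercepted by the receiving antenna. The transmitter power, frequency, antenna gains, and range are hypothetical examples and are not taken from the report.

import math

def friis_received_power_dBm(pt_dBm, gt_dBi, gr_dBi, freq_hz, dist_m):
    # Friis free-space link budget: Pr = Pt + Gt + Gr - 20*log10(4*pi*d/lambda)
    lam = 3.0e8 / freq_hz
    path_loss_dB = 20.0 * math.log10(4.0 * math.pi * dist_m / lam)
    return pt_dBm + gt_dBi + gr_dBi - path_loss_dB

# Hypothetical link: 1 W (30 dBm) transmitter at 915 MHz, unity-gain antennas, 100 m range.
pr = friis_received_power_dBm(30.0, 0.0, 0.0, 915e6, 100.0)
print('available power ~ %.1f dBm (%.3g mW)' % (pr, 10.0 ** (pr / 10.0)))

The resulting tens of nanowatts is the entire budget that must drive detection, which is why even very low power amplification can extend range so markedly.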
We modeled the effects of temperature, degree of polymerization, and surface coverage on the equilibrium structure of tethered poly(N-isopropylacrylamide) chains immersed in water. We employed a numerical self-consistent field theory in which the experimental phase diagram was used as input to the theory. At low temperatures, the composition profiles are approximately parabolic and extend into the solvent. In contrast, at temperatures above the LCST of the bulk solution, the polymer profiles are collapsed near the surface. The layer thickness and the effective monomer fraction within the layer undergo what appears to be a first-order change at a temperature that depends on surface coverage and chain length. Our results suggest that, as a result of the tethering constraint, the phase diagram becomes distorted relative to the bulk polymer solution and exhibits closed-loop behavior. As a consequence, we find that the relative magnitude of the layer thickness change between 20 and 40 C is a nonmonotonic function of surface coverage, with a maximum that shifts to lower surface coverage as the chain length increases, in qualitative agreement with experiment.
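As a point of reference for the 'approximately parabolic' low-temperature profiles, the classical strong-stretching (Milner-Witten-Cates) result for a good-solvent brush can be written as (generic notation, not the variables of the self-consistent field calculation):

\phi(z) \approx \phi(0)\left(1 - \frac{z^{2}}{h^{2}}\right), \qquad 0 \le z \le h, \qquad h \propto N\,\sigma^{1/3},

where phi(z) is the monomer volume fraction at height z above the surface, h is the brush height, N is the degree of polymerization, and sigma is the surface coverage. The collapsed profiles found above the LCST depart strongly from this form.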
Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption, using multivariate statistics and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report on volcanism produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) lists a set of the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region from the data provided by NUMO revealed that data are not available at the same locations for all of the important geologic variables; in other words, the geologic variable vectors were found to be spatially incomplete. However, complete geologic variable vectors are necessary to perform multivariate statistical analyses. As a first step toward constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to this grid system, and the recorded data on volcanic activity for the Sengan region were produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism; in the regionalized classification procedure, this step is known as variable selection. The following variables were determined to be most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks, and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at the 23,949 centers of the chosen 1 km cell grid system that represents the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 1 km cell centers.
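As an illustration of the interpolation step used to fill each 1 km cell, the following is a minimal ordinary-kriging sketch with an isotropic exponential variogram. The sample coordinates, values, and variogram parameters are hypothetical and are not the NUMO data; the actual analysis used directional variogram models fitted separately to each geologic variable.

import numpy as np

def exp_variogram(h, sill=1.0, rng=10.0, nugget=0.0):
    # Isotropic exponential variogram model gamma(h).
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, **vparams):
    # Estimate z at location xy0 from samples (xy, z) by ordinary kriging.
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vparams)
    A[n, n] = 0.0                                   # Lagrange-multiplier row/column
    d0 = np.linalg.norm(xy - xy0, axis=-1)
    b = np.append(exp_variogram(d0, **vparams), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)                         # kriged estimate at xy0

# Hypothetical example: five geothermal-gradient samples (km coordinates) and one cell center.
xy = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0], [3.0, 3.0]])
z = np.array([55.0, 70.0, 62.0, 90.0, 75.0])        # e.g., degrees C per km
print(ordinary_kriging(xy, z, np.array([2.5, 2.5]), sill=120.0, rng=8.0))

Repeating such an estimate for every important variable at every cell center is what produces the complete geologic variable vectors used in the subsequent multivariate analysis.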
Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for International Thermonuclear Experimental Reactor (ITER) in the 1990's. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups and are applying the models to both short pulses, i.e. edge localized modes (ELMs), and thermal performance in steady state for applications in C-MOD, DiMES testing and ITER. This paper briefly describes the 2D and 3D models and their applications with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.
In recent dynamic hohlraum experiments on the Z facility, Al and MgF{sub 2} tracer layers were embedded in cylindrical CH{sub 2} foam targets to provide K-shell lines in the keV spectral region for diagnosing the conditions of the interior hohlraum plasma. The position of the tracers was varied: sometimes they were placed 2 mm from the ends of the foam cylinder and sometimes at the ends of the cylinder. Also varied was the composition of the tracers in the sense that pure Al layers, pure MgF{sub 2} layers, or mixtures of the elements were employed on various shots. Time-resolved K-shell spectra of both Al and Mg show mostly absorption lines. These data can be analyzed with detailed configuration atomic models of carbon, aluminum, and magnesium in which spectra are calculated by solving the radiation transport equation for as many as 4100 frequencies. We report results from shot Z1022 to illustrate the basic radiation physics and the capabilities as well as limitations of this diagnostic method.
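The frequency-by-frequency calculation underlying this analysis can be summarized by the standard equation of radiative transfer along a line of sight (written here in generic notation):

\frac{dI_{\nu}}{ds} = \eta_{\nu} - \chi_{\nu}\, I_{\nu},

where I_nu is the specific intensity at frequency nu, eta_nu the emissivity, and chi_nu the opacity of the mixed C/Al/Mg plasma. Absorption features appear where the tracer opacity dominates over local emission against the brighter interior radiation field; solving this equation at each of the roughly 4100 frequency points yields the synthetic spectra compared with the measured K-shell data.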
We present a formulation for coupling atomistic and continuum simulation methods for application to both quasistatic and dynamic analyses. In our formulation, a coarse-scale continuum discretization is assumed to cover all parts of the computational domain with atomistic crystals introduced only in regions of interest. The geometry of the discretization and crystal are allowed to overlap arbitrarily. Our approach uses interpolation and projection operators to link the kinematics of each region, which are then used to formulate a system potential energy from which we derive coupled expressions for the forces acting in each region. A hyperelastic constitutive formulation is used to compute the stress response of the defect-free continuum with constitutive properties derived from the Cauchy-Born rule. A correction to the Cauchy-Born rule is introduced in the overlap region to minimize fictitious boundary effects. Features of our approach will be demonstrated with simulations in one, two and three dimensions.
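A one-dimensional sketch of the kind of interpolation operator that links the kinematics of the two regions is given below: nodal displacements of the coarse continuum mesh are evaluated at atom sites using linear finite-element shape functions. The mesh, atom positions, and displacement values are hypothetical, and the full formulation also requires projection operators and two- and three-dimensional discretizations.

import numpy as np

def interpolate_to_atoms(node_x, node_u, atom_x):
    # Evaluate a piecewise-linear FE displacement field at atom positions (1-D).
    u_atoms = np.empty_like(atom_x)
    for i, x in enumerate(atom_x):
        e = np.clip(np.searchsorted(node_x, x) - 1, 0, len(node_x) - 2)
        xi = (x - node_x[e]) / (node_x[e + 1] - node_x[e])        # local coordinate in [0, 1]
        u_atoms[i] = (1.0 - xi) * node_u[e] + xi * node_u[e + 1]  # linear shape functions
    return u_atoms

# Hypothetical example: a three-element mesh overlapping a chain of 13 atoms.
node_x = np.array([0.0, 4.0, 8.0, 12.0])
node_u = np.array([0.0, 0.02, 0.05, 0.09])   # prescribed nodal displacements
atom_x = np.linspace(0.0, 12.0, 13)
print(interpolate_to_atoms(node_x, node_u, atom_x))

A companion projection operator (for example, a least-squares fit built from the same shape functions) maps atomistic displacements back to nodal values; together, such operators are what allow a single coupled potential energy and set of force expressions to be assembled.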
With increased terrorist threats in the past few years, it is no longer feasible to feel confident that a facility is well protected with a static security system. Potential adversaries often research their targets, examining procedural and system changes, in order to attack at a vulnerable time. Such system changes may include scheduled sensor maintenance, scheduled or unscheduled changes in the guard force, facility alert level changes, sensor failures or degradation, etc. All of these changes impact the system effectiveness and can make a facility more vulnerable. Currently, a standard analysis of system effectiveness is performed approximately every six months using a vulnerability assessment tool called ASSESS (Analytical Systems and Software for Evaluating Safeguards and Systems). New standards for determining a facility's system effectiveness will be defined by tools that are currently under development, such as ATLAS (Adversary Time-line Analysis System) and NextGen (Next Generation Security Simulation). Although these tools are useful to model analyses at different spatial resolutions and can support some sensor dynamics using statistical models, they are limited in that they require a static system state as input. They cannot account for the dynamics of the system through day-to-day operations. The emphasis of this project was to determine the feasibility of dynamically monitoring the facility security system and performing an analysis as changes occur. Hence, the system effectiveness is known at all times, greatly assisting time-critical decisions in response to a threat or a potential threat.
We have successfully demonstrated selective trapping, concentration, and release of various biological organisms and inert beads by insulator-based dielectrophoresis within a polymeric microfluidic device. The microfluidic channels and internal features, in this case arrays of insulating posts, were initially created through standard wet-etch techniques in glass. This glass chip was then transformed into a nickel stamp through the process of electroplating. The resultant nickel stamp was then used as the replication tool to produce the polymeric devices through injection molding. The polymeric devices were made of Zeonor{reg_sign} 1060R, a polyolefin copolymer resin selected for its superior chemical resistance and optical properties. These devices were then optically aligned with another polymeric substrate that had been machined to form fluidic vias. These two polymeric substrates were then bonded together through thermal diffusion bonding. The sealed devices were used to selectively separate and concentrate biological pathogen simulants, including spores that were selectively concentrated and released by simply applying DC voltages across the plastic replicates via platinum electrodes in the inlet and outlet reservoirs. The dielectrophoretic response of the organisms is observed to be a function of the applied electric field and of post size, geometry, and spacing. Cells were selectively trapped against a background of labeled polystyrene beads and spores to demonstrate that samples of interest can be separated from a diverse background. We have also implemented and demonstrated a methodology to determine the concentration factors obtained in these devices.
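For context, the time-averaged dielectrophoretic force on a spherical particle is commonly written as (generic notation, not specific to these devices):

\langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_{m} r^{3}\,\mathrm{Re}[K]\,\nabla |\mathbf{E}|^{2},
\qquad
K = \frac{\varepsilon_{p}^{*} - \varepsilon_{m}^{*}}{\varepsilon_{p}^{*} + 2\varepsilon_{m}^{*}}, \qquad \varepsilon^{*} = \varepsilon - i\sigma/\omega,

where r is the particle radius, epsilon_m the medium permittivity, and K the Clausius-Mossotti factor; in the DC limit relevant to insulator-based dielectrophoresis, K reduces to (sigma_p - sigma_m)/(sigma_p + 2 sigma_m). The dependence on the gradient of the squared field is consistent with the observed sensitivity to the applied voltage and to post size, geometry, and spacing, since the insulating posts are what concentrate the field gradients.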
The effects of ionizing and neutron radiation on the characteristics and performance of laser diodes are reviewed, and the formation mechanisms for nonradiative recombination centers, the primary type of radiation damage in laser diodes, are discussed. Additional topics include the detrimental effects of aluminum in the active (lasing) volume, the transient effects of high-dose-rate pulses of ionizing radiation, and a summary of ways to improve the radiation hardness of laser diodes. Radiation effects on laser diodes emitting in the wavelength region around 808 nm are emphasized.
We find that small temperature changes cause steps on the NiAl(110) surface to move. We show that this step motion occurs because mass is transferred between the bulk and the surface as the concentration of bulk thermal defects (i.e., vacancies) changes with temperature. Since the change in an island's area with a temperature change is found to scale strictly with the island's step length, the thermally generated defects are created (annihilated) very near the surface steps. To quantify the bulk/surface exchange, we oscillate the sample temperature and measure the amplitude and phase lag of the system response, i.e., the change in an island's area normalized to its perimeter. Using a one-dimensional model of defect diffusion through the bulk in a direction perpendicular to the surface, we determine the migration and formation energies of the bulk thermal defects. During surface smoothing, we show that there is no flow of material between islands on the same terrace and that all islands in a stack shrink at the same rate. We conclude that smoothing occurs by mass transport through the bulk of the crystal rather than via surface diffusion. Based on the measured relative sizes of the activation energies for island decay, defect migration, and defect formation, we show that attachment/detachment at the steps is the rate-limiting step in smoothing.
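As a textbook point of reference (not the specific model fitted in this work), the response of a semi-infinite diffusive medium to a sinusoidal perturbation applied at its surface decays and lags with depth z as

c(z,t) - \bar{c} \propto e^{-z/\delta} \cos(\omega t - z/\delta), \qquad \delta = \sqrt{2D/\omega},

so in a model of this general type the phase lag of the island-area response reflects the defect diffusivity D (and hence, through its temperature dependence, the migration energy), while the overall amplitude reflects the equilibrium defect concentration (and hence the formation energy).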
The goal of z-pinch inertial fusion energy (IFE) is to extend the single-shot z-pinch inertial confinement fusion (ICF) results on Z to a repetitive-shot z-pinch power plant concept for the economical production of electricity. Z produces up to 1.8 MJ of x-rays at powers as high as 230 TW. Recent target experiments on Z have demonstrated capsule implosion convergence ratios of 14-21 with a double-pinch driven target, and DD neutron yields up to 8 x 10{sup 10} with a dynamic hohlraum target. For z-pinch IFE, a power plant concept is discussed that uses high-yield IFE targets (3 GJ) with a low rep-rate per chamber (0.1 Hz). The concept includes a repetitive driver at 0.1 Hz, a Recyclable Transmission Line (RTL) to connect the driver to the target, high-yield targets, and a thick-liquid-wall chamber. Recent funding of $4M for FY04 from a U.S. Congressional initiative is supporting research on RTLs, repetitive pulsed power drivers, shock mitigation, planned full-RTL-cycle experiments, high-yield IFE targets, and z-pinch power plant technologies. Recent results of research in all of these areas are discussed, and a Road Map for Z-Pinch IFE is presented.
This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight into how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm rather than the mainstream political science approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and the working paradigms in the literature, along with recommendations for future attempts at modeling conflict.
The Z-Pinch Power Plant uses the results from Sandia National Laboratories Z accelerator in a power plant application to generate energy pulses using inertial confinement fusion. A collaborative project has been initiated by Sandia to investigate the scientific principles of a power generation system using this technology. Research is under way to develop an integrated concept that describes the operational issues of a 1000 MW electrical power plant. Issues under consideration include: 1-20 gigajoule fusion pulse containment, repetitive mechanical connection of heavy hardware, generation of terawatt pulses every 10 seconds, recycling of ten thousand tons of steel, and manufacturing of millions of hohlraums and capsules per year. Additionally, waste generation and disposal issues are being examined. This paper describes the current concept for the plant and also the objectives for future research.
As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high-bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate-communication-bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
As part of DARPA's Software for Distributed Robotics Program within the Information Processing Technology Office (IPTO), Sandia National Laboratories was tasked with identifying military airborne and maritime missions that require cooperative behaviors, as well as identifying generic collective behaviors and performance metrics for these missions. This report documents that study. A prioritized list of general military missions applicable to land, air, and sea has been identified. From the top eight missions, nine generic reusable cooperative behaviors have been defined. A common mathematical framework for cooperative controls has been developed and applied to several of the behaviors. The framework is based on optimization principles and has provably convergent properties. A three-step optimization process is used to develop the decentralized control law that minimizes the behavior's performance index. A connective stability analysis is then performed to determine constraints on the communication sample period and the local control gains. Finally, the communication sample period for four different network protocols is evaluated based on the network graph, which changes throughout the task. Using this mathematical framework, two metrics for evaluating these behaviors are defined. The first metric is the residual error in the global performance index that is used to create the behavior. The second metric is the communication sample period between robots, which affects the overall time required for the behavior to reach its goal state.
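A minimal sketch of the flavor of decentralized law described here, assuming a simple rendezvous/consensus behavior rather than any of the nine behaviors defined in the report, is shown below. Each agent performs gradient descent on a quadratic performance index using only neighbor information; for a connected communication graph and a sufficiently small gain, convergence follows from standard consensus results, which is the same style of argument as the connective stability analysis mentioned above. The positions, graph, and gain are hypothetical.

import numpy as np

def consensus_step(x, neighbors, gain):
    # One synchronous update of a decentralized consensus law: each agent moves
    # toward its neighbors (gradient descent on J = 0.5 * sum_edges ||x_i - x_j||^2).
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] = x[i] - gain * sum(x[i] - x[j] for j in nbrs)
    return x_new

# Hypothetical example: four vehicles on a line graph rendezvous at a common point.
x = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [7.0, 5.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
for _ in range(200):
    x = consensus_step(x, neighbors, gain=0.2)   # gain < 1/(max degree) for stability
print(x.round(3))                                # all rows converge toward the centroid

In this simple setting, the residual value of the performance index after a fixed number of steps and the communication sample period implied by the update rate play the same roles as the two evaluation metrics defined above.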
When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.
This report is a collection of documents written by the group members of the Engineering Sciences Research Foundation (ESRF) Laboratory Directed Research and Development (LDRD) project titled 'A Robust, Coupled Approach to Atomistic-Continuum Simulation'. Presented in this document are: the development of a formulation for quasistatic, coupled, atomistic-continuum simulation, including the cross terms in the equilibrium equations that arise from kinematic coupling and the corrections to the system potential energy needed to account for continuum elements that overlap regions containing atomic bonds; evaluations of thermo-mechanical continuum quantities calculated within atomistic simulations, including measures of stress, temperature, and heat flux; calculations used to determine the spatial and time averaging necessary for these atomistically defined expressions to have the same physical meaning as their continuum counterparts; and a formulation to quantify a continuum 'temperature field', the first step toward a coupled atomistic-continuum approach capable of finite-temperature, dynamic analyses.
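The atomistically defined stress referred to above is typically of virial form; a generic expression (not necessarily the exact estimator used in the report) is

\boldsymbol{\sigma} \approx \frac{1}{V}\left\langle -\sum_{i} m_{i}\,\mathbf{v}_{i}\otimes\mathbf{v}_{i} + \frac{1}{2}\sum_{i \neq j}\mathbf{r}_{ij}\otimes\mathbf{f}_{ij} \right\rangle,

where the sums run over atoms i (and pairs i, j) inside an averaging volume V, m_i and v_i are atomic masses and velocities, r_ij and f_ij are interatomic separation and force vectors, and the angle brackets denote the spatial and time averaging that must be performed before such an expression can be identified with the continuum Cauchy stress.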
This document describes the modeling of the physics (and eventually features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The description is largely drawn from various sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], [Halbleib 92]), although those sources often describe the ETRAN code, from which the ITS physics engine is derived but with which it is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added as time goes on; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more complete coverage of photon transport (though electron transport is not entirely ignored). In particular, this document does not cover the multigroup code, MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.
An experiment at Sandia National Laboratories confirmed that Flinabe, a ternary mixture of LiF, BeF{sub 2}, and NaF, had a sufficiently low melting temperature ({approx}305 C) to be useful for the first wall and blanket applications using flowing molten salts that were investigated in the Advanced Power Extraction (APEX) Program.[1] In the experiment, the salt pool was contained in a stainless steel crucible under vacuum. One thermocouple was placed in the salt and two others were embedded in the crucible. The results and observations from the experiment are reported in the companion paper.[2] The paper presented here covers a 3-D finite element thermal analysis of the salt pool and crucible. The analysis was done to evaluate the thermal gradients in the salt pool and crucible and to compare the temperatures of the three thermocouples. One salt mixture appeared to melt and to solidify as a eutectic, with a visible plateau in the cooling curve (i.e., time versus temperature for the thermocouple in the salt pool). This behavior was reproduced with the thermal model. Cases were run with several values of the thermal conductivity and latent heat of fusion to see the parametric effects of these changes on the respective cooling curves. The crucible was heated by an electrical heater in an inverted well at the base of the crucible. It lost heat primarily by radiation from the outer surfaces of the crucible and the top surface of the salt. The primary independent factors in the model were the emissivity of the crucible (and of the salt) and the fraction of the heater power coupled into the crucible. The model was 'calibrated' using thermocouple data and heating power from runs in which the crucible contained no salt.
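The appearance of a plateau in a computed cooling curve can be illustrated with a lumped-parameter sketch: radiative loss cools the melt until it reaches the melting temperature, where the temperature is held while the remaining latent heat is released. The properties below are hypothetical round numbers loosely representative of a fluoride salt, not the calibrated values of the 3-D finite element model.

import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def cooling_curve(T0_K, T_melt_K, m_kg, cp, L_fusion, eps, area_m2, T_amb_K, dt, t_end):
    # Lumped enthalpy-method cooling: sensible cooling, a plateau at T_melt while
    # latent heat is released, then sensible cooling of the solid.
    T, latent_left, out = T0_K, m_kg * L_fusion, []
    for t in np.arange(0.0, t_end, dt):
        q = eps * area_m2 * SIGMA * (T**4 - T_amb_K**4)   # radiative loss, W
        if abs(T - T_melt_K) < 1e-9 and latent_left > 0.0:
            latent_left -= q * dt                         # freezing at constant temperature
        else:
            T -= q * dt / (m_kg * cp)
            if T < T_melt_K and latent_left > 0.0:
                T = T_melt_K                              # land on the plateau (sketch-level accuracy)
        out.append((t, T))
    return np.array(out)

# Hypothetical inputs: 0.5 kg of salt, cp = 2000 J/kg-K, latent heat 4e5 J/kg,
# emissivity 0.8, 0.02 m^2 radiating area, 305 C melting point.
curve = cooling_curve(700.0 + 273.0, 305.0 + 273.0, 0.5, 2000.0, 4.0e5, 0.8, 0.02, 300.0, 1.0, 6000.0)

Varying the latent heat (and, in a distributed model, the thermal conductivity) changes the length and sharpness of the plateau, which is the parametric behavior examined with the finite element model.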
This paper analyzes the relationship between current renewable energy technology costs and cumulative production, research, development and demonstration expenditures, and other institutional influences. Combining the theoretical framework of 'learning by doing' and developments in 'learning by searching' with the fields of organizational learning and institutional economics offers a complete methodological framework to examine the underlying capital cost trajectory when developing electricity cost estimates used in energy policy planning models. Sensitivities of the learning rates for global wind and solar photovoltaic technologies to changes in the model parameters are tested. The implications of the results indicate that institutional policy instruments play an important role for these technologies to achieve cost reductions and further market adoption.
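A common two-factor experience-curve specification along the lines used in such analyses is (generic notation):

C = C_{0}\left(\frac{X}{X_{0}}\right)^{-\alpha}\left(\frac{KS}{KS_{0}}\right)^{-\beta},
\qquad LR_{\mathrm{doing}} = 1 - 2^{-\alpha}, \qquad LR_{\mathrm{searching}} = 1 - 2^{-\beta},

where C is the unit capital cost, X the cumulative installed capacity (learning by doing), KS the knowledge stock accumulated from RD&D expenditures (learning by searching), and the learning rates LR give the fractional cost reduction per doubling of the respective driver. The sensitivity tests described above examine how robust the fitted learning rates are to changes in such model parameters.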
Waste characterization is probably the most costly part of radioactive waste management. An important part of this characterization is the measurement of headspace gas in waste containers in order to demonstrate compliance with Resource Conservation and Recovery Act (RCRA) or transportation requirements. The traditional chemical analysis methods, which include all steps of gas sampling, sample shipment, and laboratory analysis, are expensive and time-consuming, and they increase workers' exposure to hazardous environments. Therefore, an alternative technique that can provide quick, in-situ, and real-time detection of headspace gas compositions is highly desirable. This report summarizes the results obtained from a Laboratory Directed Research & Development (LDRD) project entitled 'Potential Application of Microsensor Technology in Radioactive Waste Management with Emphasis on Headspace Gas Detection'. The objective of this project is to bridge the technical gap between the current status of microsensor development and the intended applications of these sensors in nuclear waste management. The major results are summarized below: (1) A literature review was conducted on the regulatory requirements for headspace gas sampling/analysis in waste characterization and monitoring. The most relevant gaseous species and the related physiochemical environments were identified. It was found that preconcentrators might be needed in order for chemiresistor sensors to meet the desired detection limits. (2) A long-term stability test was conducted for a polymer-based chemiresistor sensor array. Significant drifts were observed over a duration of one month; such drifts should be taken into account for long-term in-situ monitoring. (3) Several techniques were explored to improve the performance of sensor polymers. It has been demonstrated that freeze deposition of carbon black (CB)-polymer composite can effectively eliminate the so-called 'coffee ring' effect and lead to a desirably uniform distribution of CB particles in sensing polymer films. The optimal CB/polymer ratio has been determined, and UV irradiation has been shown to improve sensor sensitivity. (4) From a large set of commercially available polymers, five polymers were selected to form a sensor array that provides optimal responses to six target volatile organic compounds (VOCs). A series of tests of the sensor array's response to various VOC concentrations has been performed. Linear sensor responses have been observed over the tested concentration ranges, although the responses over the whole concentration range are generally nonlinear. (5) Inverse models have been developed for identifying individual VOCs based on sensor array responses. A linear solvation energy model is particularly promising for identifying an unknown VOC in a single-component system. It has been demonstrated that a sensor array such as the one we developed is able to discriminate waste containers by their total VOC concentrations and therefore can be used as a screening tool for reducing the existing headspace gas sampling rate. (6) Various VOC preconcentrators have been fabricated using Carboxen 1000 as an adsorbent. Extensive tests have been conducted to obtain optimal configurations and parameter ranges for preconcentrator performance. It has been shown that use of preconcentrators can reduce the detection limits of chemiresistors by two orders of magnitude. The life span of preconcentrators under various physiochemical conditions has also been evaluated. (7) The performance of Pd film-based H{sub 2} sensors in the presence of VOCs has been evaluated. Interference of the sensor readings by VOCs has been observed, which can be attributed to interference of the VOC with the H{sub 2}-O{sub 2} reaction on the Pd alloy surface. This interference can be eliminated by coating a layer of silicon dioxide on the sensing film surface. Our work has demonstrated a wide range of applications of gas microsensors in radioactive waste management. Such applications can potentially lead to significant cost savings and risk reduction for waste characterization.
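A minimal sketch of a single-component inverse model of the type described is shown below: given a calibrated linear sensitivity matrix for a five-polymer array, each candidate VOC is fit to the measured response vector by least squares and the best-fitting candidate is reported. The sensitivity values and VOC names are hypothetical placeholders rather than calibration data from this project; the linear solvation energy model mentioned above is a more physically based alternative to a purely empirical calibration.

import numpy as np

def identify_voc(response, sensitivity, names):
    # Single-component inverse model: fit the array response to each candidate VOC's
    # linear sensitivity vector and return the best least-squares fit.
    best = None
    for k, name in enumerate(names):
        s = sensitivity[:, k]
        conc = float(s @ response) / float(s @ s)           # least-squares concentration
        resid = float(np.linalg.norm(response - conc * s))  # misfit for this candidate
        if best is None or resid < best[2]:
            best = (name, conc, resid)
    return best

# Hypothetical calibration: rows = five polymers, columns = three candidate VOCs
# (relative resistance change per ppm; illustrative numbers only).
S = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.4],
              [0.5, 0.3, 0.9],
              [0.7, 0.2, 0.1],
              [0.1, 0.6, 0.5]])
names = ['acetone', 'toluene', 'TCE']
measured = 120.0 * S[:, 2] + np.array([0.5, -0.3, 0.2, -0.1, 0.4])  # noisy 'TCE'-like sample
print(identify_voc(measured, S, names))

A total-VOC estimate derived from such responses is the sort of screening quantity that could be used to rank containers before deciding which ones require conventional sampling.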
In the mid-1990s, breakthroughs were achieved at Sandia with z-pinches for high energy density physics on the Saturn machine. These initial tests led to the modification of the PBFA II machine to provide high currents rather than the high voltage it was initially designed for. The success of z-pinches for high energy density physics experiments ensured a new mission for the converted accelerator, known as Z since 1997. Z now provides a unique capability to a number of basic science communities and has expanded its mission to include radiation effects research, inertial confinement fusion, and material properties research. To achieve continued success, the physics community has requested that higher peak current, better precision, and pulse-shaping versatility be incorporated into the refurbishment of the Z machine, known as ZR. In addition to the performance specification for ZR of a peak current of 26 MA with an implosion time of 100 ns, the machine also has a reliability specification of 400 shots per year. While changes to the basic architecture of the Z machine are minor, the vast majority of its components have been redesigned. Moreover, the increase in peak current from the present 18 MA to ZR's 26 MA at nominal operating parameters requires significantly higher voltages. These higher voltages, along with the reliability requirement, mandate that a system assessment be performed to ensure the requirements have been met. This paper describes the System Assessment Test Program (SATPro) for the ZR project and reports on the results.
Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set in the upper Dakota aquifer in Kansas (USA) and previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection point and the non-hierarchical classification of the estimated vectors.