The outline of this report is: (1) structures of hexagonal Er metal, ErH{sub 2} fluorite, and molybdenum; (2) texture issues and processing effects; (3) the idea of pole figure integration; and (4) promising neutron diffraction work. The summary of this report is: (1) ErD{sub 2} and ErT{sub 2} film microstructures are strongly affected by processing conditions; (2) both x-ray and neutron diffraction are being pursued to help diagnose structure/property issues in ErT{sub 2} films and their correlation with He retention/release; (3) texture issues present great challenges for the determination of site occupancy; and (4) work on pole figure integration shows promise for addressing texture issues in ErD{sub 2} and ErT{sub 2} films.
Hydrogen energy may provide the means to an environmentally friendly future. One of the problems related to its application to transportation is 'on-board' storage. Hydrogen storage in solids has long been recognized as one of the most practical approaches for this purpose. The hydrogen capacity of interstitial hydrides of most metals and alloys is limited to below 2.5% by weight, which is unsatisfactory for on-board transportation applications. Magnesium hydride is an exception, with a hydrogen capacity of ~8.2 wt.%; however, its operating temperature, above 350 C, is too high for practical use. Sodium alanate (NaAlH{sub 4}) absorbs hydrogen up to 5.6 wt.% theoretically; however, its reaction kinetics and partial reversibility do not completely meet the new targets for transportation applications. Recently Chen et al. [1] reported that (Li{sub 3}N + 2H{sub 2} {leftrightarrow} LiNH{sub 2} + 2LiH) provides a storage material with a possible high capacity, up to 11.5 wt.%, although this material is still too stable to meet the operating pressure/temperature requirements. Here we report a new approach to destabilizing the lithium imide system by partial substitution of lithium by magnesium in the (LiNH{sub 2} + LiH {leftrightarrow} Li{sub 2}NH + H{sub 2}) system with minimal capacity loss. This Mg-substituted material can reversibly absorb 5.2 wt.% hydrogen at a pressure of 30 bar at 200 C. It is a very promising material for on-board hydrogen storage applications. It is interesting to observe that the starting material (2LiNH{sub 2} + MgH{sub 2}) converts to (Mg(NH{sub 2}){sub 2} + 2LiH) after a desorption/re-absorption cycle.
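As a point of reference, the 11.5 wt.% capacity quoted for the Li{sub 3}N system is consistent with referencing the absorbed hydrogen mass to the mass of Li{sub 3}N alone; a minimal worked estimate using standard molar masses (not data from this report) is:

$$
\mathrm{Li_3N + 2\,H_2 \;\rightleftharpoons\; LiNH_2 + 2\,LiH},
\qquad
w_{\mathrm{H}} \approx \frac{2\,M_{\mathrm{H_2}}}{M_{\mathrm{Li_3N}}}
= \frac{2(2.016)}{34.83} \approx 11.6\ \mathrm{wt.\%}.
$$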
Biosecurity must be implemented without impeding biomedical and bioscience research. Existing security literature and regulatory requirements do not present a comprehensive approach or clear model for biosecurity, nor do they wholly recognize the operational issues within laboratory environments. To help address these issues, the concept of Biosecurity Levels should be developed. Biosecurity Levels would have increasing levels of security protections depending on the attractiveness of the pathogens to adversaries. Pathogens and toxins would be placed in a Biosecurity Level based on their security risk. Specifically, the security risk would be a function of an agent's weaponization potential and consequences of use. To demonstrate the concept, examples of security risk assessments for several human, animal, and plant pathogens will be presented. Higher security than that currently mandated by federal regulations would be applied for those very few agents that represent true weapons threats and lower levels for the remainder.
This paper describes the analyses and the experimental mechanics program to support the National Aeronautics and Space Administration (NASA) investigation of the Shuttle Columbia accident. A synergism of the analysis and experimental efforts is required to ensure that the final analysis is valid: the experimental program provides both the material behavior and a basis for validation, while the analysis is required to ensure the experimental effort probes behavior in the correct loading regime. Preliminary scoping calculations of foam impact onto the Shuttle Columbia's wing leading edge determined whether enough energy was available to damage the leading edge panel. These analyses also determined the strain-rate regimes for various materials to provide the material test conditions. Experimental testing of the reinforced carbon-carbon wing panels then proceeded to provide the material behavior in a variety of configurations and strain-rates for flown or conditioned samples of the material. After determination of the important failure mechanisms of the material, validation experiments were designed to provide a basis of comparison for the analytical effort. Using this basis, the final analyses were used to define the test configuration, instrumentation locations, and calibration in support of full-scale testing of the panels in June 2003. These tests subsequently confirmed the cause of the accident.
Photocatalytic porphyrins are used to reduce metal complexes from aqueous solution and, further, to control the deposition of metals onto porphyrin nanotubes and surfactant assembly templates to produce metal composite nanostructures and nanodevices. For example, surfactant templates lead to spherical platinum dendrites and foam-like nanomaterials composed of dendritic platinum nanosheets. Porphyrin nanotubes are reported for the first time, and photocatalytic porphyrin nanotubes are shown to reduce metal complexes and deposit the metal selectively onto the inner or outer surface of the tubes, leading to nanotube-metal composite structures that are capable of hydrogen evolution and other nanodevices.
This report describes the purpose and results of the two-year, Sandia-sponsored Laboratory Directed Research and Development (LDRD) project entitled 'Understanding Communication in Counterterrorism Crisis Management.' The purpose of this project was to facilitate the capture of key communications among team members in simulated training exercises and to learn how to improve communication in that domain. The first section of this document details the scenario development aspects of the simulation. The second section covers the new communication technologies that were developed and incorporated into the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of decision support tools. The third section provides an overview of the features of the simulation and highlights its communication aspects. The fourth section describes the Team Communication Study processes and methodologies. The fifth section discusses future directions and areas in which to apply the new technologies and study results obtained as a result of this LDRD.
The summary of this report is: (1) optimizing synthesis parameters leads to enhanced catalyst surface areas, although the relationship between activity and surface area is nonlinear; (2) catalyst development was performed under a staged protocol; (3) catalytic materials with the desired properties have been identified; they meet the stage requirements, their performance can be tuned by altering component concentrations, and further optimization is still necessary at low temperatures; (4) better activity and tolerance to SO2 were achieved, and V2O5-based materials were ruled out because of durability issues; and (5) future work will focus on improving overall low-temperature activity.
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, the development of efficient reduced-scaling algorithms for massively parallel computers has not been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays but replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains using a Linux cluster with an InfiniBand interconnect.
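To illustrate the kind of data structure involved (a schematic sketch, not the authors' implementation), a sparse multidimensional array can be approximated locally as a dictionary of dense blocks, with contractions looping only over the stored blocks:

```python
import numpy as np

# Minimal block-sparse array sketch: only non-negligible blocks are stored,
# keyed by their block indices.
class BlockSparse:
    def __init__(self, block_shape):
        self.block_shape = block_shape
        self.blocks = {}                      # (i, j) -> dense numpy block

    def set_block(self, key, data):
        if np.max(np.abs(data)) > 1e-10:      # screen out negligible blocks
            self.blocks[key] = data

def contract(A, B):
    """C[i,k] += sum_j A[i,j] @ B[j,k], looping only over stored blocks."""
    C = BlockSparse(A.block_shape)
    for (i, j), a in A.blocks.items():
        for (j2, k), b in B.blocks.items():
            if j == j2:
                C.blocks[(i, k)] = C.blocks.get((i, k), 0.0) + a @ b
    return C
```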
The generalized momentum balance (GMB) methods, explored chiefly by Shabana and his co-workers, treat slap or collision in linear structures as sequences of impulses, thereby maintaining the linearity of the structures throughout. Further, such linear analysis is facilitated by modal representation of the structures. These methods are discussed here and extended. Simulations on a simple two-rod problem demonstrate how this modal impulse approximation affects the system both directly after each impulse and over the entire collision. Furthermore, these simulations illustrate how the GMB results differ from the exact solution and how mitigation of these artifacts is achieved. Another modal method discussed in this paper is the idea of imposing piecewise-constant forces over short, yet finite, time intervals during contact. The derivation of this method is substantially different from that of the GMB method, yet the numerical results show similar behavior, adding credence to both models. Finally, a novel method combining these two approaches is introduced. The new method produces physically reasonable results that are numerically very close to the exact solution for the collision of two rods. This approach avoids most of the nonphysical numerical artifacts of interpenetration or chatter present in the first two methods.
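The momentum balance at each impulse can be sketched in modal coordinates as follows (an illustrative sketch assuming mass-normalized modes, not the paper's implementation); truncating the modal basis in this update is precisely the approximation whose artifacts are examined.

```python
import numpy as np

def apply_modal_impulse(qdot, phi_c1, phi_c2, e=0.0):
    """
    qdot            : stacked modal velocities of rod 1 and rod 2
    phi_c1, phi_c2  : mode shapes of each rod evaluated at the contact point
    e               : coefficient of restitution applied to the impulse
    """
    phi = np.concatenate([phi_c1, -phi_c2])   # maps modal velocity to relative contact velocity
    v_rel = phi @ qdot                        # relative approach velocity at contact
    # momentum balance: choose impulse J so the post-impact relative velocity is -e * v_rel
    J = -(1.0 + e) * v_rel / (phi @ phi)
    return qdot + J * phi                     # jump in modal velocities
```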
The purpose of modal testing is usually to provide an estimate of a linear structural dynamics model. Typical uses of the experimental modal model are (1) to compare it with a finite element model for model validation or updating; (2) to verify a plant model for a control system; or (3) to develop an experimentally based model to understand structural dynamic responses. Given these common end uses, the main goal of this article is to focus on excitation methods for obtaining an adequate estimate of a linear structural dynamics model. The purpose of the modal test should also establish the requirements that drive the rigor of the testing and analysis and the amount of instrumentation. Sometimes, only the natural frequencies are required. The next level is to obtain relative mode shapes along with the frequencies to correlate with a finite element model. More rigor is required to get accurate critical damping ratios if energy dissipation is important. At the highest level, a full experimental model may require the natural frequencies, damping, modal mass, scaled shapes, and, perhaps, other terms to account for out-of-band modes. There is usually a requirement on the uncertainty of the modal parameters, whether it is explicitly stated or merely implied. These requirements define what 'adequate' means in the phrase 'adequate linear estimate' of the structural dynamics model. The most popular tools for exciting structures in modal tests are shakers and impact hammers. The emphasis here will be on shakers. Many papers over the years have discussed the advantages and issues associated with shaker testing; one study focused on obtaining good data with shakers is that of Peterson. Although impact hammers may seem very convenient, in many cases shakers offer advantages in obtaining a linear model. The best choice of excitation device depends on the test article and logistical considerations, which are addressed in this article to help the test team choose between an impact hammer and the various shaker options. After the choice is made, there are still challenges to obtaining data for an adequate linear estimate of the desired structural dynamics model. The structural dynamics model may be a modal model with the desired quantities of natural frequencies, viscous damping ratios, and mode shapes with modal masses, or it may be the frequency response functions (FRFs), or their transforms, which may be constructed from the modal model. In any case, the fidelity of the linear model depends to a large extent on the validity of the experimental data, which are generally gathered in the form of FRFs. With the goal of obtaining an 'adequate linear estimate' of the model of the structural dynamic system under test, consider several common challenges that must be overcome in the excitation setup to gather adequate data.
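As a concrete illustration of the data typically gathered (a minimal sketch, not taken from the article), the FRF between shaker force and response is commonly estimated with the H1 estimator, with the coherence function serving as a first check on linearity and data quality:

```python
import numpy as np
from scipy import signal

def h1_frf(f, a, fs, nperseg=4096):
    """H1 FRF estimate from force f(t) and response a(t) sampled at fs."""
    freqs, Gff = signal.csd(f, f, fs=fs, nperseg=nperseg)   # input autospectrum
    _,     Gfa = signal.csd(f, a, fs=fs, nperseg=nperseg)   # input-output cross-spectrum
    _,     Gaa = signal.csd(a, a, fs=fs, nperseg=nperseg)   # output autospectrum
    H1 = Gfa / Gff                                          # H1 = Gfa / Gff
    coherence = np.abs(Gfa) ** 2 / (Gff * Gaa)              # drops where noise or nonlinearity intrudes
    return freqs, H1, np.real(coherence)
```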
A laser hazard analysis and safety assessment was performed for each of the various laser diode candidates associated with the High Resolution Pulse Scanner, based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers. A theoretical laser hazard analysis model for this system was derived, and an Excel{reg_sign} spreadsheet model was developed to answer the 'what if' questions associated with the various modes of operation for the various candidate diode lasers.
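The kind of 'what if' calculation such a spreadsheet typically contains can be sketched as follows; the MPE value must be taken from the ANSI Z136.1 tables for the specific wavelength and exposure duration and is deliberately left as an input here.

```python
import math

def nohd(power_w, divergence_rad, aperture_m, mpe_w_per_m2):
    """
    Nominal ocular hazard distance for a diverging beam: the range beyond
    which the beam irradiance falls below the supplied MPE. Standard
    far-field relation; inputs are illustrative, not values from the report.
    """
    return (math.sqrt(4.0 * power_w / (math.pi * mpe_w_per_m2)) - aperture_m) \
           / divergence_rad
```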
A Self Organizing Map (SOM) approach was used to analyze physiological data taken from a group of subjects participating in a cooperative video shooting game. The ultimate aim was to discover signatures of group cooperation, conflict, leadership, and performance. Such information could be fed back to participants in a meaningful way, and ultimately increase group performance in national security applications, where the consequences of a poor group decision can be devastating. Results demonstrated that a SOM can be a useful tool in revealing individual and group signatures from physiological data, and could ultimately be used to heighten group performance.
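A minimal SOM training loop of the kind used for such analyses is sketched below; the map size, features, and training schedule are illustrative assumptions, not those of the study.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0):
    """Train a small SOM on rows of `data` (samples x features)."""
    rng = np.random.default_rng(0)
    n_feat = data.shape[1]
    weights = rng.random((grid[0], grid[1], n_feat))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        for x in data:
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), grid)   # best-matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)    # pull neighborhood toward sample
    return weights
```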
Deposition in next-step devices such as ITER will pose diagnostic challenges. Codeposition of hydrogen with carbon needs to be characterized and understood in the initial hydrogen phase in order to mitigate tritium retention and qualify carbon plasma-facing components for DT operations. Plasma-facing diagnostic mirrors will experience deposition that is expected to rapidly degrade their reflectivity, posing a challenge to diagnostic design. Some eroded particles will collect as dust on interior surfaces, and the quantity of dust will be strictly regulated for safety reasons; however, diagnostics for in-vessel dust are lacking. We report results from two diagnostics that relate to these issues. Measurements of deposition on NSTX with 4 Hz time resolution have been made using a quartz microbalance in a configuration that mimics that of a typical diagnostic mirror. Deposition was often observed immediately following the discharge, suggesting that diagnostic shutters should be closed as soon as possible after the time period of interest. Material loss was observed following a few discharges. A novel diagnostic to detect dust particles on remote surfaces was commissioned on NSTX.
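For context (standard relation, not quoted from the paper), quartz-microbalance frequency shifts are conventionally converted to areal deposited mass with the Sauerbrey equation:

$$ \Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_q \mu_q}}\,\Delta m, $$

where $f_0$ is the crystal resonant frequency, $A$ the active electrode area, $\rho_q$ and $\mu_q$ the density and shear modulus of quartz, and $\Delta m$ the deposited mass; material loss appears as a frequency shift of the opposite sign.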
The Estancia Basin lies about 30 miles east of Albuquerque, NM. It is a closed basin in terms of surface water and is somewhat isolated in terms of groundwater. Historically, the primary natural outlet for both surface water and groundwater has been evaporation from the salt lakes in the southeastern portion of the basin. There are no significant watercourses that flow into this basin, and groundwater recharge is minimal. During the 20th century, agriculture grew to become the major user of groundwater in the basin, and significant declines in groundwater levels have accompanied this agricultural use. Domestic and municipal use of the basin groundwater is increasing as the Albuquerque population continues to spill eastward into the basin, but this use is projected to be less than 1% of agricultural use well into the 21st century. This Water Budget model keeps track of the water balance within the basin, considering the amount of water entering and leaving the basin. Since there is no significant surface water component within this basin, the balance of water in the groundwater aquifer constitutes the primary component of this balance. Inflow is based on assumptions for recharge made by earlier researchers. Outflow from the basin is the summation of the depletion from all basin water uses. The model user can control future water use within the basin via slider bars that set values for population growth, water system per-capita use, agricultural acreage, and the types of agricultural diversion. The user can also adjust recharge and natural discharge within the limits of uncertainty for those parameters. The model runs for 100 years, beginning in 1940 and ending in 2040. During the first 55 years, model results can be compared to historical data and estimates of groundwater use; the last 45 years are predictive. The model was calibrated to match New Mexico Office of State Engineer (NMOSE) estimates of aquifer storage during the historical period by making adjustments to recharge and outflow that were within the parameters' uncertainties. Although results of this calibrated model imply that there may be more water remaining in the aquifer than the Estancia Water Plan estimates, this answer is only one possible result within a range of answers driven by large parameter uncertainties.
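The bookkeeping behind such a budget can be sketched as a simple annual balance; the parameter names and values below are placeholders, not those of the Estancia model.

```python
def run_budget(storage_acft, years, recharge_acft_yr, natural_discharge_acft_yr,
               ag_use_acft_yr, municipal_use_acft_yr):
    """Track aquifer storage year by year as inflow minus all outflows."""
    history = []
    for _ in range(years):
        inflow = recharge_acft_yr
        outflow = (natural_discharge_acft_yr
                   + ag_use_acft_yr + municipal_use_acft_yr)
        storage_acft += inflow - outflow        # annual change in aquifer storage
        history.append(storage_acft)
    return history
```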
This paper investigates the performance of tensor methods for solving small- and large-scale systems of nonlinear equations where the Jacobian matrix at the root is ill-conditioned or singular. This condition occurs in many classes of problems, such as identifying or approaching turning points in path-following problems. The singular case has been studied more than the highly ill-conditioned case, for both Newton and tensor methods. It is known that Newton-based methods do not work well on singular problems because they converge only linearly to the solution and, in some cases, with poor accuracy. On the other hand, direct tensor methods have performed well on singular problems and exhibit superlinear convergence on such problems under certain conditions. This behavior originates from the use of a special, restricted form of the second-order term included in the local tensor model, which provides information lacking in a (nearly) singular Jacobian. With several implementations available for large-scale problems, tensor methods are now capable of solving larger problems. We compare the performance of tensor methods and Newton-based methods for both small- and large-scale problems over a range of conditionings, from well-conditioned to ill-conditioned to singular; previous studies of tensor methods concerned only the ends of this spectrum. Our results show that tensor methods are increasingly superior to Newton-based methods as the problem grows more ill-conditioned.
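For contrast with the tensor methods discussed above, a bare Newton iteration is sketched below; near a singular or ill-conditioned Jacobian the linear solve in each step degrades and convergence slows to linear, which is the behavior the restricted second-order tensor term is intended to remedy.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for F(x) = 0 with Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)   # Newton step: J(x) dx = -F(x)
    return x
```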
Tonopah Test Range (TTR) in Nevada and the Kauai Test Facility (KTF) in Hawaii are government-owned, contractor-operated facilities operated by Sandia Corporation, a subsidiary of Lockheed Martin Corporation. The U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA), through the Sandia Site Office (SSO) in Albuquerque, NM, manages TTR and KTF operations. Sandia Corporation conducts operations at TTR in support of DOE/NNSA's Weapons Ordnance Program and has operated the site since 1957. Westinghouse Government Services, under subcontract to Sandia Corporation, administers most of the environmental programs at TTR. Sandia Corporation operates KTF as a rocket preparation, launching, and tracking facility. This Annual Site Environmental Report (ASER) summarizes data and the compliance status of the environmental protection and monitoring programs at TTR and KTF through Calendar Year (CY) 2003. The environmental regulations applicable at these sites include state and federal requirements governing air emissions, wastewater effluent, waste management, terrestrial surveillance, and Environmental Restoration (ER) cleanup activities. Sandia Corporation is responsible only for those environmental program activities related to its operations. The DOE/NNSA Nevada Site Office (NSO) retains responsibility for the cleanup and management of ER sites at TTR. Currently, there are no ER sites at KTF. Environmental monitoring and surveillance programs are required by DOE Order 450.1, Environmental Protection Program (DOE 2003) and DOE Order 231.1 Chg 2, Environment, Safety, and Health Reporting (DOE 1996).
Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned, contractor-operated facility owned by the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) and managed by the Sandia Site Office (SSO), Albuquerque, New Mexico. Sandia Corporation, a wholly-owned subsidiary of Lockheed Martin Corporation, operates SNL/NM. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2003. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 450.1, ''Environmental Protection Program'' (DOE 2003a) and DOE Order 231.1 Chg.2, ''Environment, Safety, and Health Reporting'' (DOE 1996).
Efficient and environmentally sound methods of producing hydrogen are of great importance to the US as it progresses toward the H2 economy. Current studies are investigating the use of high-temperature systems driven by nuclear and/or solar energy to drive thermochemical cycles for H2 production. These processes are advantageous since they avoid the greenhouse gas emissions associated with hydrogen production by electrolysis or hydrocarbon reformation. Double-substituted perovskites, A1-xSrxCo1-yByO3-δ (A = Y, La; B = Fe, Ni, Cr, Mn), were synthesized for use as ceramic high-temperature oxygen separation membranes. The materials have promising oxygen sorption properties and were structurally robust under varying temperatures and atmospheres. Post-TGA powder diffraction patterns revealed no structural changes after the temperature and gas treatments, demonstrating the robustness of the materials. The most promising material was the La0.1Sr0.9Co1-xMnxO3-δ perovskite. The oxygen sorption properties increased with increasing Mn doping.
This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning, and inertial-referenced stabilization of components, hosted aboard a capable locomotion platform. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to those of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are key to achieving the high-performance mapping, planning, and navigation presented here.
Modeling the response of buried reinforced concrete structures subjected to close-in detonations of conventional high explosives poses a challenge for a number of reasons. Foremost, there is the potential for coupled interaction between the blast and structure. Coupling enters the problem whenever the structure deformation affects the stress state in the neighboring soil, which in turn, affects the loading on the structure. Additional challenges for numerical modeling include handling disparate degrees of material deformation encountered in the structure and surrounding soil, modeling the structure details (e.g., modeling the concrete with embedded reinforcement, jointed connections, etc.), providing adequate mesh resolution, and characterizing the soil response under blast loading. There are numerous numerical approaches for modeling this class of problem (e.g., coupled finite element/smooth particle hydrodynamics, arbitrary Lagrange-Eulerian methods, etc.). The focus of this work will be the use of a coupled Euler-Lagrange (CEL) solution approach. In particular, the development and application of a CEL capability within the Zapotec code is described. Zapotec links two production codes, CTH and Pronto3D. CTH, an Eulerian shock physics code, performs the Eulerian portion of the calculation, while Pronto3D, an explicit finite element code, performs the Lagrangian portion. The two codes are run concurrently with the appropriate portions of a problem solved on their respective computational domains. Zapotec handles the coupling between the two domains. The application of the CEL methodology within Zapotec for modeling coupled blast/structure interaction will be investigated by a series of benchmark calculations. These benchmarks rely on data from the Conventional Weapons Effects Backfill (CONWEB) test series. In these tests, a 15.4-lb pipe-encased C-4 charge was detonated in soil at a 5-foot standoff from a buried test structure. The test structure was composed of a reinforced concrete slab bolted to a reaction structure. Both the slab thickness and soil media were varied in the test series. The wealth of data obtained from these tests along with the variations in experimental setups provide ample opportunity to assess the robustness of the Zapotec CEL methodology.
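The coupling cycle can be pictured with the following schematic sketch; the object and method names are placeholders for illustration, not the actual Zapotec, CTH, or Pronto3D interfaces.

```python
# Schematic coupled Euler-Lagrange cycle (placeholder names, not the
# Zapotec/CTH/Pronto3D API): each code advances on its own domain while
# the coupler exchanges loads and boundary motion between them.
def coupled_step(euler_domain, lagrange_domain, dt):
    euler_domain.advance(dt)                                   # Eulerian shock-physics update
    loads = euler_domain.interpolate_tractions(lagrange_domain.surface())
    lagrange_domain.apply_tractions(loads)                     # blast loading on the structure
    lagrange_domain.advance(dt)                                # explicit finite element update
    euler_domain.update_boundary(lagrange_domain.surface())    # structure motion fed back to the Eulerian grid
```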
We explore the stability of Random Boolean Networks as a model of biological interaction networks. We introduce the surface-to-volume ratio as a measure of the stability of the network. The surface is defined as the set of states within a basin of attraction that map outside the basin under a single bit-flip operation. The volume is defined as the total number of states in the basin. We report the development of an object-oriented Boolean network analysis code (Attract) to investigate the structure of stable vs. unstable networks. We find two distinct types of stable networks. The first type is the nearly trivial stable network with a few basins of attraction. The second type contains many basins. We conclude that stable networks of the second type are extremely rare.
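The surface-to-volume computation can be sketched for a small network by exhaustive state enumeration; this is an illustration, not the Attract code.

```python
import itertools, random

def random_rbn(n, k=2, seed=0):
    """Build a random Boolean network update function on n nodes with k inputs each."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [{bits: rng.randint(0, 1) for bits in itertools.product((0, 1), repeat=k)}
              for _ in range(n)]
    def step(state):
        return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(n))
    return step

def basin_labels(step, n):
    """Label every state with its basin (the lexicographically smallest attractor state)."""
    labels = {}
    for s in itertools.product((0, 1), repeat=n):
        path, x = [], s
        while x not in labels and x not in path:
            path.append(x)
            x = step(x)
        label = labels[x] if x in labels else min(path[path.index(x):])
        for y in path:
            labels[y] = label
    return labels

def surface_to_volume(labels, n):
    """Surface = states that leave their basin under some single bit flip."""
    ratios = {}
    for s, lab in labels.items():
        flips = [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(n)]
        on_surface = any(labels[f] != lab for f in flips)
        vol, surf = ratios.get(lab, (0, 0))
        ratios[lab] = (vol + 1, surf + int(on_surface))
    return {lab: surf / vol for lab, (vol, surf) in ratios.items()}
```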
Historically, TCP/IP has been the protocol suite used to transfer data throughout the Advanced Simulation and Computing (ASC) community. However, TCP was developed many years ago for an environment very different from the ASC Wide Area Network (WAN) of today. Numerous publications hint at better performance if the TCP algorithms were modified or a different protocol were used to transfer data across a high-bandwidth, high-delay WAN. Since Sandia National Laboratories wants to maximize ASC WAN performance to support the Thor's Hammer supercomputer, there is strong interest in evaluating modifications to the TCP protocol and alternatives to TCP, such as SCTP, to determine whether they provide improved performance. Therefore, the goal of this project is to test, evaluate, compare, and report on protocol technologies that enhance the performance of the ASC WAN.
Response of removable epoxy foam (REF) to high heat fluxes is described using a decomposition chemistry model [1] in conjunction with a finite element heat conduction code [2] that supports chemical kinetics and dynamic radiation enclosures. The chemistry model [1] describes the temporal transformation of virgin foam into carbonaceous residue by considering breakdown of the foam polymer structure, desorption of gases not associated with the foam polymer, mass transport of decomposition products from the reaction site to the bulk gas, and phase equilibrium. The finite element foam response model considers the spatial behavior of the foam by using measured and predicted thermophysical properties in combination with the decomposition chemistry model. Foam elements are removed from the computational domain when the condensed mass fractions of the foam elements are close to zero. Element removal, referred to as element death, creates a space within the metal confinement causing radiation to be the dominant mode of heat transfer between the surface of the remaining foam elements and the interior walls of the confining metal skin. Predictions were compared to front locations extrapolated from radiographs of foam cylinders enclosed in metal containers that were heated with quartz lamps [3,4]. The effects of the maximum temperature of the metal container, density of the foam, the foam orientation, venting of the decomposition products, pressurization of the metal container, and the presence or absence of embedded components are discussed.
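For orientation only (the actual chemistry model [1] is considerably more detailed), decomposition kinetics of this type are often idealized as a single first-order Arrhenius reaction in the extent of reaction $\alpha$:

$$ \frac{d\alpha}{dt} = A\, e^{-E/RT}\,(1-\alpha), $$

with an element removed from the mesh ('element death') once its condensed mass fraction, which scales with $1-\alpha$, drops below a small threshold.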
In a recent paper, Starr and Segalman demonstrated that any Masing model can be represented as a parallel-series Iwan model. A preponderance of the constitutive models that have been suggested for simulating mechanical joints are Masing models, and the purpose of this discussion is to demonstrate how the Iwan representation of those models can yield insight into their character. In particular, this approach can facilitate a critical comparison among numerous plausible constitutive models. It is explicitly shown that three-parameter models such as Smallwood's (Ramberg-Osgood) calculate parameters in such a manner that macro-slip is not an independent parameter, yet the model admits macro-slip. The introduction of a fourth parameter is therefore required. It is shown that when a macro-slip force is specified for the Smallwood model the result is a special case of the Segalman four-parameter model. Both of these models admit a slope discontinuity at the inception of macro-slip. A five-parameter model that has the beneficial features of Segalman's four-parameter model is proposed. This model manifests a force-displacement curve having a continuous first derivative.
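For reference, and with notation that may differ from the paper, a parallel-series Iwan model expresses the joint force as a population of elastic-perfectly-plastic Jenkins elements of slip strength $\phi$ distributed with density $\rho(\phi)$:

$$ F(t) = \int_0^\infty \rho(\phi)\,\big[u(t) - x(t,\phi)\big]\, d\phi, $$

where $u(t)$ is the imposed displacement and $x(t,\phi)$ is the slider displacement of the element of strength $\phi$ (each element sticks until its force reaches $\phi$, then slips). In Segalman's four-parameter model the density is commonly written as a truncated power law plus a point mass at the largest strength,

$$ \rho(\phi) = R\,\phi^{\chi}\big[H(\phi) - H(\phi - \phi_{\max})\big] + S\,\delta(\phi - \phi_{\max}), $$

which is the family within which the three-parameter models discussed above must be extended once a macro-slip force is imposed.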
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. In contrast to traditional event detection systems that trigger an image capture, this enables a new class of sensors that uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research that can aid in this work.
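A minimal sketch of the local-processing idea (illustrative, not the paper's detectors): each node reduces a frame to a few bytes of features, here simple frame-difference motion statistics, and shares only those.

```python
import numpy as np

def frame_features(prev_frame, frame, threshold=25):
    """Summarize a grayscale frame pair as a tiny feature record instead of an image."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > threshold                       # pixels that changed noticeably
    if not moving.any():
        return {"motion": 0.0, "centroid": None}
    ys, xs = np.nonzero(moving)
    return {"motion": float(moving.mean()),         # fraction of pixels changed
            "centroid": (float(xs.mean()), float(ys.mean()))}
```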