The development of carbon-carbon (C-C) composites for aerospace applications has prompted the need for ways to improve the poor oxidation resistance of these materials. In order to evaluate and test materials to be used as thermal protection system (TPS) materials, readily available and reliable testing methods are critical to the success of materials development efforts. To evaluate TPS materials, three testing methods were used to assess materials at high temperatures (> 2000°C) and heat fluxes in excess of 200 W/cm2. The first two methods are located at the National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories: the Solar Furnace Facility and the Solar Tower Facility. The third method is an oxyacetylene torch set up according to ASTM E285-80 with oxidizing flame control and maximum achievable temperatures in excess of 2000°C. In this study, liquid precursors to ultra high temperature ceramics (UHTCs) have been developed into multilayer coatings on C-C composites and evaluated using the oxidation testing methods. The tests are discussed in detail and correlated with preliminary materials evaluation results, with the aim of presenting an understanding of the effect of the testing environment on the materials evaluated for oxidation resistance.
Glass-to-metal (GTM) seals maintain hermeticity while allowing the passage of electrical signals. Typically, these seals are comprised of one or more metal pins encapsulated in a glass which is contained in a metal shell. In compression seals, the coefficient of thermal expansion of the metal shell is greater than that of the glass, and the glass is expected to be in compression. Recent development builds of a multi-pin GTM seal revealed severe cracking of the glass, with cracks originating at or near the pin-glass interface, and propagating circumferentially. A series of finite element analyses (FEA) was performed for this seal with the material set: 304 stainless steel (SS304) shell, Schott S-8061 (or equivalent) glass, and Alloy 52 pins. Stress-strain data for both metals was fit by linear-hardening and power-law hardening plasticity models. The glass layer thickness and its location with respect to geometrical features in the shell were varied. Several additional design changes in the shell were explored. Results reveal that: (1) plastic deformation in the small-strain regime in the metals leads to radial tensile stresses in the glass, (2) small changes in the mechanical behavior of the metals dramatically change the calculated stresses in the glass, and (3) seemingly minor design changes in the shell geometry influence the stresses in the glass significantly. Based on these results, guidelines for materials selection and design of seals are provided.
Thermal gravimetric analysis (TGA) combined with evolved gas analysis by Fourier transform infrared spectroscopy (FTIR) or mass spectrometry (MS) is often used to study thermal decomposition of organic polymers. Frequently, results are used to determine decomposition mechanisms and to develop rate expressions for a variety of applications, which include hazard analyses. Although some current TGA instruments operate with controlled heating rates as high as 500° C/min, most experiments are done at much lower heating rates of about 5° to 50° C/min to minimize temperature gradients in the sample. The intended applications for rate expressions developed from TGA experiments, such as hazard analyses involving fire environments, often involve heating rates much greater than 50° C/min. The heating rate can affect polymer decomposition by altering the relative rates at which competing decomposition reactions occur. Analysis of the effect of heating rate on competing first-order decomposition reactions with Arrhenius rate constants indicated that, relative to heating rates of 5° to 50° C/min, observable changes in decomposition behavior may occur when heating rates approach 1,000° C/min. Results from experiments with poly(methyl methacrylate) (PMMA) samples that were heated at 5° to 50° C/min during TGA-FTIR experiments and results from experiments with samples heated at rates on the order of 1,000° C/min during pyrolysis-GC-FTIR experiments supported the analyses.
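The heating-rate effect on competing reactions described above can be illustrated with a minimal numerical sketch. The Arrhenius parameters below are illustrative only (not fitted to PMMA): two parallel first-order decomposition channels are integrated over a linear temperature ramp, and the fraction of material consumed by the higher-activation-energy channel grows as the heating rate increases, because faster heating pushes the bulk of the decomposition to higher temperatures.

```python
import numpy as np

def branching_fraction(beta_C_per_min, A1, Ea1, A2, Ea2,
                       T0=300.0, T1=900.0):
    """Fraction of material consumed via reaction 1 for two competing
    first-order Arrhenius reactions during a linear temperature ramp.
    beta is the heating rate in degC/min (= K/min for a ramp);
    A in 1/s, Ea in J/mol.  Simple explicit integration; parameters
    passed in are illustrative, not measured values."""
    R = 8.314                              # gas constant, J/(mol K)
    beta = beta_C_per_min / 60.0           # K/s
    T = np.linspace(T0, T1, 20000)
    dt = (T[1] - T[0]) / beta              # time spent per temperature step
    x = 1.0                                # remaining mass fraction
    consumed1 = 0.0
    for Ti in T:
        k1 = A1 * np.exp(-Ea1 / (R * Ti))
        k2 = A2 * np.exp(-Ea2 / (R * Ti))
        dx = min(x * (k1 + k2) * dt, x)    # crude clamp for stability
        consumed1 += dx * k1 / (k1 + k2)   # reaction 1's share of this step
        x -= dx
    return consumed1 / (1.0 - x) if x < 1.0 else 0.0
```

With a high-Ea channel (Ea1 = 200 kJ/mol) competing against a low-Ea channel (Ea2 = 120 kJ/mol), the high-Ea channel's share of the decomposition is markedly larger at 1,000° C/min than at 5° C/min, consistent with the qualitative argument in the abstract.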
The Wind Energy Technology Department at Sandia National Laboratories (SNL) focuses on producing innovations in wind turbine blade technology to enable the development of longer blades that are lighter, more structurally and aerodynamically efficient, and impart reduced loads to the system. A large part of the effort is to characterize the properties of relevant composite materials built with typical manufacturing processes. This paper provides an overview of recent studies of composite laminates for wind turbine blade construction and summarizes test results for three prototype blades that incorporate a variety of material-related innovations.
Silica based glasses are commonly used as window material in applications which are subject to high velocity impacts. Thorough understanding of the response to shock loading in these materials is crucial to the development of new designs. Despite the lack of long range order in amorphous glasses, the structure can be described statistically by the random network model. Changes to the network structure alter the response to shock loading. Results indicate that in fused silica, substitution of boron as a network former does not have a large effect on the shock loading properties while modifying the network with sodium and calcium changes the dynamic response. These initial results suggest the potential of a predictive capability to determine the effects of other network substitutions.
This paper summarizes the numerical site scale model developed to simulate the transport of radionuclides via ground water in the saturated zone beneath Yucca Mountain.
Preparing Computer Aided Design models for successful mesh generation continues to be a crucial part of the design to analysis process. A common problem in CAD models is features that are very small compared to the desired mesh size. Small features exist for a variety of reasons and can require an excessive number of elements or inhibit mesh generation altogether. Many of the tools for removing small features modify only the topology of the model (often in a secondary topological representation of the model), leaving the underlying geometry as is. The availability of tools that actually modify the topology and underlying geometry in the boundary representation (B-rep) model is much more limited, despite the inherent advantages of this approach. This paper presents a process for removing small features from a B-rep model using almost solely functionality provided by the underlying solid modeling kernel. The process cuts out the old topology and reconstructs new topology and geometry to close the volume. The process is quite general and can be applied to complex configurations of unwanted topology.
Systems in flight often encounter environments with combined vibration and constant acceleration. Sandia National Laboratories has developed a new system capable of combining these environments for hardware qualification testing on a centrifuge. To demonstrate that combined vibration plus centrifuge acceleration is equivalent to the vibration and acceleration encountered in a flight environment, the equations of motion of a spring-mass-damper system in each environment were derived and compared. These equations of motion suggest a decrease in natural frequency for spring-mass-damper systems undergoing constant rotational velocity on a centrifuge. It was shown mathematically and through experimental testing that the natural frequency of a spring-mass system will decrease with increased rotational velocity. An increase of rotational velocity will eventually result in system instability. The development and testing of a mechanical system to demonstrate this characteristic is discussed. Results obtained from frequency domain analysis of time domain data are presented, as are the implications of these results for centrifuge testing of systems with low natural frequency on small radius centrifuges.
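The frequency decrease described above can be illustrated with a single-degree-of-freedom sketch (an assumed minimal model, not the authors' full derivation): for a radially oriented spring-mass system in a frame rotating at constant rate Omega, the centrifugal term softens the effective stiffness from k to k - m*Omega^2, so the effective natural frequency falls as sqrt(omega_0^2 - Omega^2) and the system goes unstable when Omega reaches omega_0.

```python
import math

def effective_natural_frequency_hz(f0_hz, centrifuge_rpm):
    """Effective natural frequency of a radially oriented 1-DOF
    spring-mass system in a rotating frame.  Centrifugal softening
    reduces the effective stiffness k to k - m*Omega^2, giving
    f_eff = sqrt(f0^2 - (Omega/2pi)^2).  Returns None when the
    system is statically unstable (Omega >= omega_0)."""
    omega0 = 2.0 * math.pi * f0_hz            # rad/s
    Omega = centrifuge_rpm * 2.0 * math.pi / 60.0
    if Omega >= omega0:
        return None                           # instability threshold reached
    return math.sqrt(omega0**2 - Omega**2) / (2.0 * math.pi)
```

For example, a 10 Hz system spinning at 300 rpm (Omega = 31.4 rad/s against omega_0 = 62.8 rad/s) drops to about 8.7 Hz, and at 600 rpm the model predicts instability, which is why low-natural-frequency systems on small-radius (high-rpm) centrifuges are the concern flagged in the abstract.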
Quantitative studies of material properties and interfaces using the atomic force microscope (AFM) have important applications in engineering, biotechnology and chemistry. Emerging studies require an estimate of the stiffness of the probe so that the forces exerted on a sample can be determined from the measured displacements. Numerous methods for determining the spring constant of AFM cantilevers have been proposed, yet none accounts for the effect of the mass of the probe tip on the calibration procedure. This work demonstrates that the probe tip does have a significant effect on the dynamic response of an AFM cantilever by experimentally measuring the first few modes of a commercial AFM probe and comparing them with those of a theoretical model for a cantilever probe that does not have a tip. The mass and inertia of an AFM probe tip are estimated from scanning electron microscope images and a simple model for the probe is derived and tuned to match the first few modes of the actual probe. Analysis suggests that both the method of Sader and the thermal tune method of Hutter and Bechhoefer give erroneous predictions of the area density or the effective mass of the probe. However, both methods do accurately predict the static stiffness of the AFM probe due to the fact that the mass terms cancel so long as the mode shape of the AFM probe does not deviate from the theoretical model. The calibration errors that would be induced due to differences between mode shapes measured in this study and the theoretical ones are estimated.
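For context on the thermal tune method named above, its simplest form rests on equipartition: (1/2) k <q^2> = (1/2) k_B T, so k = k_B T / <q^2>. The sketch below shows only that core relation; the mode-shape and effective-mass corrections that are the subject of the study are deliberately omitted.

```python
def thermal_tune_stiffness(mean_square_deflection_m2, temperature_K=295.0):
    """Equipartition-based estimate of AFM cantilever stiffness
    (the simplest form of the Hutter-Bechhoefer thermal tune method):
    (1/2) k <q^2> = (1/2) k_B T  =>  k = k_B T / <q^2>.
    Mode-shape and optical-lever-sensitivity corrections are omitted."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * temperature_K / mean_square_deflection_m2
```

A cantilever with a thermal RMS deflection of 0.1 nm at room temperature comes out near 0.4 N/m by this relation; the point of the abstract is that tip mass changes the mode shapes, and hence the corrections to this baseline estimate, without strongly changing the static stiffness it predicts.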
Advancements in our capabilities to accurately model physical systems using high resolution finite element models have led to increasing use of models for prediction of physical system responses. Yet models are typically not used without first demonstrating their accuracy or, at least, adequacy. In high consequence applications where model predictions are used to make decisions or control operations involving human life or critical systems, a movement toward accreditation of mathematical model predictions via validation is taking hold. Model validation is the activity wherein the predictions of mathematical models are demonstrated to be accurate or adequate for use within a particular regime. Though many types of predictions can be made with mathematical models, not all predictions have the same impact on the usefulness of a model. For example, predictions where the response of a system is greatest may be most critical to the adequacy of a model. Therefore, a model that makes accurate predictions in some environments and poor predictions in other environments may be perfectly adequate for certain uses. The current investigation develops a general technique for validating mathematical models where the measures of response are weighted in some logical manner. A combined experimental and numerical example that demonstrates the validation of a system using both weighted and non-weighted response measures is presented.
Force and moment measurements have been made on an instrumented subscale fin model at transonic speeds in Sandia's Trisonic Wind Tunnel to ascertain the effects of Mach number and angle of attack on the interaction of a trailing vortex with a downstream control surface. Components of normal force, bending moment, and hinge moment were measured on an instrumented fin downstream of an identical fin at Mach numbers between 0.85 and 1.24, and combinations of angles of attack between -5° and 10° for both fins. The primary influence of upstream fin deflection is to shift the downstream fin's forces in a direction consistent with the vortex-induced angle of attack on the downstream fin. Secondary non-linear effects of vortex lift were found to increase the slopes of normal force and bending moment coefficients when plotted versus fin deflection angle. This phenomenon was dependent upon Mach number and the angles of attack of both fins. The hinge moment coefficient was also influenced by the vortex lift as the center of pressure was pushed aft with increased Mach number and total angle of attack.
In 2002, Sandia National Laboratories (SNL) initiated a research program to demonstrate the use of carbon fiber in wind turbine blades and to investigate advanced structural concepts through the Blade Systems Design Study, known as the BSDS. One of the blade designs resulting from this program, commonly referred to as the BSDS blade, resulted from a systems approach in which manufacturing, structural and aerodynamic performance considerations were all simultaneously included in the design optimization. The BSDS blade design utilizes "flatback" airfoils for the inboard section of the blade to achieve a lighter, stronger blade. Flatback airfoils are generated by opening up the trailing edge of an airfoil uniformly along the camber line, thus preserving the camber of the original airfoil. This process is in distinct contrast to the generation of truncated airfoils, where the trailing edge of the airfoil is simply cut off, changing the camber and subsequently degrading the aerodynamic performance. Compared to a conventional thick, sharp trailing-edge airfoil, a flatback airfoil with the same thickness exhibits increased lift and reduced sensitivity to soiling. Although several commercial turbine manufacturers have expressed interest in utilizing flatback airfoils for their wind turbine blades, they are concerned with the potential extra noise that such a blade will generate from the blunt trailing edge of the flatback section. In order to quantify the noise generation characteristics of flatback airfoils, Sandia National Laboratories has conducted a wind tunnel test to measure the noise generation and aerodynamic performance characteristics of a regular DU97-300-W airfoil, a 10% trailing edge thickness flatback version of that airfoil, and the flatback fitted with a trailing edge treatment. The paper describes the test facility, the models, and the test methodology, and provides some preliminary results from the test.
This report focuses on our recent advances in the fabrication and processing of barium strontium titanate (BST) thin films by chemical solution deposition for next generation functional integrated capacitors. Projected trends for capacitors include increasing capacitance density, decreasing operating voltages, decreasing dielectric thickness and decreasing process cost. Key to all of these trends is the strong correlation between film phase evolution and the resulting microstructure; by understanding this correlation, it becomes possible to tailor the microstructure for specific applications. This interplay will be discussed in relation to the resulting temperature dependent dielectric response of the BST films.
Geometric features with characteristic lengths on the order of the size of the contact patch interface may be at least partly responsible for the variability observed in experimental measurements of structural stiffness and energy dissipation per cycle in a bolted joint. Experiments on combinations of two different types of joints (statically determinate single-joint and statically indeterminate three-joint structures) of nominally identical hardware show that the structural stiffness of the tested specimens varies by up to 25% and the energy dissipation varies by up to nearly 300%. A pressure-sensitive film was assembled into the interfaces of jointed structures to gain a qualitative understanding of the distribution of interfacial pressures of nominally conformal surfaces. The resultant pressure distributions suggest that there are misfit mechanisms that may influence contact patch geometry and also structural response of the interface. These mechanisms include local plateaus and machining induced waviness. The mechanisms are not consistent across nominally machined hardware interfaces. The proposed misfit mechanisms may be partly responsible for the variability in energy dissipation per cycle of joint experiments.
Damping in a micro-cantilever beam was measured for a very broad range of air pressures from atmosphere (10^5 Pa) down to 0.2 Pa. The beam was in open space free from squeeze films. The damping ratio, due mainly to air drag, varied by a factor of 10^4 within this pressure range. The damping due to air drag was separated from other sources of energy dissipation so that air damping could be measured at 10^-6 of the critical damping factor. The linearity of the damping was confirmed over a wide range of beam vibration levels. Lastly, the measured damping was compared with several existing theories for air-drag damping, covering both rarefied and viscous flow gas theories. The measured data indicate that in the rarefied regime the air damping is proportional to pressure and independent of viscosity, while in the viscous regime the damping is determined by viscosity.
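A rough way to locate the crossover between the two regimes discussed above is the Knudsen number, the ratio of the gas mean free path to a characteristic beam dimension. The sketch below uses the standard kinetic-theory mean free path and an assumed characteristic length (a 20 um beam width is a hypothetical value for illustration, not a dimension from the study):

```python
import math

def knudsen_number(pressure_Pa, length_m, temperature_K=295.0,
                   molecule_diameter_m=3.7e-10):
    """Kn = lambda / L with mean free path
    lambda = k_B T / (sqrt(2) * pi * d^2 * p).
    The molecular diameter default is the standard kinetic value for
    air; L is a characteristic beam dimension (an assumption here).
    Kn >> 1 marks the rarefied (free-molecular) regime, where drag
    scales with pressure; Kn << 1 the viscous regime, where drag is
    set by viscosity and is nearly pressure independent."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    mfp = kB * temperature_K / (math.sqrt(2.0) * math.pi *
                                molecule_diameter_m**2 * pressure_Pa)
    return mfp / length_m
```

For a 20 um characteristic length, atmosphere gives Kn on the order of 10^-3 (viscous) while 0.2 Pa gives Kn well above 10^3 (free molecular), so the pressure range of the experiment spans both regimes, consistent with the two limiting behaviors reported.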
The development of transmitter and receiver Multichip Module subassemblies implemented in LTCC for an S-band radar application followed an approach that reduces the number of discrete devices and increases reliability. The LTCC MCM incorporates custom GaAs RF integrated circuits in Faraday cavities, novel methods of reducing line resistance and enhancing lumped element Q, and a thick film back plane which attaches to a heat sink. The incorporation of PIN diodes on the receiver and a 50W power amplifier on the transmitter required methods for removing heat beyond what thermal vias can accomplish. The die is a high voltage pHEMT GaAs power amplifier RFIC chip that measures 6.5 mm × 8 mm. Although thermal vias are adequate in certain cases, the thermal solution includes heat spreaders and thermally conductive backplates. The processing hierarchy, including gold-tin die attach and various uses of polymeric attachment, must allow rework on these prototypical devices. LTCC cavity covers employ metallic coatings on their exterior surfaces. The processing of the LTCC and its effect on the function of the transmitter and receiver circuits is discussed in the poster session.
American Solar Energy Society - SOLAR 2008, Including Proc. of 37th ASES Annual Conf., 33rd National Passive Solar Conf., 3rd Renewable Energy Policy and Marketing Conf.: Catch the Clean Energy Wave
This paper summarizes operational histories of three Russian-designed photovoltaic (PV) lighthouses in Norway and Russia. All lighthouses were monitored to evaluate overall system and Nickel Cadmium (NiCad) battery bank performance to determine battery capacity, charging trends, temperature, and reliability. The practical use of PV in this unusual mode, months of battery charging followed by months of battery discharging, is documented and assessed. This paper presents operational data obtained from 2004 through 2007.
In order for the IAEA to draw valid safeguards conclusions, they must be assured that the data used to draw those conclusions are authentic. In order to provide that assurance, authentication measures are applied to the safeguards equipment and the data from the equipment. These authentication measures require that IAEA personnel have direct electronic and physical access to the equipment and severely limit access to the equipment by the operator. Providing the necessary access for the IAEA personnel can be intrusive and potentially disruptive to plant operations. If the equipment is to be used jointly by the operator and the IAEA, the authentication measures can cause difficulties for the operator by limiting his ability to repair and maintain the hardware. In many cases, tamper indicating conduit and enclosures are also required. The installation, sealing, and inspection of this tamper indicating hardware also add to the intrusiveness of the safeguards activities and increase the cost of safeguards. This paper discusses these impacts and proposes methods for mitigating them.
The Cognitive Foundry is a unified collection of tools for Cognitive Science and Technology applications, supporting the development of intelligent agent models. The Foundry has two primary components designed to facilitate agent construction: the Cognitive Framework and Machine Learning packages. The Cognitive Framework provides design patterns and default implementations of an architecture for evaluating theories of cognition, as well as a suite of tools to assist in the building and analysis of theories of cognition. The Machine Learning package provides tools for populating components of the Cognitive Framework from domain-relevant data using automated knowledge-capture techniques. This paper describes the Cognitive Foundry with a focus on its application within the context of agent behavior modeling.
Simulation of potential radionuclide transport in the saturated zone from beneath the proposed repository at Yucca Mountain to the accessible environment is an important aspect of the total system performance assessment (TSPA) for disposal of high-level radioactive waste at the site. Analyses of uncertainty and sensitivity are integral components of the TSPA and have been conducted at both the sub-system and system levels to identify parameters and processes that contribute to the overall uncertainty in predictions of repository performance. Results of the sensitivity analyses indicate that uncertainty in groundwater specific discharge along the flow path in the saturated zone from beneath the repository is an important contributor to uncertainty in TSPA results and is the dominant source of uncertainty in transport times in the saturated zone for most radionuclides. Uncertainties in parameters related to matrix diffusion in the volcanic units, colloid-facilitated transport, and sorption are also important contributors to uncertainty in transport times to differing degrees for various radionuclides.
The drift-shadow effect describes capillary diversion of water flow around a drift or cavity in porous or fractured rock, resulting in lower water flux directly beneath the cavity. This paper presents computational simulations of drift-shadow experiments using dual-permeability models, similar to the models used for performance assessment analyses of flow and seepage in unsaturated fractured tuff at Yucca Mountain. Results show that the dual-permeability models capture the salient trends and behavior observed in the experiments, but constitutive relations (e.g., fracture capillary-pressure curves) can significantly affect the simulated results. An evaluation of different meshes showed that at the grid refinement used, a comparison between orthogonal and unstructured meshes did not result in large differences.
Uncertainty and sensitivity analyses of the expected dose to the reasonably maximally exposed individual in the Yucca Mountain 2008 total system performance assessment (TSPA) are presented. Uncertainty results are obtained with Latin hypercube sampling of epistemically uncertain inputs, and partial rank correlation coefficients are used to illustrate sensitivity analysis results.
The Total System Performance Assessment (TSPA) for the proposed high level radioactive waste repository at Yucca Mountain, Nevada, uses a sampling-based approach to uncertainty and sensitivity analysis. Specifically, Latin hypercube sampling is used to generate a mapping between epistemically uncertain analysis inputs and analysis outcomes of interest. This results in distributions that characterize the uncertainty in analysis outcomes. Further, the resultant mapping can be explored with sensitivity analysis procedures based on (i) examination of scatterplots, (ii) partial rank correlation coefficients, (iii) R2 values and standardized rank regression coefficients obtained in stepwise rank regression analyses, and (iv) other analysis techniques. The TSPA considers over 300 epistemically uncertain inputs (e.g., corrosion properties, solubilities, retardations, defining parameters for Poisson processes, ⋯) and over 70 time-dependent analysis outcomes (e.g., physical properties in waste packages and the engineered barrier system, releases from the engineered barrier system, the unsaturated zone and the saturated zone for individual radionuclides, and annual dose to the reasonably maximally exposed individual (RMEI) from both individual radionuclides and all radionuclides). The obtained uncertainty and sensitivity analysis results play an important role in facilitating understanding of analysis results, supporting analysis verification, establishing risk importance, and enhancing overall analysis credibility. The uncertainty and sensitivity analysis procedures are illustrated and explained with selected results for releases from the engineered barrier system, the unsaturated zone and the saturated zone and also for annual dose to the RMEI.
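The sampling-and-ranking machinery described above can be sketched in miniature (a toy two-input model, not the TSPA itself): a Latin hypercube sample is pushed through a simple response function, and rank correlations identify the dominant input. Partial rank correlation coefficients extend the plain rank correlation shown here by first regressing out the remaining inputs from the ranks.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    """n-point Latin hypercube sample on [0,1]^d: one point per
    equal-probability stratum in each dimension, randomly paired
    across dimensions."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def rank_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy stand-in for an analysis outcome driven mostly by input 0
X = latin_hypercube(200, 2)
y = 10.0 * X[:, 0] + 0.5 * X[:, 1]
```

In this toy mapping the rank correlation with the first input is close to 1 while the second input's is small, which is exactly the kind of screening result the TSPA sensitivity analyses use to establish risk importance among the 300-plus uncertain inputs.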
This report evaluates transportation risk for nuclear material in the proposed Global Nuclear Energy Partnership (GNEP) fuel cycle. Since many details of the GNEP program are yet to be determined, this document is intended only to identify general issues. The existing regulatory environment is determined to be largely prepared to incorporate the changes that the GNEP program will introduce. Nuclear material vulnerability and attractiveness are considered with respect to the various transport stages within the GNEP fuel cycle. It is determined that increased transportation security will be required for the GNEP fuel cycle, particularly for international transport. Finally, transportation considerations for several fuel cycle scenarios are discussed. These scenarios compare the current "once-through" fuel cycle with various aspects of the proposed GNEP fuel cycle.
The advent of the nuclear renaissance gives rise to a concern for the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facilities by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). We propose to develop a framework that monitors process information continuously and can demonstrate the ability to enhance safety, operations, security, and safeguards by measuring and reducing relevant SOSS risks, thus ensuring the safe and legitimate use of the nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The automation of new nuclear facilities requiring minimal manual operation provides an opportunity to utilize the abundance of process information for monitoring SOSS risk. A framework that monitors process information continuously can lead to greater transparency of nuclear fuel cycle activities and can demonstrate the ability to enhance the safety, operations, security and safeguards associated with the functioning of the nuclear fuel cycle facility. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is in the process of demonstrating the ability to monitor operational signals in real-time through a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.
This paper summarizes the results of a Phenomena Identification and Ranking Table (PIRT) exercise performed for nuclear power plant (NPP) fire modeling applications conducted on behalf of the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES). A PIRT exercise is a formalized, facilitated expert elicitation process. In this case, the expert panel was comprised of seven international fire science experts and was facilitated by Sandia National Laboratories (SNL). The objective of a PIRT exercise is to identify key phenomena associated with the intended application and to then rank the importance and current state of knowledge of each identified phenomenon. One intent of this process is to provide input into the process of identifying and prioritizing future research efforts. In practice, the panel considered a series of specific fire scenarios based on scenarios typically considered in NPP applications. Each scenario includes a defined figure of merit; that is, a specific goal to be achieved in analyzing the scenario through the application of fire modeling tools. The panel identifies any and all phenomena relevant to a fire modeling-based analysis for the figure of merit. Each phenomenon is ranked relative to its importance to the fire model outcome and then further ranked against the existing state of knowledge and adequacy of existing modeling tools to predict that phenomenon. The PIRT panel covered several fire scenarios and identified a number of areas potentially in need of further fire modeling improvements. The paper summarizes the results of the ranking exercise.
One source of concern in the nuclear power community is associated with performing PRAs on the passive systems used in Advanced Light Water Reactors. Passive systems rely on physical phenomena in order to perform safety actions. This leads to questions about how one should model the reliability of the system, such as how one should model the uncertainty in physical parameters that define the operational characteristics of the passive system and how to determine the degradation and failure characteristics of a system. Hierarchical Bayesian techniques provide a means for assessing the types of problems presented by passive systems. They allow the analyst to collect multiple types of data, including expert judgment and historical data from different sources, and then combine them in one analysis. The importance of this feature is that it allows an analyst to perform a mathematically consistent PRA without large amounts of data for the specific system under scrutiny. As data become available, they are incorporated into the analysis using Bayes' rule. As the dataset becomes large, the data dominate the analysis. A study is performed whereby data are collected from a set of resistors in a corrosive environment. A model is created that relates the environmental conditions of the sensors being used to the performance of the sensors. Prior distributions are then proposed for the uncertain parameters. Both longitudinal and failure data are recorded for the sensors. These data are then used to update the model and obtain the posterior distributions related to the uncertain parameters.
Ceramic samples of Pb0.99La0.01(Zr0.91Ti0.09)O3 were studied by dielectric and time-of-flight neutron diffraction measurements at 300 and 250 K as a function of pressure. Isothermal dielectric data at 300/250 K suggest structural transitions with onsets near 0.35/0.37 GPa, respectively, for increasing pressure. On pressure release, only the 300 K transition occurs (at 0.10 GPa; none is indicated at 250 K). Diffraction data at 300 K show that the sample has the R3c structure and remains in that phase on cooling to 250 K. Increasing the pressure above 0.3 GPa (at either 300 or 250 K) yields a Pnma-like (AO) phase (two other prominent peaks in the spectra suggest a possible incommensurate cell). Temperature/pressure excursions show considerable phase hysteresis.
American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008
Sevougian, S.D.; Behie, Alda; Chipman, Veraun; Gross, Michael B.; Mehta, Sunil; Statham, William
The representation of disruptive events (seismic and igneous events) and early failures of waste packages and drip shields in the 2008 total system performance assessment (TSPA) for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada is described. In the context of the 2008 TSPA, disruptive events and early failures are treated as phenomena that occur randomly (e.g., the time of a seismic event) and also have properties that are random (e.g., the peak ground velocity associated with a seismic event). Specifically, the following potential disruptions are considered: (i) early failure of individual drip shields, (ii) early failure of individual waste packages, (iii) igneous intrusion events that result in the filling of the waste disposal drifts with magma, (iv) volcanic eruption events that result in the dispersal of waste into the atmosphere, (v) seismic events that damage waste packages and drip shields as a result of strong vibratory ground motion, and (vi) seismic events that damage waste packages and drip shields as a result of shear displacement along a fault. Example annual dose results are shown for the two most risk-significant events: strong seismic ground motion and igneous intrusion.
Separation distances for hydrogen facilities can be determined in several ways. A conservative approach is to use the worst possible accidents in terms of consequences. Such accidents may be of very low frequency and would likely never occur. Although this approach bounds separation distances, the resulting distances are generally prohibitive, and the current separation distances in hydrogen codes and standards do not reflect it. An alternative deterministic approach, often utilized by standards development organizations and allowed under some regulations, is to select accident scenarios that are more probable but do not provide bounding consequences. In this approach, expert opinion is generally used to select the accidents that serve as the basis for the prescribed separation distances.
Proceedings - 2008 International Symposium on Microelectronics, IMAPS 2008
Knudson, R.T.; Barner, Greg; Smith, Frank; Zawicki, Larry; Peterson, Ken
Full tape thickness features (FTTF) using conductors, high-K and low-K dielectrics, sacrificial volume materials, and magnetic materials are technically sound and cost-effective approaches to multiple needs in laminate microelectronic and microsystem structures. Lowering resistance in conductor traces of all kinds, raising Q-factors in coils, and enhancing EMI shielding in RF designs are a few of the modern needs. By filling with suitable dielectric compositions, one can deliver embedded capacitors with an appropriate balance between mechanical compatibility and safety factor for fabrication. Similar techniques could be applied to magnetic materials without wasteful manufacturing processes when the magnetic material is a small fraction of the overall circuit area. Finally, to open the technology of unfilled volumes for radio-frequency performance as well as microfluidics and mixed cofired material applications, the full tape thickness implementation of sacrificial volume materials is also considered. We discuss implementations of FTTF structures, the technical problems involved, and the promise such structures hold for the future.
A series of modal tests was performed to validate a finite element model of a complex aerospace structure. Data were measured using various excitation methods in order to extract clean modes and damping values for a lightly damped system. Model validation was performed for one subassembly as well as for the full assembly, both to pinpoint the areas of the model that required updating and to better ascertain the quality of the joint models connecting the various components and subassemblies. After model updates were completed using the measured modal data, the model was validated using frequency response functions (FRFs) as the independent validation metric. Test and model FRFs were compared to determine the validity of the finite element model.
Optical tweezers have become a powerful and common tool for sensitive determination of electrostatic interactions between colloidal particles. Two optical-trapping-based techniques, blinking tweezers and direct force measurements, have become increasingly prevalent in investigations of interparticle potentials. The blinking laser tweezers method repeatedly catches and releases a pair of particles to gather statistics on the particle trajectories. Statistical analysis is used to determine drift velocities, diffusion coefficients, and ultimately colloidal forces as a function of the center-center separation of the particles. Direct force measurements monitor the position of a particle relative to the center of an optical trap as the separation distance between two continuously trapped particles is gradually decreased. As the particles near each other, the displacement of each particle from its trap center increases in proportion to the inter-particle force. Although both methods are commonly employed in the investigation of colloidal particle interactions, no direct comparison of them exists in the literature. In this study, an experimental apparatus capable of performing both methods was developed and is used to quantify electrostatic potentials between two sizes of polystyrene particles in an AOT hexadecane solution. Results from the two measurement techniques are compared with each other, with theory, and with the existing literature. Forces are quantified on the femto-Newton scale, and results agree well with literature values.
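The blinking-tweezers analysis chain described above (displacement statistics to drift velocity and diffusion coefficient, then to force) can be sketched as follows. This is an assumed, simplified version of such an analysis, not the authors' pipeline; the displacement samples, frame rate, and temperature are hypothetical, and the force follows from the Einstein relation F = kB*T*v/D.

```python
# Minimal sketch (assumed analysis, not the authors' exact method) of how
# blinking-tweezers trajectory statistics yield a colloidal force: at a
# given separation, short-time displacements give a drift velocity and a
# diffusion coefficient, and the force follows from F = kB*T * v / D.

import statistics

KB_T = 4.11e-21  # thermal energy at ~298 K, joules

def drift_and_diffusion(displacements, dt):
    """Estimate drift velocity and diffusion coefficient from 1-D
    center-center displacement samples (meters) over time step dt (s)."""
    mean_dx = statistics.fmean(displacements)
    var_dx = statistics.pvariance(displacements)
    v = mean_dx / dt          # drift velocity, m/s
    d = var_dx / (2.0 * dt)   # diffusion coefficient, m^2/s
    return v, d

# Synthetic displacement samples (meters) for one separation bin:
# a few-nm mean drift buried in ~150 nm of Brownian scatter.
samples = [155e-9, -145e-9, 210e-9, -160e-9, 95e-9, -120e-9, 60e-9, -55e-9]
dt = 1.0 / 30.0  # 30 frames/s camera, seconds

v, d = drift_and_diffusion(samples, dt)
force = KB_T * v / d
print(f"force ~ {force * 1e15:.1f} fN")
```

With these synthetic numbers the recovered force is on the femto-Newton scale, consistent with the force range quantified in the study.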
Communities of vertices within a giant network such as the World-Wide-Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than √(L/2) edges, where L is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary. We conclude that weighted modularity algorithms may fail to resolve communities with less than √(Wε/2) total edge weight, where W is the total edge weight in the network and ε is the maximum weight of an inter-community edge. If ε is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low ε. We modify the 'CNM' community detection algorithm to maximize weighted modularity and show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community-standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
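The weighted modularity being maximized is the standard quantity Q = (1/2W) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/(2W)] δ(cᵢ, cⱼ), where Aᵢⱼ is the edge weight and kᵢ the weighted degree. The following sketch evaluates it directly for a toy partition; the graph and partition are illustrative and not from the paper's benchmarks.

```python
# Sketch of weighted modularity for a partition of a weighted undirected
# graph: Q = (1/2W) * sum_ij [A_ij - k_i*k_j/(2W)] * delta(c_i, c_j),
# where W is the total edge weight and k_i the weighted degree of node i.

def weighted_modularity(edges, communities):
    """edges: iterable of (u, v, w); communities: dict node -> community id."""
    two_w = 0.0
    degree = {}
    for u, v, w in edges:
        two_w += 2.0 * w
        degree[u] = degree.get(u, 0.0) + w
        degree[v] = degree.get(v, 0.0) + w
    q = 0.0
    # A_ij term: each undirected intra-community edge contributes twice.
    for u, v, w in edges:
        if communities[u] == communities[v]:
            q += 2.0 * w / two_w
    # Null-model term over all ordered same-community node pairs.
    for u in degree:
        for v in degree:
            if communities[u] == communities[v]:
                q -= degree[u] * degree[v] / (two_w * two_w)
    return q

# Two triangles of unit-weight edges joined by one light edge of weight eps;
# a small eps keeps both small communities resolvable.
eps = 0.1
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
         (2, 3, eps)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(f"Q = {weighted_modularity(edges, part):.4f}")
```

Here W = 6.1 and ε = 0.1, so the resolution threshold √(Wε/2) ≈ 0.55 is well below each community's total edge weight of 3, which is the regime the reweighting strategy aims for.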
We have conducted a molecular dynamics (MD) simulation study of water confined between methyl-terminated and carboxyl-terminated alkylsilane self-assembled monolayers (SAMs) on amorphous silica substrates. In doing so, we have investigated the dynamic and structural behavior of the water molecules when compressed to loads ranging from 20 to 950 MPa for two different amounts of water (27 and 58 water molecules/nm²). Within the studied range of loads, we observe that no water molecules penetrate the hydrophobic region of the carboxyl-terminated SAMs. However, we observe that at loads larger than 150 MPa water molecules penetrate the methyl-terminated SAMs and form hydrogen-bonded chains that connect to the bulk water. The diffusion coefficient of the water molecules decreases as the water film becomes thinner and pressure increases. When compared to bulk diffusion coefficients of water molecules at the various loads, we found that the diffusion coefficients for the systems with 27 water molecules/nm² are reduced by a factor of 20 at low loads and by a factor of 40 at high loads, while the diffusion coefficients for the systems with 58 water molecules/nm² are reduced by a factor of 25 at all loads.
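Diffusion coefficients of this kind are conventionally extracted from the long-time slope of the mean-squared displacement via the Einstein relation; for in-plane diffusion of a confined film, MSD(t) ≈ 4Dt. The sketch below shows that step with synthetic MSD data; it is an illustration of the standard procedure, not the authors' analysis code, and the numbers are made up.

```python
# Illustrative sketch of extracting a diffusion coefficient from an MD
# trajectory: fit the long-time slope of the mean-squared displacement
# (MSD) and apply the Einstein relation. For lateral (2-D) diffusion of
# a confined water film, MSD(t) ~ 4 * D * t.

def msd_slope(times, msd):
    """Least-squares slope through the origin for MSD(t) = slope * t."""
    num = sum(t * m for t, m in zip(times, msd))
    den = sum(t * t for t in times)
    return num / den

# Synthetic lateral MSD data (nm^2) vs time (ns), roughly linear.
times = [1.0, 2.0, 3.0, 4.0, 5.0]
msd = [0.21, 0.40, 0.61, 0.79, 1.01]

d_lateral = msd_slope(times, msd) / 4.0  # nm^2/ns
print(f"D_lateral ~ {d_lateral:.3f} nm^2/ns")
```

Comparing such confined-film values against the bulk value computed at the same load gives the reduction factors (20x to 40x) quoted above.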
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
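The communication-volume objective above has a compact form for a 1-D row partition with the vector distributed conformally: each column j whose nonzeros touch λⱼ distinct parts costs λⱼ − 1 words, since every part needing xⱼ other than its owner must receive it. The following sketch computes that total for a toy matrix; the matrix and partition are illustrative, and this is the objective being minimized, not the nested dissection algorithm itself.

```python
# Sketch (assumed setup, not the paper's algorithm) of total communication
# volume for sparse matrix-vector multiply y = A*x under a 1-D row
# partition: each column j contributes (lambda_j - 1) words, where
# lambda_j is the number of distinct parts that must know x_j.

def comm_volume(nonzeros, row_part):
    """nonzeros: iterable of (i, j); row_part: dict row -> part id.
    x_j is owned by the same part as row j (conformal distribution)."""
    parts_per_col = {}
    for i, j in nonzeros:
        parts_per_col.setdefault(j, set()).add(row_part[i])
    volume = 0
    for j, parts in parts_per_col.items():
        parts = parts | {row_part[j]}  # the owner of x_j is not a receiver
        volume += len(parts) - 1
    return volume

# Toy 4x4 sparse matrix: rows 0-1 on part 0, rows 2-3 on part 1.
nnz = [(0, 0), (0, 2), (1, 1), (2, 0), (2, 2), (3, 1), (3, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print("total communication volume:", comm_volume(nnz, part))
```

A partitioner such as the nested dissection method searches over partitions to minimize this quantity while keeping the nonzeros (the computation) balanced across parts.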
The Heavy Ion Fusion Science Virtual National Laboratory has achieved 60-fold longitudinal pulse compression of ion beams on the Neutralized Drift Compression Experiment (NDCX) [P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)]. To focus a space-charge-dominated charge bunch to sufficiently high intensities for ion-beam-heated warm dense matter and inertial fusion energy studies, simultaneous transverse and longitudinal compression to a coincident focal plane is required. Optimizing the compression under the appropriate constraints can deliver higher intensity per unit length of accelerator to the target, thereby facilitating the creation of more compact and cost-effective ion beam drivers. The experiments utilized a drift region filled with high-density plasma in order to neutralize the space charge and current of an ≈300 keV K⁺ beam and have separately achieved transverse and longitudinal focusing to a radius <2 mm and pulse duration <5 ns, respectively. Simulation predictions and recent experiments demonstrate that a strong solenoid (B_z < 100 kG) placed near the end of the drift region can transversely focus the beam to the longitudinal focal plane. This paper reports on simulation predictions and experimental progress toward realizing simultaneous transverse and longitudinal charge bunch focusing. The proposed NDCX-II facility would capitalize on the insights gained from NDCX simulations and measurements in order to provide a higher-energy (>2 MeV) ion beam user facility for warm dense matter and inertial fusion energy-relevant target physics experiments.