Overview of White Sands deployment for stuck source recovery
Abstract not provided.
Abstract not provided.
Polymer Degradation and Stability
Chemiluminescence (CL) has been applied as a condition-monitoring technique to assess aging-related changes in a hydroxyl-terminated polybutadiene (HTPB)-based polyurethane elastomer. Initial thermal aging of this polymer was conducted between 50 and 110 °C. Two CL methods were applied to examine the degradative changes that had occurred in these aged samples: isothermal "wear-out" experiments under oxygen, yielding initial CL intensity and "wear-out" time data, and temperature-ramp experiments under inert conditions, as a measure of previously accumulated hydroperoxides or other reactive species. The sensitivities of these CL features to the prior aging exposure of the polymer were evaluated with a view to qualifying the method as a rapid screening technique for quantifying degradation levels. Both techniques yielded data representing the aging trends in this material, as shown by correlation with mechanical property changes. Initial CL rates from the isothermal experiments are the most sensitive and suitable measure for documenting material changes during the early part of thermal aging.
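The screening claim above rests on correlating CL metrics with mechanical property changes; a minimal sketch of that kind of comparison, using invented placeholder numbers rather than data from the study, might look like the following.

```python
# Minimal sketch of the comparison underlying the screening claim: correlating
# a CL metric with a mechanical property across aging times. All numbers are
# invented placeholders, not data from the study.
import numpy as np

aging_days = np.array([0, 30, 60, 120, 240])
initial_cl_rate = np.array([1.0, 1.6, 2.3, 3.9, 6.8])      # arbitrary units
elongation_at_break = np.array([520, 470, 410, 330, 220])  # percent, placeholder

# Pearson correlation between the CL metric and the mechanical property.
r = np.corrcoef(initial_cl_rate, elongation_at_break)[0, 1]
print(f"correlation between initial CL rate and elongation: r = {r:.2f}")
```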
Many experimenters at the Annular Core Research Reactor (ACRR) need to predict the neutron/gamma environment prior to testing. In some cases, the neutron/gamma environment is needed to understand the test results after the completion of an experiment. To satisfy the needs of experimenters, a model of the ACRR was developed for use with the Monte Carlo N-Particle transport codes MCNP [Br03] and MCNPX [Wa02]. The model contains adjustable safety, transient, and control rods, several of the available spectrum-modifying cavity inserts, and placeholders for experiment packages. The ACRR model was constructed such that experiment package models can be easily placed in the reactor after being developed as stand-alone units. An addition to the 'standard' model allows the FREC-II cavity to be included in the calculations. This report presents the MCNP/MCNPX model of the ACRR. Comparisons between the model and the reactor are made for various configurations, and reactivity worth curves for those configurations are presented. Examples of reactivity worth calculations for a few experiment packages are given along with the reactivity worths measured in reactor tests of those packages. Finally, calculated neutron/gamma spectra are presented.
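As a rough illustration of how eigenvalues from such a model translate into the reactivity worths mentioned above, the sketch below converts hypothetical k-eff results from two MCNP runs into a worth in dollars; the k-eff values and the effective delayed-neutron fraction are placeholders, not ACRR data.

```python
# Minimal sketch: converting MCNP/MCNPX k-eff results into reactivity worths.
# The k-eff values and beta_eff below are placeholders, not ACRR data.

BETA_EFF = 0.0073  # assumed effective delayed-neutron fraction (placeholder)

def reactivity(k_eff):
    """Reactivity rho = (k - 1) / k (dimensionless)."""
    return (k_eff - 1.0) / k_eff

def worth_in_dollars(k_with, k_without, beta_eff=BETA_EFF):
    """Reactivity worth of an experiment package or rod movement, in dollars."""
    return (reactivity(k_with) - reactivity(k_without)) / beta_eff

if __name__ == "__main__":
    # Hypothetical eigenvalues from two runs: reference core vs. core with an
    # experiment package installed in the central cavity.
    k_reference = 1.00000
    k_package = 0.99650
    print(f"package worth: {worth_in_dollars(k_package, k_reference):+.2f} $")
```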
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
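As a concept-level illustration of the sampling-style uncertainty quantification that DAKOTA supports, the sketch below draws Latin hypercube samples for two uncertain inputs and propagates them through a stand-in simulation; it shows the idea only and does not use DAKOTA's input syntax or API, and the input ranges and response function are placeholders.

```python
# Concept sketch of sampling-based UQ (Latin hypercube sampling), independent
# of DAKOTA itself. The input ranges and the "simulation" are placeholders.
import numpy as np
from scipy.stats import qmc

def simulation(x1, x2):
    """Stand-in for an external simulation code."""
    return x1**2 + np.sin(3.0 * x2)

sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=100)
samples = qmc.scale(unit_samples, [0.0, -1.0], [2.0, 1.0])  # input bounds

responses = simulation(samples[:, 0], samples[:, 1])
print(f"mean = {responses.mean():.3f}, std = {responses.std(ddof=1):.3f}")
print(f"P(response > 3) ~ {(responses > 3.0).mean():.2f}")
```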
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty of evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system, the Query, Cluster, Summarize (QCS) system, which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance through a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with ROUGE, the best-known automatic metric for summarization system evaluation. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the evaluation of each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
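The first two stages of such a pipeline can be sketched at toy scale: the snippet below uses TF-IDF plus a truncated SVD as a stand-in for Latent Semantic Indexing and k-means on unit-normalized vectors as an approximation to spherical k-means; the corpus, query, and parameter choices are placeholders, and the summarization stage (sentence trimming, HMM, pivoted QR) is omitted.

```python
# Toy-scale sketch of two QCS-like stages: TF-IDF + truncated SVD as a stand-in
# for LSI retrieval, and k-means on unit-normalized vectors as an approximation
# to spherical k-means. Corpus, query, and parameters are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

docs = [
    "earthquake damages buildings in the city",   # placeholder documents
    "tremor shakes downtown buildings",
    "new vaccine trial shows promising results",
    "clinical trial of the vaccine begins",
]
query = "earthquake building damage"              # placeholder query

# "LSI": project TF-IDF vectors onto a small number of singular directions.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)
D = normalize(lsi.fit_transform(X))               # documents, unit length
q = normalize(lsi.transform(tfidf.transform([query])))

# Retrieval: rank documents by cosine similarity to the query.
ranking = np.argsort((D @ q.T).ravel())[::-1]

# Clustering: k-means on unit vectors approximates spherical (cosine) k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D)
print("ranking:", ranking, "cluster labels:", labels)
```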
Laboratory-scale experiments simulating the injection of fresh water into brine in a Strategic Petroleum Reserve (SPR) cavern were performed at Sandia National Laboratories for various conditions of injection rate and small and large injection tube diameters. The computational fluid dynamic (CFD) code FLUENT was used to simulate these experiments to evaluate the predictive capability of FLUENT for brine-water mixing in an SPR cavern. The data-model comparisons show that FLUENT simulations predict the mixing plume depth reasonably well. Predictions of the near-wall brine concentrations compare very well with the experimental data. The simulated time for the mixing plume to reach the vessel wall was underpredicted for the small injection tubes but reasonable for the large injection tubes. The difference in the time to reach the wall is probably due to the three-dimensional nature of the mixing plume as it spreads out at the air-brine or oil-brine interface. The depth of the mixing plume as it spreads out along the interface was within a factor of 2 of the experimental data. The FLUENT simulation results predict the plume mixing accurately, especially the water concentration when the mixing plume reaches the wall. This parameter value is the most significant feature of the mixing process because it will determine the amount of enhanced leaching at the oil-brine interface.
Evidence theory provides an alternative to probability theory for representing epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor 'epistemic' indicates uncertainty arising from a lack of knowledge about the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, propagating an evidence theory representation of uncertainty through a model is more computationally demanding than propagating a probabilistic representation, and this difficulty constitutes a serious obstacle to the use of evidence theory for representing uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for representing epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
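A minimal sketch of the kind of sampling-based belief/plausibility calculation involved, for a single input whose uncertainty is specified by interval-valued focal elements with basic probability masses, is given below; the evidence structure, model, and threshold are illustrative assumptions rather than the strategy as implemented in the presentation.

```python
# Minimal sketch of a sampling-based evidence-theory propagation for one
# uncertain input described by interval-valued focal elements. The focal
# elements, masses, model, and threshold are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Evidence structure: focal elements (intervals) with basic probability masses.
focal_elements = [(0.0, 2.0), (1.0, 3.0), (2.0, 5.0)]
masses = [0.5, 0.3, 0.2]

def model(x):
    """Placeholder for an expensive simulation response."""
    return np.sin(x) + 0.1 * x**2

# Sample each focal element and record the response range over the element;
# the extremes determine whether the element contributes to belief and/or
# plausibility of the output set {y <= threshold}.
threshold = 1.5
belief = plausibility = 0.0
for (lo, hi), m in zip(focal_elements, masses):
    y = model(rng.uniform(lo, hi, size=200))
    if y.max() <= threshold:      # response is surely in the set
        belief += m
    if y.min() <= threshold:      # response is possibly in the set
        plausibility += m

print(f"Bel(y <= {threshold}) ~ {belief:.2f},  Pl(y <= {threshold}) ~ {plausibility:.2f}")
```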
This multinational, multi-phase spent fuel sabotage test program is quantifying the aerosol particles produced when the products of a high energy density device (HEDD) interact with and explosively particulate test rodlets that contain pellets of either surrogate materials or actual spent fuel. This program, which has been underway for several years, provides source-term data relevant to some sabotage scenarios involving spent fuel transport and storage casks, and to the associated risk assessments. This document focuses on an updated description of the test program and test components for all work and plans made, or revised, primarily during FY 2005 and roughly the first two-thirds of FY 2006. It also serves as a program status report as of the end of May 2006. We provide details on the significant aerosol results and observations from the recently completed Phase 2 surrogate-material tests, which used cerium oxide ceramic pellets in test rodlets plus non-radioactive fission product dopants. Results include: respirable fractions produced; amounts, nuclide content, and particle size distributions and morphology of the produced particles; status of the determination of the spent fuel ratio, SFR (the ratio of respirable particles from real spent fuel to respirables from surrogate spent fuel, measured under closely matched test conditions in a contained test chamber); and measurements of enhanced sorption of volatile fission product species onto respirable particles. We discuss progress and results for the first three, recently performed Phase 3 tests using depleted uranium oxide (DUO₂) test rodlets. We also review the status of preparations for the final Phase 4 tests in this program, which will use short rodlets containing actual spent fuel from U.S. PWR reactors, with both high- and lower-burnup fuel. These data, testing results, and test design are tailored to support and guide follow-on computer modeling of aerosol dispersal hazards and radiological consequence assessments. This spent fuel sabotage aerosol test program, performed primarily at Sandia National Laboratories with support provided by both the U.S. Department of Energy and the Nuclear Regulatory Commission, has received significant input from, and is strongly supported and coordinated by, U.S. and international program participants in Germany, France, and the U.K., as part of the international Working Group for Sabotage Concerns of Transport and Storage Casks (WGSTSC).
A technique published in SAND Report 2006-1789, ''Model Reduction of Systems with Localized Nonlinearities'', is illustrated in two problems of finite element structural dynamics. That technique, called here the Method of Locally Discontinuous Basis Vectors (LDBV), was devised to address the peculiar difficulties of model reduction for systems having spatially localized nonlinearities. Its illustration here is on two problems of different geometric and dynamic complexity, each containing localized interface nonlinearities represented by constitutive models for bolted joint behavior. As illustrated on simple problems in the earlier SAND report, the LDBV Method not only affords a reduction in size of the nonlinear systems of equations that must be solved, but also facilitates the use of much larger time steps on problems of joint macro-slip than would otherwise be possible. These benefits are more dramatic for the larger problems illustrated here. The work of both the original SAND report and this one was funded by the LDRD program at Sandia National Laboratories.
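For orientation, the sketch below shows the general pattern of a Galerkin reduction in which a nonlinear interface force is evaluated from displacements reconstructed only at the joint degrees of freedom; the matrices, basis, and joint law are placeholders, and the sketch illustrates the reduction pattern rather than the LDBV formulation itself.

```python
# Minimal sketch of a Galerkin model reduction in which the nonlinear joint
# force is evaluated from physical displacements reconstructed at the joint
# DOFs only. Matrices, the basis, and the joint law are placeholders; this
# illustrates the general reduction pattern, not the LDBV formulation itself.
import numpy as np

n, r = 200, 10                     # full and reduced model sizes (placeholders)
M = np.eye(n)                      # placeholder mass matrix
K = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
Phi = np.linalg.qr(np.random.default_rng(0).standard_normal((n, r)))[0]

joint_dofs = [99, 100]             # DOFs spanning the nonlinear interface

Mr = Phi.T @ M @ Phi               # reduced mass
Kr = Phi.T @ K @ Phi               # reduced linear stiffness

def joint_force(u_joint):
    """Placeholder nonlinear joint law (e.g., a bolted-joint constitutive model)."""
    return 1e3 * np.tanh(u_joint)

def reduced_residual(q, q_ddot, f_ext_r):
    u_joint = Phi[joint_dofs, :] @ q           # reconstruct only the joint DOFs
    f_nl_r = Phi[joint_dofs, :].T @ joint_force(u_joint)
    return Mr @ q_ddot + Kr @ q + f_nl_r - f_ext_r
```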
A laser hazard analysis is performed to evaluate whether the use of fluorescent diffuse reflectors to view incident laser beams (from a 10 W Coherent Verdi) presents a hazard based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers. The use of fluorescent diffuse reflectors in the alignment process does not pose an increased hazard because the fluorescence occurs at a different wavelength than that of the incident laser.
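For context, a worked example of the kind of diffuse-reflection check such an analysis involves is sketched below, comparing the irradiance from an ideal Lambertian reflector with a maximum permissible exposure; the reflectance, viewing geometry, and MPE value are illustrative assumptions, not figures taken from the report or from the ANSI Z136.1 tables.

```python
# Worked example of a diffuse-reflection check: irradiance from an ideal
# Lambertian reflector compared with a maximum permissible exposure (MPE).
# The reflectance, viewing geometry, and MPE below are illustrative
# assumptions, not values from the report or from ANSI Z136.1 tables.
import math

P = 10.0          # W, incident beam power (Coherent Verdi, 532 nm)
rho = 0.9         # assumed diffuse reflectance of the viewing surface
theta = 0.0       # rad, assumed viewing angle from the surface normal
r = 50.0          # cm, assumed eye-to-surface distance
MPE = 2.5e-3      # W/cm^2, commonly quoted visible-CW MPE for a 0.25 s exposure

# Irradiance at the eye from an ideal Lambertian (diffuse) reflector.
E = rho * P * math.cos(theta) / (math.pi * r**2)   # W/cm^2

print(f"E = {E:.3e} W/cm^2, MPE = {MPE:.1e} W/cm^2, ratio = {E / MPE:.2f}")
```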
The advancement of DNA cloning has significantly augmented the potential threat of a focused bioweapon assault, such as a terrorist attack. With current DNA cloning techniques, toxin genes from the most dangerous (but environmentally labile) bacterial or viral organisms can be selected and inserted into a robust host organism to produce a virtually unlimited number of deadly chimeric bioweapons. In order to neutralize such a threat, accurate detection of the expressed toxin genes, rather than classification by strain or genealogical descent of these organisms, is critical. The development of a high-throughput microarray approach will enable the detection of unknown chimeric bioweapons. We have developed a unique microfluidic approach to capture and concentrate these threat genes (mRNAs) by up to 30-fold. These captured oligonucleotides can then be used to synthesize in situ oligonucleotide copies (cDNA probes) of the captured genes. An integrated microfluidic architecture will enable us to control reagent flows, perform clean-up steps, and finally elute nanoliter volumes of the synthesized oligonucleotide probes. The integrated approach has enabled a process in which chimeric or conventional bioweapons can rapidly be identified based on their toxic function, rather than being restricted to information that may not identify the critical nature of the threat.
Localized shear deformation plays an important role in a number of geotechnical and geological processes. Slope failures, the formation and propagation of faults, cracking in concrete dams, and shear fractures in subsiding hydrocarbon reservoirs are examples of important effects of shear localization. Traditional engineering analyses of these phenomena, such as limit equilibrium techniques, make certain assumptions on the shape of the failure surface as well as other simplifications. While these methods may be adequate for the applications for which they were designed, it is difficult to extrapolate the results to more general scenarios. An alternative approach is to use a numerical modeling technique, such as the finite element method, to predict localization. While standard finite elements can model a wide variety of loading situations and geometries quite well, for numerical reasons they have difficulty capturing the softening and anisotropic damage that accompanies localization. By introducing an enhancement to the element in the form of a fracture surface at an arbitrary position and orientation in the element, we can regularize the solution, model the weakening response, and track the relative motion of the surfaces. To properly model the slip along these surfaces, the traction-displacement response must be properly captured. This report focuses on the development of a constitutive model appropriate to localizing geomaterials, and the embedding of this model into the enhanced finite element framework. This modeling covers two distinct phases. The first, usually brief, phase is the weakening response as the material transitions from intact continuum to a body with a cohesionless fractured surface. Once the cohesion has been eliminated, the response along the surface is completely frictional. We have focused on a rate- and state-dependent frictional model that captures stable and unstable slip along the surface. This model is embedded numerically into the element using a generalized trapezoidal formulation. While the focus is on the constitutive model of interest, the framework is also developed for a general surface response. This report summarizes the major research and development accomplishments for the LDRD project titled 'Cohesive Zone Modeling of Failure in Geomaterials: Formulation and Implementation of a Strong Discontinuity Model Incorporating the Effect of Slip Speed on Frictional Resistance'. This project supported a strategic partnership between Sandia National Laboratories and Stanford University by providing funding for the lead author, Craig Foster, during his doctoral research.
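As a point of reference, a minimal sketch of a Dieterich-type rate- and state-dependent friction law of the kind referenced above is given below, integrated with a simple explicit update rather than the generalized trapezoidal scheme used in the report; the parameter values are illustrative, not calibrated values from the project.

```python
# Minimal sketch of a Dieterich-type rate- and state-dependent friction law
# with the aging form of the state evolution, integrated by forward Euler.
# Parameter values are illustrative, not calibrated values from the report.
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015      # reference friction and rate/state constants
V0, Dc = 1e-6, 1e-5                # m/s reference slip rate, m critical slip distance

def friction(V, theta):
    """mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def evolve_state(theta, V, dt):
    """Aging law: d(theta)/dt = 1 - V*theta/Dc (forward-Euler step)."""
    return theta + dt * (1.0 - V * theta / Dc)

# Velocity-step test: hold V0, then jump to 10*V0 and watch mu evolve toward
# its new steady state mu0 + (a - b)*ln(10) (velocity weakening since b > a).
theta, dt = Dc / V0, 1e-3
history = []
for step in range(20000):
    V = V0 if step < 10000 else 10.0 * V0
    theta = evolve_state(theta, V, dt)
    history.append(friction(V, theta))

print(f"steady-state change in mu after the velocity step: {history[-1] - mu0:+.4f}")
```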
This work is the first to describe how to go about designing a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning to design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best quality QDCA implementation for a system.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Presto is a three-dimensional, explicit, Lagrangian transient dynamics code for the analysis of solids subjected to large, suddenly applied loads. Presto is designed for problems with large deformations, nonlinear material behavior, and contact. There is a versatile element library incorporating both continuum and structural elements. The code is designed for a parallel computing environment. This document describes the code's input, which gives users access to all of its current functionality. Presto is built in an environment that allows it to be coupled with other engineering analysis codes. The input structure for the code, which uses a concept called scope, reflects the fact that Presto can be used in a coupled environment. This guide describes the scope concept and the input from the outermost to the innermost input scopes. Within a given scope, the descriptions of input commands are grouped based on code functionality. For example, all material input command lines are described in a single section of the user's guide covering all of the material models in the code.
Abstract not provided.
Abstract not provided.
Abstract not provided.