Publications

Final report on LDRD project : single-photon-sensitive imaging detector arrays at 1600 nm

Serkland, Darwin K.; Childs, Kenton D.; Koudelka, Robert K.; Geib, K.M.; Klem, John F.; Hawkins, Samuel D.; Patel, Rupal K.

The key need that this project has addressed is a short-wave infrared light detection and ranging (LIDAR) imaging detector operating at temperatures greater than 100 K, as desired by nonproliferation and other customers. Several novel device structures were fabricated to achieve the desired avalanche photodiode (APD) performance. A primary challenge to achieving high-sensitivity APDs at 1550 nm is that the small-band-gap materials (e.g., InGaAs or Ge) necessary to detect low-energy photons exhibit higher dark counts and higher multiplication noise than materials like silicon. To overcome these historical problems, APDs were designed and fabricated using separate absorption and multiplication (SAM) regions. The absorption regions used InGaAs or Ge to leverage these materials' sensitivity at 1550 nm. Geiger-mode (GM) detection was chosen to circumvent gain noise issues in the III-V and Ge multiplication regions, while a novel Ge/Si device was built to examine the utility of transferring photoelectrons into a silicon multiplication region. Silicon is known to have very good analog and GM multiplication properties. The proposed devices represented a high-risk, high-reward approach. Therefore, one primary goal of this work was to experimentally resolve uncertainty about the novel APD structures. This work specifically examined three different designs. An InGaAs/InAlAs GM structure was proposed for the superior multiplication properties of the InAlAs; the hypothesis to be tested was whether InAlAs really presents an advantage in GM operation. A Ge/Si SAM structure was proposed, representing the best possible multiplication material (i.e., silicon); however, significant uncertainty existed about both the Ge material quality and the ability to transfer photoelectrons across the Ge/Si interface. Finally, a third, pure-germanium GM structure was proposed because bulk germanium has been reported to have better dark count properties.
However, significant uncertainty existed about its quantum efficiency at 1550 nm at the necessary operating temperature. This project has resulted in several conclusions after fabrication and measurement of the proposed structures. We have successfully demonstrated the Ge/Si proof of concept by producing high analog gain in a silicon region while absorbing in a Ge region. This has included significant Ge processing infrastructure development at Sandia. However, sensitivity is limited at low temperatures due to high dark currents that we ascribe to tunneling, leaving remaining uncertainty about whether this structure can achieve the desired performance with further development. GM detection in InGaAs/InAlAs, Ge/Si, Si, and pure-Ge devices fabricated at Sandia was shown to overcome gain noise challenges, which represents critical learning that will enable Sandia to respond to future single-photon detection needs. However, challenges to the operation of these devices in GM remain. The InAlAs multiplication region was not found to be significantly superior to current InP regions for GM operation; however, improved multiplication-region designs for InGaAs/InP APDs have been highlighted. For Ge GM detectors, it remains unclear whether an optimal trade-off of parameters can achieve the necessary sensitivity at 1550 nm. To further examine these remaining questions, as well as other application spaces for these technologies, funding for an Intelligence Community post-doc was awarded this year.
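
The dark-count versus detection trade-off central to Geiger-mode operation can be sketched with Poisson statistics. The dark count rate, gate width, and photon detection efficiency below are hypothetical illustration values, not measurements from this project:

```python
import math

def dark_count_prob(dcr_hz, gate_s):
    """Probability of at least one dark count firing within a gate (Poisson)."""
    return 1.0 - math.exp(-dcr_hz * gate_s)

def detection_prob(pde, mean_photons):
    """Probability of detecting >= 1 photon from a weak optical pulse."""
    return 1.0 - math.exp(-pde * mean_photons)

# Hypothetical figures, not values from the report:
dcr = 50e3        # dark count rate, 50 kcps
gate = 10e-9      # 10 ns gate
p_dark = dark_count_prob(dcr, gate)
p_sig = detection_prob(0.2, 1.0)   # 20% PDE, 1 photon per pulse on average
print(f"P(dark)={p_dark:.4f}, P(signal)={p_sig:.3f}")
```

Narrow gating suppresses dark counts multiplicatively, which is why gated GM operation can tolerate the higher dark count rates of small-band-gap absorbers.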

Applying New Network Security Technologies to SCADA Systems

Hurd, Steven A.; Stamp, Jason E.; Duggan, David P.; Chavez, Adrian R.

Supervisory Control and Data Acquisition (SCADA) systems for automation are very important for critical infrastructure and manufacturing operations. They have been implemented to work in a number of physical environments using a variety of hardware, software, networking protocols, and communications technologies, often before security became a paramount concern. To offer solutions to security shortcomings in the short to medium term, this project set out to identify technologies used to secure "traditional" IT networks and systems, and then assess their efficacy with respect to SCADA systems. These proposed solutions must be relatively simple to implement, reliable, and acceptable to SCADA owners and operators.

Analysis of real-time reservoir monitoring : reservoirs, strategies, & modeling

Cooper, Scott P.; Elbring, Gregory J.; Jakaboski, Blake E.; Lorenz, John C.; Mani, Seethambal S.; Normann, Randy A.; Rightley, Michael J.; van Bloemen Waanders, Bart G.; Weiss, Chester J.

The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective we investigated the use of permanent downhole sensor systems (Smart Wells) whose data are fed in real time into computational reservoir models that are integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value-of-information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new-generation sensor packaging to survive the borehole environment for long periods of time. The Value of Information (VOI) decision tree method was developed and used to assess the economic advantage of using the proposed technology; the VOI demonstrated the increased subsurface resolution afforded by additional sensor data. Our findings show that VOI studies are a practical means of ascertaining the value associated with a technology, in this case the application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to the predictions. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate that history matching improves as more information is added to the objective function, clearly indicating that sensor information can help reduce the uncertainty associated with reservoir characterization. Additional findings and approaches used are described in detail within the report.
The next generation sensors aspect of the project evaluated sensors and packaging survivability issues. Our findings indicate that packaging represents the most significant technical challenge associated with application of sensors in the downhole environment for long periods (5+ years) of time. These issues are described in detail within the report. The impact of successful reservoir monitoring programs and coincident improved reservoir management is measured by the production of additional oil and gas volumes from existing reservoirs, revitalization of nearly depleted reservoirs, possible re-establishment of already abandoned reservoirs, and improved economics for all cases. Smart Well monitoring provides the means to understand how a reservoir process is developing and to provide active reservoir management. At the same time it also provides data for developing high-fidelity simulation models. This work has been a joint effort with Sandia National Laboratories and UT-Austin's Bureau of Economic Geology, Department of Petroleum and Geosystems Engineering, and the Institute of Computational and Engineering Mathematics.
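
The VOI decision-tree idea compares the expected value of deciding with the sensor data against deciding without it. The payoffs, states, and probabilities below are toy illustration values, not figures from the report:

```python
# Toy value-of-information calculation, illustrative numbers only.
# VOI = E[value deciding WITH the sensor data] - E[value deciding WITHOUT it].

# Two equally likely reservoir states and two development choices (payoffs in $M):
payoff = {                      # payoff[decision][state]
    "infill_drill": {"high_perm": 120.0, "low_perm": -40.0},
    "do_nothing":   {"high_perm":   0.0, "low_perm":   0.0},
}
prior = {"high_perm": 0.5, "low_perm": 0.5}

# Without sensors: commit to one decision against the prior.
ev_without = max(
    sum(prior[s] * payoff[d][s] for s in prior) for d in payoff
)

# With (perfect) sensor information: best decision in each revealed state.
ev_with = sum(
    prior[s] * max(payoff[d][s] for d in payoff) for s in prior
)

voi = ev_with - ev_without
print(f"EV without info: {ev_without}, with info: {ev_with}, VOI: {voi}")
```

The same expected-value comparison extends to imperfect information by replacing the revealed state with posterior probabilities from Bayes' rule.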

Beyond the local density approximation : improving density functional theory for high energy density physics applications

Modine, N.A.; Wright, Alan F.; Muller, Richard P.; Sears, Mark P.; Wills, Ann E.; Desjarlais, Michael P.

A finite temperature version of 'exact-exchange' density functional theory (EXX) has been implemented in Sandia's Socorro code. The method uses the optimized effective potential (OEP) formalism and an efficient gradient-based iterative minimization of the energy. The derivation of the gradient is based on the density matrix, simplifying the extension to finite temperatures. A stand-alone all-electron exact-exchange capability has been developed for testing exact exchange and compatible correlation functionals on small systems. Calculations of eigenvalues for the helium atom, beryllium atom, and the hydrogen molecule are reported, showing excellent agreement with highly converged quantum Monte Carlo calculations. Several approaches to the generation of pseudopotentials for use in EXX calculations have been examined and are discussed. The difficult problem of finding a correlation functional compatible with EXX has been studied and some initial findings are reported.
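
The finite-temperature extension rests on fractional orbital occupations entering the density matrix via the Fermi-Dirac distribution. A minimal sketch, with hypothetical eigenvalues and an arbitrary electronic temperature:

```python
import math

def fermi_occ(eps, mu, kT):
    """Fermi-Dirac occupation f = 1/(exp((eps - mu)/kT) + 1),
    as used to build finite-temperature density matrices."""
    x = (eps - mu) / kT
    # Guard against overflow for states far from the chemical potential:
    if x > 50.0:
        return 0.0
    if x < -50.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

# Hypothetical eigenvalues (eV) around a chemical potential mu = 0:
for eps in (-1.0, 0.0, 1.0):
    print(eps, fermi_occ(eps, mu=0.0, kT=0.1))
```

As kT goes to zero the occupations reduce to the usual step function, recovering the zero-temperature theory.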

Development of a novel technique to assess the vulnerability of micro-mechanical system components to environmentally assisted cracking

Enos, David E.; Goods, Steven H.

Microelectromechanical systems (MEMS) will play an important functional role in future DOE weapon and Homeland Security applications. If these emerging technologies are to be applied successfully, it is imperative that the long-term degradation of the materials of construction be understood. Unlike electrical devices, MEMS devices have a mechanical aspect to their function. Some components (e.g., springs) will be subjected to stresses beyond whatever residual stresses exist from fabrication. These stresses, combined with possible abnormal exposure environments (e.g., humidity, contamination), introduce a vulnerability to environmentally assisted cracking (EAC). EAC is manifested as the nucleation and propagation of a stable crack at mechanical loads/stresses far below what would be expected based solely upon the material's mechanical properties. If not addressed, EAC can lead to sudden, catastrophic failure. Considering the materials of construction and the very small feature size, EAC represents a high-risk environmentally induced degradation mode for MEMS devices. Currently, the lack of applicable characterization techniques is preventing the needed vulnerability assessment. The objective of this work is to address this deficiency by developing techniques to detect and quantify EAC in MEMS materials and structures. Such techniques will allow real-time detection of crack initiation and propagation. The information gained will establish the appropriate combinations of environment (defining packaging requirements), local stress levels, and metallurgical factors (composition, grain size and orientation) that must be achieved to prevent EAC.

Diffusionless fluid transport and routing using novel microfluidic devices

Shediac, Renee S.; Barrett, Louise B.

Microfluidic devices have been proposed for 'Lab-on-a-Chip' applications for nearly a decade. Despite the unquestionable promise of these devices to allow rapid, sensitive and portable biochemical analysis, few practical devices exist. It is often difficult to adapt current laboratory techniques to the microscale because bench-top methods use discrete liquid volumes, while most current microfluidic devices employ streams of liquid confined in a branching network of micron-scale channels. The goal of this research was to use two-phase liquid flows to create discrete packets of liquid. Once divided, these packets can be moved controllably within the microchannels without loss of material. Each packet is equivalent to a minute test tube, holding a fraction from a separation or an aliquot to be reacted. We report on the fabrication of glass and PDMS (polydimethylsiloxane) devices that create and store packets.

Penetrator reliability investigation and design exploration : from conventional design processes to innovative uncertainty-capturing algorithms

Swiler, Laura P.; Hough, Patricia D.; Gray, Genetha A.; Chiesa, Michael L.; Heaphy, Robert T.; Thomas, Stephen W.; Trucano, Timothy G.

This project focused on research and algorithmic development in optimization under uncertainty (OUU) problems driven by earth penetrator (EP) designs. While taking into account uncertainty, we addressed three challenges in current simulation-based engineering design and analysis processes. The first challenge required leveraging small local samples, already constructed by optimization algorithms, to build effective surrogate models. We used Gaussian Process (GP) models to construct these surrogates. We developed two OUU algorithms using 'local' GPs (OUU-LGP) and one OUU algorithm using 'global' GPs (OUU-GGP) that appear competitive with or better than current methods. The second challenge was to develop a methodical design process based on multi-resolution, multi-fidelity models. We developed a Multi-Fidelity Bayesian Auto-regressive process (MF-BAP). The third challenge involved the development of tools that are computationally feasible and accessible. We created MATLAB® and initial DAKOTA implementations of our algorithms.
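
The core OUU idea, optimizing a statistic of the response over uncertain inputs, can be sketched with a sample-average approximation. The toy objective and uniform uncertainty below are invented for illustration; they stand in for the EP simulation and the GP surrogates, not reproduce them:

```python
import random

random.seed(0)

def f(x, xi):
    """Toy performance model: design variable x, uncertain parameter xi."""
    return (x - 2.0) ** 2 + xi * x

# Sample-average approximation of E[f(x, xi)] with xi ~ Uniform(-0.5, 0.5):
xis = [random.uniform(-0.5, 0.5) for _ in range(2000)]

def expected_f(x):
    return sum(f(x, xi) for xi in xis) / len(xis)

# Crude grid search standing in for the optimizer driving the surrogate:
best_x = min((0.01 * i for i in range(401)), key=expected_f)
print(f"robust optimum near x = {best_x:.2f}")
```

Surrogate-based OUU replaces the inner `f` evaluations with a cheap fitted model (here that would be a GP) so that the expectation and the optimizer do not hammer the expensive simulation directly.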

Meeting report: Iraq oil ministry needs assessment workshop, 3-5 September 2006

Littlefield, Adriane L.; Pregenzer, Arian L.

Representatives from the U.S. Department of Energy, the National Nuclear Security Administration, and Sandia National Laboratories met with mid-level representatives from Iraq's oil and gas companies and with former employees and senior managers of Iraq's Ministry of Oil September 3-5 in Amman, Jordan. The goals of the workshop were to assess the needs of the Iraqi Oil Ministry and industry, to provide information about capabilities at DOE and the national laboratories relevant to Iraq, and to develop ideas for potential projects.

MEMS-based arrays of micro ion traps for quantum simulation scaling

Blain, Matthew G.; Jokiel, Bernhard J.; Tigges, Chris P.

In this late-start Tier I Seniors Council sponsored LDRD, we have designed, simulated, microfabricated, packaged, and tested ion traps to extend the current quantum simulation capabilities of macro-ion traps to tens of ions in one and two dimensions in monolithically microfabricated micrometer-scaled MEMS-based ion traps. Such traps are being microfabricated and packaged at Sandia's MESA facility in a unique tungsten MEMS process that has already produced arrays of millions of micron-sized cylindrical ion traps for mass spectroscopy applications. We define and discuss the motivation for quantum simulation using trapped ions, show the results of efforts in designing, simulating, and microfabricating W-based MEMS ion traps at Sandia's MESA facility, and describe in some detail our development of a custom ion-trap chip packaging technology that enables the implementation of these devices in quantum physics experiments.

Long duration shock pulse shaping using nylon webbing

Proceedings of the 2006 SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2006

Davie, Neil T.

Typical shock testing requirements specify shock pulses of several hundred to several thousand g's, with pulse durations usually less than a few milliseconds. A requirement to qualify a shipping container to a head-on tractor-trailer crash environment led to the development of a new test technique capable of low-g (< 50 g), long-duration (> 100 ms) shock pulses. This technique utilizes nylon webbing engaged in tension to shape the pulse produced by the interaction of two sleds on an indoor track. A combination of experimental and computational methodology was used to successfully develop the test technique to solve a specific testing requirement. The process used to develop the test technique is emphasized in this paper, where a prudent balance between experiment and analysis resulted in a cost-effective solution. The results show that the quasi-static load-elongation behavior of the nylon webbing can be used to adequately model the dynamic behavior of the webbing, allowing design of the experimental setup with a simple computational model. The quasi-static load-elongation measurements are described along with the development of the computational model. Results of a full-scale experiment are presented, showing that the required shock pulse could be achieved with this test technique.
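
A first-cut estimate of the pulse treats the tensioned webbing as a linear spring arresting a moving sled, which gives a half-sine pulse in closed form. This is a sketch under that linearization; the mass, stiffness, and impact speed are hypothetical, whereas the paper's design used measured load-elongation data:

```python
import math

def halfsine_pulse(mass_kg, k_n_per_m, v0_m_s):
    """Half-sine shock pulse from a linear-spring arrest of a moving sled:
    duration = pi/omega, peak deceleration = v0*omega, omega = sqrt(k/m)."""
    omega = math.sqrt(k_n_per_m / mass_kg)
    duration = math.pi / omega            # time in contact with the webbing
    peak_g = v0_m_s * omega / 9.81        # peak deceleration in g
    return duration, peak_g

# Hypothetical sled: 1000 kg arrested from 13.4 m/s (~30 mph)
# by webbing with an effective stiffness of ~1.0e6 N/m:
dur, g = halfsine_pulse(1000.0, 1.0e6, 13.4)
print(f"duration = {dur*1000:.0f} ms, peak = {g:.0f} g")
```

Even this crude model shows why a soft, long-stroke element like webbing is needed: duration scales as sqrt(m/k), so low-g, 100 ms pulses demand far lower stiffness than conventional shock-test fixtures provide.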

Widefield laser Doppler vibrometer using high-speed cameras

Proceedings of the 2006 SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2006

Reu, Phillip L.; Hansche, Bruce D.

Laser Doppler vibrometers (LDVs) have become the standard for out-of-plane velocity measurement because they are non-contacting, have wide signal bandwidth and high resolution, and are relatively easy to use. A typical drawback of LDVs is their limitation to single point measurements. This limitation is mitigated by scanning technology, but at the cost of the time required to sample all the points on a surface and the inability to measure transient, non-repetitive events. A more proficient alternative is a Widefield LDV (WLDV), which heterodynes the Doppler frequency down from the typical MHz range (depending on velocity) to the kHz range. In WLDV, the modulated signal is detected via a high-speed CMOS camera and the optimum modulation signal is calculated from a measured velocity on the target obtained with a traditional single point LDV. This paper will present preliminary lab results of a full-field velocity animation obtained using a WLDV system to measure the velocity of a block on a turntable. A comparison highlighting the similarities and differences of similar systems, such as Temporal Speckle Pattern Interferometry and traditional LDV is discussed.
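
The heterodyne step can be illustrated numerically: the Doppler shift for out-of-plane velocity v at wavelength lambda is f_D = 2v/lambda, and mixing with a reference offset leaves a beat slow enough for a camera to sample. The wavelength and reference offset below are illustrative choices, not parameters from the paper:

```python
def doppler_shift(v_m_s, lam_m):
    """Out-of-plane Doppler frequency shift f_D = 2*v/lambda."""
    return 2.0 * v_m_s / lam_m

lam = 633e-9              # HeNe wavelength (illustrative choice)
v = 0.5                   # 0.5 m/s surface velocity
f_d = doppler_shift(v, lam)

# Heterodyning against a nearby reference offset leaves only the beat,
# |f_D - f_ref|, in the camera-resolvable kHz range:
f_ref = 1.579e6           # hypothetical reference offset, Hz
beat = abs(f_d - f_ref)
print(f"Doppler: {f_d/1e6:.3f} MHz -> beat: {beat:.0f} Hz")
```

This is why a kHz-frame-rate CMOS camera suffices: the optics shift the measurement band down before detection rather than asking the sensor to resolve MHz signals.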

Development and validation of a viscoelastic foam model for encapsulated components

Proceedings of the 2006 SEM Annual Conference and Exposition on Experimental and Applied Mechanics 2006

Hinnerichs, Terry D.; Urbina, Angel U.; Paez, Thomas L.; O'Gorman, Christian C.

Accurate material models are fundamental to predictive structural finite element models. Because potting foams are routinely used to mitigate shock and vibration of encapsulated components in mechanical systems, accurate material models of foams are needed. A linear-viscoelastic foam constitutive model has been developed to represent the foam's stiffness and damping throughout an application space defined by temperature, strain rate or frequency, and strain level. Validation of this linear-viscoelastic model, which is integrated into the Salinas structural dynamics code, is achieved by modeling and testing a series of structural geometries of increasing complexity that have been designed to ensure sensitivity to material parameters. Both experimental and analytical uncertainties are being quantified to ensure a fair assessment of model validity. Quantitative model validation metrics are being developed to provide a means of comparing analytical model predictions to observations made in the experiments. This paper is one of several parallel papers documenting the validation process for simple to complex structures with foam-encapsulated components. This paper describes the development of a linear-viscoelastic constitutive model for EF-AR20 epoxy foam with density, modulus, and damping uncertainties and applies the model to the simplest of the series of foam/component structural geometries for the calibration and validation of the constitutive model.
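
A single-term Prony series (the standard linear solid) is the simplest linear-viscoelastic form capturing frequency-dependent stiffness and damping of the kind described above. The parameters below are hypothetical, not the calibrated EF-AR20 values:

```python
import math

def complex_modulus(omega, e_inf, e_i, tau_i):
    """Single-term Prony-series complex modulus:
    E*(w) = E_inf + E_i * (i*w*tau)/(1 + i*w*tau).
    Real part = storage modulus, imaginary part = loss modulus."""
    iwt = complex(0.0, omega * tau_i)
    return e_inf + e_i * iwt / (1.0 + iwt)

# Hypothetical foam parameters (MPa, MPa, s), for illustration only:
e_inf, e_i, tau = 10.0, 40.0, 1e-3
for f_hz in (1.0, 100.0, 10000.0):
    m = complex_modulus(2.0 * math.pi * f_hz, e_inf, e_i, tau)
    print(f"{f_hz:>7.0f} Hz: storage {m.real:.2f} MPa, "
          f"loss {m.imag:.2f} MPa, tan d {m.imag / m.real:.3f}")
```

The stiffness climbs from the rubbery plateau E_inf toward E_inf + E_i at high frequency, with damping peaking near omega*tau = 1; a calibrated model stacks several such terms to cover the full temperature/frequency application space.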

Representations and metaphors for the structure of synchronous multimedia collaboration within task-oriented, time-constrained distributed teams

Proceedings of the Annual Hawaii International Conference on System Sciences

Linebarger, John M.; Scholand, Andrew J.; Ehlen, Mark E.

Based primarily on the results of a month-long experiment and a crisis management exercise, synchronous multimedia collaboration within a task-oriented, time-constrained distributed team appears to exhibit three layers of structure. The first layer is episodic, and results in collections of related multimedia collaboration artifacts that can be called "chapters" or "scenes" in the collaboration. The second layer is the multivalent nature of collaboration, in which collaboration conversations at multiple subgroup levels take place at the same time. The third, top-level, layer is the agenda that drives the collaboration. The implications for the design of synchronous collaboration systems are that multiple views, representations, and metaphors for this conversation structure are needed. Chapter views, subgroup views, and agenda views are presented as alternative packaging mechanisms and entry points into the collaboration data. Other metaphors and presentations include the collaboration tree and infinitely recursive conference room, as well as network graphs of subgroup structure and agenda-based group awareness. © 2006 IEEE.

Strain fields around high-energy ion tracks in α-quartz

Journal of Applied Physics

Follstaedt, D.M.; Norman, A.K.; Doyle, Barney L.; McDaniel, F.D.

Transmission electron microscopy has been used to image the tracks of high-energy 197Au26+ (374 MeV) and 127I18+ (241 MeV) ions incident in a nonchanneling direction through a prethinned specimen of hexagonal α-quartz (SiO2). These ions have high electronic stopping powers in quartz, 24 and 19 keV/nm, respectively, which are sufficient to produce a disordered latent track. When the tracks are imaged with diffraction contrast using several different reciprocal lattice vectors, they exhibit a radial strain extending outward from their disordered centerline approximately 16 nm into the crystalline surroundings. The images are consistent with a radial strain field with cylindrical symmetry around the amorphous track, like that found in models developed to account for the lateral expansion of amorphous SiO2 films produced by irradiation with high-energy ions. These findings provide an experimental basis for increased confidence in such modeling. © 2006 American Institute of Physics.

Flute instability growth on a magnetized plasma column

Physics of Plasmas

Rose, D.V.; Genoni, T.C.; Welch, D.R.; Mehlhorn, Thomas A.; Porter, J.L.; Ditmire, T.

The growth of the flute-type instability for a field-aligned plasma column immersed in a uniform magnetic field is studied. Particle-in-cell simulations are compared with a semi-analytic dispersion analysis of the drift cyclotron instability in cylindrical geometry with a Gaussian density profile in the radial direction. For the parameters considered here, the dispersion analysis gives a local maximum for the peak growth rates as a function of R/r_i, where R is the Gaussian characteristic radius and r_i is the ion gyroradius. The electrostatic and electromagnetic particle-in-cell simulation results give azimuthal and radial mode numbers that are in reasonable agreement with the dispersion analysis. The electrostatic simulations give linear growth rates that are in good agreement with the dispersion analysis results, while the electromagnetic simulations yield growth rate trends that are similar to the dispersion analysis but that are not in quantitative agreement. These differences are ascribed to higher initial field fluctuation levels in the electromagnetic field solver. Overall, the simulations allow the examination of both the linear and nonlinear evolution of the instability in this physical system up to and beyond the point of wave energy saturation. © 2006 American Institute of Physics.
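
The scaling parameter R/r_i above involves the thermal ion gyroradius, r_i = v_th/omega_ci. A sketch of that quantity using the convention v_th = sqrt(kT/m) (conventions differ by factors of sqrt(2)); the temperature and field are hypothetical, not the paper's parameters:

```python
import math

QE = 1.602e-19      # elementary charge, C
MP = 1.673e-27      # proton mass, kg

def gyroradius(temp_ev, b_tesla, mass_kg=MP, charge=QE):
    """Thermal ion gyroradius r_i = v_th / omega_ci, with v_th = sqrt(kT/m)
    and omega_ci = qB/m."""
    v_th = math.sqrt(temp_ev * QE / mass_kg)
    omega_ci = charge * b_tesla / mass_kg
    return v_th / omega_ci

# Hypothetical parameters (not those of the paper): 100 eV protons in 1 T.
r_i = gyroradius(100.0, 1.0)
print(f"r_i = {r_i * 1e3:.2f} mm")
```

Comparing this against the Gaussian characteristic radius R of the column gives the dimensionless R/r_i on which the computed peak growth rates depend.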

Performance of thermal cells and batteries made with plasma-sprayed cathodes and anodes

Journal of Power Sources

Guidotti, R.A.; Reinhardt, Frederick W.; Dai, J.; Reisner, D.E.

Cathodes for thermally activated ("thermal") batteries based on CoS2 and LiCl-LiBr-LiF electrolyte and FeS2 (pyrite) and LiCl-KCl eutectic were prepared by thermal spraying catholyte mixtures onto graphite-paper substrates. Composite separator-cathode deposits were also prepared in the same manner by sequential thermal spraying of LiCl-KCl-based separator material onto a pyrite-cathode substrate. These materials were then tested in single cells over a temperature range of 400-600 °C and in 5-cell and 15-cell batteries. A limited number of battery tests were conducted with the separator-cathode composites and plasma-sprayed Li(Si) anodes, the first report of an all-plasma-sprayed thermal battery. Thermal spraying offers distinct advantages over conventional pressed-powder parts for fabrication of thin electrodes for short-life thermal batteries. The plasma-sprayed electrodes have lower impedances than the corresponding pressed-powder parts due to improved particle-particle contact. © 2006 Elsevier B.V. All rights reserved.

Are curved focal planes necessary for wide-field survey telescopes?

Proceedings of SPIE - The International Society for Optical Engineering

Ackermann, Mark R.; McGraw, John T.; Zimmer, Peter C.

The last decade has seen significant interest in wide field of view (FOV) telescopes for sky survey and space surveillance applications. Prompted by this interest, a multitude of wide-field designs have emerged. While all designs result from optimization of competing constraints, one of the more controversial design choices is whether such telescopes require flat or curved focal planes. For imaging applications, curved focal planes are not an obvious choice. Thirty years ago with mostly analytic design tools, the solution to wide-field image quality appeared to be curved focal planes. Today however, with computer aided optimization, high image quality can be achieved over flat focal surfaces. For most designs, the small gains in performance offered by curved focal planes are more than offset by the complexities and cost of curved CCDs. Modern design techniques incorporating reflective and refractive correctors appear to make a curved focal surface an unnecessary complication. Examination of seven current, wide FOV projects (SDSS, MMT, DCT, LSST, PanStarrs, HyperSuprime and DARPA SST) suggests there is little to be gained from a curved focal plane. The one exception might be the HyperSuprime instrument where performance goals are severely stressing refractive prime-focus corrector capabilities.
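
One way to quantify the trade-off these designs weigh is the defocus between a flat detector and a curved focal surface, which in the parabolic (small-sag) approximation is h^2/(2R). The field size and radius of curvature below are hypothetical illustration values, not taken from any of the surveyed projects:

```python
def sag(h_mm, radius_mm):
    """Defocus between a flat detector and a curved focal surface of
    radius R at field height h (parabolic approximation, valid for h << R)."""
    return h_mm ** 2 / (2.0 * radius_mm)

# Hypothetical wide-field camera: 300 mm half-diagonal flat focal plane,
# focal surface radius of curvature 5000 mm:
dz = sag(300.0, 5000.0)
print(f"edge-of-field defocus: {dz:.1f} mm")
```

Whether a sag of this size matters depends on the depth of focus; modern corrector designs flatten the focal surface so the residual sag stays inside it, which is the argument against paying for curved CCDs.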

Chemiluminescence as a condition monitoring method for thermal aging and lifetime prediction of an HTPB elastomer

Polymer Degradation and Stability

Celina, M.; Trujillo, A.B.; Gillen, K.T.; Minier, Leanna M.G.

Chemiluminescence (CL) has been applied as a condition monitoring technique to assess aging-related changes in a hydroxyl-terminated-polybutadiene based polyurethane elastomer. Initial thermal aging of this polymer was conducted between 110 and 50 °C. Two CL methods were applied to examine the degradative changes that had occurred in these aged samples: isothermal "wear-out" experiments under oxygen, yielding initial CL intensity and "wear-out" time data, and temperature-ramp experiments under inert conditions as a measure of previously accumulated hydroperoxides or other reactive species. The sensitivities of these CL features to prior aging exposure of the polymer were evaluated with the aim of qualifying this method as a quick screening technique for quantifying degradation levels. Both techniques yielded data representing the aging trends in this material via correlation with mechanical property changes. Initial CL rates from the isothermal experiments are the most sensitive and suitable approach for documenting material changes during the early part of thermal aging. © 2006 Elsevier Ltd. All rights reserved.
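
Extrapolating accelerated-aging results like these to service temperature typically uses an Arrhenius time-temperature shift factor. A sketch with a hypothetical activation energy and accelerated life, not the values fitted in the paper:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def shift_factor(temp_c, ref_c, ea_j_mol):
    """Arrhenius time-temperature shift factor a_T of temp_c relative to ref_c:
    a_T = exp(-Ea/R * (1/T - 1/T_ref))."""
    t, tref = temp_c + 273.15, ref_c + 273.15
    return math.exp(-ea_j_mol / R_GAS * (1.0 / t - 1.0 / tref))

# Hypothetical activation energy of 90 kJ/mol (not the paper's fitted value):
ea = 90e3
a = shift_factor(110.0, 50.0, ea)
life_110c_h = 500.0   # hypothetical accelerated-aging lifetime at 110 C
print(f"a_T(110C vs 50C) = {a:.0f}; predicted 50C life ~ {life_110c_h * a:.0f} h")
```

The same shift factor is what lets CL data taken across 50-110 °C be superposed onto a single master curve for lifetime prediction.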

MCNP/MCNPX model of the annular core research reactor

Depriest, Kendall D.; Cooper, Philip J.; Parma, Edward J.

Many experimenters at the Annular Core Research Reactor (ACRR) have a need to predict the neutron/gamma environment prior to testing. In some cases, the neutron/gamma environment is needed to understand the test results after the completion of an experiment. In an effort to satisfy the needs of experimenters, a model of the ACRR was developed for use with the Monte Carlo N-Particle transport codes MCNP [Br03] and MCNPX [Wa02]. The model contains adjustable safety, transient, and control rods, several of the available spectrum-modifying cavity inserts, and placeholders for experiment packages. The ACRR model was constructed such that experiment package models can be easily placed in the reactor after being developed as stand-alone units. An addition to the 'standard' model allows the FREC-II cavity to be included in the calculations. This report presents the MCNP/MCNPX model of the ACRR. Comparisons are made between the model and the reactor for various configurations. Reactivity worth curves for the various reactor configurations are presented. Examples of reactivity worth calculations for a few experiment packages are presented along with the measured reactivity worth from the reactor test of the experiment packages. Finally, calculated neutron/gamma spectra are presented.
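
Reactivity worths quoted from paired eigenvalue calculations follow from rho = (k_eff - 1)/k_eff, often expressed in dollars by dividing by the delayed neutron fraction. A sketch: the k_eff values are hypothetical, and the beta_eff used is a generic uranium-fueled figure, not the ACRR-specific number:

```python
def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def worth_dollars(k_with, k_without, beta_eff=0.0073):
    """Reactivity worth of an experiment package in dollars.
    beta_eff here is a typical U-fueled value, an assumption for illustration."""
    return (reactivity(k_with) - reactivity(k_without)) / beta_eff

# Hypothetical eigenvalues from paired MCNP runs, with and without a package:
print(f"package worth: {worth_dollars(1.0020, 1.0050):+.2f} $")
```

Because statistical uncertainty on each Monte Carlo k_eff propagates into the difference, small worths demand tightly converged eigenvalue calculations in both runs.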

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

Brown, Shannon L.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual

Brown, Shannon L.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

QCS: a system for querying, clustering and summarizing documents

Dunlavy, Daniel D.

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best-known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluating each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format.
Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
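The pivoted QR step at the end of the QCS pipeline can be illustrated generically: treat a cluster as a term-by-sentence matrix and greedily pick the columns (sentences) that best span it. The sketch below is a plain greedy column-pivoted QR selection on a toy matrix; it assumes nothing about QCS's actual matrices, weighting, or preprocessing.

```python
import numpy as np

def pivoted_qr_select(A, k):
    """Greedy column-pivoted QR: pick k columns (sentences) of the
    term-by-sentence matrix A that best span its column space."""
    R = A.astype(float).copy()
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))          # column with largest residual norm
        chosen.append(j)
        q = R[:, j] / norms[j]
        R -= np.outer(q, q @ R)            # deflate: remove the chosen direction
    return chosen

# Toy term-by-sentence matrix: 4 terms x 5 sentences.
A = np.array([[2, 0, 1, 0, 2],
              [0, 3, 0, 1, 0],
              [1, 0, 2, 0, 1],
              [0, 1, 0, 2, 0]])
picked = pivoted_qr_select(A, 2)
```

Each selected column is removed from the span of the remaining ones before the next pick, so near-duplicate sentences (columns 0 and 4 here are identical) are not selected twice.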

Numerical simulations of lab-scale brine-water mixing experiments

Webb, Stephen W.; Khalil, Imane K.

Laboratory-scale experiments simulating the injection of fresh water into brine in a Strategic Petroleum Reserve (SPR) cavern were performed at Sandia National Laboratories for various conditions of injection rate and small and large injection tube diameters. The computational fluid dynamic (CFD) code FLUENT was used to simulate these experiments to evaluate the predictive capability of FLUENT for brine-water mixing in an SPR cavern. The data-model comparisons show that FLUENT simulations predict the mixing plume depth reasonably well. Predictions of the near-wall brine concentrations compare very well with the experimental data. The simulated time for the mixing plume to reach the vessel wall was underpredicted for the small injection tubes but reasonable for the large injection tubes. The difference in the time to reach the wall is probably due to the three-dimensional nature of the mixing plume as it spreads out at the air-brine or oil-brine interface. The depth of the mixing plume as it spreads out along the interface was within a factor of 2 of the experimental data. The FLUENT simulation results predict the plume mixing accurately, especially the water concentration when the mixing plume reaches the wall. This parameter value is the most significant feature of the mixing process because it will determine the amount of enhanced leaching at the oil-brine interface.

A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory

Oberkampf, William L.

Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
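A minimal sketch of the sampling-based propagation idea: for a single input described by interval focal elements with basic probability assignments, sample the model over each focal element and accumulate belief (mass whose whole sampled image satisfies the statement) and plausibility (mass whose image intersects it). The model `f`, the focal elements, and the threshold are hypothetical placeholders, not the strategy described in the report.

```python
import random

def f(x):
    # Stand-in for an expensive model.
    return x * x

# Basic probability assignment: interval focal elements with masses.
focal_elements = [((0.0, 1.0), 0.5), ((0.5, 2.0), 0.3), ((1.5, 3.0), 0.2)]

def belief_plausibility(threshold, n_samples=200, seed=1):
    rng = random.Random(seed)
    bel = pl = 0.0
    for (lo, hi), mass in focal_elements:
        ys = [f(rng.uniform(lo, hi)) for _ in range(n_samples)]
        if max(ys) <= threshold:   # entire sampled image satisfies y <= threshold
            bel += mass
        if min(ys) <= threshold:   # some of the sampled image satisfies it
            pl += mass
    return bel, pl

bel, pl = belief_plausibility(threshold=1.0)
```

The gap between belief and plausibility is exactly the less restrictive uncertainty statement the abstract describes: probability theory would force a single number between the two.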

Spent fuel sabotage aerosol test program: FY 2005-06 testing and aerosol data summary

Brockmann, John E.; Lucero, Daniel A.; Steyskal, Michele S.; Gregson, Michael W.

This multinational, multi-phase spent fuel sabotage test program is quantifying the aerosol particles produced when the products of a high energy density device (HEDD) interact with and explosively particulate test rodlets that contain pellets of either surrogate materials or actual spent fuel. This program has been underway for several years. This program provides source-term data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments. This document focuses on an updated description of the test program and test components for all work and plans made, or revised, primarily during FY 2005 and about the first two-thirds of FY 2006. It also serves as a program status report as of the end of May 2006. We provide details on the significant findings on aerosol results and observations from the recently completed Phase 2 surrogate material tests using cerium oxide ceramic pellets in test rodlets plus non-radioactive fission product dopants. Results include: respirable fractions produced; amounts, nuclide content, and produced particle size distributions and morphology; status on determination of the spent fuel ratio, SFR (the ratio of respirable particles from real spent fuel/respirables from surrogate spent fuel, measured under closely matched test conditions, in a contained test chamber); and measurements of enhanced volatile fission product species sorption onto respirable particles. We discuss progress and results for the first three, recently performed Phase 3 tests using depleted uranium oxide, DUO₂, test rodlets. We also review the status of preparations for the final Phase 4 tests in this program, using short rodlets containing actual spent fuel from U.S. PWR reactors, with both high- and lower-burnup fuel.
These data plus testing results and design are tailored to support and guide follow-on computer modeling of aerosol dispersal hazards and radiological consequence assessments. This spent fuel sabotage aerosol test program, performed primarily at Sandia National Laboratories, with support provided by both the U.S. Department of Energy and the Nuclear Regulatory Commission, had significant inputs from, and is strongly supported and coordinated by, both the U.S. and international program participants in Germany, France, and the U.K., as part of the international Working Group for Sabotage Concerns of Transport and Storage Casks, WGSTSC.

Finite element calculations illustrating a method of model reduction for the dynamics of structures with localized nonlinearities

Griffith, Daniel G.; Segalman, Daniel J.

A technique published in SAND Report 2006-1789, "Model Reduction of Systems with Localized Nonlinearities", is illustrated in two problems of finite element structural dynamics. That technique, called here the Method of Locally Discontinuous Basis Vectors (LDBV), was devised to address the peculiar difficulties of model reduction of systems having spatially localized nonlinearities. Its illustration here is on two problems of different geometric and dynamic complexity, each containing localized interface nonlinearities represented by constitutive models for bolted joint behavior. As illustrated on simple problems in the earlier SAND report, the LDBV method not only affords a reduction in the size of the nonlinear systems of equations that must be solved, but also facilitates the use of much larger time steps on problems of joint macro-slip than would be possible otherwise. These benefits are more dramatic for the larger problems illustrated here. The work of both the original SAND report and this one was funded by the LDRD program at Sandia National Laboratories.
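The LDBV method itself is defined in the cited SAND report, but the general flavor of reducing the linear portion of a structure while keeping the nonlinear-interface degrees of freedom physical can be sketched with a classic Craig-Bampton-style reduction. This is an illustrative analog under simplifying assumptions (unit interior mass, a toy spring chain), not the LDBV algorithm.

```python
import numpy as np

def reduce_keeping_interface(K, M, interface, n_modes):
    """Condense the linear interior DOFs onto static constraint modes plus a
    few fixed-interface modes, keeping the (nonlinear) interface DOFs
    physical. Assumes unit interior mass for the fixed-interface modes."""
    n = K.shape[0]
    b = np.asarray(interface)                  # interface (nonlinear) DOFs
    i = np.setdiff1d(np.arange(n), b)          # linear interior DOFs
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Psi = -np.linalg.solve(Kii, Kib)           # static constraint modes
    vals, vecs = np.linalg.eigh(Kii)           # fixed-interface modes (M_ii ~ I)
    Phi = vecs[:, :n_modes]
    T = np.zeros((n, len(b) + n_modes))        # reduction basis
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi
    return T.T @ K @ T, T.T @ M @ T, T

# 6-DOF fixed-free spring chain; DOF 5 plays the role of the joint interface.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0
Kr, Mr, T = reduce_keeping_interface(K, np.eye(n), [5], n_modes=2)
```

Because the interface DOF stays physical in the reduced coordinates, a nonlinear joint model can be evaluated on it directly, which is the point of this family of reductions.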

Hazard analysis of long term viewing of visible laser light off of fluorescent diffuse reflective surfaces (post-it)

Augustoni, Arnold L.

A laser hazard analysis was performed to evaluate whether the use of fluorescent diffuse reflectors to view incident laser beams (Coherent Verdi, 10 W) presents a hazard based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers. The use of fluorescent diffuse reflectors in the alignment process does not pose an increased hazard because the fluorescence occurs at a different wavelength than that of the incident laser.

Development of a high-throughput microfluidic integrated microarray for the detection of chimeric bioweapons

Hux, Gary A.; Shepodd, Timothy J.

The advancement of DNA cloning has significantly augmented the potential threat of a focused bioweapon assault, such as a terrorist attack. With current DNA cloning techniques, toxin genes from the most dangerous (but environmentally labile) bacterial or viral organisms can now be selected and inserted into a robust organism to produce a virtually unlimited number of deadly chimeric bioweapons. In order to neutralize such a threat, accurate detection of the expressed toxin genes, rather than classification by strain or genealogical descent of these organisms, is critical. The development of a high-throughput microarray approach will enable the detection of unknown chimeric bioweapons. We have developed a unique microfluidic approach to capture and concentrate these threat genes (mRNAs) up to 30-fold. These captured oligonucleotides can then be used to synthesize in situ oligonucleotide copies (cDNA probes) of the captured genes. An integrated microfluidic architecture will enable us to control flows of reagents, perform clean-up steps, and finally elute nanoliter volumes of synthesized oligonucleotide probes. The integrated approach has enabled a process where chimeric or conventional bioweapons can rapidly be identified based on their toxic function, rather than being restricted to information that may not identify the critical nature of the threat.

Modeling brittle fracture, slip weakening, and variable friction in geomaterials with an embedded strong discontinuity finite element

Regueiro, Richard A.; Borja, R.I.; Foster, C.D.

Localized shear deformation plays an important role in a number of geotechnical and geological processes. Slope failures, the formation and propagation of faults, cracking in concrete dams, and shear fractures in subsiding hydrocarbon reservoirs are examples of important effects of shear localization. Traditional engineering analyses of these phenomena, such as limit equilibrium techniques, make certain assumptions on the shape of the failure surface as well as other simplifications. While these methods may be adequate for the applications for which they were designed, it is difficult to extrapolate the results to more general scenarios. An alternative approach is to use a numerical modeling technique, such as the finite element method, to predict localization. While standard finite elements can model a wide variety of loading situations and geometries quite well, for numerical reasons they have difficulty capturing the softening and anisotropic damage that accompanies localization. By introducing an enhancement to the element in the form of a fracture surface at an arbitrary position and orientation in the element, we can regularize the solution, model the weakening response, and track the relative motion of the surfaces. To properly model the slip along these surfaces, the traction-displacement response must be properly captured. This report focuses on the development of a constitutive model appropriate to localizing geomaterials, and the embedding of this model into the enhanced finite element framework. This modeling covers two distinct phases. The first, usually brief, phase is the weakening response as the material transitions from intact continuum to a body with a cohesionless fractured surface. Once the cohesion has been eliminated, the response along the surface is completely frictional. We have focused on a rate- and state-dependent frictional model that captures stable and unstable slip along the surface. 
This model is embedded numerically into the element using a generalized trapezoidal formulation. While the focus is on the constitutive model of interest, the framework is also developed for a general surface response. This report summarizes the major research and development accomplishments for the LDRD project titled 'Cohesive Zone Modeling of Failure in Geomaterials: Formulation and Implementation of a Strong Discontinuity Model Incorporating the Effect of Slip Speed on Frictional Resistance'. This project supported a strategic partnership between Sandia National Laboratories and Stanford University by providing funding for the lead author, Craig Foster, during his doctoral research.

On the design of reversible QDCA systems

Murphy, Sarah M.; DeBenedictis, Erik

This work is the first to describe how to design a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning a design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best-quality QDCA implementation for a system.

Presto user's guide version 2.6

Koteras, James R.

Presto is a Lagrangian, three-dimensional explicit, transient dynamics code for the analysis of solids subjected to large, suddenly applied loads. Presto is designed for problems with large deformations, nonlinear material behavior, and contact. There is a versatile element library incorporating both continuum and structural elements. The code is designed for a parallel computing environment. This document describes the input for the code that gives users access to all of the current functionality in the code. Presto is built in an environment that allows it to be coupled with other engineering analysis codes. The input structure for the code, which uses a concept called scope, reflects the fact that Presto can be used in a coupled environment. This guide describes the scope concept and the input from the outermost to the innermost input scopes. Within a given scope, the descriptions of input commands are grouped based on code functionality. For example, all material input command lines are described in a section of the user's guide for all of the material models in the code.

Atomistic modeling of nanowires, small-scale fatigue damage in cast magnesium, and materials for MEMS

Zimmerman, Jonathan A.

Lightweight and miniaturized weapon systems are driving the use of new materials in design such as microscale materials and ultra low-density metallic materials. Reliable design of future weapon components and systems demands a thorough understanding of the deformation modes in these materials that comprise the components and a robust methodology to predict their performance during service or storage. Traditional continuum models of material deformation and failure are not easily extended to these new materials unless microstructural characteristics are included in the formulation. For example, in LIGA Ni and Al-Si thin films, the physical size is on the order of microns, a scale approaching key microstructural features. For a new potential structural material, cast Mg offers a high stiffness-to-weight ratio, but the microstructural heterogeneity at various scales requires a structure-property continuum model. Processes occurring at the nanoscale and microscale develop certain structures that drive material behavior. The objective of the work presented in this report was to understand material characteristics in relation to mechanical properties at the nanoscale and microscale in these promising new material systems. Research was conducted primarily at the University of Colorado at Boulder, employing tightly coupled experimentation and simulation to study damage at various material size scales under monotonic and cyclic loading conditions. Experimental characterization of nano/micro damage was accomplished by novel techniques such as in-situ environmental scanning electron microscopy (ESEM), 1 MeV transmission electron microscopy (TEM), and atomic force microscopy (AFM). Simulations supporting the experimental efforts included modified embedded atom method (MEAM) atomistic simulations at the nanoscale and single-crystal micromechanical finite element simulations.
This report summarizes the major research and development accomplishments for the LDRD project titled 'Atomistic Modeling of Nanowires, Small-scale Fatigue Damage in Cast Magnesium, and Materials for MEMS'. This project supported a strategic partnership between Sandia National Laboratories and the University of Colorado at Boulder by providing funding for the lead author, Ken Gall, and his students, while he was a member of the University of Colorado faculty.

Advanced engineering environment pilot project

Schwegel, Jill S.; Pomplun, Alan R.

The Advanced Engineering Environment (AEE) is a concurrent engineering concept that enables real-time process tooling design and analysis, collaborative process flow development, automated document creation, and full process traceability throughout a product's life cycle. The AEE will enable NNSA's Design and Production Agencies to collaborate through a singular integrated process. Sandia National Laboratories and Parametric Technology Corporation (PTC) are working together on a prototype AEE pilot project to evaluate PTC's product collaboration tools relative to the needs of the NWC. The primary deliverable for the project is a set of validated criteria for defining a complete commercial off-the-shelf (COTS) solution to deploy the AEE across the NWC.

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity

Adams, Brian M.; Wittwer, Jonathan W.; Bichon, Barron J.; Carnes, Brian C.; Copps, Kevin D.; Eldred, Michael S.; Hopkins, Matthew M.; Neckels, David C.; Notz, Patrick N.; Subia, Samuel R.

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
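The estimate-then-adapt loop that drives this milestone can be illustrated far from finite elements, with one-dimensional adaptive quadrature: a local error estimate decides where refinement is needed, so the result is verified to the requested tolerance automatically. This is purely an analogy for the refine-where-the-estimator-says loop, not the milestone's actual error estimators or meshes.

```python
def adaptive_trapezoid(f, a, b, tol=1e-6):
    """Estimate local error by comparing one trapezoid against two;
    refine (recurse) only where the estimate exceeds the split tolerance."""
    m = 0.5 * (a + b)
    coarse = 0.5 * (b - a) * (f(a) + f(b))
    fine = 0.5 * (m - a) * (f(a) + f(m)) + 0.5 * (b - m) * (f(m) + f(b))
    if abs(fine - coarse) < tol:
        return fine
    return (adaptive_trapezoid(f, a, m, 0.5 * tol) +
            adaptive_trapezoid(f, m, b, 0.5 * tol))

# Integrate x^2 on [0, 1]; refinement concentrates only where needed.
result = adaptive_trapezoid(lambda x: x * x, 0.0, 1.0)
```

As in the milestone, the user specifies a tolerance rather than a mesh, and the adaptivity delivers a solution-verified answer without manual convergence studies.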

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual

Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Eldred, Michael S.; Brown, Shannon L.; Adams, Brian M.; Dunlavy, Daniel D.; Gay, David M.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

Noncontact surface thermometry for microsystems: LDRD final report

Serrano, Justin R.; Phinney, Leslie M.

We describe a Laboratory Directed Research and Development (LDRD) effort to develop and apply laser-based thermometry diagnostics for obtaining spatially resolved temperature maps on working microelectromechanical systems (MEMS). The goal of the effort was to cultivate diagnostic approaches that could adequately resolve the extremely fine MEMS device features, required no modifications to MEMS device design, and which did not perturb the delicate operation of these extremely small devices. Two optical diagnostics were used in this study: microscale Raman spectroscopy and microscale thermoreflectance. Both methods use a low-energy, nonperturbing probe laser beam, whose arbitrary wavelength can be selected for a diffraction-limited focus that meets the need for micron-scale spatial resolution. Raman is exploited most frequently, as this technique provides a simple and unambiguous measure of the absolute device temperature for almost any MEMS semiconductor or insulator material under steady state operation. Temperatures are obtained from the spectral position and width of readily isolated peaks in the measured Raman spectra with a maximum uncertainty near ±10 K and a spatial resolution of about 1 micron. Application of the Raman technique is demonstrated for V-shaped and flexure-style polycrystalline silicon electrothermal actuators, and for a GaN high-electron-mobility transistor. The potential of the Raman technique for simultaneous measurement of temperature and in-plane stress in silicon MEMS is also demonstrated, and future Raman-variant diagnostics for ultra-high spatio-temporal resolution probing are discussed. Microscale thermoreflectance has been developed as a complement for the primary Raman diagnostic. Thermoreflectance exploits the small-but-measurable temperature dependence of surface optical reflectivity for diagnostic purposes.
The temperature-dependent reflectance behavior of bulk silicon, SUMMiT-V polycrystalline silicon films, and metal surfaces is presented. The results for bulk silicon are applied to silicon-on-insulator (SOI) fabricated actuators, where measured temperatures with a maximum uncertainty near ±9 K, and 0.75-micron in-plane spatial resolution, are achieved for the reflectance-based measurements. Reflectance-based temperatures are found to be in good agreement with Raman-measured temperatures from the same device.
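The conversion from Raman peak position to temperature can be sketched with a linear calibration. The numbers below (silicon's Stokes peak near 520.7 cm⁻¹ at room temperature, a shift of roughly -0.022 cm⁻¹ per kelvin) are commonly quoted literature values used here for illustration; they are not the report's calibration.

```python
def raman_temperature(peak_cm1, t_ref=300.0, peak_ref_cm1=520.7, dw_dt=-0.022):
    """Linear Stokes-peak-shift thermometry: the silicon Raman peak softens
    (moves to lower wavenumber) approximately linearly as temperature rises.
    Calibration constants are illustrative literature values."""
    return t_ref + (peak_cm1 - peak_ref_cm1) / dw_dt

# A peak measured 4.4 1/cm below the room-temperature position
# implies the device is running roughly 200 K above the reference.
device_temp = raman_temperature(516.3)
```

In practice the peak width provides a second, independent temperature indicator, which is why the abstract cites both spectral position and width.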

Critical infrastructure systems of systems assessment methodology

Depoy, Jennifer M.; Phelan, James M.; Sholander, Peter E.; Varnado, G.B.; Wyss, Gregory D.; Darby, John; Walter, Andrew W.

Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a "willingness to pay" avoidance approach.
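The structure of a path-based conditional risk estimate can be sketched as follows. Everything here is a hypothetical placeholder (per-element probabilities, the two example paths, the scaling weight); the report's actual scoring, categories, and evidence-based evaluation are far more detailed.

```python
def path_success(p_stop_per_element):
    """Probability the adversary defeats every protection element on a path,
    assuming independent detect/delay/respond elements."""
    p = 1.0
    for p_stop in p_stop_per_element:
        p *= (1.0 - p_stop)
    return p

def conditional_risk(paths, consequence, wtp_scale=1.0):
    """Conditional risk given an attack: the adversary is credited with the
    most vulnerable path, and the consequence term is scaled by an
    avoidance ('willingness to pay') weight."""
    worst = max(path_success(p) for p in paths)
    return worst * consequence * wtp_scale

# Two hypothetical attack paths; entries are P(adversary is stopped) per element.
cyber_paths = [[0.9, 0.5], [0.6, 0.6]]
risk = conditional_risk(cyber_paths, consequence=100.0, wtp_scale=0.8)
```

Taking the maximum over paths reflects the conservative assumption that a capable adversary finds and uses the weakest route through the protection system.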

Stable local oscillator microcircuit

Brocato, Robert W.

This report describes the development of a Stable Local Oscillator (StaLO) microcircuit. The StaLO accepts a 100 MHz input signal and produces output signals at 1.2, 3.3, and 3.6 GHz. The circuit is built as a multi-chip module (MCM), since it makes use of integrated circuit technologies in silicon and lithium niobate as well as discrete passive components. The StaLO uses a comb generator followed by surface acoustic wave (SAW) filters. The comb generator creates a set of harmonic components of the 100 MHz input signal. The SAW filters are narrow bandpass filters used to select the desired component and reject all others. The resulting circuit has very low sideband power levels and low phase noise (both less than -40 dBc), limited primarily by the phase noise level of the input signal.
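The comb-then-filter signal chain can be demonstrated numerically: a nonlinear element turns the 100 MHz input into a pulse train rich in harmonics, and a narrow bandpass around one comb line recovers a clean tone. This is a numerical cartoon of the principle, not a model of the StaLO hardware; the pulse threshold and filter bandwidth are illustrative.

```python
import numpy as np

fs = 40e9                         # sample rate, well above the 3.6 GHz output
N = 80000                         # 2 microseconds: an integer number of periods
t = np.arange(N) / fs
f0 = 100e6

# Comb generator: a strongly nonlinear element converts the 100 MHz sine
# into narrow pulses, whose spectrum is a comb of harmonics of 100 MHz.
comb = (np.sin(2 * np.pi * f0 * t) > 0.95).astype(float)

# Idealized SAW bandpass at 3.3 GHz: keep one comb line, reject the rest.
spectrum = np.fft.rfft(comb)
freqs = np.fft.rfftfreq(N, 1 / fs)
spectrum[np.abs(freqs - 3.3e9) > 50e6] = 0.0
tone = np.fft.irfft(spectrum, N)
```

Because the comb lines sit exactly 100 MHz apart, a filter narrower than half that spacing isolates a single line, which is why the SAW filters' narrow passbands set the sideband rejection of the circuit.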
