Publications

Results 71801–71900 of 99,299

Transportation scenarios for risk analysis

Weiner, Ruth F.

Transportation risk, like any risk, is defined by the risk triplet: what can happen (the scenario), how likely it is (the probability), and the resulting consequences. This paper evaluates the development of transportation scenarios, the associated probabilities, and the consequences. The most likely radioactive materials transportation scenario is routine, incident-free transportation, which has a probability indistinguishable from unity. Accident scenarios in radioactive materials transportation are of three different types: accidents in which there is no impact on the radioactive cargo, accidents in which some gamma shielding may be lost but there is no release of radioactive material, and accidents in which radioactive material may potentially be released. Accident frequencies, obtainable from recorded data validated by the U.S. Department of Transportation, are considered equivalent to accident probabilities in this study. Probabilities of the different types of accidents are conditional probabilities, conditional on an accident occurring, and are developed from event trees. The development of all of these probabilities, together with the associated highway and rail accident event trees, is discussed in this paper.
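
To make the conditional-probability structure concrete, the sketch below walks a simple event tree for the three accident types named above. The accident frequency and branch probabilities are invented for illustration; they are not values from the paper or from DOT data.

```python
# Hypothetical event-tree sketch: illustrative branch probabilities only,
# not values from the paper or from DOT accident records.
accident_frequency = 1e-6        # accidents per shipment-km (assumed)
branches = {
    "no_impact_on_cargo":  0.95,  # conditional on an accident occurring
    "shielding_loss_only": 0.04,
    "potential_release":   0.01,
}
assert abs(sum(branches.values()) - 1.0) < 1e-12  # branches are exhaustive

# Unconditional probability of each accident scenario per shipment-km:
for scenario, p_cond in branches.items():
    print(f"{scenario}: {accident_frequency * p_cond:.2e} per shipment-km")

# The routine, incident-free scenario carries essentially all the probability.
print(f"incident_free: {1.0 - accident_frequency:.6f}")
```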

More Details

LDRD final report: leveraging multi-way linkages on heterogeneous data

Dunlavy, Daniel M.; Kolda, Tamara G.

This report is a summary of the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' project, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that scientifically meaningful factorizations can be recovered even with large amounts of missing data (50% or more). The project has involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.
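
As a rough illustration of optimization-based CP fitting of the kind the report describes, the sketch below fits a rank-2 CP model to a small three-way array by plain gradient descent. It is a generic toy, not the CPOPT algorithm or its treatment of missing data, and all sizes and step settings are assumptions.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Rebuild the full tensor from factor matrices: X_ijk = sum_r A_ir B_jr C_kr.
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_fit(X, rank=2, steps=3000, lr=0.01, seed=0):
    """Toy gradient descent on the least-squares CP objective."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1
    for _ in range(steps):
        R = cp_reconstruct(A, B, C) - X            # residual tensor
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)   # gradient w.r.t. A
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A -= lr * gA
        B -= lr * gB
        C -= lr * gC
    return A, B, C

# Tiny usage check on a synthetic rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_fit(X, rank=2)
print(np.linalg.norm(cp_reconstruct(A, B, C) - X) / np.linalg.norm(X))
```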

More Details

Generalized high order compact methods

Spotz, William S.

The fundamental ideas of the high order compact method are combined with the generalized finite difference method. The result is a finite difference method that works on unstructured, nonuniform grids, and is more accurate than one would classically expect from the number of grid points employed.
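
For context, the classical fourth-order compact relation for the second derivative on a uniform grid of spacing h (the standard textbook scheme that the generalized, unstructured-grid method extends; shown here only as a reference point) reads:

```latex
\frac{1}{12}\left(f''_{i-1} + 10\,f''_{i} + f''_{i+1}\right)
  = \frac{f_{i-1} - 2 f_{i} + f_{i+1}}{h^{2}} + O(h^{4})
```

Because the derivative values are coupled implicitly across neighboring points, the stencil attains fourth-order accuracy while touching only three nodes; the generalized method seeks comparable behavior without the uniform-grid assumption.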

More Details

Modeling cortical circuits

Rothganger, Fredrick R.; Rohrer, Brandon R.; Verzi, Stephen J.; Xavier, Patrick G.

The neocortex is perhaps the highest region of the human brain, where auditory and visual perception take place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software that runs on parallel computing hardware.
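
A minimal sketch of a perceptron-like unit written as a discrete-time difference equation is shown below. The specific leaky-update form, names, and parameters are illustrative assumptions, not the project's actual cortical-circuit model.

```python
import numpy as np

# Illustrative leaky perceptron-like unit updated by a difference equation:
#   a[t+1] = (1 - alpha) * a[t] + alpha * sigmoid(W x[t] + b)
# where alpha sets how quickly activation tracks the weighted input.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(a, x, W, b, alpha=0.2):
    """One discrete-time update of the unit's activation vector."""
    return (1.0 - alpha) * a + alpha * sigmoid(W @ x + b)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5)) * 0.5   # 3 units driven by 5 inputs (assumed sizes)
b = np.zeros(3)
a = np.zeros(3)
for t in range(50):
    x = rng.standard_normal(5)          # stand-in for afferent input
    a = step(a, x, W, b)
print(a)
```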

More Details

Peer-to-peer architectures for exascale computing: LDRD final report

Mayo, Jackson R.; Vorobeychik, Yevgeniy; Armstrong, Robert C.; Minnich, Ronald G.; Rudish, Donald W.

The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring on the order of millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: the Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has been paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use. The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately ≥ 10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
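
For context, the serial core of the Jacobi relaxation example mentioned above looks roughly like the following. This is a conventional single-node sketch of Jacobi iteration for the steady heat (Laplace) equation; the fault-oblivious, P2P-distributed aspects studied in the project are not represented here, and the grid size and boundary values are assumed.

```python
import numpy as np

# Jacobi relaxation for the 2-D steady-state heat equation on a unit square
# with fixed boundary temperatures: each sweep replaces every interior point
# with the average of its four neighbors.

def jacobi(u, tol=1e-5, max_sweeps=20_000):
    for sweep in range(max_sweeps):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, sweep
        u = u_new
    return u, max_sweeps

n = 32
u = np.zeros((n, n))
u[0, :] = 1.0                 # hot top boundary; other edges held at zero
solution, sweeps = jacobi(u)
print(f"stopped after {sweeps} sweeps")
```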

More Details

Long-Term Environmental Stewardship (LTES) life-cycle material management at Sandia National Laboratories

Nagy, Michael D.

The Long-Term Environmental Stewardship (LTES) mission is to ensure long-term protection of human health and the environment, and proactive management toward sustainable use and protection of natural and cultural resources affected by any Sandia National Laboratories (SNL) operations and operational legacies. The primary objectives of the LTES program are to: (1) protect the environment from present and future operations; (2) preserve and protect natural and cultural resources; and (3) apply environmental life-cycle management to SNL operations.

More Details

Parallel octree-based hexahedral mesh generation for Eulerian to Lagrangian conversion

Owen, Steven J.; Staten, Matthew L.

Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced, including a primal-contouring approach for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also presented, as well as an extension of the algorithm to handle tetrahedral meshes. We describe new scalable MPI-based implementations of these procedures and a new software module, Sculptor, which has been developed for use as an embedded component of CTH, along with its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.

More Details

International physical protection self-assessment tool for chemical facilities

Stiles, Linda L.; Tewell, Craig R.; Burdick, Brent; Lindgren, Eric

This report is the final report for Laboratory Directed Research and Development (LDRD) Project No. 130746, International Physical Protection Self-Assessment Tool for Chemical Facilities. The goal of the project was to develop an exportable, low-cost, computer-based risk assessment tool for small to medium size chemical facilities. The tool would assist facilities in improving their physical protection posture while protecting their proprietary information. In FY2009, the project team proposed a comprehensive evaluation of safety and security regulations in the target geographical area, Southeast Asia. This approach was later modified and the team worked instead on developing a methodology for identifying potential targets at chemical facilities. Milestones proposed for FY2010 included characterizing the international/regional regulatory framework, finalizing the target identification and consequence analysis methodology, and developing, reviewing, and piloting the software tool. The project team accomplished the initial goal of developing potential target categories for chemical facilities; however, the additional milestones proposed for FY2010 were not pursued and the LDRD funding was therefore redirected.

More Details

Hydrogen effects on materials for CNG/H2 blends

Somerday, Brian P.; Keller, Jay O.

No concerns were identified for hydrogen-enriched compressed natural gas (HCNG) in steel storage tanks if material strength is < 950 MPa. We recommend evaluating H2-assisted fatigue cracking in higher-strength steels at the H2 partial pressure of the blend. Limited fatigue testing on higher-strength steel cylinders in H2 shows promising results. Impurities in compressed natural gas (CNG) (e.g., CO) may provide an extrinsic mechanism for mitigating H2-assisted fatigue cracking in steel tanks.

More Details

Quantification of margins and uncertainty for risk-informed decision analysis

Alvin, Kenneth F.

QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision-making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic), and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying epistemic uncertainties such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiments. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.
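
As a toy illustration of the margin-and-uncertainty bookkeeping at the heart of QMU, the sketch below computes one common summary, the confidence ratio (margin divided by aggregated uncertainty). All quantities, the threshold, and the aggregation rule are invented for illustration and do not reflect any stockpile data or lab-specific practice.

```python
# Hypothetical QMU bookkeeping: a performance quantity must stay above a
# required threshold. Margin M is the best-estimate performance minus the
# requirement; U aggregates uncertainty in both. M/U > 1 suggests the
# requirement is met with some confidence; values near or below 1 flag risk.
best_estimate = 12.0        # assumed best-estimate performance (arbitrary units)
requirement   = 9.0         # assumed minimum acceptable performance
margin = best_estimate - requirement

u_epistemic = 1.2           # assumed systematic (knowledge) uncertainty
u_aleatory  = 0.8           # assumed unit-to-unit (random) variability
uncertainty = (u_epistemic**2 + u_aleatory**2) ** 0.5   # one simple aggregation

print(f"M = {margin:.2f}, U = {uncertainty:.2f}, M/U = {margin/uncertainty:.2f}")
```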

More Details

In-situ observation of ErD2 formation during D2 loading via neutron diffraction

Rodriguez, Marko A.; Snow, Clark S.; Wixom, Ryan R.

In an effort to better understand the structural changes occurring during hydrogen loading of erbium target materials, we have performed in situ D2 loading of erbium metal (powder) at temperature (450 °C) with simultaneous neutron diffraction analysis. This experiment tracked the conversion of Er metal to the α erbium deuteride (solid-solution) phase and then into the β (fluorite) phase. Complete conversion to ErD2.0 was accomplished at 10 Torr D2 pressure with deuterium fully occupying the tetrahedral sites in the fluorite lattice.

More Details

Novel detection methods for radiation-induced electron-hole pairs

Cich, Michael J.; Derzon, Mark S.; Martinez, Marino; Nordquist, Christopher D.; Vawter, Gregory A.

Most common ionizing radiation detectors typically rely on one of two general methods: collection of charge generated by the radiation, or collection of light produced by recombination of excited species. Substantial efforts have been made to improve the performance of materials used in these types of detectors, e.g., to raise the operating temperature or to improve the energy resolution, timing, or tracking ability. However, regardless of the material used, all these detectors are limited in performance by statistical variation in the collection efficiency, whether for charge or photons. We examine three alternative schemes for detecting ionizing radiation that do not rely on traditional direct collection of the carriers or photons produced by the radiation. The first method detects refractive index changes in a resonator structure. The second looks at alternative means to sense the chemical changes caused by radiation on a scintillator-type material. The final method examines the possibilities of sensing the perturbation caused by radiation on the transmission of an RF transmission line structure. Aspects of the feasibility of each approach are examined and recommendations made for further work.

More Details

Aerosol cluster impact and break-up: II. Atomic and Cluster Scale Models

Lechman, Jeremy B.

Understanding the interaction of aerosol particle clusters/flocs with surfaces is an area of interest for a number of processes in chemical, pharmaceutical, and powder manufacturing as well as in steam-tube rupture in nuclear power plants. Developing predictive capabilities for these applications involves coupled phenomena on multiple length and timescales, from the process macroscopic scale (~1 m) to the multi-cluster interaction scale (1 mm-0.1 m) to the single cluster scale (~1,000-10,000 particles) to the particle scale (10 nm-10 µm) interactions, and on down to the sub-particle, atomic scale interactions. The focus of this report is on the single cluster scale, although work directed toward developing better models of particle-particle interactions by considering sub-particle scale interactions and phenomena is also described. In particular, results of mesoscale (i.e., particle to single cluster scale) discrete element method (DEM) simulations for aerosol cluster impact with rigid walls are presented. The particle-particle interaction model is based on JKR adhesion theory and is implemented as an enhancement to the granular package in the LAMMPS code. The theory behind the model is outlined and preliminary results are shown. Additionally, results from atomistic classical molecular dynamics simulations are described as a means of developing higher fidelity models of particle-particle interactions. Ultimately, the results from these and other studies at various scales must be collated to provide systems level models with accurate 'sub-grid' information for design, analysis and control of the underlying systems processes.
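
To give a sense of the particle-particle adhesion scale set by JKR theory, the sketch below evaluates two generic textbook relations with assumed parameter values. This is not the project's LAMMPS implementation, and the numbers are illustrative only.

```python
import math

# JKR adhesion between two elastic spheres: the pull-off (separation) force
# depends only on the work of adhesion w and the reduced radius R,
#   F_pulloff = (3/2) * pi * w * R,
# and the zero-load contact radius satisfies a0^3 = 6 * pi * w * R^2 / K,
# where K = (4/3) * E* is the effective elastic constant.

w = 0.05         # work of adhesion, J/m^2 (assumed)
R = 5e-7         # reduced radius, m (assumed, two ~1 um particles)
E_star = 5e9     # effective contact modulus, Pa (assumed)
K = (4.0 / 3.0) * E_star

F_pulloff = 1.5 * math.pi * w * R
a0 = (6.0 * math.pi * w * R**2 / K) ** (1.0 / 3.0)
print(f"pull-off force ~ {F_pulloff:.2e} N, zero-load contact radius ~ {a0:.2e} m")
```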

More Details

Sensitivity analysis techniques for models of human behavior

Naugle, Asmeret B.

Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, several promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
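
As an example of the kind of global, variance-based analysis alluded to here, the sketch below estimates first-order Sobol indices for a toy model with an interaction term. The model, sample sizes, and input ranges are assumptions for illustration; this is not the project's actual model or analysis pipeline.

```python
import numpy as np

# First-order Sobol indices via the pick-freeze (Saltelli) scheme:
#   S_i = Var(E[Y | X_i]) / Var(Y).
# The toy model includes an interaction (x0 * x2) that one-at-a-time
# methods would understate.

def model(x):
    return x[:, 0] + 2.0 * x[:, 1] + 5.0 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-1, 1, (n, d))
B = rng.uniform(-1, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # vary only input i between matrices
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i} ~ {S_i:.3f}")
```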

More Details

Enterprise analytics

Spomer, Judith E.

Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search never seems to find anything useful or relevant. The challenge is to provide results that are ranked according to the user's definition of relevance. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences and re-rank results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the user's search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
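
A minimal sketch of click-informed re-ranking in the spirit described here is shown below. The blending rule, weights, and data structures are assumptions for illustration, not Sandia's actual search-engine integration.

```python
from collections import defaultdict

# Re-rank engine results by blending the engine's relevance score with the
# historical click-through rate (CTR) observed for (query, document) pairs.
clicks = defaultdict(int)       # (query, doc) -> number of result clicks
impressions = defaultdict(int)  # (query, doc) -> number of times shown

def record(query, doc, clicked):
    impressions[(query, doc)] += 1
    if clicked:
        clicks[(query, doc)] += 1

def rerank(query, engine_results, weight=0.5):
    """engine_results: list of (doc, engine_score), higher score is better."""
    def blended(item):
        doc, score = item
        shown = impressions[(query, doc)]
        ctr = clicks[(query, doc)] / shown if shown else 0.0
        return (1.0 - weight) * score + weight * ctr
    return sorted(engine_results, key=blended, reverse=True)

# Example: user behavior gradually promotes the document people actually click.
record("ldrd report", "doc_b", clicked=True)
record("ldrd report", "doc_b", clicked=True)
record("ldrd report", "doc_a", clicked=False)
print(rerank("ldrd report", [("doc_a", 0.9), ("doc_b", 0.8)]))
```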

More Details

Detection of exposure damage in composite materials using Fourier transform infrared technology

Roach, Dennis P.; Duvall, Randy L.

Goal: to detect the subtle changes in laminate composite structures brought about by thermal, chemical, ultraviolet, and moisture exposure, and to compare the sensitivity of an array of NDI methods, including Fourier Transform Infrared Spectroscopy (FTIR), for detecting subtle differences in composite materials due to deterioration. Inspection methods applied: ultrasonic pulse echo, through-transmission ultrasonics, thermography, resonance testing, mechanical impedance analysis, eddy current, low frequency bond testing, and FTIR. Comparisons between the NDI methods are being used to establish the potential of FTIR to provide the necessary sensitivity to non-visible, yet significant, damage in the resin and fiber matrix of composite structures. Comparison of NDI results with short beam shear (SBS) tests is being used to relate NDI sensitivity to reduction in structural performance. FTIR is a chemical analysis technique that measures the infrared intensity versus wavelength of light reflected from the surface of a structure, providing chemical and physical information via this signature. Advances in instrumentation have resulted in hand-held portable devices that allow for field use (a few seconds per scan). The method shows promise for production quality assurance and in-service applications on composite aircraft structures (scarfed repairs). Statistical analyses of the frequency spectra produced by FTIR interrogations are being used to produce an NDI technique for assessing material integrity. Conclusions are: (1) NDI can be used to assess loss of composite laminate integrity brought about by thermal, chemical, ultraviolet, and moisture exposure. (2) Degradation trends between SBS strength and exposure levels (temperature and time) have been established for different materials. (3) Various NDI methods have been applied to evaluate damage and relate this to loss of integrity; pulse-echo UT shows the greatest sensitivity. (4) FTIR shows promise for damage detection and calibration to predict structural integrity (short beam shear). (5) Detection of damage at medium exposure levels (possibly resin matrix degradation only) is more difficult and requires additional study. (6) These are initial results only; the program is continuing with additional heat, UV, chemical, and water exposure test specimens.

More Details

Quantifying the debonding of inclusions through tomography and computational homology

Foulk, James W.; Jin, Helena; Lu, Wei-Yang; Mota, Alejandro

This report describes a Laboratory Directed Research and Development (LDRD) project to use synchrotron-radiation computed tomography (SRCT) data to determine the conditions and mechanisms that lead to void nucleation in rolled alloys. The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL) has provided SRCT data for a few specimens of 7075-T7351 aluminum plate (widely used for aerospace applications) stretched to failure, loaded in directions perpendicular and parallel to the rolling direction. The resolution of the SRCT data is 900 nm, which allows elucidation of the mechanisms governing void growth and coalescence. This resolution is not fine enough, however, for nucleation. We propose the use of statistics and image-processing techniques to obtain sub-resolution scale information from these data, and thus determine where in the specimen and when during the loading program nucleation occurs and the mechanisms that lead to it. Quantitative analysis of the tomography data, however, leads to the conclusion that the reconstruction process compromises the information obtained from the scans. Alternate, more powerful reconstruction algorithms are needed to address this problem, but those fall beyond the scope of this project.

More Details

Multivariate analysis of progressive thermal desorption coupled gas chromatography-mass spectrometry

Van Benthem, Mark H.; Borek, Theodore T.; Mowry, Curtis D.; Kotula, Paul G.

The thermal decomposition of polydimethylsiloxane (PDMS) compounds, Sylgard® 184 and 186, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a method of producing multiway data using a stepped thermal desorption. The technique involves sequentially heating a sample of the material of interest with subsequent analysis in a commercial GC/MS system. The decomposition chromatograms were analyzed using multivariate analysis tools including principal component analysis (PCA), factor rotation employing the varimax criterion, and multivariate curve resolution. The results of the analysis show seven components related to offgassing of various fractions of siloxanes that vary as a function of temperature. Thermal desorption coupled with gas chromatography-mass spectrometry (TD/GC-MS) is a powerful analytical technique for analyzing chemical mixtures. It has great potential in numerous analytic areas including materials analysis, sports medicine and the detection of designer drugs, and biological research for metabolomics. Data analysis is complicated, far from automated, and can result in high false positive or false negative rates. We have demonstrated a step-wise TD/GC-MS technique that removes more volatile compounds from a sample before extracting the less volatile compounds. This creates an additional dimension of separation before the GC column, while simultaneously generating three-way data. Sandia's proven multivariate analysis methods, when applied to these data, have several advantages over current commercial options. The approach has also demonstrated potential for success in finding and enabling identification of trace compounds. Several challenges remain, however, including understanding the sources of noise in the data, outlier detection, improving the data pretreatment and analysis methods, developing a software tool for ease of use by the chemist, and demonstrating our belief that this multivariate analysis will enable superior differentiation capabilities. In addition, noise and system artifacts challenge the analysis of GC-MS data collected on lower cost equipment, ubiquitous in commercial laboratories. This research has the potential to affect many areas of analytical chemistry including materials analysis, medical testing, and environmental surveillance. It could also provide a method to measure adsorption parameters for chemical interactions on various surfaces by measuring desorption as a function of temperature for mixtures. We have presented results of a novel method for examining offgas products of a common PDMS material. Our method utilizes a stepped TD/GC-MS data acquisition scheme that may be almost totally automated, coupled with multivariate analysis schemes. This method of data generation and analysis can be applied to a number of materials aging and thermal degradation studies.
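
To illustrate the basic multivariate step, the sketch below runs a generic PCA via SVD on a mean-centered data matrix. The data shapes are invented (rows standing in for desorption temperature steps, columns for spectral/chromatographic channels); this is not Sandia's analysis code and does not include the factor rotation or curve-resolution steps.

```python
import numpy as np

# Generic PCA of a (samples x variables) matrix by singular value decomposition.
rng = np.random.default_rng(0)
X = rng.random((12, 400))            # stand-in for 12 desorption steps x 400 channels

Xc = X - X.mean(axis=0)              # mean-center each channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # sample scores on each component
loadings = Vt                        # component "spectra" (loadings)
explained = s**2 / np.sum(s**2)      # fraction of variance per component

print("variance explained by first 3 components:", explained[:3])
```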

More Details

Application of the DG-1199 methodology to the ESBWR and ABWR

Kalinich, Donald; Walton, Fotini; Gauntt, Randall O.

Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic Simplified Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence and importance of key input parameters and features of the models.

More Details

Hardware authentication using transmission spectra modified optical fiber

Romero, Juan A.; Grubbs, Robert K.

The ability to authenticate the source and integrity of data is critical to the monitoring and inspection of special nuclear materials, including hardware related to weapons production. Current methods rely on electronic encryption/authentication codes housed in monitoring devices. This invites questions about how authentication information is implemented and protected within an electronic component, necessitating EMI shielding and possibly an on-board power source to maintain the information in memory. By using atomic layer deposition (ALD) techniques on photonic band gap (PBG) optical fibers, we will explore the potential to randomly manipulate the output spectrum and intensity of an input light source. This randomization could produce unique signatures authenticating devices, with the potential to authenticate data. An external light source projected through the fiber with a spectrometer at the exit would 'read' the unique signature. No internal power or computational resources would be required.

More Details

A bio-synthetic interface for discovery of viral entry mechanisms

Negrete, Oscar N.; Hayden, Carl C.

Understanding and defending against pathogenic viruses is an important public health and biodefense challenge. The focus of our LDRD project has been to uncover the mechanisms enveloped viruses use to identify and invade host cells. We have constructed interfaces between viral particles and synthetic lipid bilayers. This approach provides a minimal setting for investigating the initial events of host-virus interaction - (i) recognition of, and (ii) entry into the host via membrane fusion. This understanding could enable rational design of therapeutics that block viral entry as well as future construction of synthetic, non-proliferating sensors that detect live virus in the environment. We have observed fusion between synthetic lipid vesicles and Vesicular Stomatitis virus particles, and we have observed interactions between Nipah virus-like particles and supported lipid bilayers and giant unilamellar vesicles.

More Details

Biomolecular transport and separation in nanotubular networks

Sasaki, Darryl Y.; Wang, Julia W.; Hayden, Carl C.; Stachowiak, Jeanne C.; Branda, Steven; Bachand, George D.; Meagher, Robert M.; Stevens, Mark J.; Robinson, David; Zendejas, Frank Z.

Cell membranes are dynamic substrates that achieve a diverse array of functions through multi-scale reconfigurations. We explore the morphological changes that occur upon protein interaction with model membrane systems, which induce deformation of the planar structure to yield nanotube assemblies. In the two examples shown in this report, we describe the use of membrane adhesion and particle trajectory to form lipid nanotubes via mechanical stretching, and protein adsorption onto domains and the induction of membrane curvature through steric pressure. Through this work, the relationship between membrane bending rigidity, protein affinity, and line tension of phase-separated structures was examined, and its role in biological membranes explored.

More Details

QMU as an Approach to Strengthening the Predictive Capabilities of Complex Models

Gray, Genetha A.; Boggs, Paul T.; Grace, Matthew D.

Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (i.e., the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third, Bayesian methods for optimal testing in the QMU framework were developed. The completion of this project represents an increased understanding of how to apply and use the QMU process as a means for improving model predictions of the behavior of complex systems.

More Details

Studies of the viscoelastic properties of water confined between surfaces of specified chemical nature

Moore, Nathan W.; Feibelman, Peter J.; Grest, Gary S.

This report summarizes the work completed under the Laboratory Directed Research and Development (LDRD) project 10-0973 of the same title. Understanding the molecular origin of the no-slip boundary condition remains vitally important for understanding molecular transport in biological, environmental and energy-related processes, with broad technological implications. Moreover, the viscoelastic properties of fluids in nanoconfinement or near surfaces are not well-understood. We have critically reviewed progress in this area, evaluated key experimental and theoretical methods, and made unique and important discoveries addressing these and related scientific questions. Thematically, the discoveries include insight into the orientation of water molecules on metal surfaces, the premelting of ice, the nucleation of water and alcohol vapors between surface asperities and the lubricity of these molecules when confined inside nanopores, the influence of water nucleation on adhesion to salts and silicates, and the growth and superplasticity of NaCl nanowires.

More Details

Chemical strategies for die/wafer submicron alignment and bonding

Rohwer, Lauren E.S.; Chu, Dahwey; Martin, James E.

This late-start LDRD explores chemical strategies that will enable sub-micron alignment accuracy of dies and wafers by exploiting the interfacial energies of chemical ligands. We have micropatterned commensurate features, such as 2-d arrays of micron-sized gold lines on the die to be bonded. Each gold line is functionalized with alkanethiol ligands before the die are brought into contact. The ligand interfacial energy is minimized when the lines on the die are brought into registration, due to favorable interactions between the complementary ligand tails. After registration is achieved, standard bonding techniques are used to create precision permanent bonds. We have computed the alignment forces and torque between two surfaces patterned with arrays of lines or square pads to illustrate how best to maximize the tendency to align. We also discuss complex, aperiodic patterns such as rectilinear pad assemblies, concentric circles, and spirals that point the way towards extremely precise alignment.

More Details

Use of technology assessment databases to identify the issues associated with adoption of structural health monitoring practices

Roach, Dennis P.; Neidigk, Stephen

The goal is to create a systematic method and structure to compile, organize, and summarize structural health monitoring (SHM) related data in order to identify the level of maturity and rate of evolution of SHM and to enable a quick, ongoing evaluation of the current state of SHM among research institutions and industry. Hundreds of technical publications and conference proceedings were read and analyzed to compile the database. Microsoft Excel was used to create a usable interface that can be filtered to compare any of the entered data fields.

More Details

Silicon carbide tritium permeation barrier for steel structural components

Buchenauer, D.A.; Kolasinski, Robert; Youchison, Dennis L.; Garde, J.; Holschuh Jr., Thomas V.

Chemical vapor deposited (CVD) silicon carbide (SiC) has superior resistance to tritium permeation even after irradiation. Prior work has shown Ultramet foam to be forgiving when bonded to substrates with large CTE differences. The technical objectives are: (1) evaluate foams of vanadium, niobium and molybdenum metals and SiC for CTE mitigation between a dense SiC barrier and steel structure; (2) perform thermostructural modeling of the SiC TPB/Ultramet foam/ferritic steel architecture; (3) evaluate deuterium permeation of chemical vapor deposited (CVD) SiC; (4) deuterium testing, which involved construction of a new higher temperature (> 1000 °C) permeation testing system and development of improved sealing techniques; (5) fabricate a prototype tube similar to that shown, with dimensions of 7 cm diameter and 35 cm long; and (6) conduct tritium and hermeticity testing of the prototype tube.

More Details

Using after-action review based on automated performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely open. In this work, we follow up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three domain-specific performance metrics.

More Details

Performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.

More Details

The high current, fast, 100ns, Linear Transformer Driver (LTD) developmental project at Sandia Laboratories and HCEI

Mazarakis, Michael G.; Fowler, William E.; Matzen, M.K.; McDaniel, Dillon H.; Mckee, G.R.; Savage, Mark E.; Struve, Kenneth; Stygar, William A.; Woodworth, Joseph R.

Sandia National Laboratories, Albuquerque, N.M., USA, in collaboration with the High Current Electronics Institute (HCEI), Tomsk, Russia, is developing a new paradigm in pulsed power technology: Linear Transformer Driver (LTD) technology. This technological approach can provide very compact devices that deliver very fast, high-current, high-voltage pulses straight out of the cavity without any complicated pulse-forming and pulse-compression network. Through multistage inductively insulated voltage adders, the output pulse, increased in voltage amplitude, can be applied directly to the load. The load may be a vacuum electron diode, a z-pinch wire array, a gas puff, a liner, an isentropic compression (ICE) load to study material behavior under very high magnetic fields, or an inertial fusion energy (IFE) target. This is because the output pulse rise time and width can be easily tailored to the specific application needs. In this paper we briefly summarize the developmental work done at Sandia and HCEI during the last few years, and describe our new MYKONOS Sandia High Current LTD Laboratory. An extensive evaluation of the LTD technology is being performed at SNL and HCEI. Two types of high-current LTD cavities (LTD I-II, and 1-MA LTD) were constructed and tested individually and in a voltage adder configuration (1-MA cavity only). All cavities performed remarkably well, and the experimental results are in full agreement with analytical and numerical calculation predictions. A two-cavity voltage adder has been assembled and is currently undergoing evaluation. This is the first step toward the completion of the 10-cavity, 1-TW module. This MYKONOS voltage adder will be the first IVA ever built with a transmission line insulated with deionized water. The LTD II cavity, renamed LTD III, will serve as a test bed for evaluating a number of different types of switches, resistors, alternative capacitor configurations, cores, and other cavity components. Experimental results will be presented at the conference and in future publications.

More Details

High-efficiency high-energy Kα source for the critically-required maximum illumination of x-ray optics on Z using Z-petawatt-driven laser-breakout-afterburner accelerated ultrarelativistic electrons LDRD

Bennett, Guy R.; Sefkow, Adam B.

Under the auspices of the Science of Extreme Environments LDRD program, a <2 year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than simple, direct laser irradiation of a flat foil (direct-foil-irradiation, DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application/experiment this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.

More Details

Dynamic tensile characterization of a 4330-V steel with Kolsky bar techniques

Song, Bo; Connelly, Kevin

There has been increasing demand to understand the stress-strain response as well as damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquire satisfactory information about the mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended for dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high speed tensile pulses in the bars. Moreover, instead of directly sandwiching the cylindrical specimen between the bars as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen gripping systems in Kolsky tension bar techniques often lead to disturbances in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show the photograph and schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tensile pulse in the gun barrel and the incident bar.
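
For reference, the conventional one-wave Kolsky-bar data reduction (standard textbook relations, included here only for context; they do not capture the gripping and wave-disturbance effects discussed above) recovers the specimen response from the transmitted and reflected bar strains:

```latex
\sigma_s(t) = E_b \,\frac{A_b}{A_s}\,\varepsilon_t(t), \qquad
\dot{\varepsilon}_s(t) = -\,\frac{2 c_b}{L_s}\,\varepsilon_r(t), \qquad
\varepsilon_s(t) = \int_0^t \dot{\varepsilon}_s(\tau)\, d\tau
```

Here E_b, A_b, and c_b are the bar elastic modulus, cross-sectional area, and wave speed; A_s and L_s are the specimen cross-sectional area and gauge length; and ε_t, ε_r are the transmitted and reflected strain pulses.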

More Details

Use of nanofiltration to reduce cooling tower water usage

Altman, Susan J.; Jensen, Richard P.; Everett, Randy

Nanofiltration (NF) can effectively treat cooling-tower water to reduce water consumption and maximize the water usage efficiency of thermoelectric power plants. A pilot is being run to verify theoretical calculations. A side stream of water from a 900 gpm cooling tower is being treated by NF, with the permeate returning to the cooling tower and the concentrate being discharged. The membrane efficiency is over 50%. Salt rejection ranges from 77-97%, with higher rejection for divalent ions. The pilot has demonstrated a reduction in makeup water of almost 20% and a reduction in discharge of over 50%.
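
A back-of-the-envelope cooling-tower water balance helps put savings of this kind in context. The sketch below uses generic textbook relations with assumed numbers; it is a simple cycles-of-concentration comparison, not a model of the NF side-stream pilot or its plant data.

```python
# Simple cooling-tower balance: makeup = evaporation + blowdown (drift ignored),
# with blowdown = evaporation / (COC - 1), where COC is cycles of concentration.
# Recovering and returning treated water lets the tower run at a higher
# effective COC, cutting both makeup and discharge.

def water_balance(evaporation_gpm, coc):
    blowdown = evaporation_gpm / (coc - 1.0)
    makeup = evaporation_gpm + blowdown
    return makeup, blowdown

evap = 18.0                          # assumed evaporation rate, gpm
for coc in (3.0, 6.0):               # assumed before vs. after improved recovery
    makeup, blowdown = water_balance(evap, coc)
    print(f"COC {coc}: makeup {makeup:.1f} gpm, blowdown {blowdown:.1f} gpm")
```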

More Details