Publications

Transportation scenarios for risk analysis

Weiner, Ruth F.

Transportation risk, like any risk, is defined by the risk triplet: what can happen (the scenario), how likely it is (the probability), and the resulting consequences. This paper evaluates the development of transportation scenarios, the associated probabilities, and the consequences. The most likely radioactive materials transportation scenario is routine, incident-free transportation, which has a probability indistinguishable from unity. Accident scenarios in radioactive materials transportation are of three different types: accidents in which there is no impact on the radioactive cargo, accidents in which some gamma shielding may be lost but there is no release of radioactive material, and accidents in which radioactive material may potentially be released. Accident frequencies, obtainable from recorded data validated by the U.S. Department of Transportation, are considered equivalent to accident probabilities in this study. Probabilities of different types of accidents are conditional probabilities, conditional on an accident occurring, and are developed from event trees. Development of all of these probabilities and the associated highway and rail accident event trees is discussed in this paper.
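
The abstract does not give the actual event-tree structure or branch fractions; the sketch below, with made-up numbers, only illustrates how conditional accident-type probabilities combine with an accident frequency.

    # Hypothetical event-tree branch fractions, conditional on an accident having occurred
    # (illustrative values only; not taken from the paper).
    accident_frequency_per_km = 1.0e-7              # assumed accident rate per shipment-km
    branches = {
        "no impact on cargo":         0.994,
        "shielding loss, no release": 0.005,
        "potential release":          0.001,
    }
    assert abs(sum(branches.values()) - 1.0) < 1e-12
    # Unconditional scenario probability per km = accident frequency x conditional branch fraction.
    for scenario, p_conditional in branches.items():
        print(scenario, accident_frequency_per_km * p_conditional)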

More Details

LDRD final report : leveraging multi-way linkages on heterogeneous data

Dunlavy, Daniel M.; Kolda, Tamara G.

This report summarizes the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' project, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that scientifically meaningful factorizations can be recovered even with large amounts of missing data (50% or more). The project has involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.
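
The factorization in question is the CP (CANDECOMP/PARAFAC) model. The sketch below is not CPOPT; it is a minimal gradient-descent fit of the masked CP objective, included only to show what factorization with incomplete data means in practice (all sizes, step settings, and names are arbitrary).

    import numpy as np

    def cp_reconstruct(A, B, C):
        # T_hat[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
        return np.einsum('ir,jr,kr->ijk', A, B, C)

    def masked_cp(T, W, rank, iters=5000, lr=0.005, seed=0):
        """Fit a rank-`rank` CP model to the entries of T where the 0/1 mask W == 1."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = 0.5 * rng.standard_normal((I, rank))
        B = 0.5 * rng.standard_normal((J, rank))
        C = 0.5 * rng.standard_normal((K, rank))
        for _ in range(iters):
            R = W * (cp_reconstruct(A, B, C) - T)            # residual on observed entries only
            gA = np.einsum('ijk,jr,kr->ir', R, B, C)
            gB = np.einsum('ijk,ir,kr->jr', R, A, C)
            gC = np.einsum('ijk,ir,jr->kr', R, A, B)
            A -= lr * gA; B -= lr * gB; C -= lr * gC         # plain gradient steps; a real solver does better
        return A, B, C

    rng = np.random.default_rng(1)
    T = cp_reconstruct(rng.standard_normal((5, 2)), rng.standard_normal((6, 2)), rng.standard_normal((7, 2)))
    W = (rng.random(T.shape) < 0.5).astype(float)            # roughly 50% of entries observed
    A, B, C = masked_cp(T, W, rank=2)
    print(np.linalg.norm(W * (cp_reconstruct(A, B, C) - T)))  # fit error on the observed entries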

More Details

Generalized high order compact methods

Spotz, William S.

The fundamental ideas of the high order compact method are combined with the generalized finite difference method. The result is a finite difference method that works on unstructured, nonuniform grids, and is more accurate than one would classically expect from the number of grid points employed.
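
As a point of reference, the classical (uniform-grid, 1-D) version of the high order compact idea is the fourth-order compact approximation of the second derivative; the generalized method described here extends this implicit-stencil idea to unstructured, nonuniform grids:

    \[ f''_{i-1} + 10\,f''_{i} + f''_{i+1} \;=\; \frac{12}{h^{2}}\,\bigl(f_{i-1} - 2f_{i} + f_{i+1}\bigr) + O(h^{4}), \]

which attains fourth-order accuracy on a three-point stencil by treating the derivative values implicitly.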

More Details

Modeling cortical circuits

Rothganger, Fredrick R.; Rohrer, Brandon R.; Verzi, Stephen J.; Xavier, Patrick G.

The neocortex is perhaps the highest region of the human brain, where auditory and visual perception takes place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.

More Details

Peer-to-peer architectures for exascale computing : LDRD final report

Mayo, Jackson R.; Vorobeychik, Yevgeniy; Armstrong, Robert C.; Minnich, Ronald G.; Rudish, Donald W.

The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring on the order of millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms. P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: the Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has previously been paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use.
The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future. We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately >= 10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation solver for the heat equation, and the closely related technique of value iteration in optimization. We aimed to apply P2P concepts in designing implementations capable of surviving an unreliable machine of 10^6 nodes.
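
The Jacobi relaxation example lends itself to a small illustration. The sketch below is not the project's simulation framework; it is a minimal, hypothetical picture of fault-oblivious iteration in which a random subset of nodes simply skips its update each sweep and the algorithm absorbs the disruption as extra iterations.

    import numpy as np

    # 1-D steady heat equation (u'' = 0) with fixed end temperatures, solved by Jacobi relaxation.
    # Each sweep, a random fraction of interior "nodes" fails to apply its update (simulated faults).
    def fault_oblivious_jacobi(n=50, sweeps=5000, fail_prob=0.1, seed=1):
        rng = np.random.default_rng(seed)
        u = np.zeros(n)
        u[0], u[-1] = 0.0, 1.0                       # boundary conditions
        for _ in range(sweeps):
            new = 0.5 * (u[:-2] + u[2:])             # Jacobi update for interior points
            alive = rng.random(n - 2) > fail_prob    # nodes that actually commit their update
            u[1:-1] = np.where(alive, new, u[1:-1])
        return u

    u = fault_oblivious_jacobi()
    exact = np.linspace(0.0, 1.0, 50)                # exact solution is the linear profile
    print(np.max(np.abs(u - exact)))                 # failures slow convergence but do not break it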

More Details

Long-Term Environmental Stewardship (LTES) life-cycle material management at Sandia National Laboratories

Nagy, Michael D.

The Long-Term Environmental Stewardship (LTES) mission is to ensure long-term protection of human health and the environment, and proactive management toward sustainable use and protection of natural and cultural resources affected by any Sandia National Laboratories (SNL) operations and operational legacies. The primary objectives of the LTES program are to: (1) protect the environment from present and future operations; (2) preserve and protect natural and cultural resources; and (3) apply environmental life-cycle management to SNL operations.

More Details

Parallel octree-based hexahedral mesh generation for Eulerian to Lagrangian conversion

Owen, Steven J.; Staten, Matthew L.

Computational simulation must often be performed on domains where materials are represented as scalar quantities or volume fractions at cell centers of an octree-based grid. Common examples include bio-medical, geotechnical or shock physics calculations where interface boundaries are represented only as discrete statistical approximations. In this work, we introduce new methods for generating Lagrangian computational meshes from Eulerian-based data. We focus specifically on shock physics problems that are relevant to ASC codes such as CTH and Alegra. New procedures for generating all-hexahedral finite element meshes from volume fraction data are introduced. A new primal-contouring approach is introduced for defining a geometric domain. New methods for refinement, node smoothing, resolving non-manifold conditions and defining geometry are also introduced as well as an extension of the algorithm to handle tetrahedral meshes. We also describe new scalable MPI-based implementations of these procedures. We describe a new software module, Sculptor, which has been developed for use as an embedded component of CTH. We also describe its interface and its use within the mesh generation code, CUBIT. Several examples are shown to illustrate the capabilities of Sculptor.

More Details

International physical protection self-assessment tool for chemical facilities

Stiles, Linda L.; Tewell, Craig R.; Burdick, Brent; Lindgren, Eric

This report is the final report for Laboratory Directed Research and Development (LDRD) Project No. 130746, International Physical Protection Self-Assessment Tool for Chemical Facilities. The goal of the project was to develop an exportable, low-cost, computer-based risk assessment tool for small to medium size chemical facilities. The tool would assist facilities in improving their physical protection posture, while protecting their proprietary information. In FY2009, the project team proposed a comprehensive evaluation of safety and security regulations in the target geographical area, Southeast Asia. This approach was later modified and the team worked instead on developing a methodology for identifying potential targets at chemical facilities. Milestones proposed for FY2010 included characterizing the international/regional regulatory framework, finalizing the target identification and consequence analysis methodology, and developing, reviewing, and piloting the software tool. The project team accomplished the initial goal of developing potential target categories for chemical facilities; however, the additional milestones proposed for FY2010 were not pursued and the LDRD funding was therefore redirected.

More Details

Hydrogen effects on materials for CNG/H2 blends

Somerday, Brian P.; Keller, Jay O.

No concerns were identified for hydrogen-enriched compressed natural gas (HCNG) in steel storage tanks if the material strength is below 950 MPa. We recommend evaluating H2-assisted fatigue cracking in higher-strength steels at the H2 partial pressure of the blend. Limited fatigue testing on higher-strength steel cylinders in H2 shows promising results. Impurities in compressed natural gas (CNG), e.g. CO, may provide an extrinsic mechanism for mitigating H2-assisted fatigue cracking in steel tanks.

More Details

Quantification of margins and uncertainty for risk-informed decision analysis

Alvin, Kenneth F.

QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision-making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic), and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying epistemic uncertainties such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiment. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by an ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.
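
Though not spelled out in this summary, the quantity most often reported under QMU is a margin-to-uncertainty (confidence) ratio; one common formulation is

    \[ \mathrm{CR} \;=\; \frac{M}{U}, \qquad M = \bigl|\,\text{performance threshold} - \text{best-estimate response}\,\bigr|, \]

where U is the total (epistemic plus aleatory) uncertainty assigned to the margin; a ratio comfortably above 1, with a defensible U, is read as evidence of adequate margin.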

More Details

In-situ observation of ErD2 formation during D2 loading via neutron diffraction

Rodriguez, Marko A.; Snow, Clark S.; Wixom, Ryan R.

In an effort to better understand the structural changes occurring during hydrogen loading of erbium target materials, we have performed in situ D2 loading of erbium metal (powder) at temperature (450 °C) with simultaneous neutron diffraction analysis. This experiment tracked the conversion of Er metal to the α erbium deuteride (solid-solution) phase and then into the β (fluorite) phase. Complete conversion to ErD2.0 was accomplished at 10 Torr D2 pressure with deuterium fully occupying the tetrahedral sites in the fluorite lattice.

More Details

Novel detection methods for radiation-induced electron-hole pairs

Cich, Michael J.; Derzon, Mark S.; Martinez, Marino; Nordquist, Christopher D.; Vawter, Gregory A.

Most common ionizing radiation detectors typically rely on one of two general methods: collection of charge generated by the radiation, or collection of light produced by recombination of excited species. Substantial efforts have been made to improve the performance of materials used in these types of detectors, e.g. to raise the operating temperature, to improve the energy resolution, timing or tracking ability. However, regardless of the material used, all these detectors are limited in performance by statistical variation in the collection efficiency, for charge or photons. We examine three alternative schemes for detecting ionizing radiation that do not rely on traditional direct collection of the carriers or photons produced by the radiation. The first method detects refractive index changes in a resonator structure. The second looks at alternative means to sense the chemical changes caused by radiation on a scintillator-type material. The final method examines the possibilities of sensing the perturbation caused by radiation on the transmission of an RF transmission line structure. Aspects of the feasibility of each approach are examined and recommendations made for further work.

More Details

Aerosol cluster impact and break-up : II. Atomic and Cluster Scale Models

Lechman, Jeremy B.

Understanding the interaction of aerosol particle clusters/flocs with surfaces is an area of interest for a number of processes in chemical, pharmaceutical, and powder manufacturing as well as in steam-tube rupture in nuclear power plants. Developing predictive capabilities for these applications involves coupled phenomena on multiple length and time scales, from the macroscopic process scale (~1 m) to the multi-cluster interaction scale (1 mm to 0.1 m) to the single cluster scale (~1,000-10,000 particles) to particle-scale interactions (10 nm-10 µm), and on down to sub-particle, atomic-scale interactions. The focus of this report is on the single cluster scale, although work directed toward developing better models of particle-particle interactions by considering sub-particle scale interactions and phenomena is also described. In particular, results of mesoscale (i.e., particle to single cluster scale) discrete element method (DEM) simulations for aerosol cluster impact with rigid walls are presented. The particle-particle interaction model is based on JKR adhesion theory and is implemented as an enhancement to the granular package in the LAMMPS code. The theory behind the model is outlined and preliminary results are shown. Additionally, as mentioned, results from atomistic classical molecular dynamics simulations are also described as a means of developing higher fidelity models of particle-particle interactions. Ultimately, the results from these and other studies at various scales must be collated to provide systems-level models with accurate 'sub-grid' information for design, analysis and control of the underlying systems processes.
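
For context, the JKR (Johnson-Kendall-Roberts) adhesion theory mentioned above predicts a pull-off force for two elastic spheres; the standard result, quoted here for reference and independent of the specific LAMMPS implementation, is

    \[ F_{\text{pull-off}} \;=\; \tfrac{3}{2}\,\pi\,W\,R^{*}, \qquad \frac{1}{R^{*}} \;=\; \frac{1}{R_{1}} + \frac{1}{R_{2}}, \]

where W is the work of adhesion of the contacting surfaces and R* is the effective particle radius.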

More Details

Sensitivity analysis techniques for models of human behavior

Naugle, Asmeret B.

Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
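
The abstract does not name the specific global method adopted; as an illustration of the class of variance-based techniques that do account for input interactions, here is a minimal first-order Sobol-index estimate on a toy model (the model and all settings are invented for the sketch).

    import numpy as np

    def toy_model(x):                           # stand-in for a behavior model (illustrative only)
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    def first_order_sobol(model, dim, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.uniform(-np.pi, np.pi, (n, dim))
        B = rng.uniform(-np.pi, np.pi, (n, dim))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                 # column i taken from B, the rest from A
            S[i] = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli-style first-order estimator
        return S

    print(first_order_sobol(toy_model, dim=3))  # inputs with near-zero indices act mainly through interactions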

More Details

Enterprise analytics

Spomer, Judith E.

Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search can't ever seem to find anything useful or relevant! The challenge is to provide results that are ranked according to the user's definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences, re-ranking results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the user's search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
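
The abstract does not describe the preference model itself; the sketch below is only a hypothetical illustration of the general idea of re-ranking engine results with historical query-click counts (the log format, names, and weights are invented).

    from collections import defaultdict

    # Hypothetical click log: (query, clicked_document) pairs harvested from search history.
    click_log = [("ldrd report", "doc_17"), ("ldrd report", "doc_17"), ("ldrd report", "doc_42")]

    clicks = defaultdict(int)
    for query, doc in click_log:
        clicks[(query, doc)] += 1

    def rerank(query, engine_results):
        """Blend the engine's own score with a historical click-preference boost."""
        def blended(item):
            doc, engine_score = item
            boost = clicks[(query, doc)]        # naive popularity boost; a real model would normalize and decay
            return engine_score + 0.5 * boost
        return sorted(engine_results, key=blended, reverse=True)

    print(rerank("ldrd report", [("doc_42", 1.2), ("doc_17", 1.0), ("doc_03", 0.9)]))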

More Details

Detection of exposure damage in composite materials using Fourier transform infrared technology

Roach, Dennis P.; Duvall, Randy L.

The goal is to detect the subtle changes in laminate composite structures brought about by thermal, chemical, ultraviolet, and moisture exposure, and to compare the sensitivity of an array of NDI methods, including Fourier transform infrared spectroscopy (FTIR), in detecting subtle deterioration-induced differences in composite materials. Inspection methods applied: ultrasonic pulse echo, through-transmission ultrasonics, thermography, resonance testing, mechanical impedance analysis, eddy current, low-frequency bond testing, and FTIR. Comparisons between the NDI methods are being used to establish the potential of FTIR to provide the necessary sensitivity to non-visible, yet significant, damage in the resin and fiber matrix of composite structures. Comparison of NDI results with short beam shear (SBS) tests is being used to relate NDI sensitivity to reduction in structural performance. FTIR is a chemical analysis technique that measures the infrared intensity versus wavelength of light reflected from the surface of a structure, providing chemical and physical information via this signature. Advances in instrumentation have resulted in hand-held portable devices that allow for field use (a few seconds per scan). The technique shows promise for production quality assurance and in-service applications on composite aircraft structures (scarfed repairs). Statistical analysis of the frequency spectra produced by FTIR interrogations is being used to produce an NDI technique for assessing material integrity. Conclusions are: (1) NDI can be used to assess loss of composite laminate integrity brought about by thermal, chemical, ultraviolet, and moisture exposure. (2) Degradation trends between SBS strength and exposure levels (temperature and time) have been established for different materials. (3) Various NDI methods have been applied to evaluate damage and relate it to loss of integrity; pulse-echo ultrasonics shows the greatest sensitivity. (4) FTIR shows promise for damage detection and calibration to predict structural integrity (short beam shear). (5) Detection of damage at medium exposure levels (possibly resin matrix degradation only) is more difficult and requires additional study. (6) These are initial results only; the program is continuing with additional heat, UV, chemical, and water exposure test specimens.

More Details

Quantifying the debonding of inclusions through tomography and computational homology

Foulk, James W.; Jin, Helena; Lu, Wei-Yang; Mota, Alejandro

This report describes a Laboratory Directed Research and Development (LDRD) project on the use of synchrotron-radiation computed tomography (SRCT) data to determine the conditions and mechanisms that lead to void nucleation in rolled alloys. The Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL) has provided SRCT data of a few specimens of 7075-T7351 aluminum plate (widely used for aerospace applications) stretched to failure, loaded in directions perpendicular and parallel to the rolling direction. The resolution of the SRCT data is 900 nm, which allows elucidation of the mechanisms governing void growth and coalescence. This resolution is not fine enough, however, for nucleation. We propose the use of statistics and image-processing techniques to obtain sub-resolution scale information from these data, and thus determine where in the specimen and when during the loading program nucleation occurs and the mechanisms that lead to it. Quantitative analysis of the tomography data, however, leads to the conclusion that the reconstruction process compromises the information obtained from the scans. Alternate, more powerful reconstruction algorithms are needed to address this problem, but those fall beyond the scope of this project.

More Details

Multivariate analysis of progressive thermal desorption coupled gas chromatography-mass spectrometry

Van Benthem, Mark H.; Borek, Theodore T.; Mowry, Curtis D.; Kotula, Paul G.

Thermal decomposition of polydimethylsiloxane (PDMS) compounds, Sylgard® 184 and 186, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a method of producing multiway data using a stepped thermal desorption. The technique involves sequentially heating a sample of the material of interest with subsequent analysis in a commercial GC/MS system. The decomposition chromatograms were analyzed using multivariate analysis tools including principal component analysis (PCA), factor rotation employing the varimax criterion, and multivariate curve resolution. The results of the analysis show seven components related to offgassing of various fractions of siloxanes that vary as a function of temperature. Thermal desorption coupled with gas chromatography-mass spectrometry (TD/GC-MS) is a powerful analytical technique for analyzing chemical mixtures. It has great potential in numerous analytical areas, including materials analysis, sports medicine, the detection of designer drugs, and biological research for metabolomics. Data analysis is complicated, far from automated, and can result in high false-positive or false-negative rates. We have demonstrated a step-wise TD/GC-MS technique that removes more volatile compounds from a sample before extracting the less volatile compounds. This creates an additional dimension of separation before the GC column, while simultaneously generating three-way data. Sandia's proven multivariate analysis methods, when applied to these data, have several advantages over current commercial options. The approach has also demonstrated potential for success in finding and enabling identification of trace compounds. Several challenges remain, however, including understanding the sources of noise in the data, outlier detection, improving the data pretreatment and analysis methods, developing a software tool for ease of use by the chemist, and demonstrating our belief that this multivariate analysis will enable superior differentiation capabilities. In addition, noise and system artifacts challenge the analysis of GC-MS data collected on lower cost equipment, ubiquitous in commercial laboratories. This research has the potential to affect many areas of analytical chemistry including materials analysis, medical testing, and environmental surveillance. It could also provide a method to measure adsorption parameters for chemical interactions on various surfaces by measuring desorption as a function of temperature for mixtures. We have presented results of a novel method for examining offgas products of a common PDMS material. Our method involves utilizing a stepped TD/GC-MS data acquisition scheme that may be almost totally automated, coupled with multivariate analysis schemes. This method of data generation and analysis can be applied to a number of materials aging and thermal degradation studies.
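
Sandia's multivariate tools themselves are not shown here; as a generic illustration of the first step named above (PCA of a spectra matrix), a minimal SVD-based decomposition looks like the following, with random numbers standing in for real TD/GC-MS data.

    import numpy as np

    # Rows = sequential desorption-step scans, columns = m/z channels.
    # Random data here stands in for real TD/GC-MS measurements.
    rng = np.random.default_rng(0)
    X = rng.random((60, 400))

    Xc = X - X.mean(axis=0)                      # mean-center each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                               # sample scores
    loadings = Vt                                # principal-component loadings (spectral shapes)
    explained = s**2 / np.sum(s**2)
    print("variance explained by first 7 PCs:", explained[:7].sum())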

More Details

Application of the DG-1199 methodology to the ESBWR and ABWR

Kalinich, Donald; Walton, Fotini; Gauntt, Randall O.

Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic Simplified Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; it should be noted, however, that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence and importance of key input parameters and features of the models.

More Details

Hardware authentication using transmission spectra modified optical fiber

Romero, Juan A.; Grubbs, Robert K.

The ability to authenticate the source and integrity of data is critical to the monitoring and inspection of special nuclear materials, including hardware related to weapons production. Current methods rely on electronic encryption/authentication codes housed in monitoring devices. This always invites the question of how authentication information is implemented and protected in an electronic component, necessitating EMI shielding and possibly an on-board power source to maintain the information in memory. By using atomic layer deposition (ALD) techniques on photonic band gap (PBG) optical fibers, we will explore the potential to randomly manipulate the output spectrum and intensity of an input light source. This randomization could produce unique signatures authenticating devices, with the potential to authenticate data. An external light source projected through the fiber with a spectrometer at the exit would 'read' the unique signature. No internal power or computational resources would be required.

More Details

A bio-synthetic interface for discovery of viral entry mechanisms

Negrete, Oscar N.; Hayden, Carl C.

Understanding and defending against pathogenic viruses is an important public health and biodefense challenge. The focus of our LDRD project has been to uncover the mechanisms enveloped viruses use to identify and invade host cells. We have constructed interfaces between viral particles and synthetic lipid bilayers. This approach provides a minimal setting for investigating the initial events of host-virus interaction - (i) recognition of, and (ii) entry into the host via membrane fusion. This understanding could enable rational design of therapeutics that block viral entry as well as future construction of synthetic, non-proliferating sensors that detect live virus in the environment. We have observed fusion between synthetic lipid vesicles and Vesicular Stomatitis virus particles, and we have observed interactions between Nipah virus-like particles and supported lipid bilayers and giant unilamellar vesicles.

More Details

Biomolecular transport and separation in nanotubular networks

Sasaki, Darryl Y.; Wang, Julia W.; Hayden, Carl C.; Stachowiak, Jeanne C.; Branda, Steven; Bachand, George D.; Meagher, Robert M.; Stevens, Mark J.; Robinson, David; Zendejas, Frank Z.

Cell membranes are dynamic substrates that achieve a diverse array of functions through multi-scale reconfigurations. We explore the morphological changes that occur upon protein interaction with model membrane systems, which induce deformation of their planar structure to yield nanotube assemblies. In the two examples shown in this report, we describe the use of membrane adhesion and particle trajectory to form lipid nanotubes via mechanical stretching, and protein adsorption onto domains and the induction of membrane curvature through steric pressure. Through this work, the relationships among membrane bending rigidity, protein affinity, and line tension of phase-separated structures were examined, and their role in biological membranes explored.

More Details

QMU as an Approach to Strengthening the Predictive Capabilities of Complex Models

Gray, Genetha A.; Boggs, Paul T.; Grace, Matthew D.

Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (i.e., the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third, Bayesian methods for optimal testing in the QMU framework were developed. The completion of this project represents an increased understanding of how to apply and use the QMU process as a means of improving model predictions of the behavior of complex systems.

More Details

Studies of the viscoelastic properties of water confined between surfaces of specified chemical nature

Moore, Nathan W.; Feibelman, Peter J.; Grest, Gary S.

This report summarizes the work completed under the Laboratory Directed Research and Development (LDRD) project 10-0973 of the same title. Understanding the molecular origin of the no-slip boundary condition remains vitally important for understanding molecular transport in biological, environmental and energy-related processes, with broad technological implications. Moreover, the viscoelastic properties of fluids in nanoconfinement or near surfaces are not well-understood. We have critically reviewed progress in this area, evaluated key experimental and theoretical methods, and made unique and important discoveries addressing these and related scientific questions. Thematically, the discoveries include insight into the orientation of water molecules on metal surfaces, the premelting of ice, the nucleation of water and alcohol vapors between surface asperities and the lubricity of these molecules when confined inside nanopores, the influence of water nucleation on adhesion to salts and silicates, and the growth and superplasticity of NaCl nanowires.

More Details

Chemical strategies for die/wafer submicron alignment and bonding

Rohwer, Lauren E.S.; Chu, Dahwey; Martin, James E.

This late-start LDRD explores chemical strategies that will enable sub-micron alignment accuracy of dies and wafers by exploiting the interfacial energies of chemical ligands. We have micropatterned commensurate features, such as 2-d arrays of micron-sized gold lines on the die to be bonded. Each gold line is functionalized with alkanethiol ligands before the die are brought into contact. The ligand interfacial energy is minimized when the lines on the die are brought into registration, due to favorable interactions between the complementary ligand tails. After registration is achieved, standard bonding techniques are used to create precision permanent bonds. We have computed the alignment forces and torque between two surfaces patterned with arrays of lines or square pads to illustrate how best to maximize the tendency to align. We also discuss complex, aperiodic patterns such as rectilinear pad assemblies, concentric circles, and spirals that point the way towards extremely precise alignment.

More Details

Use of technology assessment databases to identify the issues associated with adoption of structural health monitoring practices

Roach, Dennis P.; Neidigk, Stephen

The goal is to create a systematic method and structure to compile, organize, and summarize SHM-related data in order to identify the level of maturity and rate of evolution, and to enable a quick and ongoing evaluation of the current state of SHM among research institutions and industry. Hundreds of technical publications and conference proceedings were read and analyzed to compile the database. Microsoft Excel was used to create a usable interface that can be filtered to compare any of the entered data fields.

More Details

Silicon carbide tritium permeation barrier for steel structural components

Buchenauer, D.A.; Kolasinski, Robert; Youchison, Dennis L.; Garde, J.; Holschuh Jr., Thomas V.

Chemical vapor deposited (CVD) silicon carbide (SiC) has superior resistance to tritium permeation even after irradiation. Prior work has shown Ultramet foam to be forgiving when bonded to substrates with large CTE differences. The technical objectives are: (1) evaluate foams of vanadium, niobium, and molybdenum metals and of SiC for CTE mitigation between a dense SiC barrier and the steel structure; (2) perform thermostructural modeling of the SiC TPB/Ultramet foam/ferritic steel architecture; (3) evaluate deuterium permeation of CVD SiC; (4) conduct deuterium testing, which involved construction of a new higher-temperature (> 1000 °C) permeation testing system and development of improved sealing techniques; (5) fabricate a prototype tube similar to that shown, with dimensions of 7 cm diameter and 35 cm length; and (6) perform tritium and hermeticity testing of the prototype tube.

More Details

Using after-action review based on automated performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow-up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three domain-specific performance metrics.

More Details

Performance assessment to enhance training effectiveness

Adams, Susan S.; Basilico, Justin D.; Abbott, Robert G.

Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.

More Details

The high current, fast, 100ns, Linear Transformer Driver (LTD) developmental project at Sandia Laboratories and HCEI

Mazarakis, Michael G.; Fowler, William E.; Matzen, M.K.; McDaniel, Dillon H.; Mckee, G.R.; Savage, Mark E.; Struve, Kenneth; Stygar, William A.; Woodworth, Joseph R.

Sandia National Laboratories, Albuquerque, N.M., USA, in collaboration with the High Current Electronic Institute (HCEI), Tomsk, Russia, is developing a new paradigm in pulsed power technology: the Linear Transformer Driver (LTD) technology. This technological approach can provide very compact devices that can deliver very fast, high-current and high-voltage pulses straight out of the cavity without any complicated pulse-forming and pulse-compression network. Through multistage inductively insulated voltage adders, the output pulse, increased in voltage amplitude, can be applied directly to the load. The load may be a vacuum electron diode, a z-pinch wire array, a gas puff, a liner, an isentropic compression (ICE) load to study material behavior under very high magnetic fields, or an inertial fusion energy (IFE) target. This is because the output pulse rise time and width can be easily tailored to the specific application needs. In this paper we briefly summarize the developmental work done at Sandia and HCEI during the last few years, and describe our new MYKONOS Sandia High Current LTD Laboratory. An extensive evaluation of the LTD technology is being performed at SNL and HCEI. Two types of high-current LTD cavities (LTD I-II, and 1-MA LTD) were constructed and tested individually and in a voltage adder configuration (1-MA cavity only). All cavities performed remarkably well, and the experimental results are in full agreement with analytical and numerical calculation predictions. A two-cavity voltage adder has been assembled and is currently undergoing evaluation. This is the first step towards the completion of the 10-cavity, 1-TW module. This MYKONOS voltage adder will be the first-ever induction voltage adder (IVA) built with a transmission line insulated with deionized water. The LTD II cavity, renamed LTD III, will serve as a test bed for evaluating a number of different types of switches, resistors, alternative capacitor configurations, cores, and other cavity components. Experimental results will be presented at the conference and in future publications.

More Details

High-efficiency high-energy Ka source for the critically-required maximum illumination of x-ray optics on Z using Z-petawatt-driven laser-breakout-afterburner accelerated ultrarelativistic electrons LDRD

Bennett, Guy R.; Sefkow, Adam B.

Under the auspices of the Science of Extreme Environments LDRD program, a <2-year theoretical- and computational-physics study was performed (LDRD Project 130805) by Guy R. Bennett (formerly in Center-01600) and Adam B. Sefkow (Center-01600) to investigate novel target designs by which a short-pulse, PW-class beam could create a brighter Kα x-ray source than simple direct laser irradiation of a flat foil, i.e., direct-foil irradiation (DFI). The computational studies - which are still ongoing at this writing - were performed primarily on the RedStorm supercomputer at Sandia National Laboratories' Albuquerque site. The motivation for a higher-efficiency Kα emitter was very clear: as the backlighter flux for any x-ray imaging technique on the Z accelerator increases, the signal-to-noise and signal-to-background ratios improve. This ultimately allows the imaging system to reach its full quantitative potential as a diagnostic. Depending on the particular application or experiment this would imply, for example, that the system would have reached its full design spatial resolution and thus the capability to see features that might otherwise be indiscernible with a traditional DFI-like x-ray source. This LDRD began in FY09 and ended in FY10.

More Details

Dynamic tensile characterization of a 4330-V steel with Kolsky bar techniques

Song, Bo; Connelly, Kevin

There has been increasing demand to understand the stress-strain response as well as the damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquire satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended for dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high-speed tensile pulses in the bars. Moreover, instead of directly sandwiching the cylindrical specimen between the bars, as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen-gripping systems in Kolsky tension bar techniques often lead to disturbance in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show the actual photograph and schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tension pulse in the gun barrel and the incident bar.

More Details

Use of nanofiltration to reduce cooling tower water usage

Altman, Susan J.; Jensen, Richard P.; Everett, Randy

Nanofiltration (NF) can effectively treat cooling-tower water to reduce water consumption and maximize the water usage efficiency of thermoelectric power plants. A pilot is being run to verify theoretical calculations. A side stream of water from a 900 gpm cooling tower is being treated by NF, with the permeate returning to the cooling tower and the concentrate being discharged. The membrane efficiency is greater than 50%. Salt rejection ranges from 77% to 97%, with higher rejection for divalent ions. The pilot has demonstrated a reduction of makeup water of almost 20% and a reduction of discharge of over 50%.
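
For reference, the standard membrane performance definitions behind the figures quoted above (not restated in the abstract) are

    \[ R \;=\; 1 - \frac{C_p}{C_f}, \qquad r \;=\; \frac{Q_p}{Q_f}, \]

where R is the observed salt rejection (0.77-0.97 here), C_p and C_f are the permeate and feed concentrations, and the recovery r is the ratio of permeate to feed flow rate.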

More Details

Co-design in ACES and exascale

The Alliance for Computing at the Extreme Scale (ACES) is a Los Alamos and Sandia collaboration encompassing not only HPC procurements and operations, but also computer science and architecture research and development. One area of focus within ACES relates to the critical technology developments for future high performance computing systems and the applications that would run on them, and ACES is heavily involved in the proposed DOE Exascale Initiative. The proposed Exascale Initiative emphasizes the need for co-design, which is the three-way collaborative and concurrent design of HPC hardware, software, and the applications themselves. Transformational changes will occur not only in HPC hardware, but also in the applications space, and taken together these will require transformational changes in the overall software layers supporting the programming models, tools, runtimes, file systems, and operating systems. Co-design involving all three areas of hardware, software, and applications will be the key to success. This talk will outline key aspects of the Exascale Initiative and its emphasis on co-design. It will provide some examples from LANL & Sandia experiences in co-design, including aspects of the innovative Roadrunner architecture and software, an ACES-Cray project studying advanced interconnects within Cray, and Sandia's work in computer system simulators and mini-applications.

More Details

Clustering of graphs with multiple edge types

Pinar, Ali P.; Rocklin, Matthew D.

We study clustering on graphs with multiple edge types. Our main motivation is that similarities between objects can be measured in many different metrics. For instance, similarity between two papers can be based on common authors, where they are published, keyword similarity, citations, etc. As such, a graph with multiple edge types is a more accurate model for describing similarities between objects. Each edge/metric provides only partial information about the data; recovering full information requires aggregation of all the similarity metrics. Clustering becomes much more challenging in this context, since in addition to the difficulties of the traditional clustering problem, we have to deal with a space of clusterings. We generalize the concept of clustering in single-edge graphs to multi-edged graphs and investigate problems such as: Can we find a clustering that remains good, even if we change the relative weights of metrics? How can we describe the space of clusterings efficiently? Can we find unexpected clusterings (a good clustering that is distant from all given clusterings)? If given the ground-truth clustering, can we recover how the weights for edge types were aggregated?
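
The paper's own algorithms are not reproduced here; the sketch below only illustrates the aggregation step the abstract describes, i.e., combining several edge-type similarity matrices with adjustable weights and clustering the result (random matrices and scikit-learn spectral clustering stand in for real data and methods).

    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Three similarity matrices over the same 12 objects, e.g. co-authorship, venue, and keyword overlap
    # (random symmetric matrices here stand in for real metrics).
    rng = np.random.default_rng(0)
    def random_similarity(n=12):
        M = rng.random((n, n))
        M = (M + M.T) / 2.0
        np.fill_diagonal(M, 1.0)
        return M
    metrics = [random_similarity() for _ in range(3)]

    def cluster_with_weights(weights, k=2):
        """Aggregate the edge-type similarities with the given weights, then cluster."""
        W = sum(w * M for w, M in zip(weights, metrics))
        return SpectralClustering(n_clusters=k, affinity='precomputed', random_state=0).fit_predict(W)

    # Varying the relative weights probes how stable a clustering is across the space of aggregations.
    print(cluster_with_weights([1.0, 1.0, 1.0]))
    print(cluster_with_weights([3.0, 0.5, 0.5]))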

More Details

Nexus of technologies : international safeguards, physical protection and arms control

Jordan, Sabina E.; Blair, Dianna S.; Smartt, Heidi A.

New technologies have been, and are continuing to be, developed for safeguards, arms control, and physical protection. Application spaces and technical requirements are evolving, and overlaps among these areas are developing. Lessons learned from the IAEA's extensive experience could benefit other communities. Technologies developed for other applications may benefit safeguards, offering inherent cost benefits and improvements in procurement security processes.

More Details

Fluorescence measurements for evaluating the application of multivariate analysis techniques to optically thick environments

Reichardt, Thomas A.; Schmitt, Randal L.; Sickafoose, Shane; Jones, Howland D.T.; Timlin, Jerilyn A.

Laser-induced fluorescence measurements of cuvette-contained laser dye mixtures are made to evaluate the application of multivariate analysis techniques in optically thick environments. Nine mixtures of Coumarin 500 and Rhodamine 610 are analyzed, as well as the pure dyes. For each sample, the cuvette is positioned on a two-axis translation stage to allow interrogation at different spatial locations, allowing the examination of both primary (absorption of the laser light) and secondary (absorption of the fluorescence) inner filter effects. In addition to these expected inner filter effects, we find evidence that a portion of the absorbed fluorescence is re-emitted. A total of 688 spectra are acquired for the evaluation of multivariate analysis approaches to account for nonlinear effects.
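
For reference, a commonly used cuvette correction for the primary and secondary inner filter effects (not necessarily the treatment adopted in this work) is

    \[ F_{\mathrm{corr}} \;=\; F_{\mathrm{obs}} \cdot 10^{\,(A_{\mathrm{ex}} + A_{\mathrm{em}})/2}, \]

where A_ex and A_em are the sample absorbances at the excitation and emission wavelengths over the cuvette path length.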

More Details

Challenges in structural analysis for deformed nuclear reactivity assessments

Villa, Daniel L.; Tallman, Tyler N.

Launch safety calculations for past space reactor concepts have usually been limited to immersion of the reactor in water and/or sand, using nominal system geometries or in some cases simplified compaction scenarios. Deformation of the reactor core by impact during the accident sequence typically has not been considered because of the complexity of the calculation. Recent advances in codes and computing power have made such calculations feasible. The accuracy of such calculations depends primarily on the underlying structural analysis. Even though explicit structural dynamics is a mature field, nuclear reactors present significant challenges to obtain accurate deformation predictions. The presence of a working fluid is one of the primary contributors to challenges in these predictions. The fluid-structure interaction cannot be neglected because the fluid surrounds the nuclear fuel which is the most important region in the analysis. A detailed model of a small eighty-five pin reactor was built with the working fluid modeled as smoothed particle hydrodynamic (SPH) elements. Filling the complex volume covered by the working fluid with SPH elements required development of an algorithm which eliminates overlaps between hexahedral and SPH elements. The results with and without the working fluid were found to be considerably different with respect to reactivity predictions.

More Details

Age-aware solder performance models : level 2 milestone completion

Holm, Elizabeth A.; Neilsen, Michael K.; Vianco, Paul T.; Neidigk, Matthew

Legislated requirements and industry standards are replacing eutectic lead-tin (Pb-Sn) solders with lead-free (Pb-free) solders in future component designs and in replacements and retrofits. Since Pb-free solders have not yet seen service for long periods, their long-term behavior is poorly characterized. Because understanding the reliability of Pb-free solders is critical to supporting the next generation of circuit board designs, it is imperative that we develop, validate and exercise a solder lifetime model that can capture the thermomechanical response of Pb-free solder joints in stockpile components. To this end, an ASC Level 2 milestone was identified for fiscal year 2010: Milestone 3605: Utilize experimentally validated constitutive model for lead-free solder to simulate aging and reliability of solder joints in stockpile components. This report documents the completion of this milestone, including evidence that the milestone completion criteria were met and a summary of the milestone Program Review.

More Details

The integration of process monitoring for safeguards

Cipiti, Benjamin B.; Zinaman, Owen R.

The Separations and Safeguards Performance Model is a reprocessing plant model that has been developed for safeguards analyses of future plant designs. The model has been modified to integrate bulk process monitoring data with traditional plutonium inventory balances to evaluate potential advanced safeguards systems. Taking advantage of the wealth of operator data such as flow rates and mass balances of bulk material, the timeliness of detection of material loss was shown to improve considerably. Four diversion cases were tested including both abrupt and protracted diversions at early and late times in the run. The first three cases indicated alarms before half of a significant quantity of material was removed. The buildup of error over time prevented detection in the case of a protracted diversion late in the run. Some issues related to the alarm conditions and bias correction will need to be addressed in future work. This work both demonstrates the use of the model for performing diversion scenario analyses and for testing advanced safeguards system designs.

More Details

Energy balance in peridynamics

Silling, Stewart; Lehoucq, Rich

The peridynamic model of solid mechanics treats internal forces within a continuum through interactions across finite distances. These forces are determined through a constitutive model that, in the case of an elastic material, permits the strain energy density at a point to depend on the collective deformation of all the material within some finite distance of it. The forces between points are evaluated from the Fréchet derivative of this strain energy density with respect to the deformation map. The resulting equation of motion is an integro-differential equation written in terms of these interparticle forces, rather than the traditional stress tensor field. Recent work on peridynamics has elucidated the energy balance in the presence of these long-range forces. We have derived the appropriate analogue of stress power, called absorbed power, that leads to a satisfactory definition of internal energy. This internal energy is additive, allowing us to meaningfully define an internal energy density field in the body. An expression for the local first law of thermodynamics within peridynamics combines this mechanical component, the absorbed power, with heat transport. The global statement of the energy balance over a subregion can be expressed in a form in which the mechanical and thermal terms contain only interactions between the interior of the subregion and the exterior, in a form anticipated by Noll in 1955. The local form of this first law within peridynamics, coupled with the second law as expressed in the Clausius-Duhem inequality, is amenable to the Coleman-Noll procedure for deriving restrictions on the constitutive model for thermomechanical response. Using an idea suggested by Fried in the context of systems of discrete particles, this procedure leads to a dissipation inequality for peridynamics that has a surprising form. It also leads to a thermodynamically consistent way to treat damage within the theory, shedding light on how damage, including the nucleation and advance of cracks, should be incorporated into a constitutive model.
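
For reference, the integro-differential equation of motion alluded to above is commonly written, in its bond-based form, as

    \[ \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) \;=\; \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\bigr)\, dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t), \]

where H_x is the finite-radius neighborhood (horizon) of the point x, f is the pairwise force density supplied by the constitutive model, and b is the body force density; the state-based formulation used in the energy-balance work replaces f with force states, but the structure of the equation is the same.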

More Details

Oxy-combustion of pulverized coal : modeling of char-combustion kinetics

Geier, Manfred; Shaddix, Christopher R.

In this study, char combustion of pulverized coal under oxy-fuel combustion conditions was investigated on the basis of experimentally observed temperature-size characteristics and corresponding predictions of numerical simulations. Using a combustion-driven entrained flow reactor equipped with an optical particle-sizing pyrometer, combustion characteristics (particle temperatures and apparent size) of pulverized coal char particles were determined for combustion in both reduced-oxygen and oxygen-enriched atmospheres with either a N{sub 2} or CO{sub 2} bath gas. The two coals investigated were a low-sulfur, high-volatile bituminous coal (Utah Skyline) and a low-sulfur subbituminous coal (North Antelope), both size-classified to 75-106 {micro}m. A particular focus of this study lies in the analysis of the predictive capabilities of simplified models that capture char combustion characteristics but exhibit the lowest possible complexity and thus facilitate incorporation in existing computational fluid dynamics (CFD) simulation codes. For this purpose, char consumption characteristics were calculated for char particles in the size range 10-200 {micro}m using (1) single-film, apparent kinetic models with a chemically 'frozen' boundary layer, and (2) a reacting porous particle model with detailed gas-phase kinetics and three separate heterogeneous reaction mechanisms of char oxidation and gasification. A comparison of model results with experimental data suggests that single-film models with reaction orders between 0.5 and 1 with respect to the surface oxygen partial pressure may be capable of adequately predicting the temperature-size characteristics of char consumption, provided heterogeneous (steam and CO{sub 2}) gasification reactions are accounted for.
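
A schematic statement of the single-film, apparent-kinetics picture referred to above (written here for oxidation only; the gasification reactions enter the same way) equates nth-order surface consumption with boundary-layer diffusion of oxygen:

    q = k_s\,p_{\mathrm{O_2},s}^{\,n}
      = k_d\left(p_{\mathrm{O_2},\infty} - p_{\mathrm{O_2},s}\right),
    \qquad k_s = A\,e^{-E_a/RT_p}

where q is the carbon consumption flux per unit external surface area, p_{O_2,s} and p_{O_2,\infty} are the oxygen partial pressures at the particle surface and in the bulk gas, k_d is the film mass-transfer coefficient, and n is the apparent reaction order (0.5 to 1 per the results above). Solving the two equalities for p_{O_2,s}, together with a particle energy balance, yields the temperature-size characteristics compared against the pyrometer data.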

More Details

Risk-informed separation distances for hydrogen gas storage facilities

Keller, Jay O.; Ruggles, Adam J.; Dedrick, Daniel E.; Moen, Christopher D.; Evans, Gregory H.; Lachance, Jeffrey L.; Winters, William S.; Houf, William G.; Zhang, Jiayao

The use of risk information in establishing code and standard requirements enables (1) an adequate and appropriate level of safety and (2) deployment of hydrogen facilities that are as safe as gasoline facilities. This effort provides a template for clear and defensible regulations, codes, and standards that can enable international market transformation.

More Details

A threat-based definition of IA and IA-enabled products

Shakamuri, Mayuri

This paper proposes a definition of 'IA and IA-enabled products' based on threat, as opposed to 'security services' (i.e., 'confidentiality, authentication, integrity, access control or non-repudiation of data'), as provided by Department of Defense (DoD) Instruction 8500.2, 'Information Assurance (IA) Implementation.' The DoDI 8500.2 definition is too broad, making it difficult to distinguish products that need higher protection from those that do not. As a consequence the products that need higher protection do not receive it, increasing risk. The threat-based definition proposed in this paper solves those problems by focusing attention on threats, thereby moving beyond compliance to risk management. (DoDI 8500.2 provides the definitions and controls that form the basis for IA across the DoD.) Familiarity with 8500.2 is assumed.

More Details

Challenges in simulation automation and archival

Blacker, Teddy D.

The challenges of simulation streamlining and automation continue. The needs for analysis verification, reviews, quality assurance, pedigree, and archiving are strong. These automation and archival needs can alternate between competing and complementing when determining how to improve the analysis environment and process. The needs compete for priority, resource allocation, and business practice importance. Likewise, implementation strategies for both automation and archival can range from rather local work groups to more global corporate initiatives. Questions abound about needed connectivity (and the extent of this connectivity) to various CAD systems, product data management (PDM) systems, test data repositories, and various information management implementations. This is a complex set of constraints. This presentation will bring focus to this complex environment through sharing experiences. The experiences are those gleaned over years of effort at Sandia to make reasonable sense out of the decisions to be made. It will include a discussion of the integration and development of home-grown tools for both automation and archival. It will also include an overview of efforts to understand local requirements, comparisons of in-house tools and commercial offerings against those requirements, and options for future progress. Hopefully, sharing this rich set of experiences may prove useful to others struggling to make progress in their own environments.

More Details

Kernel-based Linux emulation for Plan 9

Minnich, Ronald G.

CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

More Details

Determining the Bayesian optimal sampling strategy in a hierarchical system

Boggs, Paul T.; Pebay, Philippe P.; Ringland, James T.

Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
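
The following sketch illustrates the preposterior idea behind such a calculation for a toy two-component series system: for each candidate allocation of a fixed test budget, simulate hypothetical test outcomes from the priors, compute the resulting posterior variance of the system reliability, and pick the allocation that minimizes its expectation. The priors, budget, and Monte Carlo settings are placeholders; the paper's decision-rule formulation is more general.

    # Hedged sketch of a Bayes-risk (preposterior) comparison of sampling
    # strategies for a two-component series system with Beta priors.
    import numpy as np

    rng = np.random.default_rng(1)
    priors = [(8.0, 2.0), (4.0, 1.0)]     # Beta(a, b) priors (hypothetical)
    budget = 10                           # total tests available
    n_outer, n_inner = 500, 2000          # Monte Carlo sample sizes

    def expected_posterior_var(n1, n2):
        total = 0.0
        for _ in range(n_outer):
            post_draws = []
            for (a, b), n in zip(priors, (n1, n2)):
                p = rng.beta(a, b)                   # draw a "true" reliability
                k = rng.binomial(n, p)               # simulated test outcome
                post_draws.append(rng.beta(a + k, b + n - k, n_inner))
            system = post_draws[0] * post_draws[1]   # series-system reliability
            total += system.var()                    # posterior variance, this outcome
        return total / n_outer

    risks = {(n1, budget - n1): expected_posterior_var(n1, budget - n1)
             for n1 in range(budget + 1)}
    best = min(risks, key=risks.get)
    print("best allocation (component 1, component 2):", best)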

More Details

The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz

Forman, Michael F.; Young, Derek Y.

Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics, to dynamically change transmission and reception parameters as well as estimate private key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
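
A minimal sketch of the underlying key-generation step, assuming reciprocal channel estimates corrupted by independent measurement noise, is given below: each side quantizes its impulse-response samples against a threshold, and samples inside a guard band are discarded to reduce bit disagreements. The array sizes, noise level, and guard-band width are illustrative, and the index reconciliation that a real system needs over a public channel is omitted.

    # Sketch of reciprocity-based key generation from channel estimates.
    import numpy as np

    rng = np.random.default_rng(2)
    h_true = rng.normal(size=256)                     # shared (reciprocal) channel
    h_alice = h_true + 0.05 * rng.normal(size=256)    # noisy estimate at end A
    h_bob = h_true + 0.05 * rng.normal(size=256)      # noisy estimate at end B

    def quantize(h, guard=0.2):
        """Return (bits, mask); mask marks samples outside the guard band."""
        t = np.median(h)
        mask = np.abs(h - t) > guard * np.std(h)
        return (h > t).astype(int), mask

    bits_a, mask_a = quantize(h_alice)
    bits_b, mask_b = quantize(h_bob)
    keep = mask_a & mask_b                            # indices both sides retain
    agreement = np.mean(bits_a[keep] == bits_b[keep])
    print(f"key length: {keep.sum()} bits, agreement: {agreement:.3f}")

In practice the retained indices must be agreed upon publicly, and residual bit disagreements are removed with information reconciliation and privacy amplification before the bits are used as a key.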

More Details

Meandered-line antenna with integrated high-impedance surface

Forman, Michael F.

A reduced-volume antenna composed of a meandered-line dipole antenna over a finite-width, high-impedance surface is presented. The structure is novel in that the high-impedance surface is implemented with four Sievenpiper via-mushroom unit cells, whose area is optimized to match the meandered-line dipole antenna. The result is an antenna similar in performance to a patch antenna but one-fourth the area, and one that can be deployed directly on the surface of a conductor. Simulations demonstrate a 3.5 cm ({lambda}/4) square antenna with a bandwidth of 4% and a gain of 4.8 dBi at 2.5 GHz.

More Details

Data intensive computing at Sandia

Wilson, Andrew T.

Data-intensive computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, and in which hardware requirements are dictated as much by data size as by desired run times; the typical goal is to distill compact results from massive data.

More Details

Verifiable process monitoring through enhanced data authentication

Ross, Troy R.; Schoeneman, Barry D.; Baldwin, George T.

To ensure the peaceful intent for production and processing of nuclear fuel, verifiable process monitoring of the fuel production cycle is required. As part of a U.S. Department of Energy (DOE)-EURATOM collaboration in the field of international nuclear safeguards, the DOE Sandia National Laboratories (SNL), the European Commission Joint Research Centre (JRC) and Directorate General-Energy (DG-ENER) developed and demonstrated a new concept in process monitoring, enabling the use of operator process information by branching a second, authenticated data stream to the Safeguards inspectorate. This information would be complementary to independent safeguards data, improving the understanding of the plant's operation. The concept is called the Enhanced Data Authentication System (EDAS). EDAS transparently captures, authenticates, and encrypts communication data that is transmitted between operator control computers and connected analytical equipment utilized in nuclear process controls. The intent is to capture information as close to the sensor point as possible to assure the highest possible confidence in the branched data. Data must be collected transparently by the EDAS: operator processes should not be altered or disrupted by the insertion of the EDAS as a monitoring system for safeguards. EDAS employs public-key authentication, providing 'jointly verifiable' data, and private-key encryption for confidentiality. Timestamps and the data source are also added to the collected data for analysis. The core of the system hardware is in a security enclosure with both active and passive tamper indication. Further, the system has the ability to monitor seals or other security devices in close proximity. This paper will discuss the EDAS concept, recent technical developments, the intended application philosophy, and the planned future progression of this system.
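
The following is a conceptual sketch only, not the EDAS design: it tags a captured instrument record with a timestamp and source identifier, signs it for authentication, and encrypts it for confidentiality. It assumes the third-party Python 'cryptography' package; key distribution, tamper monitoring, and the actual EDAS data formats are outside its scope.

    # Conceptual sign-then-encrypt of a branched data record (illustrative only).
    import json, time
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()      # authentication (private) key
    fernet = Fernet(Fernet.generate_key())          # confidentiality key

    def branch_record(raw_bytes, source_id):
        record = {
            "source": source_id,                    # data source tag
            "timestamp": time.time(),               # collection time
            "payload": raw_bytes.hex(),             # captured instrument data
        }
        message = json.dumps(record, sort_keys=True).encode()
        signature = signing_key.sign(message)       # detached signature
        return fernet.encrypt(message + b"||" + signature)

    token = branch_record(b"\x01\x02analyzer frame", source_id="NDA-01")
    print(len(token), "bytes queued for the safeguards data stream")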

More Details

Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

Pinkalla, Mark P.

The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the facility location where surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made this system unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet Safeguards requirements, i.e., tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours with picture intervals as short as 1 second. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional prototype system remains at SNL for software and hardware testing. This paper will describe the capabilities of the new surveillance system, its application and requirements, and the design approach.

More Details

Xyce parallel electronic simulator design

Keiter, Eric R.; Russo, Thomas V.; Schiek, Richard; Thornquist, Heidi K.; Mei, Ting

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed-memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to diversity of background, it is to be expected on long-term projects that there will be a certain amount of staff turnover as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document, in one place, a number of the software quality practices followed by the Xyce team. Also, it is hoped that this document will be a good source of information for new developers.

More Details

LDRD 149045 final report distinguishing documents

Mitchell, Scott A.

This LDRD 149045 final report describes work that Sandians Scott A. Mitchell, Randall Laviolette, Shawn Martin, Warren Davis, Cindy Philips, and Danny Dunlavy performed in 2010. Prof. Afra Zomorodian provided insight. This was a small late-start LDRD. Several other ongoing efforts were leveraged, including the Networks Grand Challenge LDRD and the Computational Topology CSRF project, and some of the leveraged work is described here. We proposed a sentence-mining technique that exploited both the distribution and the order of parts-of-speech (POS) in sentences in English-language documents. The ultimate goal was to be able to discover 'call-to-action' framing documents hidden within a corpus of mostly expository documents, even if the documents were all on the same topic and used the same vocabulary. Using POS was novel. We also took a novel approach to analyzing POS. We used the hypothesis that English follows a dynamical system and that the POS are trajectories from one state to another. We analyzed the sequences of POS using support vector machines and the cycles of POS using computational homology. We discovered that the POS were a very weak signal and did not support our hypothesis well. Our original goal appeared to be unobtainable with our original approach. We turned our attention to studying an aspect of a more traditional approach to distinguishing documents. Latent Dirichlet Allocation (LDA) turns documents into bags-of-words and then into mixture-model points. A distance function is used to cluster groups of points to discover relatedness between documents. We performed a geometric and algebraic analysis of the most popular distance functions and made some significant and surprising discoveries, described in a separate technical report.
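
A hedged illustration of the part-of-speech-sequence idea (not the report's actual pipeline, tagger, or corpus): each document is reduced to a string of POS tags, n-gram counts are extracted, and a linear support vector machine separates the classes. The tag strings and labels below are placeholders.

    # Classify documents from POS-tag sequences with n-grams and a linear SVM.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = [
        "DT NN VBZ IN DT NN .",    # expository (placeholder tag sequences)
        "DT NNS VBD DT NN .",      # expository
        "VB DT NN IN NN .",        # call-to-action: imperative verb first
        "VB PRP$ NNS TO VB .",     # call-to-action
    ]
    labels = ["expository", "expository", "action", "action"]

    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+"),  # POS n-grams
        LinearSVC(),
    )
    clf.fit(docs, labels)
    print(clf.predict(["VB DT NN IN DT NN ."]))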

More Details

Automatic recognition of malicious intent indicators

Koch, Mark W.; Nguyen, Hung D.; Giron, Casey; Yee, Mark L.; Drescher, Steven M.

A major goal of next-generation physical protection systems is to extend defenses far beyond the usual outer-perimeter-fence boundaries surrounding protected facilities. Mitigation of nuisance alarms is among the highest priorities. A solution to this problem is to create a robust capability to Automatically Recognize Malicious Indicators of intruders. In extended defense applications, it is not enough to distinguish humans from all other potential alarm sources as human activity can be a common occurrence outside perimeter boundaries. Our approach is unique in that it employs a stimulus to determine a malicious intent indicator for the intruder. The intruder's response to the stimulus can be used in an automatic reasoning system to decide the intruder's intent.

More Details

Grid-tied PV battery systems

Hund, Thomas D.; Gonzalez, Sigifredo

Grid-tied PV energy smoothing was implemented by using a valve-regulated lead-acid (VRLA) battery as a temporary energy storage device, charging and discharging as required to smooth the inverter energy output from the PV array. Inverter output was controlled by the average solar irradiance over the previous 1 h interval. On a clear day the solar irradiance power curve is offset by about 1 h, while on a variably cloudy day the inverter output power curve is smoothed based on the average solar irradiance. Test results demonstrate that this smoothing algorithm works very well. Battery state of charge was more difficult to manage because of the variable system inefficiencies. Testing continued for 30 days and established consistent operational performance for extended periods of time under a wide variety of resource conditions. Both battery technologies, from Exide (Absolyte) and East Penn (ALABC Advanced), proved to cycle well at a partial state of charge over the time interval tested.
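
A minimal sketch of the smoothing rule described above: the inverter setpoint follows the trailing one-hour average of irradiance, and the battery absorbs the difference between the instantaneous PV output and that setpoint. The irradiance profile, cloud model, and units are illustrative, and charge limits, losses, and state-of-charge management are ignored.

    # Trailing 1-hour average smoothing of PV output with battery make-up power.
    import numpy as np

    dt_s = 60                                   # 1-minute samples
    window = 3600 // dt_s                       # 1-hour trailing window
    t = np.arange(0, 8 * 3600, dt_s)
    irradiance = np.clip(1000 * np.sin(np.pi * t / t[-1]), 0, None)
    rng = np.random.default_rng(3)
    irradiance *= 1 - 0.4 * (rng.random(t.size) > 0.7)   # crude passing clouds

    # Inverter setpoint = mean irradiance over the previous hour.
    setpoint = np.array([irradiance[max(0, i - window + 1): i + 1].mean()
                         for i in range(t.size)])
    battery_power = irradiance - setpoint       # positive = charge, negative = discharge

    print("max swing absorbed by battery:",
          round(float(np.abs(battery_power).max()), 1), "W/m^2 equivalent")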

More Details

The development and application of the Remotely Monitored Sealing Array (RMSA)

Schoeneman, Barry D.

Advanced sealing technologies are often an integral part of a containment surveillance (CS) approach to detect undeclared diversion of nuclear materials. As adversarial capabilities continue to advance, the sophistication of the seal design must advance as well. The intelligent integration of security concepts into a physical technology used to seal monitored items is a fundamental requirement for secure containment. Seals have a broad range of capabilities. These capabilities must be matched appropriately to the application to establish the greatest effectiveness from the seal. However, many current seal designs and their applications fail to provide the high confidence of detection and timely notification that newer technology can provide. Additionally, as monitoring needs rapidly expand, outpacing budgets, remote monitoring of low-cost autonomous sealing technologies becomes increasingly appealing. The Remotely Monitored Sealing Array (RMSA) utilizes this technology and implements cost-effective security concepts, establishing the high confidence that is expected of active sealing technology today. RMSA is a system of relatively low-cost but secure active loop seals for the monitoring of nuclear material containers. The sealing mechanism is a fiber optic loop that is pulsed using a low-power LED circuit with a coded signal to verify integrity. Battery life is conserved by the use of sophisticated power management techniques, permitting many years of reliable operation without battery replacement or other maintenance. Individual seals communicate by radio using a secure transmission protocol over either of two specially designated communication frequency bands. Signals are encrypted and authenticated by a private key, established during the installation procedure, and the seal bodies feature both active and passive tamper indication. Seals broadcast to a central 'translator' from which information is stored locally and/or transmitted remotely for review. The system is especially appropriate for nuclear material storage facilities, indoor or outdoor, enabling remote inspection of status rather than tedious individual seal verification, and without the need for interconnected cabling. A handheld seal verifier is also available for an inspector to verify any particular individual seal in close proximity. This paper will discuss the development of the RMSA sealing system, its capabilities, its application philosophy, and projected future trends.

More Details

Entrepreneurial separation to transfer technology

Fairbanks, Richard R.

Under the Entrepreneurial Separation to Transfer Technology (ESTT) program, entrepreneurs terminate their employment with Sandia. The term of the separation is two years, with the option to request a third year. Entrepreneurs are guaranteed reinstatement by Sandia if they return before the ESTT term expires. Participants may start up or help expand technology businesses.

More Details

Predicting fracture in micron-scale polycrystalline silicon MEMS structures

Boyce, Brad L.; Foulk, James W.; Field, Richard V.; Ohlhausen, J.A.

Designing reliable MEMS structures presents numerous challenges. Polycrystalline silicon fractures in a brittle manner with considerable variability in measured strength. Furthermore, it is not clear how to use a measured tensile strength distribution to predict the strength of a complex MEMS structure. To address such issues, two recently developed high throughput MEMS tensile test techniques have been used to measure strength distribution tails. The measured tensile strength distributions enable the definition of a threshold strength as well as an inferred maximum flaw size. The nature of strength-controlling flaws has been identified and sources of the observed variation in strength investigated. A double edge-notched specimen geometry was also tested to study the effect of a severe, micron-scale stress concentration on the measured strength distribution. Strength-based, Weibull-based, and fracture mechanics-based failure analyses were performed and compared with the experimental results.
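
One common three-parameter Weibull form consistent with the threshold-strength language above (not necessarily the exact formulation used in the analyses) is:

    P_f(\sigma) = 1 - \exp\!\left[-\frac{S}{S_0}
        \left(\frac{\sigma - \sigma_u}{\sigma_0}\right)^{m}\right],
    \qquad \sigma \ge \sigma_u

where \sigma_u is the threshold strength below which failure is not observed, \sigma_0 and m are the scale and shape (Weibull modulus) parameters, and S/S_0 scales the effective stressed area or volume from the tensile specimens to a more complex structure such as the double edge-notched geometry.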

More Details

Micro-optics for imaging

Boye, Robert

This project investigates the fundamental imaging capability of an optic with a physical thickness substantially less than 1 mm. The analysis assumes that post-processing can overcome certain restrictions such as detector pixel size and image degradation due to aberrations. A first order optical analysis quickly reveals the limitations of even an ideal thin lens to provide sufficient image resolution and provides the justification for pursuing an annular design. Some straightforward examples clearly show the potential of this approach. The tradeoffs associated with annular designs, specifically field of view limitations and reduced mid-level spatial frequencies, are discussed and their impact on the imaging performance evaluated using several imaging examples. Additionally, issues such as detector acceptance angle and the need to balance aberrations with resolution are included in the analysis. With these restrictions, the final results present an excellent approximation of the expected performance of the lens designs presented.

More Details

FISH 'N' Chips : a single cell genomic analyzer for the human microbiome

Meagher, Robert M.; Patel, Kamlesh; Light, Yooli K.; Liu, Peng L.; Singh, Anup K.

Uncultivable microorganisms likely play significant roles in the ecology within the human body, with subtle but important implications for human health. Focusing on the oral microbiome, we are developing a processor for targeted isolation of individual microbial cells, facilitating whole-genome analysis without the need for isolation of pure cultures. The processor consists of three microfluidic modules: identification based on 16S rRNA fluorescence in situ hybridization (FISH), fluorescence-based sorting, and encapsulation of individual selected cells into small droplets for whole genome amplification. We present here a technique for performing microscale FISH and flow cytometry, as a prelude to single cell sorting.

More Details

Influence of orientation on the size effect in BCC pillars with different critical temperatures

Proposed for publication in Materials Science and Engineering A.

Clark, Blythe C.

The size effect in body-centered cubic metals is comprehensively investigated through micro/nano-compression tests performed on focused ion beam machined tungsten (W), molybdenum (Mo) and niobium (Nb) pillars, with single slip [2 3 5] and multiple slip [0 0 1] orientations. The results demonstrate that the stress-strain response is unaffected by the number of activated slip systems, indicating that dislocation-dislocation interaction is not a dominant mechanism for the observed diameter dependent yield strength and strain hardening. Furthermore, the limited mobility of screw dislocations, which is different for each material at ambient temperature, acts as an additional strengthening mechanism leading to a material dependent size effect. Nominal values and diameter dependence of the flow stress significantly deviate from studies on face-centered cubic metals. This is demonstrated by the correlation of size dependence with the material specific critical temperature. Activation volumes were found to decrease with decreasing pillar diameter further indicating that the influence of the screw dislocations decreases with smaller pillar diameter.

More Details

Assessment of methodologies for analysis of the dungeness B accidental aircraft crash risk

Hansen, Clifford; Lachance, Jeffrey L.

The Health and Safety Executive (HSE) has requested that Sandia National Laboratories (SNL) review the aircraft crash methodology for nuclear facilities that is being used in the United Kingdom (UK). The scope of the work included a review of one method utilized in the UK for assessing the potential for accidental airplane crashes into nuclear facilities (Task 1) and a comparison of the UK methodology against similar International Atomic Energy Agency (IAEA), United States (US) Department of Energy (DOE), and US Nuclear Regulatory Commission (NRC) methods (Task 2). Based on the conclusions from Tasks 1 and 2, an additional Task 3 would provide an assessment of a site-specific crash frequency for the Dungeness B facility using one of the other methodologies. This report documents the results of Task 2. The comparison of the different methods was performed for the three primary contributors to aircraft crash risk at the Dungeness B site: airfield-related crashes, crashes below airways, and background crashes. The methods and data specified in each methodology were compared for each of these risk contributors, differences in the methodologies were identified, and the importance of these differences was qualitatively and quantitatively assessed. The bases for each of the methods and the data used were considered in this assessment process. A comparison of the treatment of the consequences of the aircraft crashes was not included in this assessment because the frequency of crashes into critical structures is currently low based on the existing Dungeness B assessment. Although the comparison found substantial differences between the UK methodology and the three alternative methodologies (IAEA, NRC, and DOE), this assessment concludes that use of any of these alternative methodologies would not change the conclusions reached for the Dungeness B site. Performance of Task 3 is thus not recommended.
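
The methodologies compared here all estimate crash frequency from a product of the same basic factors; a schematic form (terminology and categorization vary between the UK, IAEA, DOE, and NRC documents) is:

    F = \sum_{i,j} N_{ij}\, P_{ij}\, f_{ij}(x,y)\, A_j

where N_{ij} is the annual number of operations of aircraft category i for crash source j (airfield operations, airway traffic, or background overflights), P_{ij} is the crash rate per operation, f_{ij}(x,y) is the conditional probability per unit area that a crash impacts the site location, and A_j is the effective target area of the facility. Differences between the methodologies largely reduce to how these factors are categorized and evaluated.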

More Details

Use of metal organic fluors for spectral discrimination of neutrons and gammas

Allendorf, Mark; Feng, Patrick L.

A new method for spectral shape discrimination (SSD) of fast neutrons and gamma rays has been investigated. Gammas interfere with neutron detection, making efficient discrimination necessary for practical applications. Pulse shape discrimination (PSD) in liquid organic scintillators is currently the most effective means of gamma rejection. The hazardous liquids, restrictions on volume, and the need for fast timing are drawbacks to traditional PSD scintillators. In this project we investigated harvesting excited triplet states to increase scintillation yield and provide distinct spectral signatures for gammas and neutrons. Our novel approach relies on metal-organic phosphors to convert a portion of the energy normally lost to the scintillation process into useful luminescence with sub-microsecond lifetimes. The approach enables independent control over delayed luminescence wavelength, intensity, and timing for the first time. We demonstrated that organic scintillators, including plastics, nanoporous framework materials, and oil-based liquids can be engineered for both PSD and SSD.

More Details

Thermodynamic and kinetic characterization of H-D exchange in Pd and Pd alloys

Luo, Weifang

A Sieverts apparatus coupled with an RGA is an effective method to detect composition variations during isotopic exchange. This experimental setup provides a powerful tool for the thermodynamic and kinetic characterization of H-D isotope exchange on metals and alloys. H-D exchange behavior during absorption and desorption in the plateau region of Pd has been investigated and is reported here. It was found that in the plateau region of the H-D-Pd system the equilibrium pressures lie between those of the H2-Pd and D2-Pd systems for both absorption and desorption, and the equilibrium pressures are higher when the fraction of D in the Pd is higher. Adding a dose of H2 (or D2) gas to a Pd-D (or Pd-H) system results in the release of D2 and HD (or H2 and HD) gas in the {beta}-phase of Pd-D (or the {beta}-phase of Pd-H), but this does not happen in the plateau region. The equilibrium constants determined during exchange were found to agree well with the calculated values reported in the literature. The separation factor {alpha} values during exchange have been measured and compared with literature values. The exchange rates have been determined from the exchange profiles, and a first-order kinetic model for exchange in H-D-Pd systems has been employed for the analysis. The exchange activation energies for both directions, H2+PdD and D2+PdH, have been determined.
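
Schematically, two of the quantities tracked above are the gas-phase exchange equilibrium constant and a first-order approach to isotopic equilibrium (written here in generic form; the report's own definitions may differ in detail):

    K = \frac{p_{\mathrm{HD}}^{2}}{p_{\mathrm{H_2}}\,p_{\mathrm{D_2}}},
    \qquad
    \frac{dx}{dt} = k\,(x_{\mathrm{eq}} - x), \quad k = A\,e^{-E_a/RT}

where x is the extent of exchange, K for H2 + D2 = 2HD lies below its high-temperature statistical limit of 4 at ambient conditions, and the activation energies E_a for the H2+PdD and D2+PdH directions follow from the temperature dependence of the rate constant k.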

More Details

Computational and experimental platform for understanding and optimizing water flux and salt rejection in nanoporous membranes

Rogers, David M.; Leung, Kevin; Brinker, C.J.; Singh, Seema S.; Merson, John A.

Affordable clean water is both a global and a national security issue, as lack of it can cause death, disease, and international tension. Furthermore, efficient water filtration reduces the demand for energy, another national issue. The best current solution to clean water lies in reverse osmosis (RO) membranes that remove salts from water with applied pressure, but widely used polymeric membrane technology is energy-intensive and produces water depleted in useful electrolytes. Furthermore, incremental improvements, based on engineering solutions rather than new materials, have yielded only modest gains in performance over the last 25 years. We have pursued a creative and innovative new approach to membrane design and development for cheap desalination membranes by approaching the problem at the molecular level of pore design. Our inspiration comes from natural biological channels, which permit faster water transport than current reverse osmosis membranes and selectively pass healthy ions. Aiming for an order-of-magnitude improvement over mature polymer technology carries significant inherent risks. The success of our fundamental research effort lies in our exploiting, extending, and integrating recent advances by our team in theory, modeling, nano-fabrication, and platform development. A combined theoretical and experimental platform has been developed to understand the interplay between water flux and ion rejection in precisely defined nano-channels. Our innovative functionalization of solid-state nanoporous membranes with organic protein-mimetic polymers achieves a 3-fold improvement in water flux over commercial RO membranes and has yielded a pending patent and industrial interest. Our success has generated useful contributions to energy storage, nanoscience, and membrane technology research and development important for national health and prosperity.

More Details
Results 71801–72000 of 99,299