As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting in which players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior. Results from our work indicate that virtual worlds have the potential to serve as a proxy in allocating and populating behaviors that would be used within further agent-based modeling studies.
Lignin is often overlooked in the valorization of lignocellulosic biomass, but lignin-based materials and chemicals represent potential value-added products for biorefineries that could significantly improve biorefinery economics. Fluctuating crude oil prices and changing fuel specifications are among the driving factors to develop new technologies for converting polymeric lignin into low molecular weight lignin and/or monomeric aromatic feedstocks, to assist in displacing the current products associated with the conversion of a whole barrel of oil. This project aimed to understand microbial and enzymatic lignolysis processes that break down lignin for conversion into commercially viable drop-in fuels and renewable platform chemicals. We developed novel lignin analytics to interrogate enzymatic and microbial lignolysis of native polymeric lignin and established a detailed understanding of lignolysis as a function of fungal enzymes, microbes, and endophytes. A bioinformatics pipeline was developed for metatranscriptomic analysis of an aridland ecosystem to investigate the potential discovery of new lignolysis genes and gene products.
Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.
We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no one has performed an extensive review of INDS problems in terms of their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We classify all considered INDS problems as NP-Hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina; lower Manhattan, New York; and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule for arriving at near-optimal solutions during real-time decision making activities. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machine(s) that can perform work to move the release date earlier in time. An online optimization setting is explored where the release date of a component is not known.
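As a rough illustration of the kind of dispatching rule described above, the sketch below greedily assigns, whenever a machine frees up, the unscheduled arc that most improves a simple network performance measure (here, maximum flow). The heuristic in the report accounts for arc interactions more carefully; this greedy rule, the networkx-based evaluation, and all names are illustrative assumptions.

```python
# Simplified greedy dispatching rule for an INDS-style problem (illustrative
# sketch, not the heuristic from the report).
import networkx as nx

def greedy_dispatch(operating, candidate_arcs, proc_time, n_machines, source, sink):
    """operating: nx.DiGraph of arcs already in service (edges carry 'capacity').
    candidate_arcs: list of (u, v, capacity) tuples awaiting installation.
    proc_time: dict mapping each candidate arc to its processing time."""
    operating = operating.copy()
    remaining = set(candidate_arcs)
    machine_free = [0.0] * n_machines      # next free time of each machine
    schedule = []                          # (start, finish, arc)

    def flow_with(arc):
        u, v, cap = arc
        trial = operating.copy()
        trial.add_edge(u, v, capacity=cap)
        return nx.maximum_flow_value(trial, source, sink)

    while remaining:
        i = min(range(n_machines), key=lambda k: machine_free[k])
        start = machine_free[i]
        best = max(remaining, key=flow_with)   # arc with best marginal gain
        remaining.remove(best)
        finish = start + proc_time[best]
        machine_free[i] = finish
        u, v, cap = best
        operating.add_edge(u, v, capacity=cap)
        schedule.append((start, finish, best))
    return schedule
```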
Despite rapid progress, solar thermochemistry remains high risk; improvements in both active materials and reactor systems are needed. This claim is supported by studies conducted both prior to and as part of this project. Materials offer a particularly large opportunity space because, until recently, very little effort beyond basic thermodynamic analysis had been directed toward understanding this most fundamental component of a metal oxide thermochemical cycle. Without this knowledge, system design was hampered, but more importantly, advances in these crucial materials were rare and resulted more from intuition than from detailed insight. As a result, only two basic families of potentially viable solid materials have been widely considered, each of which has significant challenges. Recent efforts toward applying an increased level of scientific rigor to the study of thermochemical materials have provided a much needed framework and insights toward developing the next generation of highly improved thermochemically active materials. The primary goal of this project was to apply this hard-won knowledge to rapidly advance the field of thermochemistry and produce, within 2 years, a material capable of yielding CO from CO2 at a 12.5% reactor efficiency. Three principal approaches spanning a range of risk and potential reward were pursued: modification of known materials, structuring known materials, and identifying/developing new materials for the application. A newly developed best-of-class material produces more fuel (9x more H2, 6x more CO) under milder conditions than the previous state of the art. Analyses of thermochemical reactor and system efficiencies and economics were performed, and a new hybrid concept was reported. The larger case for solar fuels was also further refined and documented.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: (1) the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows new types of analysis to be developed without requiring the implementation of analysis-specific device models; (3) device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1].
This project focused on developing micro-scale counter-flow heat exchangers (CFHXs) for Joule-Thomson cooling with the potential for both chip- and wafer-scale integration. This project is differentiated from previous work by its focus on planar, thin-film micromachining instead of bulk materials. A process will be developed for fabricating these devices, allowing for highly integrated micro heat exchangers. The use of thin-film dielectrics provides thermal isolation, increasing the efficiency of the coolers compared to designs based on bulk materials, and will allow for wafer-scale fabrication and integration. The process is intended to implement a CFHX as part of a Joule-Thomson cooling system for applications with heat loads of less than 1 mW. This report presents simulation results and an investigation of a fabrication process for such devices.
After years in the field, many materials suffer degradation, off-gassing, and chemical changes that cause a build-up of measurable chemical atmospheres. Stand-alone embedded chemical sensors are typically limited in specificity, require electrical lines, and/or suffer calibration drift that makes data reliability questionable. Along with size, these "Achilles' heels" have prevented the incorporation of gas sensing into sealed, hazardous locations that would highly benefit from in-situ analysis. We report on the development of an all-optical, mid-IR, fiber-optic based MEMS Photoacoustic Spectroscopy solution to address these limitations. Concurrent modeling and computational simulation are used to guide hardware design and implementation.
In this study, we use PFLOTRAN, a highly scalable, parallel flow and reactive transport code, to simulate the concentrations of 3H, 3He, CFC-11, CFC-12, CFC-113, SF6, 39Ar, 81Kr, and 4He, and the mean groundwater age, in heterogeneous fields on grids with in excess of 10 million nodes. We utilize this computational platform to simulate the concentration of multiple tracers in high-resolution, heterogeneous 2-D and 3-D domains and to calculate tracer-derived ages. Tracer-derived ages show systematic biases toward younger ages when the groundwater age distribution contains water older than the maximum tracer age. The deviation of the tracer-derived age distribution from the true groundwater age distribution increases with increasing heterogeneity of the system. However, the effect of heterogeneity is diminished as the mean travel time gets closer to the tracer age limit. Age distributions in 3-D domains differ significantly from those in 2-D domains: 3-D simulations show a decreased mean age and less variance in the age distribution for identical heterogeneity statistics. High-performance computing allows for the investigation of tracer and groundwater age systematics in high-resolution domains, providing a platform for understanding and utilizing environmental tracer and groundwater age information in heterogeneous 3-D systems. Groundwater environmental tracers can provide important constraints for the calibration of groundwater flow models. Direct simulation of environmental tracer concentrations in models has the additional advantage of avoiding assumptions associated with using calculated groundwater age values. This study quantifies the model uncertainty reduction resulting from the addition of environmental tracer concentration data. The analysis uses a synthetic heterogeneous aquifer and the calibration of a flow and transport model using the pilot point method. Results indicate a significant reduction in the uncertainty in permeability with the addition of environmental tracer data, relative to the use of hydraulic measurements alone. Anthropogenic tracers and their decay products, such as CFC-11, 3H, and 3He, provide significant constraint on input permeability values in the model. Tracer data for 39Ar provide even more complete information on the heterogeneity of permeability and variability in the flow system than the anthropogenic tracers, leading to greater parameter uncertainty reduction.
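For readers unfamiliar with tracer-derived ages, the snippet below shows one standard example: the apparent 3H/3He age computed from measured tritium and tritiogenic helium-3 under a piston-flow assumption. The concentrations are illustrative values, not data from this study.

```python
# Illustrative 3H/3He apparent-age calculation (not from the report).
import numpy as np

T_HALF_3H = 12.32                     # tritium half-life, years
LAMBDA = np.log(2) / T_HALF_3H        # decay constant, 1/years

def tritium_helium_age(c_3h, c_3he_trit):
    """Apparent age (years) assuming piston flow and confined conditions.
    c_3h and c_3he_trit must be in the same units (e.g., TU)."""
    return (1.0 / LAMBDA) * np.log(1.0 + c_3he_trit / c_3h)

print(tritium_helium_age(5.0, 15.0))  # ~24.6 years for these example values
```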
The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the design, fabrication, and testing of the SMART Rotor. The report begins with an overview of active control research at Sandia and the objectives of this project. The SMART blade, based on the DOE/SNL 9-meter CX-100 blade design, is then documented, including all modifications necessary to integrate the trailing edge flaps, the sensors incorporated into the system, and the fabrication processes that were utilized. Finally, the test site and test campaign are described.
Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
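As a concrete illustration of what a multi-line calibration buys over a single 137Cs line, the sketch below fits a quadratic channel-to-energy calibration to several well-spaced reference lines. The peak channels and the choice of isotopes are made-up values for illustration, not the operating procedure defined in the paper.

```python
# Fit a quadratic energy calibration E(ch) = a*ch^2 + b*ch + c from several
# well-spaced reference lines (hypothetical peak channels).
import numpy as np

ref_peaks = np.array([
    [ 119.0,   59.5],   # 241Am
    [ 705.0,  356.0],   # 133Ba
    [1310.0,  661.7],   # 137Cs
    [2330.0, 1173.2],   # 60Co
    [2650.0, 1332.5],   # 60Co
])
channels, energies = ref_peaks[:, 0], ref_peaks[:, 1]

a, b, c = np.polyfit(channels, energies, deg=2)   # quadratic calibration
residuals = energies - (a * channels**2 + b * channels + c)
print("calibration coefficients:", a, b, c)
print("fit residuals (keV):", residuals)
```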
The Water, Energy, and Carbon Sequestration Simulation Model (WECSsim) is a national dynamic simulation model that calculates and assesses capturing, transporting, and storing CO2 in deep saline formations from all coal and natural gas-fired power plants in the U.S. An overarching capability of WECSsim is to also account for simultaneous CO2 injection and water extraction within the same geological saline formation. Extracting, treating, and using these saline waters to cool the power plant is one way to develop more value from using saline formations as CO2 storage locations. WECSsim allows for both one-to-one comparisons of a single power plant to a single saline formation and the development of a national CO2 storage supply curve and related national assessments for these formations. This report summarizes the scope, structure, and methodology of WECSsim along with a few key results. Developing WECSsim from a small scoping study to the full national-scale modeling effort took approximately 5 years; this report represents the culmination of that effort. The key findings from the WECSsim model indicate that the U.S. has several decades' worth of storage for CO2 in saline formations when managed appropriately. Competition for subsurface storage capacity, intrastate flows of CO2 and water, and a supportive regulatory environment all play a key role in the performance and cost profile across the range from a single power plant to the ability of all coal and natural gas-based plants to store CO2. The overall system cost to capture, transport, and store CO2 for the national assessment ranges from $74 to $208 per tonne stored ($96 to $272 per tonne avoided) for the first 25 to 50% of the 1,126 power plants, to between $1,585 and well beyond $2,000 per tonne stored ($2,040 to well beyond $2,000 per tonne avoided) for the remaining 75 to 100% of the plants. The latter range, while extremely large, includes all natural gas power plants in the U.S., many of which have an extremely low capacity factor and therefore a relatively high system cost to capture and store CO2.
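The distinction between cost per tonne stored and cost per tonne avoided arises because capture, compression, transport, and water treatment consume extra energy, so some of the stored CO2 merely offsets the plant's own added emissions. The sketch below illustrates that relation with hypothetical numbers; it is not the WECSsim cost model.

```python
# Illustrative relation between $/tonne stored and $/tonne avoided.
def cost_per_tonne_avoided(cost_per_tonne_stored, capture_emission_fraction):
    """capture_emission_fraction: extra CO2 emitted (per tonne stored) by the
    energy demand of capture, compression, transport, and water treatment."""
    avoided_per_stored = 1.0 - capture_emission_fraction
    return cost_per_tonne_stored / avoided_per_stored

# With an assumed ~23% energy-penalty emission fraction, $74/tonne stored
# corresponds to roughly $96/tonne avoided.
print(cost_per_tonne_avoided(74.0, 0.23))
```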
Fault-tolerance has been identified as a major challenge for future extreme-scale systems. Current predictions suggest that, as systems grow in size, failures will occur more frequently. Because increases in failure frequency reduce the performance and scalability of these systems, significant effort has been devoted to developing and refining resilience mechanisms to mitigate the impact of failures. However, effective evaluation of these mechanisms has been challenging. Current systems are smaller and have significantly different architectural features (e.g., interconnect, persistent storage) than we expect to see in next-generation systems. To overcome these challenges, we propose the use of simulation. Simulation has been shown to be an effective tool for investigating the performance characteristics of applications on future systems. In this work, we: identify the set of system characteristics that are necessary for accurate performance prediction of resilience mechanisms for HPC systems and applications; demonstrate how these system characteristics can be incorporated into an existing large-scale simulator; and evaluate the predictive performance of our modified simulator. We also describe how we were able to optimize the simulator for large temporal and spatial scales, allowing the simulator to run 4x faster and use over 100x less memory.
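To make the evaluation problem concrete, the sketch below uses a simple Daly-style analytic model for coordinated checkpoint/restart, one canonical resilience mechanism, estimating the optimal checkpoint interval and resulting application efficiency as the system MTBF shrinks. This is an illustrative back-of-the-envelope model with assumed overheads, not the simulator described above.

```python
# First-order checkpoint/restart model (illustrative assumptions throughout).
import math

def optimal_interval(checkpoint_cost, mtbf):
    """First-order optimal compute time between checkpoints (same time units)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf) - checkpoint_cost

def efficiency(checkpoint_cost, restart_cost, mtbf, interval):
    """Rough fraction of wall-clock time spent on useful work."""
    work_fraction = interval / (interval + checkpoint_cost)
    # expected rework plus restart overhead per failure, amortized over the MTBF
    waste_per_failure = restart_cost + 0.5 * (interval + checkpoint_cost)
    return work_fraction * (1.0 - waste_per_failure / mtbf)

mtbf = 4 * 3600.0                      # assumed 4-hour system MTBF, seconds
tau = optimal_interval(checkpoint_cost=300.0, mtbf=mtbf)
print(tau / 60.0, "minutes between checkpoints")
print(efficiency(300.0, 600.0, mtbf, tau))
```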
The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of the rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the data post-processing and analysis performed to date on the field test data. Results include the control capability of the trailing edge flaps, the combined structural and aerodynamic damping observed through application of step actuation with ensemble averaging, direct observation of time delays associated with aerodynamic response, and techniques for characterizing an operating turbine with active rotor control.
In an effort to improve the current state of the art in fire probabilistic risk assessment methodology, the U.S. Nuclear Regulatory Commission, Office of Regulatory Research, contracted Sandia National Laboratories (SNL) to conduct a series of scoping tests to identify thermal and mechanical probes that could be used to characterize the zone of influence (ZOI) during high energy arc fault (HEAF) testing. For the thermal evaluation, passive and active probes were exposed to HEAF-like heat fluxes for a period of 2 seconds at SNL's National Solar Thermal Test Facility to determine their ability to survive and measure such an extreme environment. Thermal probes tested included temperature lacquers (passive), NANMAC thermocouples, directional flame thermometers, modified plate thermometers, infrared temperature sensors, and a Gardon heat flux gauge. Similarly, passive and active pressure probes were evaluated by exposing them to pressures resulting from various high-explosive detonations at the Sandia Terminal Ballistic Facility. Pressure probes included bikini pressure gauges (passive) and pressure transducers. Results from these tests provided good insight into which probes should be considered for use during future HEAF testing.
This Technical Manual contains descriptions of the calculation models and mathematical and numerical methods used in the RADTRAN 6 computer code for transportation risk and consequence assessment. The RADTRAN 6 code combines user-supplied input data with values from an internal library of physical and radiological data to calculate the expected radiological consequences and risks associated with the transportation of radioactive material. Radiological consequences and risks are estimated with numerical models of exposure pathways, receptor populations, package behavior in accidents, and accident severity and probability.
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected, then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
Frequency-domain antenna-coupling measurements performed in the compact-range room of the FARM will actually be dominated by components reflected from the ceiling, floor, walls, etc., not by the direct free-space coupling. Consequently, signal processing must be applied to the frequency-domain data to extract the direct free-space coupling. The analysis presented here demonstrates that it is possible to do so successfully.
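One common way to perform this kind of extraction is time-domain gating: transform the swept-frequency data to the time domain, retain only the direct-path arrival, and transform back. The sketch below illustrates the idea on synthetic data; the delays, amplitudes, and gate width are assumptions, and the report's actual processing may differ.

```python
# Time-domain gating sketch for extracting the direct-path coupling.
import numpy as np

def gate_direct_path(freqs_hz, s21, direct_delay_s, gate_width_s):
    """freqs_hz: uniformly spaced frequencies; s21: complex coupling samples."""
    n = len(s21)
    df = freqs_hz[1] - freqs_hz[0]
    impulse = np.fft.ifft(s21)            # band-limited impulse response
    t = np.fft.fftfreq(n, d=df)           # time axis for the IFFT bins
    gate = np.abs(t - direct_delay_s) < gate_width_s / 2.0
    return np.fft.fft(impulse * gate)     # gated frequency response

# Synthetic example: direct path at 20 ns plus a floor bounce at 35 ns.
f = np.linspace(2e9, 6e9, 401)
s21 = (0.05 * np.exp(-2j * np.pi * f * 20e-9)
       + 0.03 * np.exp(-2j * np.pi * f * 35e-9))
direct_only = gate_direct_path(f, s21, direct_delay_s=20e-9, gate_width_s=10e-9)
```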
The US faces persistent, distributed threats from malevolent individuals, groups, and organizations around the world. Computational Social Models (CSMs) help anticipate the dynamics and behaviors of these actors by modeling the behavior and interactions of individuals, groups, and organizations. For strategic planners to trust the results of CSMs, they must have confidence in the validity of the models. Establishing validity before model use will enhance confidence and reduce the risk of error. One problem with validation is designing an appropriate controlled test of the model, similar to the testing of physical models. Lab experiments can do this, but are often limited to small numbers of subjects, with low subject diversity, and are often conducted in a contrived environment. Natural studies attempt to test models by gathering large-scale observational data (e.g., social media); however, this loses the controlled aspect. We propose a new approach to run large-scale, controlled online experiments on diverse populations. Using Amazon Mechanical Turk, a crowdsourcing tool, we will draw large populations into controlled experiments in a manner that was not possible just a few years ago.
An analysis of frequency pulling in a varactor-tuned LC VCO under coupling from an on-chip PA is presented. The large-signal behavior of the VCO's inversion-mode MOS varactors is outlined, and the susceptibility of the VCO to frequency pulling from PA aggressor signals with various modulation schemes is discussed. We show that if the aggressor signal is aperiodic, band-limited, or amplitude-modulated, the varactor-tuned LC VCO will experience frequency pulling due to time-modulation of the varactor capacitance. However, if the aggressor signal has constant-envelope phase modulation, VCO pulling can be eliminated, even in the presence of coupling, through careful choice of VCO frequency and divider ratio. Additional mitigation strategies, including new inductor topologies and system-level architectural choices, are also examined.
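To give a feel for the sensitivity involved, the sketch below computes the LC oscillation frequency shift produced by a small change in the average varactor capacitance; the tank values are hypothetical, and the calculation shows only the first-order picture behind the pulling mechanism discussed above.

```python
# First-order pulling estimate: a fractional change dC/C in the average tank
# capacitance shifts the oscillation frequency by roughly -0.5 * (dC/C).
import math

L = 1.0e-9            # assumed tank inductance, H
C = 1.0e-12           # assumed nominal tank capacitance (fixed + varactor), F

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
for dC_over_C in (0.0, 0.001, 0.01):
    f = 1.0 / (2.0 * math.pi * math.sqrt(L * C * (1.0 + dC_over_C)))
    print(f"dC/C = {dC_over_C:6.3f}: f = {f / 1e9:.4f} GHz, "
          f"shift = {(f - f0) / 1e6:+.2f} MHz")
```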
Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small-scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.
We report Pauli blockade in a multielectron silicon metal–oxide–semiconductor (MOS) double quantum dot (DQD) with an integrated charge sensor. The current is rectified up to a blockade energy of 0.18 ± 0.03 meV. The blockade energy is analogous to the singlet–triplet splitting in a two-electron double quantum dot. Built-in imbalances of tunnel rates in the MOS DQD obfuscate some edges of the bias triangles. A method to extract the bias triangles is described, and a numeric rate-equation simulation is used to understand the effect of tunneling imbalances and finite temperature on the charge stability (honeycomb) diagram, in particular the identification of missing and shifting edges. A bound on the relaxation time of the triplet-like state is also obtained from this measurement.
Novel experimental data are reported that reveal helical instability formation on imploding z-pinch liners that are premagnetized with an axial field. Such instabilities differ dramatically from the mostly azimuthally symmetric instabilities that form on unmagnetized liners. The helical structure persists at nearly constant pitch as the liner implodes. This is surprising since, at the liner surface, the azimuthal drive field presumably dwarfs the axial field for all but the earliest stages of the experiment. These fundamentally 3D results provide a unique and challenging test for 3D-magnetohydrodynamics simulations.
Nylon 6.6 containing 13C isotopic labels at specific positions along the macromolecular backbone has been subjected to extensive thermal-oxidative aging at 138 °C for time periods up to 243 days. In complementary experiments, unlabeled Nylon 6.6 was subjected to the same aging conditions under an atmosphere of 18O2. Volatile organic degradation products were analyzed by cryofocusing gas chromatography mass spectrometry (cryo-GC/MS) to identify the isotopic labeling. The labeling results, combined with basic considerations of free radical reaction chemistry, provided insights into the origin of degradation species with respect to the macromolecular structure. A number of inferences on chemical mechanisms were drawn, based on 1) the presence (or absence) of the isotopic labels in the various products, 2) the location of the isotope within the product molecule, and 3) the relative abundance of products as indicated by large differences in peak intensities in the gas chromatogram. The overall degradation results can be understood in terms of free radical pathways originating from initial attacks at three different positions along the nylon chain: hydrogen abstraction from the (CH2) group adjacent to the nitrogen atom, hydrogen abstraction from the (CH2) group adjacent to the carbonyl group, and direct radical attack on the carbonyl. Understanding the pathways that lead to Nylon 6.6 degradation ultimately provides new insight into changes that can be leveraged to detect and reduce early aging and minimize problems associated with material degradation.
All polymers are intrinsically susceptible to oxidation, which is the underlying process for thermally driven materials degradation and a concern in various applications. There are many approaches for predicting oxidative polymer degradation. Aging studies are usually meant to accelerate oxidation chemistry for predictive purposes. Kinetic models attempt to describe reaction mechanisms and derive rate constants, whereas rapid qualification tests should provide confidence for extended performance during application, and similarly TGA tests are meant to provide rapid guidance on thermal degradation features. What are the underlying commonalities or diverging trends and complications when we approach thermo-oxidative aging of polymers in such different ways? This review presents a brief status report on the important aspects of polymer oxidation and focuses on the complexity of thermally accelerated polymer aging phenomena. Thermal aging and lifetime prediction, the importance of diffusion-limited oxidation (DLO), property correlations, kinetic models, TGA approaches, and a framework for predictive aging models are briefly discussed. An overall perspective is provided showing the challenges associated with our understanding of polymer oxidation as it relates to lifetime prediction requirements.
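A core step in most accelerated-aging-based lifetime predictions is the Arrhenius shift from aging temperature to service temperature. The sketch below shows that mapping with an assumed activation energy; real predictions must also account for DLO effects and possible non-Arrhenius curvature, which is part of the complexity this review addresses.

```python
# Arrhenius shift factor between aging and service temperatures
# (assumed activation energy; illustrative only).
import math

R = 8.314  # J/(mol K)

def shift_factor(Ea_kJ, T_aging_C, T_service_C):
    """Ratio of degradation rates at the aging vs. service temperature."""
    Ea = Ea_kJ * 1000.0
    T1 = T_aging_C + 273.15
    T2 = T_service_C + 273.15
    return math.exp(-Ea / R * (1.0 / T1 - 1.0 / T2))

# e.g., 6 months at 110 C with an assumed Ea of 90 kJ/mol
a_T = shift_factor(90.0, 110.0, 25.0)
print(a_T, "x acceleration ->", 0.5 * a_T, "equivalent years at 25 C")
```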
High-temperature geothermal exploration requires a wide array of tools and sensors to instrument drilling and monitor downhole conditions. There is a steep decline in component availability as the operating temperature increases, limiting tool availability and capability for both drilling and monitoring. Several applications exist where a small motor can provide a significant benefit to the overall operation. Applications such as clamping systems for seismic monitoring, televiewers, valve actuators, and directional drilling systems would be able to utilize a robust motor controller capable of operating in these harsh environments. The development of a high-temperature motor controller capable of operation at 225°C significantly increases the operating envelope for next generation high temperature tools and provides a useful component for designers to integrate into future downhole systems. High-temperature motor control has not been an area of development until recently, as motors capable of operating in extreme temperature regimes are becoming commercially available. Currently the most common method of deploying a motor controller is to use a Dewared, or heat-shielded, tool with low-temperature electronics to control the motor. This approach limits the amount of time the controller tool can stay in the high-temperature environment and does not allow for long-term deployments. A Dewared approach is suitable for logging tools, which spend limited time in the well; however, a longer-term deployment, such as a seismic tool [Henfling 2010] that may be deployed for weeks or even months at a time, is not possible. Utilizing high-temperature electronics and a high-temperature motor that does not need to be shielded provides a reliable and robust method for long-term deployments and long-life operations.
Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013
The dynamic failure of materials in a finite-volume shock physics computational code poses many challenges. Sandia National Laboratories has added Lagrangian markers as a new capability to CTH. The failure process of a marker in CTH is driven by the nature of Lagrangian numerical methods and is performed in three steps. The first step is to detect failure using the material constitutive model, which detects failure by computing damage or another measure from the strain rate, strain, stress, etc. Once failure has been determined, the material stress and energy states are released along a path driven by the constitutive model. Once the magnitude of the stress reaches a critical value, the material is switched to another material that behaves hydrodynamically. The hydrodynamic failed material is by definition non-shear-supporting but still retains the Equation of State (EOS) portion of the constitutive model. The material switching process is conservative in mass, momentum, and energy. The failed marker material is allowed to fail using the CTH method of void insertion as necessary during the computation.
ASME 2013 Heat Transfer Summer Conference, collocated with the ASME 2013 7th International Conference on Energy Sustainability and the ASME 2013 11th International Conference on Fuel Cell Science, Engineering and Technology, HT 2013
Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013
The Lagrangian Material Point Method (MPM) [1, 2] has been implemented into the Eulerian shock physics code CTH [3] at Sandia National Laboratories. Since the MPM uses a background grid to calculate gradients, the method can numerically fracture if an insufficient number of particles per cell is used in high-strain problems. Numerical fracture happens when the particles become separated by more than a grid cell, leading to a loss of communication between them. One solution to this problem is the Convected Particle Domain Interpolation (CPDI) technique [4], where the shape functions are allowed to stretch smoothly across multiple grid cells, which alleviates this issue but introduces difficulties for parallelization because the particle domains can become non-local. This paper presents an approach where the particles are dynamically split when the volumetric strain for a particle exceeds a set limit, so that the particle domain is always local, and presents an application to a large-strain problem.
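A minimal sketch of the splitting idea follows: when a particle's volumetric strain exceeds a threshold, replace it with two children that share its volume and each occupy half the original domain. The threshold, data layout, and 2-D split direction are illustrative assumptions, not the CTH implementation.

```python
# Split MPM particles whose volumetric strain exceeds a threshold so that
# particle domains stay local to a grid cell (illustrative sketch).
import numpy as np

SPLIT_STRAIN = 0.5  # assumed volumetric strain limit

def maybe_split(particles):
    """particles: list of dicts with 'x' (2-D position), 'volume',
    'vol_strain', and 'half_width' (domain half-extent along x)."""
    out = []
    for p in particles:
        if p["vol_strain"] <= SPLIT_STRAIN:
            out.append(p)
            continue
        # split into two children along the stretched direction; conserve volume
        offset = np.array([0.5 * p["half_width"], 0.0])
        for sign in (-1.0, +1.0):
            child = dict(p)
            child["x"] = p["x"] + sign * offset
            child["volume"] = 0.5 * p["volume"]
            child["half_width"] = 0.5 * p["half_width"]
            out.append(child)
    return out
```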
We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regard to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.
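Bulk structural properties such as the equilibrium volume and bulk modulus are typically extracted by fitting an equation of state to computed energy-volume points. The sketch below fits a third-order Birch-Murnaghan form to synthetic data as an illustration; the numbers are stand-ins, not results from this work.

```python
# Third-order Birch-Murnaghan fit to synthetic E(V) data (energies in eV,
# volumes in A^3); extracts the equilibrium volume V0 and bulk modulus B0.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

V = np.linspace(35.0, 50.0, 8)
E = birch_murnaghan(V, -10.0, 41.0, 0.55, 4.5) + np.random.normal(0, 1e-3, V.size)

popt, pcov = curve_fit(birch_murnaghan, V, E, p0=[E.min(), V.mean(), 0.5, 4.0])
E0, V0, B0, B0p = popt
print(f"V0 = {V0:.2f} A^3, B0 = {B0 * 160.2177:.1f} GPa")  # eV/A^3 -> GPa
```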
This report is a summary of research results from an Early Career LDRD project conducted from January 2012 to December 2013 at Sandia National Laboratories. Demonstrated here is the use of conducting polymers as active materials in the positive electrodes of rechargeable aluminum-based batteries operating at room temperature. The battery chemistry is based on chloroaluminate ionic liquid electrolytes, which allow reversible stripping and plating of aluminum metal at the negative electrode. Characterization of electrochemically synthesized polypyrrole films revealed doping of the polymers with chloroaluminate anions, which is a quasi-reversible reaction that facilitates battery cycling. Stable galvanostatic cycling of polypyrrole and polythiophene cells was demonstrated, with capacities at near-theoretical levels (30-100 mAh g-1) and coulombic efficiencies approaching 100%. The energy density of a sealed sandwich-type cell with polythiophene at the positive electrode was estimated as 44 Wh kg-1, which is competitive with state-of-the-art battery chemistries for grid-scale energy storage.
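For orientation, cell-level specific energy follows from the active-material capacity, the average cell voltage, and the fraction of cell mass that is active material. The values below are hypothetical assumptions chosen only to illustrate the arithmetic; they are not parameters taken from the report.

```python
# Rough specific-energy estimate from capacity, voltage, and active-mass
# fraction (all values are illustrative assumptions).
def specific_energy_wh_per_kg(capacity_mAh_per_g, avg_voltage_V, active_mass_fraction):
    # mAh/g of active material * V = mWh/g = Wh/kg referred to active mass
    return capacity_mAh_per_g * avg_voltage_V * active_mass_fraction

# e.g., assumed 60 mAh/g, ~1.5 V average, and half the cell mass active
print(specific_energy_wh_per_kg(60.0, 1.5, 0.5))  # ~45 Wh/kg
```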
For this paper, we consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth are needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability: the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data. We propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify the time-series as early as possible while still guaranteeing that the reliability threshold is met.
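A simplified version of the reliability idea can be prototyped as below: sample plausible completions of the missing features, classify each completion, and emit a label only when the modal label's share clears the threshold. This sketch draws missing features marginally from the training data, which is cruder than the conditional model described in the paper; scikit-learn and all parameter values are assumptions.

```python
# Reliability-thresholded classification of incomplete data (simplified sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # any fitted classifier works

def classify_if_reliable(clf, x_partial, observed_mask, X_train, threshold=0.9,
                         n_samples=200, rng=np.random.default_rng(0)):
    """x_partial: feature vector with NaNs for missing entries.
    Missing entries are sampled (marginally, for illustration) from X_train."""
    missing = ~observed_mask
    draws = np.tile(x_partial, (n_samples, 1))
    for j in np.where(missing)[0]:
        draws[:, j] = rng.choice(X_train[:, j], size=n_samples)
    labels = clf.predict(draws)
    values, counts = np.unique(labels, return_counts=True)
    best = np.argmax(counts)
    reliability = counts[best] / n_samples
    return (values[best], reliability) if reliability >= threshold else (None, reliability)

# usage: clf = RandomForestClassifier().fit(X_train, y_train), then
# classify_if_reliable(clf, x_with_nans, ~np.isnan(x_with_nans), X_train)
```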