All polymers are intrinsically susceptible to oxidation, the process underlying thermally driven material degradation and a concern in many applications. There are many approaches for predicting oxidative polymer degradation. Accelerated aging studies are usually designed to speed up oxidation chemistry for predictive purposes. Kinetic models attempt to describe reaction mechanisms and derive rate constants, whereas rapid qualification tests should provide confidence for extended performance in application; similarly, TGA tests are meant to provide rapid guidance on thermal degradation features. What are the underlying commonalities, diverging trends, and complications when we approach thermo-oxidative aging of polymers in such different ways? This review presents a brief status report on the important aspects of polymer oxidation and focuses on the complexity of thermally accelerated polymer aging phenomena. Thermal aging and lifetime prediction, the importance of diffusion-limited oxidation (DLO), property correlations, kinetic models, TGA approaches, and a framework for predictive aging models are briefly discussed. An overall perspective is provided, highlighting the challenges in our understanding of polymer oxidation as it relates to lifetime prediction requirements.
High-temperature geothermal exploration requires a wide array of tools and sensors to instrument drilling and monitor downhole conditions. Component availability declines steeply as the operating temperature increases, limiting tool availability and capability for both drilling and monitoring. Several applications exist where a small motor can provide a significant benefit to the overall operation: clamping systems for seismic monitoring, televiewers, valve actuators, and directional drilling systems would all be able to utilize a robust motor controller capable of operating in these harsh environments. The development of a high-temperature motor controller capable of operation at 225°C significantly increases the operating envelope for next-generation high-temperature tools and provides a useful component for designers to integrate into future downhole systems. High-temperature motor control has only recently become an area of development, as motors capable of operating in extreme temperature regimes are becoming commercially available. Currently, the most common method of deploying a motor controller is to use a Dewared (heat-shielded) tool with low-temperature electronics to control the motor. This approach limits the time the controller tool can remain in a high-temperature environment and does not allow for long-term deployments. A Dewared approach is suitable for logging tools that spend limited time in the well; however, it cannot support a longer-term deployment such as a seismic tool [Henfling 2010], which may remain downhole for weeks or even months at a time. Utilizing high-temperature electronics and a high-temperature motor that does not need to be shielded provides a reliable and robust method for long-term deployments and long-life operations.
Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013
The dynamic failure of materials in a finite-volume shock physics computational code poses many challenges. Sandia National Laboratories has added Lagrangian markers as a new capability to CTH. The failure process of a marker in CTH is driven by the nature of Lagrangian numerical methods and is performed in three steps. The first step is to detect failure using the material constitutive model, which detects failure by computing damage, or by other means, from the strain rate, strain, stress, etc. Once failure has been determined, the material stress and energy states are released along a path driven by the constitutive model. Once the magnitude of the stress reaches a critical value, the material is switched to another material that behaves hydrodynamically. The failed, hydrodynamic material is by definition non-shear-supporting but still retains the equation-of-state (EOS) portion of the constitutive model. The material switching process is conservative in mass, momentum, and energy. The failed marker material is allowed to fail further via the CTH method of void insertion as necessary during the computation.
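As a rough illustration of this three-step sequence, the sketch below walks a single marker through failure detection, stress release, and the hydrodynamic material switch. All names, thresholds, and the scalar-damage criterion are hypothetical simplifications for exposition, not CTH's actual data structures or API.

```python
# Hypothetical sketch of the three-step marker failure sequence; names,
# thresholds, and the scalar damage criterion are illustrative only.
from dataclasses import dataclass

@dataclass
class MarkerState:
    stress: float               # equivalent stress magnitude, Pa
    damage: float               # scalar damage from the constitutive model
    failed: bool = False        # step 1 outcome
    hydrodynamic: bool = False  # step 3 outcome: EOS-only material

DAMAGE_LIMIT = 1.0              # constitutive failure criterion
CRITICAL_STRESS = 1.0e6         # stress magnitude triggering the switch, Pa
RELEASE_FRACTION = 0.5          # fraction of stress released per step

def update_marker(m: MarkerState) -> None:
    # Step 1: detect failure via the constitutive model.
    if not m.failed and m.damage >= DAMAGE_LIMIT:
        m.failed = True
    # Step 2: release the stress state along the model-driven path.
    if m.failed and not m.hydrodynamic:
        m.stress *= (1.0 - RELEASE_FRACTION)
        # Step 3: switch to a hydrodynamic (non-shear-supporting,
        # EOS-only) material once the stress magnitude is low enough.
        # A real implementation would conserve mass, momentum, and energy.
        if abs(m.stress) <= CRITICAL_STRESS:
            m.hydrodynamic = True
```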
ASME 2013 Heat Transfer Summer Conf. Collocated with the ASME 2013 7th Int. Conf. on Energy Sustainability and the ASME 2013 11th Int. Conf. on Fuel Cell Science, Engineering and Technology, HT 2013
Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013
The Lagrangian Material Point Method (MPM) [1, 2] has been implemented into the Eulerian shock physics code CTH [3] at Sandia National Laboratories. Since the MPM uses a background grid to calculate gradients, the method can numerically fracture if an insufficient number of particles per cell is used in high-strain problems. Numerical fracture occurs when particles become separated by more than a grid cell, leading to a loss of communication between them. One solution to this problem is the Convected Particle Domain Interpolation (CPDI) technique [4], in which the shape functions are allowed to stretch smoothly across multiple grid cells; this alleviates the issue but complicates parallelization because the particle domains can become non-local. This paper presents an approach in which particles are dynamically split when a particle's volumetric strain exceeds a set limit, so that the particle domain always remains local, and presents an application to a large-strain problem.
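A minimal sketch of such threshold-based splitting is given below for a 2D particle with an axis-aligned rectangular domain; the split rule, field names, and the 2D restriction are assumptions for illustration, not the paper's implementation.

```python
# Illustrative threshold-based particle splitting in 2D; names and the
# half-domain split rule are assumptions, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Particle:
    x: float                 # centroid position
    y: float
    hx: float                # domain half-widths
    hy: float
    mass: float
    volume: float
    volume0: float           # initial (reference) volume

VOL_STRAIN_LIMIT = 0.5       # split once volumetric strain exceeds this

def maybe_split(p: Particle) -> list:
    """Return [p] unchanged, or two children that halve the domain along
    its longer axis, conserving mass and volume and staying cell-local."""
    if p.volume / p.volume0 - 1.0 <= VOL_STRAIN_LIMIT:
        return [p]
    if p.hx >= p.hy:         # split along x
        return [Particle(p.x + s * p.hx / 2.0, p.y, p.hx / 2.0, p.hy,
                         p.mass / 2.0, p.volume / 2.0, p.volume0 / 2.0)
                for s in (-1.0, 1.0)]
    return [Particle(p.x, p.y + s * p.hy / 2.0, p.hx, p.hy / 2.0,
                     p.mass / 2.0, p.volume / 2.0, p.volume0 / 2.0)
            for s in (-1.0, 1.0)]
```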
We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and to density functional theory (DFT) based theories. The test set includes materials with many different types of binding, including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT with the new generation of functionals, including one hybrid functional and two dispersion-corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regard to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows any improvements in these methods to be judged in a systematic way.
This report is a summary of research results from an Early Career LDRD project conducted from January 2012 to December 2013 at Sandia National Laboratories. Demonstrated here is the use of conducting polymers as active materials in the positive electrodes of rechargeable aluminum-based batteries operating at room temperature. The battery chemistry is based on chloroaluminate ionic liquid electrolytes, which allow reversible stripping and plating of aluminum metal at the negative electrode. Characterization of electrochemically synthesized polypyrrole films revealed doping of the polymers with chloroaluminate anions, which is a quasi-reversible reaction that facilitates battery cycling. Stable galvanostatic cycling of polypyrrole and polythiophene cells was demonstrated, with capacities at near-theoretical levels (30-100 mAh g⁻¹) and coulombic efficiencies approaching 100%. The energy density of a sealed sandwich-type cell with polythiophene at the positive electrode was estimated as 44 Wh kg⁻¹, which is competitive with state-of-the-art battery chemistries for grid-scale energy storage.
We consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample are collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a given time, and additional time, power, and bandwidth are needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability: the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data. We propose a method that classifies incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution depends on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels in a set of experiments on time-series data sets, where the goal is to classify each time series as early as possible while still guaranteeing that the reliability threshold is met.
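To make the reliability gate concrete, the sketch below samples plausible completions of the missing features from a Gaussian model conditioned on the observed ones, classifies each completion, and emits a label only when the modal label's vote fraction (the reliability estimate) meets the threshold. The Gaussian completion model, the scikit-learn-style classifier interface, and all names are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of reliability-gated classification under a
# Gaussian completion model; all names are illustrative assumptions.
import numpy as np
from collections import Counter

def classify_if_reliable(clf, x_obs, obs_idx, mu, cov,
                         threshold=0.9, n_samples=500, seed=0):
    """clf: fitted classifier with a predict() method (sklearn-style).
    x_obs: observed feature values at indices obs_idx.
    (mu, cov): Gaussian fit to complete training feature vectors.
    Returns (label, reliability), with label None below the threshold."""
    rng = np.random.default_rng(seed)
    obs_idx = np.asarray(obs_idx)
    mis_idx = np.setdiff1d(np.arange(mu.shape[0]), obs_idx)
    # Condition the Gaussian on the observed features.
    K = cov[np.ix_(mis_idx, obs_idx)] @ np.linalg.inv(
        cov[np.ix_(obs_idx, obs_idx)])
    mu_c = mu[mis_idx] + K @ (x_obs - mu[obs_idx])
    cov_c = cov[np.ix_(mis_idx, mis_idx)] - K @ cov[np.ix_(obs_idx, mis_idx)]
    # Classify sampled completions and vote.
    labels = []
    for _ in range(n_samples):
        x = np.empty(mu.shape[0])
        x[obs_idx] = x_obs
        x[mis_idx] = rng.multivariate_normal(mu_c, cov_c)
        labels.append(clf.predict(x.reshape(1, -1))[0])
    label, votes = Counter(labels).most_common(1)[0]
    reliability = votes / n_samples
    return (label if reliability >= threshold else None), reliability
```

The vote fraction is a Monte Carlo estimate of the probability that the incomplete-data label matches the complete-data label under the assumed completion model, which is exactly the reliability notion defined in the abstract.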