Publications


Validating agent based models through virtual worlds

Lakkaraju, Kiran; Lee, Jina; Naugle, Asmeret B.

As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists.
This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals or groups to engage in a particular behavior. Results from our work indicate that virtual worlds have the potential to serve as a proxy for observing and populating behaviors that would be used within further agent-based modeling studies.


Understanding and regulation of microbial lignolysis for renewable platform chemicals

Turner, Kevin; Hudson, Corey M.; Tran-Gyamfi, Mary; Powell, Amy J.; Williams, Kelly P.

Lignin is often overlooked in the valorization of lignocellulosic biomass, but lignin-based materials and chemicals represent potential value-added products that could significantly improve the economics of a biorefinery. Fluctuating crude oil prices and changing fuel specifications are among the factors driving the development of new technologies to convert polymeric lignin into low-molecular-weight lignin and/or monomeric aromatic feedstocks, helping to displace the current products associated with the conversion of a whole barrel of oil. Our project on understanding microbial lignolysis for renewable platform chemicals aimed to understand the microbial and enzymatic lignolysis processes that break down lignin for conversion into commercially viable drop-in fuels. We developed novel lignin analytics to interrogate enzymatic and microbial lignolysis of native polymeric lignin and established a detailed understanding of lignolysis as a function of fungal enzymes, microbes, and endophytes. A bioinformatics pipeline was developed for metatranscriptomic analysis of an aridland ecosystem to investigate the potential discovery of new lignolysis genes and gene products.


Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution

Robinson, Allen C.

Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.


Integrated network design and scheduling problems

Carlson, Jeffrey

We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no extensive review of INDS problems has examined their complexity, network and scheduling characteristics, information requirements, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We show that all considered INDS problems are NP-hard and propose a novel heuristic dispatching-rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina; lower Manhattan, New York; and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule for arriving at near-optimal solutions during real-time decision-making activities. We extend INDS problems to incorporate release dates, which represent the earliest time an operation can be performed, and flexible release dates through the introduction of specialized machine(s) that can perform work to move a release date earlier in time. An online optimization setting is also explored, in which the release date of a component is not known in advance.
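The parallel identical machine environment described above can be illustrated with a generic greedy dispatching rule. This is a minimal sketch only; the authors' heuristic prioritizes sets of arcs by their network interactions, whereas the priorities, durations, and job names below are hypothetical:

```python
import heapq

def dispatch(jobs, machines):
    """Greedy dispatching rule for parallel identical machines:
    repeatedly assign the highest-priority job to the machine that
    frees up earliest. jobs is a list of (priority, duration, name),
    higher priority scheduled first. Returns {name: (start, finish)}."""
    free_at = [0.0] * machines          # min-heap of machine-free times
    heapq.heapify(free_at)
    schedule = {}
    for priority, duration, name in sorted(jobs, reverse=True):
        start = heapq.heappop(free_at)  # earliest-available machine
        finish = start + duration
        schedule[name] = (start, finish)
        heapq.heappush(free_at, finish)
    return schedule

# Three restoration operations, two work crews (machines):
jobs = [(3, 4.0, "arc_a"), (2, 2.0, "arc_b"), (1, 1.0, "arc_c")]
print(dispatch(jobs, machines=2))
# arc_a and arc_b start at 0; arc_c starts at 2.0 on the freed crew
```

The same skeleton accommodates release dates by pushing each job's release time onto the machine heap before assignment.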


Final LDRD report

Ambrosini, Andrea A.; Miller, James E.; Allendorf, Mark; Coker, Eric N.; Ermanoski, Ivan; Hogan Jr., Roy E.; Mcdaniel, Anthony H.

Despite rapid progress, solar thermochemistry remains high risk; improvements in both active materials and reactor systems are needed. This claim is supported by studies conducted both prior to and as part of this project. Materials offer a particularly large opportunity space: until recently, very little effort beyond basic thermodynamic analysis was directed toward understanding this most fundamental component of a metal oxide thermochemical cycle. Without this knowledge, system design was hampered; more importantly, advances in these crucial materials were rare and resulted more from intuition than from detailed insight. As a result, only two basic families of potentially viable solid materials have been widely considered, each of which has significant challenges. Recent efforts toward applying an increased level of scientific rigor to the study of thermochemical materials have provided a much-needed framework and insights toward developing the next generation of highly improved thermochemically active materials. The primary goal of this project was to apply this hard-won knowledge to rapidly advance the field of thermochemistry and produce, within 2 years, a material capable of yielding CO from CO2 at a 12.5% reactor efficiency. Three principal approaches spanning a range of risks and potential rewards were pursued: modification of known materials, structuring known materials, and identifying/developing new materials for the application. A newly developed best-of-class material produces more fuel (9x more H2, 6x more CO) under milder conditions than the previous state of the art. Analyses of thermochemical reactor and system efficiencies and economics were performed, and a new hybrid concept was reported. The larger case for solar fuels was also further refined and documented.


Xyce parallel electronic simulator users' guide, Version 6.0.1

Keiter, Eric R.; Warrender, Christina E.; Mei, Ting; Russo, Thomas V.; Schiek, Richard; Thornquist, Heidi K.; Verley, Jason C.; Coffey, Todd S.; Pawlowski, Roger

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.


Xyce parallel electronic simulator reference guide, Version 6.0.1

Keiter, Eric R.; Mei, Ting; Russo, Thomas V.; Pawlowski, Roger; Schiek, Richard; Coffey, Todd S.; Thornquist, Heidi K.; Verley, Jason C.; Warrender, Christina E.

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users' Guide [1].


Micro-scale heat-exchangers for Joule-Thomson cooling

Gross, Andrew J.

This project focused on developing a micro-scale counter-flow heat exchanger (CFHX) for Joule-Thomson cooling, with the potential for both chip- and wafer-scale integration. This project is differentiated from previous work by its focus on planar, thin-film micromachining instead of bulk materials. A process was investigated for fabricating such devices, allowing for highly integrated micro heat exchangers. The use of thin-film dielectrics provides thermal isolation, increasing the efficiency of the coolers compared to designs based on bulk materials, and allows for wafer-scale fabrication and integration. The process is intended to implement a CFHX as part of a Joule-Thomson cooling system for applications with heat loads of less than 1 mW. This report presents simulation results and an investigation of a fabrication process for such devices.
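For context on the counter-flow heat exchanger at the heart of this design, the standard effectiveness-NTU relation (a textbook result, not a model from this report) can be sketched as:

```python
import math

def counterflow_effectiveness(ntu, c_ratio):
    """Effectiveness-NTU relation for a counter-flow heat exchanger;
    c_ratio = C_min/C_max in [0, 1]. Higher NTU (more area, better
    conductance) drives effectiveness toward 1."""
    if not 0.0 <= c_ratio <= 1.0:
        raise ValueError("c_ratio must be in [0, 1]")
    if c_ratio == 1.0:
        return ntu / (1.0 + ntu)  # balanced-flow limit
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

# A balanced (c_ratio = 1) exchanger with NTU = 5 recovers 5/6 of the
# maximum possible heat transfer:
print(counterflow_effectiveness(5.0, 1.0))  # ≈ 0.833
```

High recuperator effectiveness is what makes micro-scale Joule-Thomson cooling viable at sub-milliwatt heat loads.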


Development of MEMS Photoacoustic Spectroscopy

Eichenfield, Matt; Givler, Richard C.; Pfeifer, Kent B.; Reinke, Charles M.; Robinson, Alex; Resnick, Paul; Griffin, Benjamin; Langlois, Eric; Nielson, Gregory N.; Okandan, Murat; Shaw, Michael

After years in the field, many materials suffer degradation, off-gassing, and chemical changes, causing a build-up of measurable chemical atmospheres. Stand-alone embedded chemical sensors are typically limited in specificity, require electrical lines, and/or suffer calibration drift that makes data reliability questionable. Along with size, these "Achilles' heels" have prevented the incorporation of gas sensing into sealed, hazardous locations that would benefit greatly from in-situ analysis. We report on the development of an all-optical, mid-IR, fiber-optic-based MEMS photoacoustic spectroscopy solution to address these limitations. Concurrent modeling and computational simulation are used to guide hardware design and implementation.


Theoretical foundation for measuring the groundwater age distribution

Gardner, William P.; Arnold, Bill W.

In this study, we use PFLOTRAN, a highly scalable, parallel flow and reactive transport code, to simulate the concentrations of 3H, 3He, CFC-11, CFC-12, CFC-113, SF6, 39Ar, 81Kr, and 4He, and the mean groundwater age, in heterogeneous fields on grids with in excess of 10 million nodes. We utilize this computational platform to simulate the concentration of multiple tracers in high-resolution, heterogeneous 2-D and 3-D domains and calculate tracer-derived ages. Tracer-derived ages show systematic biases toward younger ages when the groundwater age distribution contains water older than the maximum tracer age. The deviation of the tracer-derived age distribution from the true groundwater age distribution increases with increasing heterogeneity of the system. However, the effect of heterogeneity is diminished as the mean travel time approaches the tracer age limit. Age distributions in 3-D domains differ significantly from those in 2-D domains: 3-D simulations show decreased mean age and less variance in the age distribution for identical heterogeneity statistics. High-performance computing allows for the investigation of tracer and groundwater age systematics in high-resolution domains, providing a platform for understanding and utilizing environmental tracer and groundwater age information in heterogeneous 3-D systems. Groundwater environmental tracers can provide important constraints for the calibration of groundwater flow models. Direct simulation of environmental tracer concentrations in models has the additional advantage of avoiding assumptions associated with using calculated groundwater age values. This study quantifies the model uncertainty reduction resulting from the addition of environmental tracer concentration data. The analysis uses a synthetic heterogeneous aquifer and the calibration of a flow and transport model using the pilot point method.
Results indicate a significant reduction in the uncertainty in permeability with the addition of environmental tracer data, relative to the use of hydraulic measurements alone. Anthropogenic tracers and their decay products, such as CFC-11, 3H, and 3He, provide significant constraints on input permeability values in the model. Tracer data for 39Ar provide even more complete information on the heterogeneity of permeability and variability in the flow system than the anthropogenic tracers, leading to greater parameter uncertainty reduction.
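The tracer-derived (apparent) ages discussed here are conventionally obtained from radioactive decay under a piston-flow assumption; a minimal sketch with hypothetical concentrations (not values from the study):

```python
import math

# Decay constants (1/yr) from published half-lives:
# tritium (3H) t1/2 ≈ 12.32 yr; 39Ar t1/2 ≈ 269 yr.
LAMBDA = {"3H": math.log(2) / 12.32, "39Ar": math.log(2) / 269.0}

def tracer_age(c_measured, c_initial, tracer):
    """Apparent (piston-flow) age from exponential decay:
    C = C0 * exp(-lambda * t)  =>  t = ln(C0 / C) / lambda."""
    if not 0.0 < c_measured <= c_initial:
        raise ValueError("need 0 < C <= C0")
    return math.log(c_initial / c_measured) / LAMBDA[tracer]

# A sample with 50% of the initial 39Ar activity dates to one half-life:
print(round(tracer_age(0.5, 1.0, "39Ar")))  # 269
```

When the true age distribution has a long old tail, this single-value apparent age underestimates the mean age, which is the bias the study quantifies.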


SMART Wind Turbine Rotor: Design and Field Test

Berg, Jonathan C.; Resor, Brian R.; Paquette, Joshua A.; White, Jonathan R.

The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the design, fabrication, and testing of the SMART Rotor. This report begins with an overview of active control research at Sandia and the objectives of this project. The SMART blade, based on the DOE / SNL 9-meter CX-100 blade design, is then documented including all modifications necessary to integrate the trailing edge flaps, sensors incorporated into the system, and the fabrication processes that were utilized. Finally the test site and test campaign are described.


Definition of Energy-Calibrated Spectra for National Reachback

Hertz, Kristin

Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
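A multi-line calibration of the kind called for can be expressed as a low-order polynomial fit of channel number to energy. The sketch below uses hypothetical channel positions for three common calibration lines; it is not the GR-135 operating procedure itself:

```python
import numpy as np

# Hypothetical (channel, energy-keV) calibration points spanning the
# detector range: 241Am 59.5 keV, 137Cs 661.7 keV, 60Co 1332.5 keV.
channels = np.array([60.0, 655.0, 1310.0])
energies = np.array([59.5, 661.7, 1332.5])

# A quadratic captures the detector nonlinearity that a single
# 137Cs line cannot constrain (three points fix three coefficients).
coeffs = np.polyfit(channels, energies, deg=2)
calibrate = np.poly1d(coeffs)

# The fit reproduces the calibration points:
assert np.allclose(calibrate(channels), energies)
print(calibrate(655.0))  # ≈ 661.7
```

With more than three well-spaced lines, the same fit becomes overdetermined and its residuals provide the validation check the paper describes.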


The Water, Energy, and Carbon Dioxide Sequestration Simulation Model (WECSsim™): A User’s Manual

Kobos, Peter; Roach, Jesse D.; Klise, Geoffrey T.; Heath, Jason E.; Dewers, Thomas; Malczynski, Leonard A.; Borns, David J.

The Water, Energy, and Carbon Sequestration Simulation Model (WECSsim) is a national dynamic simulation model that calculates and assesses capturing, transporting, and storing CO2 in deep saline formations from all coal and natural gas-fired power plants in the U.S. An overarching capability of WECSsim is to also account for simultaneous CO2 injection and water extraction within the same geological saline formation. Extracting, treating, and using these saline waters to cool the power plant is one way to develop more value from using saline formations as CO2 storage locations. WECSsim allows for both one-to-one comparisons of a single power plant to a single saline formation and the ability to develop a national CO2 storage supply curve and related national assessments for these formations. This report summarizes the scope, structure, and methodology of WECSsim along with a few key results. Developing WECSsim from a small scoping study into the full national-scale modeling effort took approximately 5 years; this report represents the culmination of that effort. The key findings from the WECSsim model indicate that the U.S. has several decades' worth of storage for CO2 in saline formations when managed appropriately. Competition for subsurface storage capacity, intrastate flows of CO2 and water, and a supportive regulatory environment all play a key role in the performance and cost profile across the range from a single power plant to the ability of all coal and natural gas-based plants to store CO2. The overall system cost to capture, transport, and store CO2 for the national assessment ranges from $74 to $208/tonne stored ($96 to $272/tonne avoided) for the first 25 to 50% of the 1,126 power plants, to between $1,585 and well beyond $2,000/tonne stored ($2,040 to well beyond $2,000/tonne avoided) for the remaining 75 to 100% of the plants. The latter range, while extremely large, includes all natural gas power plants in the U.S., many of which have an extremely low capacity factor and therefore a relatively high system cost to capture and store CO2.
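The distinction between cost per tonne stored and cost per tonne avoided arises because the capture system itself consumes energy and thereby emits additional CO2; a worked sketch with illustrative numbers (not WECSsim outputs):

```python
def cost_per_tonne_avoided(cost_per_tonne_stored, emitted_per_stored):
    """Avoided CO2 = stored CO2 minus the extra CO2 emitted to run
    capture, compression, and transport, so
    $/tonne avoided = $/tonne stored / (1 - emitted_per_stored)."""
    if not 0.0 <= emitted_per_stored < 1.0:
        raise ValueError("fraction must be in [0, 1)")
    return cost_per_tonne_stored / (1.0 - emitted_per_stored)

# If capture adds 0.25 t of emissions per tonne stored, a $74/t stored
# cost becomes $74 / 0.75 ≈ $98.7 per tonne actually avoided:
print(round(cost_per_tonne_avoided(74.0, 0.25), 1))  # 98.7
```

This is why the avoided-cost figures in the report sit above the corresponding stored-cost figures.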


Using Simulation to Evaluate the Performance of Resilience Strategies and Process Failures

Levy, Scott L.N.; Ferreira, Kurt; Widener, Patrick

Fault-tolerance has been identified as a major challenge for future extreme-scale systems. Current predictions suggest that, as systems grow in size, failures will occur more frequently. Because increases in failure frequency reduce the performance and scalability of these systems, significant effort has been devoted to developing and refining resilience mechanisms to mitigate the impact of failures. However, effective evaluation of these mechanisms has been challenging. Current systems are smaller and have significantly different architectural features (e.g., interconnect, persistent storage) than we expect to see in next-generation systems. To overcome these challenges, we propose the use of simulation. Simulation has been shown to be an effective tool for investigating performance characteristics of applications on future systems. In this work, we: identify the set of system characteristics that are necessary for accurate performance prediction of resilience mechanisms for HPC systems and applications; demonstrate how these system characteristics can be incorporated into an existing large-scale simulator; and evaluate the predictive performance of our modified simulator. We also describe how we were able to optimize the simulator for large temporal and spatial scales, allowing the simulator to run 4x faster and use over 100x less memory.


SMART Wind Turbine Rotor: Data Analysis and Conclusions

Berg, Jonathan C.; Barone, Matthew F.

The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of the rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the data post-processing and analysis performed to date on the field test data. Results include the control capability of the trailing edge flaps, the combined structural and aerodynamic damping observed through application of step actuation with ensemble averaging, direct observation of time delays associated with aerodynamic response, and techniques for characterizing an operating turbine with active rotor control.


Evaluation of Select Heat and Pressure Measurement Gauges for Potential Use in the NRC/OECD High Energy Arc Fault (HEAF) Test Program

Lopez, Carlos; Wente, William; Figueroa Faria, Victor G.

In an effort to improve the current state of the art in fire probabilistic risk assessment methodology, the U.S. Nuclear Regulatory Commission, Office of Regulatory Research, contracted Sandia National Laboratories (SNL) to conduct a series of scoping tests to identify thermal and mechanical probes that could be used to characterize the zone of influence (ZOI) during high energy arc fault (HEAF) testing. For the thermal evaluation, passive and active probes were exposed to HEAF-like heat fluxes for a period of 2 seconds at SNL's National Solar Thermal Test Facility to determine their ability to survive and measure such an extreme environment. Thermal probes tested included temperature lacquers (passive), NANMAC thermocouples, directional flame thermometers, modified plate thermometers, infrared temperature sensors, and a Gardon heat flux gauge. Similarly, passive and active pressure probes were evaluated by exposing them to pressures resulting from various high-explosive detonations at the Sandia Terminal Ballistic Facility. Pressure probes included bikini pressure gauges (passive) and pressure transducers. Results from these tests provided good insight into which probes should be considered for use during future HEAF testing.


RADTRAN 6 Technical Manual

Weiner, Ruth F.; Dennis, Matthew L.

This Technical Manual contains descriptions of the calculation models and mathematical and numerical methods used in the RADTRAN 6 computer code for transportation risk and consequence assessment. The RADTRAN 6 code combines user-supplied input data with values from an internal library of physical and radiological data to calculate the expected radiological consequences and risks associated with the transportation of radioactive material. Radiological consequences and risks are estimated with numerical models of exposure pathways, receptor populations, package behavior in accidents, and accident severity and probability.


Dynamic Analysis Methods for Detecting Anomalies in Asynchronously Interacting Systems

Solis, John H.; Kumar, Akshat

Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected, a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus, the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.


Feasibility of antenna-to-antenna isolation measurements at S-band in the Facility for Antenna and Radar-cross-section Measurements (FARM)

Brock, Billy C.

Frequency-domain antenna-coupling measurements performed in the compact-range room of the FARM will actually be dominated by components reflected from the ceiling, floor, walls, etc., not by the direct free-space coupling. Consequently, signal processing must be applied to the frequency-domain data to extract the direct free-space coupling. The analysis presented in this report demonstrates that it is possible to do so successfully.


Developing a System for Testing Computational Social Models using Amazon Mechanical Turk

Lakkaraju, Kiran; Rogers, Alisa M.

The US faces persistent, distributed threats from malevolent individuals, groups, and organizations around the world. Computational Social Models (CSMs) help anticipate the dynamics and behaviors of these actors by modeling the behavior and interactions of individuals, groups, and organizations. For strategic planners to trust the results of CSMs, they must have confidence in the validity of the models. Establishing validity before model use will enhance confidence and reduce the risk of error. One problem with validation is designing an appropriate controlled test of the model, similar to the testing of physical models. Lab experiments can do this, but are often limited to small numbers of subjects, with low subject diversity, and are often conducted in a contrived environment. Natural studies attempt to test models by gathering large-scale observational data (e.g., from social media); however, this sacrifices experimental control. We propose a new approach to running large-scale, controlled online experiments on diverse populations. Using Amazon Mechanical Turk, a crowdsourcing tool, we will draw large populations into controlled experiments in a manner that was not possible just a few years ago.


Mitigating Oscillator Pulling Due To Magnetic Coupling in Monolithic Mixed-Signal Radio-Frequency Integrated Circuits

Sobering, Ian D.

An analysis of frequency pulling in a varactor-tuned LC VCO under coupling from an on-chip PA is presented. The large-signal behavior of the VCO's inversion-mode MOS varactors is outlined, and the susceptibility of the VCO to frequency pulling from PA aggressor signals with various modulation schemes is discussed. We show that if the aggressor signal is aperiodic, band-limited, or amplitude-modulated, the varactor-tuned LC VCO will experience frequency pulling due to time-modulation of the varactor capacitance. However, if the aggressor signal has constant-envelope phase modulation, VCO pulling can be eliminated, even in the presence of coupling, through careful choice of VCO frequency and divider ratio. Additional mitigation strategies, including new inductor topologies and system-level architectural choices, are also examined.


Tools for Large-Scale Mobile Malware Analysis

Bierma, Michael

Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has remained relatively small scale due to the closed nature of the iOS ecosystem and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.


First-principles calculation of entropy for liquid metals

Physical Review E - Statistical, Nonlinear, and Soft Matter Physics

Desjarlais, Michael P.

We demonstrate the accurate calculation of entropies and free energies for a variety of liquid metals using an extension of the two-phase thermodynamic (2PT) model based on a decomposition of the velocity autocorrelation function into gas-like (hard sphere) and solid-like (harmonic) subsystems. The hard sphere model for the gas-like component is shown to give systematically high entropies for liquid metals as a direct result of the unphysical Lorentzian high-frequency tail. Using a memory function framework we derive a generally applicable velocity autocorrelation and frequency spectrum for the diffusive component which recovers the low-frequency (long-time) behavior of the hard sphere model while providing for realistic short-time coherence and high-frequency tails to the spectrum. This approach provides a significant increase in the accuracy of the calculated entropies for liquid metals and is compared to ambient pressure data for liquid sodium, aluminum, gallium, tin, and iron. The use of this method for the determination of melt boundaries is demonstrated with a calculation of the high-pressure bcc melt boundary for sodium. With the significantly improved accuracy available with the memory function treatment for softer interatomic potentials, the 2PT model for entropy calculations should find broader application in high energy density science, warm dense matter, planetary science, geophysics, and material science. © 2013 American Physical Society.
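The decomposition described begins with the velocity autocorrelation function and its frequency spectrum. The sketch below computes only that first step for synthetic velocity data; it is not the authors' memory-function treatment or the full 2PT entropy calculation:

```python
import numpy as np

def vacf(v):
    """Normalized velocity autocorrelation function C(t) for a
    (steps, atoms, 3) velocity trajectory, averaged over atoms and
    time origins."""
    n = v.shape[0]
    c = np.array([np.mean(np.sum(v[:n - t] * v[t:], axis=-1))
                  for t in range(n)])
    return c / c[0]

def dos(c, dt):
    """Frequency spectrum (density of states) as the cosine transform
    of the VACF; returns (frequencies, spectrum)."""
    spectrum = np.fft.rfft(c).real
    freqs = np.fft.rfftfreq(len(c), d=dt)
    return freqs, spectrum

# Synthetic example: a single harmonic mode gives a cosine-like VACF
# whose spectrum peaks at the mode frequency (5.0 here, in 1/time units).
rng = np.random.default_rng(0)
t = np.arange(2048) * 0.01
v = np.cos(2 * np.pi * 5.0 * t)[:, None, None] * rng.standard_normal((1, 8, 3))

freqs, spec = dos(vacf(v), dt=0.01)
print(freqs[np.argmax(spec)])  # peak near 5.0
```

In the 2PT scheme, this spectrum is then partitioned into a diffusive (gas-like) and a vibrational (solid-like) part, each with its own entropy expression.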


Tunable metamaterials based on voltage controlled strong coupling

Applied Physics Letters

Montano, Ines; Klem, John F.; Brener, Igal

We present the design, fabrication, and realization of an electrically tunable metamaterial operating in the mid-infrared spectral range. Our devices combine intersubband transitions in semiconductor quantum-wells with planar metamaterials and operate in the strong light-matter coupling regime. The resonance frequency of the intersubband transition can be controlled by an external bias relative to the fixed metamaterial resonance. This allows us to switch dynamically from an uncoupled to a strongly coupled system and thereby to shift the eigenfrequency of the upper polariton branch by 2.5 THz (corresponding to 8% of the center frequency or one full linewidth) with a bias of 5 V. © 2013 AIP Publishing LLC.
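The voltage-controlled detuning described above is captured by a textbook two-coupled-mode picture; the sketch below computes the polariton branch frequencies for illustrative numbers (the resonance frequencies and coupling strength are made-up assumptions, not the device parameters):

```python
import numpy as np

def polariton_branches(omega_mm, omega_isb, g):
    """Eigenfrequencies of two coupled modes: a fixed metamaterial
    resonance omega_mm and a tunable intersubband transition omega_isb,
    coupled with strength g. Returns (lower, upper) polariton branches."""
    H = np.array([[omega_mm, g],
                  [g, omega_isb]])
    lower, upper = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return lower, upper

omega_mm = 30.0   # illustrative metamaterial resonance (THz)
g = 1.5           # illustrative coupling strength (THz)

# Far detuned (bias off): the upper branch is metamaterial-like, near omega_mm.
lo_off, up_off = polariton_branches(omega_mm, omega_mm - 20.0, g)

# Tuned into resonance (bias on): branches split symmetrically by 2*g.
lo_on, up_on = polariton_branches(omega_mm, omega_mm, g)
print(up_on - lo_on)  # ~3.0, i.e. the vacuum Rabi splitting 2*g
```

As the intersubband transition is tuned through the fixed metamaterial resonance, level repulsion at the anticrossing pushes the upper branch upward; this is the qualitative mechanism behind the 2.5 THz polariton shift reported in the abstract.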


Quantum-engineered interband cascade photovoltaic devices

Proceedings of SPIE - The International Society for Optical Engineering

Klem, John F.

Quantum-engineered multiple-stage photovoltaic (PV) devices are explored based on InAs/GaSb/AlSb interband cascade (IC) structures. These ICPV devices employ multiple discrete absorbers that are connected in series by wide-bandgap unipolar barriers using type-II heterostructure interfaces to facilitate carrier transport between cascade stages, similar to IC lasers. The discrete architecture is beneficial for improving the collection efficiency and for spectral splitting by utilizing absorbers with different bandgaps. As such, the photovoltages from each individual cascade stage in an ICPV device add together, creating a high overall open-circuit voltage, similar to conventional multi-junction tandem solar cells. Furthermore, photo-generated carriers can be collected with nearly 100% efficiency in each stage, because the carriers travel over only a single cascade stage, designed to be shorter than a typical diffusion length. The approach is of significant importance for operation at high temperatures, where the diffusion length is reduced. Here, we present our recent progress in the study of ICPV devices, including the demonstration of ICPV devices at room temperature and above with narrow bandgaps (e.g., 0.23 eV) and high open-circuit voltages. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.


Charge Sensed Pauli Blockade in a Metal–Oxide–Semiconductor Lateral Double Quantum Dot

Nano Letters

Nguyen, Khoi T.; Lu, Tzu M.; Muller, Richard P.; Carroll, M.S.; Lilly, Michael; Nielsen, Erik N.; Bishop, Nathaniel B.; Young, Ralph W.; Wendt, Joel R.; Dominguez, Jason; Pluym, Tammy; Stevens, Jeffrey

We report Pauli blockade in a multielectron silicon metal–oxide–semiconductor (MOS) double quantum dot (DQD) with an integrated charge sensor. The current is rectified up to a blockade energy of 0.18 ± 0.03 meV. The blockade energy is analogous to the singlet–triplet splitting in a two-electron double quantum dot. Built-in imbalances of the tunnel rates in the MOS DQD obscure some edges of the bias triangles. A method to extract the bias triangles is described, and a numeric rate-equation simulation is used to understand the effect of tunneling imbalances and finite temperature on the charge stability (honeycomb) diagram, in particular the identification of missing and shifted edges. A bound on the relaxation time of the triplet-like state is also obtained from this measurement.


Electrical breakdown phenomena involving material interfaces

Digest of Technical Papers-IEEE International Pulsed Power Conference

Hjalmarson, Harold P.; Zutavern, Fred J.; Williamson, Kenneth M.; Lehr, Jane; Mar, Alan

Electrical breakdown in a composite gas-solid dielectric is described in qualitative terms. Continuum- and particle-based calculations are performed on idealized structures. The analysis and the calculations suggest that dielectric permittivity has an important role at early times in the breakdown events. The continuum calculations show that the space-charge limited current in the solid dielectric has an important role at longer times. At very long times, the Joule heating from the space-charge limited current is expected to produce thermal breakdown. © 2013 IEEE.


Observations of Modified Three-Dimensional Instability Structure for Imploding z-Pinch Liners that are Premagnetized with an Axial Field

Physical Review Letters

Mcbride, Ryan; Gomez, Matthew R.; Hansen, Stephanie B.; Herrmann, Mark H.; Mckenney, John; Robertson, G.K.; Rochau, G.A.; Savage, Mark E.; Stygar, William A.; Jennings, Christopher A.; Lamppa, Derek C.; Martin, Matthew R.; Rovang, Dean C.; Slutz, Stephen A.; Cuneo, Michael E.; Owen, Albert C.; Sinars, Daniel

Novel experimental data are reported that reveal helical instability formation on imploding z-pinch liners that are premagnetized with an axial field. Such instabilities differ dramatically from the mostly azimuthally symmetric instabilities that form on unmagnetized liners. The helical structure persists at nearly constant pitch as the liner implodes. This is surprising since, at the liner surface, the azimuthal drive field presumably dwarfs the axial field for all but the earliest stages of the experiment. These fundamentally 3D results provide a unique and challenging test for 3D-magnetohydrodynamics simulations.


Application of isotopic labeling, and gas chromatography mass spectrometry, to understanding degradation products and pathways in the thermal-oxidative aging of Nylon 6.6

Polymer Degradation and Stability

Von White II, Gregory; Hochrein, James M.

Nylon 6.6 containing 13C isotopic labels at specific positions along the macromolecular backbone has been subjected to extensive thermal-oxidative aging at 138 °C for time periods up to 243 days. In complementary experiments, unlabeled Nylon 6.6 was subjected to the same aging conditions under an atmosphere of 18O2. Volatile organic degradation products were analyzed by cryofocusing gas chromatography mass spectrometry (cryo-GC/MS) to identify the isotopic labeling. The labeling results, combined with basic considerations of free radical reaction chemistry, provided insights into the origin of degradation species with respect to the macromolecular structure. A number of inferences on chemical mechanisms were drawn, based on 1) the presence (or absence) of the isotopic labels in the various products, 2) the location of the isotope within the product molecule, and 3) the relative abundance of products as indicated by large differences in peak intensities in the gas chromatogram. The overall degradation results can be understood in terms of free radical pathways originating from initial attacks at three different positions along the nylon chain: hydrogen abstraction from the (CH2) group adjacent to the nitrogen atom, hydrogen abstraction from the (CH2) group adjacent to the carbonyl group, and direct radical attack on the carbonyl. Understanding the pathways that lead to Nylon 6.6 degradation ultimately provides new insight into changes that can be leveraged to detect and reduce early aging and minimize problems associated with material degradation.


Plasmonics and nanoantennas for infrared detectors

2013 IEEE Photonics Conference, IPC 2013

Davids, Paul; Kim, Jin K.; Leonhardt, Darin; Wendt, Joel R.; Reinke, Charles M.

Detectors that take full advantage of the energy confinement offered by surface waves could have significant performance advantages in dark current and optical functionality. We use a subwavelength patterned metal nanoantenna structure to convert incoming plane waves to these surface waves. © 2013 IEEE.


Review of polymer oxidation and its relationship with materials performance and lifetime prediction

Polymer Degradation and Stability

Celina, Mathew C.

All polymers are intrinsically susceptible to oxidation, which is the underlying process for thermally driven materials degradation and is of concern in various applications. There are many approaches for predicting oxidative polymer degradation. Aging studies are usually meant to accelerate oxidation chemistry for predictive purposes. Kinetic models attempt to describe reaction mechanisms and derive rate constants, whereas rapid qualification tests should provide confidence for extended performance during application, and TGA tests are similarly meant to provide rapid guidance on thermal degradation features. What are the underlying commonalities or diverging trends and complications when we approach thermo-oxidative aging of polymers in such different ways? This review presents a brief status report on the important aspects of polymer oxidation and focuses on the complexity of thermally accelerated polymer aging phenomena. Thermal aging and lifetime prediction, the importance of diffusion-limited oxidation (DLO), property correlations, kinetic models, TGA approaches, and a framework for predictive aging models are briefly discussed. An overall perspective is provided showing the challenges associated with our understanding of polymer oxidation as it relates to lifetime prediction requirements.


High-temperature brushless DC motor controller design

Transactions - Geothermal Resources Council

Cieslewski, Grzegorz; Lindblom, Scott C.; Maldonado, Frank J.; Echert, Michael

High-temperature geothermal exploration requires a wide array of tools and sensors to instrument drilling and monitor downhole conditions. There is a steep decline in component availability as the operating temperature increases, limiting tool availability and capability for both drilling and monitoring. Several applications exist where a small motor can provide a significant benefit to the overall operation. Applications such as clamping systems for seismic monitoring, televiewers, valve actuators, and directional drilling systems would be able to utilize a robust motor controller capable of operating in these harsh environments. The development of a high-temperature motor controller capable of operation at 225°C significantly increases the operating envelope for next-generation high-temperature tools and provides a useful component for designers to integrate into future downhole systems. High-temperature motor control has not been an area of development until recently, as motors capable of operating in extreme temperature regimes are becoming commercially available. Currently, the most common method of deploying a motor controller is to use a Dewared (heat-shielded) tool with low-temperature electronics to control the motor. This approach limits the amount of time the controller tool can stay in high-temperature environments and does not allow for long-term deployments. A Dewared approach is suitable for logging tools, which spend limited time in the well; however, a longer-term deployment like a seismic tool [Henfling 2010], which may be deployed for weeks or even months at a time, is not possible. Utilizing high-temperature electronics and a high-temperature motor that does not need to be shielded provides a reliable and robust method for long-term deployments and long-life operations.


Dynamic failure of materials using the material point method in CTH

Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013

Schumacher, Shane C.; Ruggirello, Kevin P.

The dynamic failure of materials in a finite-volume shock physics computational code poses many challenges. Sandia National Laboratories has added Lagrangian markers as a new capability to CTH. The failure process of a marker in CTH is driven by the nature of Lagrangian numerical methods and is performed in three steps. The first step is to detect failure using the material constitutive model, which detects failure by computing damage or other measures from the strain rate, strain, stress, etc. Once failure has been determined, the material stress and energy states are released along a path driven by the constitutive model. Once the magnitude of the stress reaches a critical value, the material is switched to another material that behaves hydrodynamically. The hydrodynamic failed material is by definition non-shear-supporting but still retains the equation of state (EOS) portion of the constitutive model. The material switching process conserves mass, momentum, and energy. The failed marker material is allowed to fail using the CTH method of void insertion as necessary during the computation.
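The three-step failure process described above can be sketched as a small state machine per marker; the field names, thresholds, and exponential stress-release path below are illustrative assumptions, not CTH internals:

```python
def update_marker(marker, damage_threshold=1.0,
                  stress_release_rate=0.2, switch_stress=1e-3):
    """One illustrative failure-processing step for a Lagrangian marker:
    (1) detect failure via accumulated damage from the constitutive model,
    (2) release the stress state along a decay path, and
    (3) switch to a hydrodynamic (EOS-only, no shear support) material
    once the stress magnitude falls below a critical value."""
    if not marker["failed"] and marker["damage"] >= damage_threshold:
        marker["failed"] = True                          # step 1: detect
    if marker["failed"] and not marker["hydrodynamic"]:
        marker["stress"] *= (1.0 - stress_release_rate)  # step 2: release
        if abs(marker["stress"]) < switch_stress:
            marker["hydrodynamic"] = True                # step 3: switch
            marker["stress"] = 0.0                       # no shear support
    return marker

m = {"damage": 1.2, "stress": 0.01, "failed": False, "hydrodynamic": False}
for _ in range(30):
    update_marker(m)
# The over-damaged marker fails, relaxes its stress, then goes hydrodynamic.
```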


Error suppression and error correction in adiabatic quantum computation: Non-equilibrium dynamics

New Journal of Physics

Sarovar, Mohan; Young, Kevin

While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to 'Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)', which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. © IOP Publishing and Deutsche Physikalische Gesellschaft.


Unified creep plasticity damage (UCPD) model for SAC396 solder

ASME 2013 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2013

Neilsen, Michael K.; Vianco, Paul T.

A unified creep plasticity damage (UCPD) model for Sn-Pb and Pb-free solders was developed and implemented into finite element analysis codes. The new model will be described along with the relationship between the model's damage evolution equation and an empirical Coffin-Manson relationship for solder fatigue. Next, developments needed to model crack initiation and growth in solder joints will be described. Finally, experimentally observed cracks in typical solder joints subjected to thermal mechanical fatigue are compared with model predictions. Finite element based modeling is particularly suited for predicting solder joint fatigue of advanced electronics packaging, e.g. package-on-package (PoP), because it allows for evaluation of a variety of package materials and geometries. Copyright © 2013 by ASME.
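The empirical Coffin-Manson relationship referenced above relates the plastic strain range per cycle to the number of cycles to failure; a minimal sketch with placeholder coefficients (not fitted SAC396 parameters):

```python
def coffin_manson_cycles(delta_eps_p, eps_f=0.3, c=-0.5):
    """Cycles to failure N_f from the Coffin-Manson law
    delta_eps_p / 2 = eps_f * (2 * N_f)**c, solved for N_f.
    eps_f (fatigue ductility coefficient) and c (fatigue ductility
    exponent) are placeholder values, not SAC396 solder data."""
    two_nf = (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)
    return two_nf / 2.0

# With c = -0.5, halving the plastic strain range quadruples the life.
n1 = coffin_manson_cycles(0.01)
n2 = coffin_manson_cycles(0.005)
print(n2 / n1)  # ~4
```

A damage-evolution model calibrated so that accumulated damage reaches 1 at N_f reproduces this power law, which is the kind of correspondence the abstract draws between the UCPD damage equation and Coffin-Manson.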


Discriminating composite panels by use of a spectral reflectometer

ASME 2013 Heat Transfer Summer Conf. Collocated with the ASME 2013 7th Int. Conf. on Energy Sustainability and the ASME 2013 11th Int. Conf. on Fuel Cell Science, Engineering and Technology, HT 2013

Brown, Alexander L.

Carbon fibers are being increasingly used in composites for aircraft. They are bound together with a binder, often an epoxy. There are many grades of binders and many different types of composites sold on the market, and they are expensive. We have some donated materials of unknown type and would like to use them without incurring the large cost of analyzing the materials using laboratory methods. Visual inspection is not normally accurate enough to tell one composite from another. Optical methods that involve a broader spectrum have commonly been used to discriminate organic materials. A five-band spectral reflectometer is used to measure the reflectivity of the surfaces and is a simple way of extracting data in the infrared bands. The instrument used in these tests is less resolved than a narrow-band spectrometer, but is easier to deploy because it is a hand-held device that only requires a flat surface of approximately 3 cm diameter. The reflectivity of many different composite materials, including a bismaleimide, several thermoset epoxies, and some low-temperature epoxies from various manufacturers, is measured. Other materials are also included to demonstrate that non-composites can be rejected by the methods. Analysis shows that the reflectometer measurements are capable of discriminating some materials but have difficulty discriminating others. The raw reflectivity data are likely to be helpful for future radiation modeling of composite surfaces. Copyright © 2013 by ASME.


Measurement of fatigue crack growth rates for SA-372 Gr. J steel in 100 MPa hydrogen gas following Article KD-10

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Somerday, Brian P.; San Marchi, Chris

The objective of this work is to enable the safe design of hydrogen pressure vessels by measuring the fatigue crack growth rates of ASME code-qualified steels in high-pressure hydrogen gas. While a design-life calculation framework has recently been established for high-pressure hydrogen vessels, a material property database does not exist to support the analysis. This study addresses such voids in the database by measuring the fatigue crack growth rates for three heats of ASME SA-372 Grade J steel in 100 MPa hydrogen gas at two different load ratios (R). Results show that fatigue crack growth rates are similar for all three steel heats and are only a mild function of R. Hydrogen accelerates the fatigue crack growth rates of the steels by at least an order of magnitude relative to crack growth rates in inert environments. Despite such dramatic effects of hydrogen on the fatigue crack growth rates, measurement of these properties enables reliable definition of the design life of steel hydrogen containment vessels. Copyright © 2013 by ASME.
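Design-life frameworks of the kind mentioned above typically integrate a Paris-type crack growth law, da/dN = C(ΔK)^m, from an initial flaw size to a critical size. The sketch below is generic, with made-up coefficients rather than the measured SA-372 Gr. J data:

```python
import math

def cycles_to_grow(a0, af, C, m, delta_sigma, Y=1.0, steps=10000):
    """Numerically integrate da/dN = C * dK**m from initial crack size a0
    to final size af, with stress intensity range
    dK = Y * delta_sigma * sqrt(pi * a). Units must be consistent
    (here: MPa and meters, so dK is in MPa*sqrt(m))."""
    n = 0.0
    a = a0
    da = (af - a0) / steps
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        n += da / (C * dK ** m)  # cycles spent growing by da
        a += da
    return n

# Made-up Paris coefficients for illustration only.
N_air = cycles_to_grow(a0=0.001, af=0.01, C=1e-11, m=3.0, delta_sigma=200.0)
# A tenfold larger growth-rate coefficient, mimicking the order-of-magnitude
# hydrogen acceleration described in the abstract, cuts the life tenfold.
N_h2 = cycles_to_grow(a0=0.001, af=0.01, C=1e-10, m=3.0, delta_sigma=200.0)
print(N_air / N_h2)  # ~10
```

Because the predicted cycle count scales as 1/C in this law, measuring the hydrogen-accelerated growth rate directly bounds the design life, which is why the crack growth data above enable reliable vessel life definition despite the large hydrogen effect.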


A dynamic adaptation technique for the material point method

Particle-Based Methods III: Fundamentals and Applications - Proceedings of the 3rd International Conference on Particle-Based Methods: Fundamentals and Applications, Particles 2013

Ruggirello, Kevin P.; Schumacher, Shane C.

The Lagrangian Material Point Method (MPM) [1, 2] has been implemented into the Eulerian shock physics code CTH [3] at Sandia National Laboratories. Since the MPM uses a background grid to calculate gradients, the method can numerically fracture if an insufficient number of particles per cell is used in high-strain problems. Numerical fracture occurs when particles become separated by more than a grid cell, leading to a loss of communication between them. One solution to this problem is the Convected Particle Domain Interpolation (CPDI) technique [4], in which the shape functions are allowed to stretch smoothly across multiple grid cells; this alleviates the issue but introduces difficulties for parallelization because the particle domains can become non-local. This paper presents an approach in which a particle is dynamically split when its volumetric strain exceeds a set limit, so that the particle domain is always local, and presents an application to a large-strain problem.
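The splitting criterion can be sketched in one dimension: when a particle's volumetric strain exceeds the limit, replace it with two children that conserve mass and volume and stay inside the parent's domain. The 1-D layout, two-way split, and limit value below are illustrative assumptions, not the paper's algorithm:

```python
def split_particles(particles, strain_limit=0.5):
    """1-D illustration: particles is a list of dicts with position x,
    mass m, volume v, half-width h, and volumetric strain eps_v.
    Any particle over the strain limit is replaced by two children that
    conserve total mass and volume and lie within the parent's domain,
    keeping every particle's domain local to its neighborhood."""
    out = []
    for p in particles:
        if p["eps_v"] > strain_limit:
            for sign in (-1, 1):
                out.append({
                    "x": p["x"] + sign * p["h"] / 2,  # offset children
                    "m": p["m"] / 2,                  # conserve mass
                    "v": p["v"] / 2,                  # conserve volume
                    "h": p["h"] / 2,                  # shrink domain
                    "eps_v": p["eps_v"],              # carry strain over
                })
        else:
            out.append(p)
    return out

ps = [{"x": 0.0, "m": 2.0, "v": 1.0, "h": 0.5, "eps_v": 0.8},
      {"x": 1.0, "m": 2.0, "v": 1.0, "h": 0.5, "eps_v": 0.1}]
new = split_particles(ps)  # only the highly strained particle splits
```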


Quantum Monte Carlo applied to solids

Physical Review. B, Condensed Matter and Materials Physics

Shulenburger, Luke N.; Mattsson, Thomas

We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.


Rechargeable aluminum batteries with conducting polymers as positive electrodes

Hudak, Nicholas S.

This report is a summary of research results from an Early Career LDRD project conducted from January 2012 to December 2013 at Sandia National Laboratories. Demonstrated here is the use of conducting polymers as active materials in the positive electrodes of rechargeable aluminum-based batteries operating at room temperature. The battery chemistry is based on chloroaluminate ionic liquid electrolytes, which allow reversible stripping and plating of aluminum metal at the negative electrode. Characterization of electrochemically synthesized polypyrrole films revealed doping of the polymers with chloroaluminate anions, which is a quasi-reversible reaction that facilitates battery cycling. Stable galvanostatic cycling of polypyrrole and polythiophene cells was demonstrated, with capacities at near-theoretical levels (30-100 mAh g-1) and coulombic efficiencies approaching 100%. The energy density of a sealed sandwich-type cell with polythiophene at the positive electrode was estimated as 44 Wh kg-1, which is competitive with state-of-the-art battery chemistries for grid-scale energy storage.


Classifying with confidence from incomplete information

Journal of Machine Learning Research

Anderson, Hyrum A.

We consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth are needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability—the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data, and we propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify each time series as early as possible while still guaranteeing that the reliability threshold is met.
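The reliability notion described here can be illustrated with a Monte Carlo estimate: sample plausible completions of the missing features, classify each, and commit to a label only when one label is sufficiently dominant. Everything below (the marginal-Gaussian imputation model, the threshold, the toy classifier) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

def classify_if_reliable(clf, x_partial, missing_idx, mu, cov,
                         threshold=0.9, n_samples=500, rng=None):
    """Estimate the reliability of classifying x_partial (features at
    missing_idx unknown) by sampling completions from a Gaussian model
    and checking how often the classifier gives the same answer.
    Returns (label, reliability), with label None below threshold."""
    rng = rng or np.random.default_rng(0)
    labels = []
    for _ in range(n_samples):
        x = x_partial.copy()
        # Illustrative imputation: draw missing features from the marginal
        # Gaussian (a real model would condition on the observed features).
        draw = rng.multivariate_normal(mu, cov)
        x[missing_idx] = draw[missing_idx]
        labels.append(clf(x))
    values, counts = np.unique(labels, return_counts=True)
    best = values[np.argmax(counts)]
    reliability = counts.max() / n_samples
    return (best, reliability) if reliability >= threshold else (None, reliability)

# Toy classifier: sign of the feature sum. Feature 0 is observed (5.0),
# feature 1 is missing.
clf = lambda x: int(x.sum() > 0)
x = np.array([5.0, 0.0])
label, rel = classify_if_reliable(clf, x, missing_idx=[1],
                                  mu=np.zeros(2), cov=np.eye(2))
```

With the observed feature strongly positive, nearly every sampled completion yields the same label, so the reliability is near 1 and the label is committed; an ambiguous sample would instead return `(None, reliability)`, i.e. "wait for more data."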


Design basis of an impulsively loaded vessel for specific loading configurations

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Yip, Mien; Haroldsen, Brent L.

For an impulsively loaded containment vessel, such as the Sandia Explosive Destruction System (EDS), the traditional notion of a single-value explosive rating may not be sufficient to qualify the vessel for many real-life loading situations, such as those involving multiple munitions placed in various geometric configurations. Other significant factors, including detonation timing, geometry of explosive(s), and standoff distances, need to be considered for a more accurate assessment of the vessel integrity. It is obvious that the vessel structural response from an explosive charge detonated at the geometric center of the vessel will be very different from the structural response from the same explosive charge detonated next to the vessel wall. It is, however, less obvious that the same explosive can produce vastly different vessel response if it is detonated at one end versus at the middle versus from both ends. The goal of this paper is to identify some of the effects that non-trivial loading situations have on the vessel structural integrity. The metric for determining vessel integrity is based on Code Case 2564 of the ASME Boiler and Pressure Vessel Code. Based on the findings of this work, it may be necessary to qualify impulsively loaded containment vessels for specific explosive configurations, which should include the quantity, geometry and location of the explosives, as well as the detonation points. Copyright © 2013 by ASME.


Experience with using code case 2564 to design and certify an impulsively loaded vessel

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Haroldsen, Brent L.; Stofleth, Jerome H.; Yip, Mien

Code Case 2564 for the design of impulsively loaded vessels was approved in January 2008. In 2010 the US Army Non-Stockpile Chemical Materiel Program, with support from Sandia National Laboratories, procured a vessel per this Code Case for use on the Explosive Destruction System (EDS). The vessel was delivered to the Army in August of 2010 and approved for use by the DoD Explosives Safety Board in 2012. Although others have used the methodology and design limits of the Code Case to analyze vessels, to our knowledge, this was the first vessel to receive an ASME explosive rating with a U3 stamp. This paper discusses lessons learned in the process. Of particular interest were issues related to defining the design basis in the User Design Specification and explosive qualification testing required for regulatory approval. Specifying and testing an impulsively loaded vessel is more complicated than a static pressure vessel because the loads depend on the size, shape, and location of the explosive charges in the vessel and on the kind of explosives used and the point of detonation. Historically the US Department of Defense and Department of Energy have required an explosive test. Currently the Code Case does not address testing requirements, but it would be beneficial if it did since having vetted, third party standards for explosive qualification testing would simplify the process for regulatory approval. Copyright © 2013 by ASME.
