Publications

Results 7201–7400 of 9,998

Characterization of Pathogens in Clinical Specimens via Suppression of Host Background for Efficient Second Generation Sequencing Analyses

Branda, Steven B.; Jebrail, Mais J.; Van De Vreugde, James L.; Langevin, Stanley A.; Bent, Zachary B.; Curtis, Deanna J.; Lane, Pamela L.; Carson, Bryan C.; La Bauve, Elisa L.; Patel, Kamlesh P.; Ricken, James B.; Schoeniger, Joseph S.; Solberg, Owen D.; Williams, Kelly P.; Misra, Milind; Powell, Amy J.; Pattengale, Nicholas D.; May, Elebeoba E.; Lane, Todd L.; Lindner, Duane L.; Young, Malin M.; VanderNoot, Victoria A.; Thaitrong, Numrin T.; Bartsch, Michael B.; Renzi, Ronald F.; Tran-Gyamfi, Mary B.; Meagher, Robert M.

Abstract not provided.

Optimized pulses for the control of uncertain qubits

Physical Review A - Atomic, Molecular, and Optical Physics

Carroll, Malcolm; Witzel, Wayne W.

The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order in π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
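
The sensitivity the abstract describes can be illustrated with a minimal numerical sketch (ours, not the authors' code; numpy is assumed): a constant-amplitude π pulse calibrated for zero drift loses fidelity as the drift term delta deviates from its nominal value.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
SZ = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

def pi_pulse_unitary(delta, omega):
    """Exact propagator for H = (delta*SZ + omega*SX)/2 held constant over
    the nominal pi-pulse duration t = pi/omega (calibrated for delta = 0)."""
    t = np.pi / omega
    half_gap = 0.5 * np.sqrt(delta ** 2 + omega ** 2)
    n_dot_sigma = (delta * SZ + omega * SX) / (2.0 * half_gap)  # unit axis . sigma
    theta = half_gap * t
    return np.cos(theta) * I2 - 1j * np.sin(theta) * n_dot_sigma

def gate_fidelity(U, target):
    """Phase-insensitive gate fidelity |Tr(target^dagger U)| / 2."""
    return abs(np.trace(target.conj().T @ U)) / 2.0

omega = 1.0
for delta in (0.0, 0.1, 0.3):   # drift uncertainty as a fraction of omega
    F = gate_fidelity(pi_pulse_unitary(delta, omega), SX)
    print(f"delta = {delta:.1f} * omega -> pi-pulse fidelity F = {F:.4f}")
```

At delta = 0 the fidelity is exactly 1 and it degrades as the drift moves off its calibrated value, which is the kind of sensitivity the optimized pulses in the paper are designed to suppress.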

Automated Molecular Biology Platform Enabling Rapid & Efficient SGS Analysis of Pathogens in Clinical Samples

Branda, Steven B.; Jebrail, Mais J.; Van De Vreugde, James L.; Langevin, Stanley A.; Bent, Zachary B.; Curtis, Deanna J.; Lane, Pamela L.; Carson, Bryan C.; La Bauve, Elisa L.; Patel, Kamlesh P.; Ricken, James B.; Schoeniger, Joseph S.; Solberg, Owen D.; Williams, Kelly P.; Misra, Milind; Powell, Amy J.; Pattengale, Nicholas D.; May, Elebeoba E.; Lane, Todd L.; Lindner, Duane L.; Young, Malin M.; VanderNoot, Victoria A.; Thaitrong, Numrin T.; Bartsch, Michael B.; Renzi, Ronald F.; Tran-Gyamfi, Mary B.; Meagher, Robert M.

Abstract not provided.

Cooperative application/OS DRAM fault recovery

Hoemmen, Mark F.; Ferreira, Kurt; Heroux, Michael A.; Brightwell, Ronald B.

Exascale systems will present considerable fault-tolerance challenges to applications and system software. These systems are expected to suffer several hard and soft errors per day. Unfortunately, many fault-tolerance methods in use, such as rollback recovery, are unsuitable for many of the expected errors, such as DRAM failures. As a result, applications will need to address these resilience challenges to more effectively utilize future systems. In this paper, we describe work on a cross-layer application/OS framework to handle uncorrected memory errors. We illustrate the use of this framework through its integration with a new fault-tolerant iterative solver within the Trilinos library, and present initial convergence results.
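
The detect-and-retry idea behind such cross-layer recovery can be sketched in miniature (a toy illustration, not the Trilinos solver; the fault model and all names here are our own): an iterative solver that treats each matrix-vector product as unreliable, and discards and retries any result the fault layer reports as corrupted.

```python
import random
import numpy as np

def unreliable_matvec(A, x, fault_rate, rng):
    """Matrix-vector product through a simulated faulty memory path:
    occasionally one result entry is corrupted to NaN, a stand-in for an
    uncorrected DRAM error surfaced to the application by the OS."""
    y = A @ x
    if rng.random() < fault_rate:
        y[rng.randrange(len(y))] = np.nan
    return y

def resilient_richardson(A, b, omega=0.2, tol=1e-10, max_iter=10_000,
                         fault_rate=0.05, seed=0):
    """Richardson iteration that treats every matvec as unreliable: a
    non-finite result is discarded and the step retried, so injected
    faults cost time but not correctness."""
    rng = random.Random(seed)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        Ax = unreliable_matvec(A, x, fault_rate, rng)
        if not np.all(np.isfinite(Ax)):
            continue                      # fault detected: skip and retry
        r = b - Ax
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * r
    return x

# Small SPD test system: diagonally dominant, so the iteration converges.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = resilient_richardson(A, b)
print("residual:", np.linalg.norm(b - A @ x))
```

Despite a 5% injected fault rate per matvec, the solver converges to the same answer, only slower; this is the essence of the skip-and-recover contract between the OS fault layer and the solver.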

Evaluating operating system vulnerability to memory errors

Ferreira, Kurt; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically occupies only a small fraction of a compute node's physical memory, recent studies show more memory errors in this region of memory than in the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

Computational aspects of many-body potentials

MRS Bulletin

Plimpton, Steven J.; Thompson, Aidan P.

We discuss the relative complexity and computational cost of several popular many-body empirical potentials, developed by the materials science community over the past 30 years. The inclusion of more detailed many-body effects has come at a computational cost, but the cost still scales linearly with the number of atoms modeled. This is enabling very large molecular dynamics simulations with unprecedented atomic-scale fidelity to physical and chemical phenomena. The cost and scalability of the potentials, run in serial and parallel, are benchmarked in the LAMMPS molecular dynamics code. Several recent large calculations performed with these potentials are highlighted to illustrate what is now possible on current supercomputers. We conclude with a brief mention of high-performance computing architecture trends and the research issues they raise for continued potential development and use. © 2012 Materials Research Society.
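
The linear scaling noted above rests on the short-ranged nature of these potentials, which permits neighbor enumeration via cell (link-cell) lists rather than an all-pairs search. A small illustrative sketch of the cell-list idea (not LAMMPS code; all names are ours), checked against the O(N²) brute-force result:

```python
import itertools
import random

def brute_force_pairs(points, cutoff):
    """O(N^2) reference: every pair within the cutoff distance."""
    c2 = cutoff * cutoff
    return {(i, j) for i, j in itertools.combinations(range(len(points)), 2)
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= c2}

def cell_list_pairs(points, cutoff):
    """O(N) for bounded density: bin atoms into cubic cells of edge `cutoff`,
    then test only candidates in the same or adjacent (27 total) cells."""
    cells = {}
    for i, p in enumerate(points):
        cells.setdefault(tuple(int(x // cutoff) for x in p), []).append(i)
    c2 = cutoff * cutoff
    pairs = set()
    for cell, members in cells.items():
        for offset in itertools.product((-1, 0, 1), repeat=3):
            neighbors = cells.get(tuple(c + o for c, o in zip(cell, offset)), ())
            for i in members:
                for j in neighbors:
                    if i < j and sum((a - b) ** 2 for a, b in
                                     zip(points[i], points[j])) <= c2:
                        pairs.add((i, j))
    return pairs

random.seed(1)
pts = [tuple(random.uniform(0.0, 10.0) for _ in range(3)) for _ in range(200)]
assert cell_list_pairs(pts, 1.5) == brute_force_pairs(pts, 1.5)
print(len(cell_list_pairs(pts, 1.5)), "pairs within cutoff; both methods agree")
```

Because any pair within the cutoff must fall in the same or adjacent cells when the cell edge equals the cutoff, each atom is tested against a bounded number of candidates, giving the O(N) total cost the article describes.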

The QCAD framework for quantum device modeling

2012 15th International Workshop on Computational Electronics (IWCE)

Gao, Xujiao G.; Nielsen, Erik N.; Muller, Richard P.; Young, Ralph W.; Salinger, Andrew G.; Carroll, Malcolm

We present the Quantum Computer Aided Design (QCAD) simulator that targets modeling quantum devices, particularly Si double quantum dots (DQDs) developed for quantum computing. The simulator core includes Poisson, Schrödinger, and Configuration Interaction solvers which can be run individually or combined self-consistently. The simulator is built upon Sandia-developed Trilinos and Albany components, and is interfaced with the Dakota optimization tool. It is being developed for seamless integration, high flexibility and throughput, and is intended to be open source. The QCAD tool has been used to simulate a large number of fabricated silicon DQDs and has provided fast feedback for design comparison and optimization.

MiniGhost: a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing

Barrett, Richard F.; Vaughan, Courtenay T.; Heroux, Michael A.

A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these exchanges, aggregating boundary data into fewer, larger messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
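
The decompose/exchange/compute cycle described above can be sketched serially (a toy 1-D illustration, not miniGhost itself; all names are ours), with each neighbor's boundary data arriving as a single aggregated 'message':

```python
def stencil_reference(grid):
    """3-point averaging stencil over the whole domain; endpoints copied."""
    return [grid[0]] + [(grid[i - 1] + grid[i] + grid[i + 1]) / 3
                        for i in range(1, len(grid) - 1)] + [grid[-1]]

def halo_exchange_step(grid, nprocs):
    """One BSP step: split the 1-D domain among `nprocs` ranks, perform a
    halo exchange (one aggregated message per neighbor, modeled here as a
    list slice instead of an MPI send/recv), then apply the stencil locally."""
    n = len(grid)
    bounds = [n * r // nprocs for r in range(nprocs + 1)]
    chunks = [grid[bounds[r]:bounds[r + 1]] for r in range(nprocs)]
    out = []
    for r, chunk in enumerate(chunks):
        left = [chunks[r - 1][-1]] if r > 0 else []           # halo from left rank
        right = [chunks[r + 1][0]] if r < nprocs - 1 else []  # halo from right rank
        padded = left + chunk + right
        lo = 0 if r == 0 else 1
        hi = len(padded) if r == nprocs - 1 else len(padded) - 1
        for i in range(lo, hi):
            if (r == 0 and i == 0) or (r == nprocs - 1 and i == hi - 1):
                out.append(padded[i])        # physical boundary: copy through
            else:
                out.append((padded[i - 1] + padded[i] + padded[i + 1]) / 3)
    return out

grid = [float(i * i % 7) for i in range(24)]
assert halo_exchange_step(grid, 4) == stencil_reference(grid)
print("decomposed result matches the global stencil")
```

Each rank needs only one value from each neighbor here; in higher dimensions the halo becomes a face of ghost cells, and the aggregation-versus-latency trade-off the report studies is the choice of how those faces are packed into messages.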

Simple intrinsic defects in GaAs: numerical supplement

Schultz, Peter A.

This Report presents numerical tables summarizing properties of intrinsic defects in gallium arsenide, GaAs, as computed by density functional theory. It serves as a numerical supplement to the results published in: P.A. Schultz and O.A. von Lilienfeld, 'Simple intrinsic defects in GaAs', Modelling Simul. Mater. Sci. Eng., Vol. 17, 084007 (2009), and is intended for use as reference tables for a defect physics package in device models.

First principles predictions of intrinsic defects in aluminum arsenide, AlAs: numerical supplement

Schultz, Peter A.

This Report presents numerical tables summarizing properties of intrinsic defects in aluminum arsenide, AlAs, as computed by density functional theory. It serves as a numerical supplement to the results published in: P.A. Schultz, 'First principles predictions of intrinsic defects in Aluminum Arsenide, AlAs', Materials Research Society Symposia Proceedings 1370 (2011; SAND2011-2436C), and is intended for use as reference tables for a defect physics package in device models.

Fast Hybrid Silicon Double-Quantum-Dot Qubit

Physical Review Letters

Shi, Zhan; Simmons, C.B.; Prance, J.R.; Laros, James H.; Koh, Teck S.; Shim, Yun-Pil; Hu, Xuedong; Savage, D.E.; Lagally, M.G.; Eriksson, M.A.; Friesen, Mark; Coppersmith, S.N.

We introduce a quantum dot qubit architecture that has an attractive combination of speed and fabrication simplicity. It consists of a double quantum dot with one electron in one dot and two electrons in the other. The qubit itself is a set of two states with total spin quantum numbers S² = 3/4 (S = 1/2) and Sz = −1/2, with the two different states being singlet and triplet in the doubly occupied dot. Gate operations can be implemented electrically and the qubit is highly tunable, enabling fast implementation of one- and two-qubit gates in a simpler geometry and with fewer operations than in other proposed quantum dot qubit architectures with fast operations. Additionally, the system has potentially long decoherence times. These are all extremely attractive properties for use in quantum information processing devices.

Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions

Journal of the Optical Society of America A: Optics, Image Science, and Vision

Baczewski, Andrew D.; Miller, Nicholas C.; Shanker, Balasubramaniam

The analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N²) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

Portals 4 network API definition and performance measurement

Brightwell, Ronald B.

Portals is a low-level network programming interface for distributed memory massively parallel computing systems designed by Sandia, UNM, and Intel. Portals has been designed to provide high message rates and to provide the flexibility to support a variety of higher-level communication paradigms. This project developed and analyzed an implementation of Portals using shared memory in order to measure and understand the impact of using general-purpose compute cores to handle network protocol processing functions. The goal of this study was to evaluate an approach to high-performance networking software design and hardware support that would enable important DOE modeling and simulation applications to perform well and to provide valuable input to Intel so they can make informed decisions about future network software and hardware products that impact DOE applications.

Prism users guide

Weirs, Vincent G.

Prism is a ParaView plugin that simultaneously displays simulation data and material model data. This document describes its capabilities and how to use them. A demonstration of Prism is given in the first section. The second section contains more detailed notes on less obvious behavior. The third and fourth sections are specifically for Alegra and CTH users. They tell how to generate the simulation data and SESAME files and how to handle aspects of Prism use particular to each of these codes.

Demonstration of a Legacy Application's Path to Exascale - ASC L2 Milestone 4467

Barrett, Brian B.; Kelly, Suzanne M.; Klundt, Ruth A.; Laros, James H.; Leung, Vitus J.; Levenhagen, Michael J.; Lofstead, Gerald F.; Moreland, Kenneth D.; Oldfield, Ron A.; Pedretti, Kevin P.; Rodrigues, Arun; Barrett, Richard F.; Ward, Harry L.; Vandyke, John P.; Vaughan, Courtenay T.; Wheeler, Kyle B.; Brandt, James M.; Brightwell, Ronald B.; Curry, Matthew L.; Fabian, Nathan D.; Ferreira, Kurt; Gentile, Ann C.; Hemmert, Karl S.

Abstract not provided.

Report of experiments and evidence for ASC L2 milestone 4467 : demonstration of a legacy application's path to exascale

Barrett, Brian B.; Kelly, Suzanne M.; Klundt, Ruth A.; Laros, James H.; Leung, Vitus J.; Levenhagen, Michael J.; Lofstead, Gerald F.; Moreland, Kenneth D.; Oldfield, Ron A.; Pedretti, Kevin T.T.; Rodrigues, Arun; Barrett, Richard F.; Thompson, David C.; Ward, Harry L.; Vandyke, John P.; Vaughan, Courtenay T.; Wheeler, Kyle B.; Brandt, James M.; Brightwell, Ronald B.; Curry, Matthew L.; Fabian, Nathan D.; Ferreira, Kurt; Gentile, Ann C.; Hemmert, Karl S.

This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012, and describes their impact on ASC applications. Most contributions are implemented in lower software levels, allowing for application improvement without source code changes. Improvements are identified in such areas as reduced run time, characterization of power usage, and Input/Output (I/O). Other experiments are more forward looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on exascale-class hardware. The purpose of this report is to demonstrate that the team has completed milestone 4467: Demonstration of a Legacy Application's Path to Exascale. Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion was tested to determine where the breaking point is for an existing highly scalable application. The goal was to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations were used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware was performed using SST to supplement the calculations. To determine where the breaking point is for an existing highly scalable application, Chapter 15 presents the CSSE work that sought to identify it in two ASC legacy applications, Charon and CTH; their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found with the two codes. The results show that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance will be needed to identify the source of an issue, combined with knowledge of system software and application source code.

Evaluating parallel relational databases for medical data analysis

Wilson, Andrew T.; Rintoul, Mark D.

Hospitals have always generated and consumed large amounts of data concerning patients, treatment, and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This naturally raises the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

A locally conservative, discontinuous least-squares finite element method for the Stokes equations

International Journal for Numerical Methods in Fluids

Bochev, Pavel B.; Lai, James; Olson, Luke

Conventional least-squares finite element methods (LSFEMs) for incompressible flows conserve mass only approximately. For some problems, mass loss levels are large and result in unphysical solutions. In this paper we formulate a new, locally conservative LSFEM for the Stokes equations wherein a discrete velocity field is computed that is point-wise divergence free on each element. The central idea is to allow discontinuous velocity approximations and then to define the velocity field on each element using a local stream-function. The effect of the new LSFEM approach on improved local and global mass conservation is compared with a conventional LSFEM for the Stokes equations employing standard C⁰ Lagrangian elements. © 2011 John Wiley & Sons, Ltd.
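
The pointwise mass-conservation property follows directly from the stream-function construction; in two dimensions (a sketch of the standard identity, not the paper's full derivation):

```latex
\mathbf{u}\big|_{K} = \nabla^{\perp}\psi
  = \Bigl(\tfrac{\partial \psi}{\partial y},\, -\tfrac{\partial \psi}{\partial x}\Bigr)
\quad\Longrightarrow\quad
\nabla\cdot\mathbf{u}
  = \frac{\partial^{2}\psi}{\partial x\,\partial y}
  - \frac{\partial^{2}\psi}{\partial y\,\partial x}
  = 0 \quad \text{at every point of element } K.
```

Because the identity holds pointwise rather than only in an integrated sense, the computed velocity loses no mass on any element, which is exactly the local conservation the method targets; the discontinuous approximation spaces are what allow psi to be defined element by element.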

Decision insight into stakeholder conflict for ERN

Siirola, John D.; Tidwell, Vincent C.; Warrender, Christina E.; Morrow, James D.; Benz, Zachary O.

Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have systemic level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by use of traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are: (1) Develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling, (2) Develop an iterative, participatory modeling process supported with traditional and advanced data-capture techniques that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes, and (3) Examine potential applications of this technology and process. The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology.
Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement: one a standard interface of radio buttons, slider bars, graphs, and plots; the other an immersive serious-gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision-making in the context of complex water resource management issues. Potential applications of this architecture and its capacity to lead to enhanced understanding and decision-making were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.

Using the Sirocco File System for high-bandwidth checkpoints

Klundt, Ruth A.; Ward, Harry L.

The Sirocco File System, a file system for exascale under active development, is designed to allow the storage software to maximize quality of service through increased flexibility and local decision-making. By allowing the storage system to manage a range of storage targets that have varying speeds and capacities, the system can increase the speed and surety of storage to the application. We instrument CTH to use a group of RAM-based Sirocco storage servers allocated within the job as a high-performance storage tier to accept checkpoints, allowing computation to continue while checkpoints migrate asynchronously to slower, more permanent storage. The result is a 10-60x speedup in constructing and moving checkpoint data from the compute nodes. This demonstration of early Sirocco functionality shows a significant benefit for a real I/O workload, checkpointing, in a real application, CTH. By running Sirocco storage servers within a job as RAM-only stores, CTH was able to store checkpoints 10-60x faster than storing to PanFS, allowing the job to continue computing sooner. While this prototype did not include automatic data migration, the checkpoint was available to be pushed or pulled to disk-based storage as needed after the compute nodes continued computing. Future developments include the ability to dynamically spawn Sirocco nodes to absorb checkpoints, expanding this mechanism to other fast tiers of storage like flash memory, and sharing of dynamic Sirocco nodes between multiple jobs as needed.
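
The fast-tier-then-migrate pattern can be sketched in miniature (a toy stand-in for the behavior described above, not Sirocco code; all names are ours): checkpoint writes land in an in-memory tier and return immediately, while a background thread migrates them to slower storage so computation can resume before the data is permanent.

```python
import threading
import time

class TieredCheckpointStore:
    """Toy two-tier checkpoint store: writes land in a fast in-memory tier
    and return immediately; a background thread migrates them to a slow
    tier (standing in for disk), so compute can continue meanwhile."""

    def __init__(self):
        self.fast, self.slow = {}, {}
        self._pending = []
        self._lock = threading.Condition()
        self._done = False
        self._mover = threading.Thread(target=self._migrate, daemon=True)
        self._mover.start()

    def checkpoint(self, step, state):
        with self._lock:
            self.fast[step] = state          # fast tier: RAM-speed write
            self._pending.append(step)
            self._lock.notify()

    def _migrate(self):
        while True:
            with self._lock:
                while not self._pending and not self._done:
                    self._lock.wait()
                if not self._pending and self._done:
                    return
                step = self._pending.pop(0)
                state = self.fast[step]
            time.sleep(0.01)                 # simulated slow-tier latency
            with self._lock:
                self.slow[step] = state      # migrated copy is now durable
                del self.fast[step]          # fast-tier space reclaimed

    def drain(self):
        """Block until every checkpoint has reached the slow tier."""
        with self._lock:
            self._done = True
            self._lock.notify_all()
        self._mover.join()

store = TieredCheckpointStore()
for step in range(5):
    store.checkpoint(step, [step] * 4)       # returns immediately
    # ... computation would continue here while migration proceeds ...
store.drain()
print(sorted(store.slow))                    # -> [0, 1, 2, 3, 4]
```

The application pays only the fast-tier write cost on its critical path; the slow-tier latency is overlapped with computation, which is the source of the 10-60x checkpoint speedup reported above.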
