Publications

Fast Hybrid Silicon Double-Quantum-Dot Qubit

Physical Review Letters

Shi, Zhan; Simmons, C.B.; Prance, J.R.; Laros, James H.; Koh, Teck S.; Shim, Yun-Pil; Hu, Xuedong; Savage, D.E.; Lagally, M.G.; Eriksson, M.A.; Friesen, Mark; Coppersmith, S.N.

We introduce a quantum dot qubit architecture that has an attractive combination of speed and fabrication simplicity. It consists of a double quantum dot with one electron in one dot and two electrons in the other. The qubit itself is a set of two states with total spin quantum numbers S² = 3/4 (S = 1/2) and S_z = −1/2, the two states being singlet and triplet in the doubly occupied dot. Gate operations can be implemented electrically, and the qubit is highly tunable, enabling fast implementation of one- and two-qubit gates in a simpler geometry and with fewer operations than in other proposed quantum dot qubit architectures with fast operations. Additionally, the system has potentially long decoherence times. These are all extremely attractive properties for use in quantum information processing devices.
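
As a concrete illustration, one conventional way to write such a pair of states (a sketch consistent with the abstract, not necessarily the paper's exact basis choice) is

    \begin{aligned}
    |0\rangle &= |S\rangle \otimes |{\downarrow}\rangle, \\
    |1\rangle &= \sqrt{\tfrac{1}{3}}\,|T_0\rangle \otimes |{\downarrow}\rangle
               - \sqrt{\tfrac{2}{3}}\,|T_-\rangle \otimes |{\uparrow}\rangle,
    \end{aligned}

where |S⟩, |T_0⟩, and |T_-⟩ denote the singlet and triplet states of the doubly occupied dot and the remaining factor is the spin of the lone electron; both states have S² = 3/4 and S_z = −1/2, differing only in the singlet/triplet character of the doubly occupied dot.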

Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions

Journal of the Optical Society of America. A, Optics, Image Science, and Vision

Baczewski, Andrew D.; Miller, Nicholas C.; Shanker, Balasubramaniam

The analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N²) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.
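
To make the claimed complexities concrete, the following minimal Python sketch (illustrative only; not the paper's algorithm, and all names are hypothetical) shows the direct pairwise evaluation whose O(N²) cost hierarchical schemes such as accelerated Cartesian expansions reduce to O(N):

    import numpy as np

    def direct_potential(points, weights, k):
        # Direct O(N^2) evaluation of phi_i = sum_{j != i} w_j e^{i k r_ij} / r_ij,
        # the all-pairs interaction that ACE-style hierarchical expansions
        # evaluate in O(N) work.
        n = len(points)
        phi = np.zeros(n, dtype=complex)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = np.linalg.norm(points[i] - points[j])
                phi[i] += weights[j] * np.exp(1j * k * r) / r
        return phi

    # Example: 200 random sources in the unit cube.
    rng = np.random.default_rng(0)
    pts = rng.random((200, 3))
    print(direct_potential(pts, np.ones(200), k=2.0)[:3])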

Portals 4 network API definition and performance measurement

Brightwell, Ronald B.

Portals is a low-level network programming interface for distributed-memory massively parallel computing systems designed by Sandia, UNM, and Intel. Portals is designed to provide high message rates and the flexibility to support a variety of higher-level communication paradigms. This project developed and analyzed an implementation of Portals using shared memory in order to measure and understand the impact of using general-purpose compute cores to handle network protocol processing. The goal of this study was to evaluate an approach to high-performance networking software design and hardware support that would enable important DOE modeling and simulation applications to perform well, and to provide valuable input to Intel so they can make informed decisions about future network software and hardware products that affect DOE applications.
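
As a rough illustration of the design question, dedicating a general-purpose core to protocol processing, here is a toy Python sketch in which a separate "progress engine" process drains incoming messages while the application continues; it is an analogy for the approach studied, not the Portals API:

    from multiprocessing import Process, Queue

    def progress_engine(rx):
        # Stand-in for a compute core dedicated to network protocol
        # processing: receive each message and hand it to the application.
        while True:
            msg = rx.get()
            if msg is None:              # shutdown sentinel
                break
            # ... matching/delivery logic would go here ...

    if __name__ == "__main__":
        rx = Queue()
        engine = Process(target=progress_engine, args=(rx,))
        engine.start()
        for i in range(4):               # the application posts "sends"
            rx.put(b"payload-%d" % i)
        rx.put(None)
        engine.join()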

Prism users guide

Weirs, Vincent G.

Prism is a ParaView plugin that simultaneously displays simulation data and material model data. This document describes its capabilities and how to use them. A demonstration of Prism is given in the first section. The second section contains more detailed notes on less obvious behavior. The third and fourth sections are specifically for Alegra and CTH users. They tell how to generate the simulation data and SESAME files and how to handle aspects of Prism use particular to each of these codes.

Demonstration of a Legacy Application's Path to Exascale - ASC L2 Milestone 4467

Barrett, Brian B.; Kelly, Suzanne M.; Klundt, Ruth A.; Laros, James H.; Leung, Vitus J.; Levenhagen, Michael J.; Lofstead, Gerald F.; Moreland, Kenneth D.; Oldfield, Ron A.; Pedretti, Kevin P.; Rodrigues, Arun; Barrett, Richard F.; Ward, Harry L.; Vandyke, John P.; Vaughan, Courtenay T.; Wheeler, Kyle B.; Brandt, James M.; Brightwell, Ronald B.; Curry, Matthew L.; Fabian, Nathan D.; Ferreira, Kurt; Gentile, Ann C.; Hemmert, Karl S.

Abstract not provided.

Report of experiments and evidence for ASC L2 milestone 4467: demonstration of a legacy application's path to exascale

Barrett, Brian B.; Kelly, Suzanne M.; Klundt, Ruth A.; Laros, James H.; Leung, Vitus J.; Levenhagen, Michael J.; Lofstead, Gerald F.; Moreland, Kenneth D.; Oldfield, Ron A.; Pedretti, Kevin T.T.; Rodrigues, Arun; Barrett, Richard F.; Thompson, David C.; Ward, Harry L.; Vandyke, John P.; Vaughan, Courtenay T.; Wheeler, Kyle B.; Brandt, James M.; Brightwell, Ronald B.; Curry, Matthew L.; Fabian, Nathan D.; Ferreira, Kurt; Gentile, Ann C.; Hemmert, Karl S.

This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012, and describes their impact on ASC applications. Most contributions are implemented in lower software levels, allowing applications to improve without source code changes. Improvements are identified in areas such as reduced run time, characterization of power usage, and Input/Output (I/O). Other experiments are more forward-looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on exascale-class hardware. The purpose of this report is to demonstrate that the team has completed milestone 4467, Demonstration of a Legacy Application's Path to Exascale.

Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion will be tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations will be used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware will be performed using SST to supplement the calculations.

Determining where the breaking point is for an existing highly scalable application: Chapter 15 presented the CSSE work that sought to identify the breaking point in two ASC legacy applications, Charon and CTH; their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found in the two codes. The results show that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance, combined with knowledge of system software and application source code, will be needed to identify the source of an issue.

Evaluating parallel relational databases for medical data analysis

Wilson, Andrew T.; Rintoul, Mark D.

Hospitals have always generated and consumed large amounts of data concerning patients, treatments, and outcomes. As computers and networks have permeated the hospital environment, it has become feasible to collect and organize all of this data, which naturally raises the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase, as well as responsiveness under load from multiple users.
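
A minimal harness for the kind of measurement described, timing a fixed analytic query as the number of concurrent clients grows, might look like the sketch below (SQLite stands in for the commercial parallel databases, and the table and query are hypothetical):

    import sqlite3
    import time
    from concurrent.futures import ThreadPoolExecutor

    DB = "records.db"  # hypothetical de-identified visit records

    def setup(n_rows=100_000):
        con = sqlite3.connect(DB)
        con.execute("DROP TABLE IF EXISTS visits")
        con.execute("CREATE TABLE visits (patient_id INTEGER, code TEXT)")
        con.executemany("INSERT INTO visits VALUES (?, ?)",
                        ((i % 1000, "ICD-%d" % (i % 50)) for i in range(n_rows)))
        con.commit()
        con.close()

    def one_client(_):
        con = sqlite3.connect(DB)        # each client gets its own connection
        t0 = time.perf_counter()
        con.execute("SELECT code, COUNT(*) FROM visits GROUP BY code").fetchall()
        con.close()
        return time.perf_counter() - t0

    setup()
    for n_clients in (1, 4, 16):         # responsiveness under increasing load
        with ThreadPoolExecutor(n_clients) as pool:
            worst = max(pool.map(one_client, range(n_clients)))
        print("%2d clients: slowest query %.3fs" % (n_clients, worst))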

A locally conservative, discontinuous least-squares finite element method for the Stokes equations

International Journal for Numerical Methods in Fluids

Bochev, Pavel B.; Lai, James; Olson, Luke

Conventional least-squares finite element methods (LSFEMs) for incompressible flows conserve mass only approximately. For some problems, mass loss levels are large and result in unphysical solutions. In this paper we formulate a new, locally conservative LSFEM for the Stokes equations wherein a discrete velocity field is computed that is pointwise divergence-free on each element. The central idea is to allow discontinuous velocity approximations and then to define the velocity field on each element using a local stream function. The effect of the new LSFEM approach on improved local and global mass conservation is compared with a conventional LSFEM for the Stokes equations employing standard C⁰ Lagrangian elements.
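
In two dimensions the central idea can be written out directly (a sketch of the construction, not the paper's full formulation): on each element K the discrete velocity is defined as the curl of a local stream function, so its divergence vanishes pointwise,

    u^h\big|_K = \nabla \times \psi^h
               = \left( \frac{\partial \psi^h}{\partial y},\; -\frac{\partial \psi^h}{\partial x} \right),
    \qquad
    \nabla \cdot u^h\big|_K
      = \frac{\partial^2 \psi^h}{\partial x\,\partial y}
      - \frac{\partial^2 \psi^h}{\partial y\,\partial x} = 0,

while no continuity is imposed across element boundaries, which is what makes the velocity approximation discontinuous.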

Decision insight into stakeholder conflict for ERN

Siirola, John D.; Tidwell, Vincent C.; Warrender, Christina E.; Morrow, James D.; Benz, Zachary O.

Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling commonly used in this context often do not adequately account for important human factors. Current techniques provide insight into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and the human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have system-level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision-making and dispute-resolution processes. Development of integrated tools is further hampered by the fact that collecting primary data for decision-making modeling is costly and time consuming.

This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture, enhanced by traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are to: (1) develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling; (2) develop an iterative, participatory modeling process, supported with traditional and advanced data-capture techniques, that may be used to facilitate decision making, dispute resolution, and collaborative learning; and (3) examine potential applications of this technology and process.

The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply it. Stakeholder interaction with the model and the associated data capture were facilitated through two very different modes of engagement: a standard interface of radio buttons, slider bars, graphs, and plots, and an immersive serious-gaming interface. The architecture was piloted in the Middle Rio Grande Basin to examine how these tools might promote enhanced understanding and decision-making in the context of complex water resource management issues. Potential applications of the architecture, and its capacity to lead to enhanced understanding and decision-making, were assessed through qualitative interviews with study participants representing key stakeholders in the basin.
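
The flavor of objective (1), coupling a system-dynamics stock with agent-based behavior, can be conveyed by a toy Python sketch (all names, rules, and parameters here are hypothetical, not the project's model):

    class Farmer:
        # Toy agent: grows its request after a satisfied year, cuts back otherwise.
        def __init__(self):
            self.demand = 10.0
        def request(self, satisfied):
            self.demand *= 1.05 if satisfied else 0.9
            return self.demand

    def run(years=20, inflow=500.0, stock=1000.0, n_agents=40):
        agents = [Farmer() for _ in range(n_agents)]
        satisfied = True
        for year in range(years):
            requests = sum(a.request(satisfied) for a in agents)
            withdrawal = min(requests, stock)    # the stock limits actual use
            satisfied = requests <= stock
            stock += inflow - withdrawal         # system-dynamics stock update
            print("year %2d  stock %8.1f  demand %7.1f" % (year, stock, requests))

    run()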

Using the Sirocco File System for high-bandwidth checkpoints

Klundt, Ruth A.; Ward, Harry L.

The Sirocco File System, a file system for exascale under active development, is designed to let the storage software maximize quality of service through increased flexibility and local decision-making. By allowing the storage system to manage a range of storage targets with varying speeds and capacities, the system can increase the speed and surety of storage for the application. We instrument CTH to use a group of RAM-based Sirocco storage servers, allocated within the job, as a high-performance storage tier that accepts checkpoints, allowing computation to continue while checkpoints migrate asynchronously to slower, more permanent storage. The result is a 10-60x speedup in constructing and moving checkpoint data off the compute nodes. This demonstration of early Sirocco functionality shows a significant benefit for a real I/O workload, checkpointing, in a real application, CTH: by running Sirocco storage servers within a job as RAM-only stores, CTH stored checkpoints 10-60x faster than when storing to PanFS, allowing the job to resume computing sooner. While this prototype did not include automatic data migration, each checkpoint remained available to be pushed or pulled to disk-based storage as needed after the compute nodes resumed computing. Future developments include the ability to dynamically spawn Sirocco nodes to absorb checkpoints, extension of this mechanism to other fast storage tiers such as flash memory, and sharing of dynamic Sirocco nodes between multiple jobs as needed.
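
The checkpoint pattern being demonstrated, a fast blocking store to a RAM tier followed by asynchronous migration to permanent storage, can be sketched as follows (a toy stand-in, not Sirocco's interface):

    import threading

    class FastTier:
        # Toy RAM-backed store standing in for in-job Sirocco storage servers.
        def __init__(self):
            self.blobs = {}
        def put(self, name, data):
            self.blobs[name] = bytes(data)

    def migrate(tier, name, path):
        # Drain the RAM copy to slower, permanent storage in the background
        # while the application keeps computing.
        with open(path, "wb") as f:
            f.write(tier.blobs.pop(name))

    tier = FastTier()
    state = b"\x00" * (64 << 20)             # 64 MiB of application state
    tier.put("ckpt-0001", state)             # fast, blocking RAM-tier store
    mover = threading.Thread(target=migrate,
                             args=(tier, "ckpt-0001", "ckpt-0001.bin"))
    mover.start()                            # compute resumes immediately
    # ... the application would continue time-stepping here ...
    mover.join()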
