Publications


Toward an evolutionary task parallel integrated MPI + X programming model

Proceedings of the 6th International Workshop on Programming Models and Applications for Multicores and Manycores, PMAM 2015

Barrett, Richard F.; Stark, Dylan S.; Vaughan, Courtenay T.; Grant, Ryan E.; Olivier, Stephen L.; Laros, James H.

The Bulk Synchronous Parallel programming model is showing performance limitations at high processor counts. We propose over-decomposition of the domain, operated on as tasks, to smooth out utilization of computing resources, in particular the node interconnect and processing cores, and to hide intra- and inter-node data movement. Our approach maintains the coding style commonly employed in computational science and engineering applications. Although we show improved performance on existing computers at up to 131,072 processor cores, the effectiveness of this approach on expected future architectures will require the continued evolution of capabilities throughout the codesign stack. Success would then not only decrease time to solution, but also make better use of hardware capabilities and reduce power and energy requirements, while fundamentally maintaining the current code configuration strategy.
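
As a hedged illustration of the over-decomposition idea, the Python sketch below splits one rank's local domain into many small tiles and updates them as independent tasks. The tile count, array size, and function names are assumptions for illustration, and the MPI halo exchanges that a real MPI + X code would overlap with these tasks are omitted.

```python
# A minimal sketch of over-decomposition, assuming a 1-D local domain.
# NTASKS, N, and update_tile are illustrative names, not from the paper.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

NTASKS = 16    # over-decomposition factor per rank (assumed)
N = 1024       # local grid points owned by this rank (assumed)

grid = np.zeros(N)
tiles = np.array_split(np.arange(N), NTASKS)   # each tile becomes one task

def update_tile(idx):
    # stand-in for a stencil update on one tile; a real MPI + X code would
    # first wait on that tile's halo exchange, hiding communication latency
    grid[tiles[idx]] += 1.0

with ThreadPoolExecutor() as pool:
    # independently scheduled tasks smooth out core utilization on the rank
    list(pool.map(update_tile, range(NTASKS)))
```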

Portable File Format (PFF) specifications

Laros, James H.

Created at Sandia National Laboratories, the Portable File Format (PFF) allows binary data transfer across computer platforms. Although this capability is supported by many other formats, PFF files are still in use at Sandia, particularly in pulsed power research. This report provides detailed PFF specifications for accessing data without relying on legacy code.
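
To make the portability idea concrete, here is a minimal, hedged sketch of writing and reading a fixed-layout binary record header with an explicit byte order, in the spirit of PFF. The three-word layout (magic, version, length) is invented for illustration and is not the actual PFF specification.

```python
# Illustrative only: a fixed big-endian header means the bytes decode the
# same way on every platform. The (magic, version, length) layout is an
# assumption, NOT the PFF specification.
import io
import struct

def write_record_header(f, magic, version, length):
    f.write(struct.pack(">iii", magic, version, length))   # 3 x 32-bit words

def read_record_header(f):
    return struct.unpack(">iii", f.read(12))

buf = io.BytesIO()
write_record_header(buf, 0x0FF, 1, 128)
buf.seek(0)
print(read_record_header(buf))   # (255, 1, 128) on any platform
```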

Sandia Data Archive (SDA) file specifications

Laros, James H.; Ao, Tommy A.

The Sandia Data Archive (SDA) format is a specific implementation of the HDF5 (Hierarchical Data Format version 5) standard. The format was developed for storing data in a universally accessible manner. SDA files may contain one or more data records, each associated with a distinct text label. Primitive records provide basic data storage, while compound records support more elaborate grouping. External records allow text/binary files to be carried inside an archive and later recovered. This report documents version 1.0 of the SDA standard. The information provided here is sufficient for reading from and writing to an archive. Although the format was originally designed for use in MATLAB, broader use is encouraged.
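
Because SDA sits on top of HDF5, a labeled record can be sketched with the h5py library. The attribute name and label below are assumptions for illustration, not the layout mandated by the SDA 1.0 specification.

```python
# A hedged sketch of an SDA-style primitive record: one HDF5 dataset stored
# under a distinct text label. "RecordType" is an assumed attribute name.
import h5py
import numpy as np

with h5py.File("archive.sda", "w") as f:
    f.create_dataset("label1", data=np.arange(10.0))   # primitive record
    f["label1"].attrs["RecordType"] = "numeric"        # assumed metadata

with h5py.File("archive.sda", "r") as f:
    print(f["label1"][...], f["label1"].attrs["RecordType"])
```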

Improvements to address issues leading to cancellation of the July 2014 Plutonium shot

Laros, James H.

Overview: A Pu shot scheduled for July 17 on the Z machine at SNL was cancelled this past summer. The LiF windows on the Pu targets were cracked during assembly because of configuration changes. Sandia management concluded that continuing with this experiment would present an unacceptable level of risk to the facility and possibly to the workers. In this report, we document the events that led to this decision, present lessons learned, and describe the plans and procedures put in place to reduce the likelihood of another such occurrence. The changes and this memorandum reflect the thinking of subject matter experts at both LANL and SNL. These changes represent significant improvements in both communication protocols and the quality of the hardware assemblies.

2nd Sandia Fracture Challenge Summit: Sandia California's Modeling Approach

Karlson, Kyle N.; Brown, Arthur B.; Laros, James H.

Team Sandia California (Team H) used the Sandia code SIERRA Solid Mechanics: Implicit (SIERRA SM) to model the SFC2 challenge problem. SIERRA SM is a Lagrangian, three-dimensional, implicit code for the analysis of solids and structures. It contains a versatile library of continuum and structural elements and an extensive library of material models. For all SFC2-related simulations, our team used Q1P0, 8-node hexahedral elements with element side lengths on the order of 0.175 mm in failure regions. To model crack initiation and failure, element death removed elements from the simulation according to a continuum damage model. SIERRA SM's implicit dynamics capability, implemented with an HHT time integration scheme for numerical damping [1], was used to model the unstable failure modes. We chose SIERRA SM's isotropic Elasto Viscoplastic material model for our simulations because it contains most of the physics required to accurately model the SFC2 challenge problem, such as the flexibility to include temperature and rate dependence of the material.
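
As a generic, hedged illustration of element death driven by a continuum damage model (this is not SIERRA SM input or source), the sketch below masks out elements whose accumulated damage crosses a critical threshold.

```python
# Toy element death: elements whose damage reaches D_CRIT are removed from
# all subsequent steps. The mesh size and threshold are assumed values.
import numpy as np

n_elem = 8
damage = np.zeros(n_elem)            # one damage variable per element
alive = np.ones(n_elem, dtype=bool)  # element-death mask
D_CRIT = 1.0                         # critical damage (assumed)

def step(increment):
    damage[alive] += increment[alive]   # evolve damage on live elements only
    alive[damage >= D_CRIT] = False     # "kill" elements that have failed

step(np.full(n_elem, 0.3))
step(np.full(n_elem, 0.8))
print(alive)   # all False: every element crossed the threshold and was removed
```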

Exploiting data representation for fault tolerance

Journal of Computational Science

Laros, James H.; Hoemmen, Mark F.; Mueller, F.

Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
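
The dichotomy is easy to observe numerically. The hedged sketch below flips one bit of a component of a unit-norm vector and prints the resulting dot-product error, which comes out either far below one or enormous; the bit positions chosen are illustrative.

```python
# Flip one bit of an IEEE 754 double and measure the dot-product error for
# unit-norm inputs, in the spirit of the paper's analysis.
import struct
import numpy as np

def flip_bit(x, k):
    # reinterpret the float64 as a 64-bit integer, XOR bit k, convert back
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << k)))
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(100); x /= np.linalg.norm(x)   # normalized input
y = rng.standard_normal(100); y /= np.linalg.norm(y)

exact = x @ y
for k in (0, 30, 62):                 # low mantissa, high mantissa, exponent
    xc = x.copy()
    xc[0] = flip_bit(xc[0], k)
    print(k, abs(xc @ y - exact))     # tiny, tiny, then astronomically large
```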

Detection range enhancement using circularly polarized light in scattering environments for infrared wavelengths

Applied Optics

Laros, James H.; Scrymgeour, David S.; Kemme, S.A.; Dereniak, E.L.

We find for infrared wavelengths that there are broad ranges of particle sizes and refractive indices that represent fog and rain, where circular polarization can persist to longer ranges than linear polarization. Using polarization-tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that, for specific scene parameters, circular polarization outperforms linear polarization in maintaining the illuminating polarization state for large optical depths. This enhancement with circular polarization can be exploited to improve range and target detection in obscurant environments that are important in many critical sensing applications. Initially, researchers employed polarization-discriminating schemes, often using linearly polarized active illumination, to further distinguish target signals from the background noise. More recently, researchers have investigated circular polarization as a means to separate signal from noise even more. Specifically, we quantify both linearly and circularly polarized active illumination and show here that circular polarization persists better than linear for radiation fog in the short-wave infrared, for advection fog in the short-wave and long-wave infrared, and for large particle sizes of Sahara dust around the 4 μm wavelength. Conversely, we quantify where linear polarization persists better than circular polarization for some limited particle sizes of radiation fog in the long-wave infrared, small particle sizes of Sahara dust for wavelengths of 9–10.5 μm, and large particle sizes of Sahara dust through the 8–11 μm wavelength range in the long-wave infrared.
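
A toy version of such a polarization-tracking calculation is sketched below: a linearly and a circularly polarized Stokes vector are pushed through repeated scattering events, each modeled by a single assumed Mueller matrix in which the linear components decay faster than the circular one. The real study computes Mie scattering matrices per particle size, wavelength, and refractive index.

```python
# Compare how linear and circular degrees of polarization decay under
# repeated scattering. The Mueller matrix M is an assumed stand-in.
import numpy as np

# linear Stokes components (Q, U) decay faster than the circular one (V)
M = np.diag([1.0, 0.7, 0.7, 0.9])

def propagate(stokes, n_events):
    s = np.asarray(stokes, dtype=float)
    for _ in range(n_events):
        s = M @ s
    return s / s[0]          # normalize by total intensity

lin = propagate([1, 1, 0, 0], 10)   # horizontal linear input
cir = propagate([1, 0, 0, 1], 10)   # right-circular input
print("DoLP:", np.hypot(lin[1], lin[2]))   # ~0.03 after 10 events
print("DoCP:", abs(cir[3]))                # ~0.35: circular persists longer
```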

Turbocharging Quantum Tomography

Blume-Kohout, Robin J.; Laros, James H.; Nielsen, Erik N.; Maunz, Peter L.; Scholten, Travis L.; Rudinger, Kenneth M.

Quantum tomography is used to characterize quantum operations implemented in quantum information processing (QIP) hardware. Traditionally, state tomography has been used to characterize the quantum state prepared in an initialization procedure, while quantum process tomography is used to characterize dynamical operations on a QIP system. As such, tomography is critical to the development of QIP hardware (since it is necessary both for debugging and validating as-built devices, and its results are used to influence the next generation of devices). But tomography suffers from several critical drawbacks. In this report, we present new research that resolves several of these flaws. We describe a new form of tomography called gate set tomography (GST), which unifies state and process tomography, avoids prior methods' critical reliance on precalibrated operations that are not generally available, and can achieve unprecedented accuracies. We report on theory and experimental development of adaptive tomography protocols that achieve far higher fidelity in state reconstruction than non-adaptive methods. Finally, we present a new theoretical and experimental analysis of process tomography on multispin systems, and demonstrate how to more effectively detect and characterize quantum noise using carefully tailored ensembles of input states.
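
For context, the simplest ancestor of these methods fits in a few lines: linear-inversion state tomography of one qubit from Pauli expectation values. The sketch below assumes perfectly calibrated measurements, which is exactly the reliance that GST removes; the example expectation values are invented.

```python
# Linear-inversion state tomography for one qubit: reconstruct the density
# matrix from measured Pauli expectations, assuming ideal measurements.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct(ex, ey, ez):
    # rho = (I + <X> X + <Y> Y + <Z> Z) / 2, valid for a single qubit
    return (I + ex * X + ey * Y + ez * Z) / 2

# invented expectation values consistent with preparing |0>
rho = reconstruct(0.0, 0.0, 1.0)
print(np.round(rho, 3))   # recovers the |0><0| density matrix
```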

Toward Interactive Scenario Analysis and Exploration

Gayle, Thomas R.; Summers, Kenneth L.; Jungels, John J.; Laros, James H.

As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need for real-world testing. However, current and future changes across several factors, including capabilities, policy, and funding, are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only compound these challenges. In this study we examine whether innovations in M&S software, coupled with advances in "cloud" computing and "big data" methodologies, can overcome many of these challenges. In particular, we propose a simple, horizontally scalable distributed computing environment that could provide the foundation (i.e., the "cloud") for next-generation M&S-based applications built on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of a single scenario run rather than by the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight its utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
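
A hedged sketch of the parallel multi-simulation pattern is shown below: many scenario variants are evaluated concurrently so that, with enough workers, wall time approaches the cost of a single run. The run_scenario function and the parameter grid are illustrative stand-ins, not the study's Umbra-coupled prototype.

```python
# Evaluate many simultaneous paths of execution; with sufficient workers,
# total wall time is dominated by the cost of one scenario run.
from concurrent.futures import ProcessPoolExecutor
import itertools

def run_scenario(params):
    # stand-in for one full simulation of a scenario variant
    speed, heading = params
    return speed, heading, speed * heading

if __name__ == "__main__":
    # every parameter combination is an independent path of execution
    scenarios = list(itertools.product([1.0, 2.0, 3.0], [0, 45, 90]))
    with ProcessPoolExecutor() as pool:
        for result in pool.map(run_scenario, scenarios):
            print(result)
```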
