Publications

Results 2601–2625 of 9,998

DARMA-EMPIRE Integration and Performance Assessment – Interim Report

Lifflander, Jonathan J.; Bettencourt, Matthew T.; Slattengren, Nicole L.; Templet, Gary J.; Miller, Phil; Perrinel, Meriadeg; Rizzi, Francesco; Pebay, Philippe P.

We begin with an overview of the general philosophy guiding the new DARMA developments, followed by a brief review of the project's background, and conclude with the FY19 design requirements. As the exascale era arrives, DARMA is uniquely positioned at the forefront of asynchronous many-task (AMT) research and development (R&D), exploring emerging programming-model paradigms for next-generation HPC applications at Sandia, across the NNSA laboratories, and beyond. The DARMA project explores how to fundamentally shift the expression (PM) and execution (EM) of massively concurrent HPC scientific algorithms to be more asynchronous, resilient to executional aberrations in heterogeneous or unpredictable environments, and data-dependency aware, thereby enabling an intelligent, dynamic, and self-aware runtime to guide execution.

Approximating Two-Stage Chance-Constrained Programs with Classical Probability Bounds

Optimization Letters

Singh, Bismark; Watson, Jean-Paul

We consider a joint-chance constraint (JCC) as a union of sets and approximate this union using bounds from classical probability theory. When these bounds are used in an optimization model constrained by the JCC, we obtain corresponding upper and lower bounds on the optimal objective function value. We compare the strength of these bounds against each other under two different sampling schemes, and observe that a larger correlation between the uncertainties tends to result in more computationally challenging optimization models. We also observe that the same set of inequalities provides the tightest upper and lower bounds in our computational experiments.
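
The classical bounds in question can be sketched as follows. Writing A_i for the event that the i-th individual constraint is violated, the JCC's violation event is the union of the A_i, and elementary inequalities sandwich its probability (the exact inequalities used in the paper may differ from this sketch):

```latex
% Violation of the JCC is the union of the individual violation events A_i;
% classical probability bounds sandwich its probability:
\max_{i} \, \mathbb{P}(A_i)
  \;\le\;
\mathbb{P}\Big(\bigcup_{i=1}^{m} A_i\Big)
  \;\le\;
\sum_{i=1}^{m} \mathbb{P}(A_i).
% Enforcing \sum_i \mathbb{P}(A_i) \le \epsilon is conservative (it shrinks the
% feasible set, giving an upper bound on the optimal value of a minimization),
% while enforcing \max_i \mathbb{P}(A_i) \le \epsilon is a relaxation (lower bound).
```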

Evaluating demand response opportunities for power systems resilience using MILP and MINLP Formulations

AIChE Journal

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

While peak shaving is commonly used to reduce power costs, chemical process facilities that can reduce power consumption on demand during emergencies (e.g., extreme weather events) bring additional value through improved resilience. For process facilities to effectively negotiate demand response (DR) contracts and make investment decisions regarding flexibility, they need to quantify their additional value to the grid. We present a grid-centric mixed-integer stochastic programming framework to determine the value of DR for improving grid resilience in place of capital investments that can be cost prohibitive for system operators. We formulate problems using both a linear approximation and a nonlinear alternating current power flow model. Our numerical results with both models demonstrate that DR can be used to reduce the capital investment necessary for resilience, increasing the value that chemical process facilities bring through DR. Furthermore, the linearized model often underestimates the amount of DR needed in our case studies.
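
The role DR plays in such dispatch formulations can be illustrated with a toy single-bus linear program (purely illustrative; the paper's stochastic MILP/MINLP models are far richer, and every number and name below is made up):

```python
from scipy.optimize import linprog

# Toy single-bus dispatch (all numbers hypothetical): two generators plus a
# demand-response (DR) resource must balance a 150 MW load.
load = 150.0
cost = [20.0, 50.0, 35.0]               # $/MWh: baseload gen, peaker, DR
bounds = [(0, 100), (0, 100), (0, 30)]  # MW capacity of each resource

# Power balance constraint: g1 + g2 + dr == load
res = linprog(cost, A_eq=[[1.0, 1.0, 1.0]], b_eq=[load], bounds=bounds)
g1, g2, dr = res.x
# DR displaces the expensive peaker up to its contracted 30 MW
```

In this sketch the 35 $/MWh DR resource is dispatched before the 50 $/MWh peaker, which is the basic mechanism by which DR substitutes for peaking capacity (and, in the paper's setting, for capital investment).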

General modeling framework for quantum photodetectors

Physical Review A

Leonard, Francois; Young, Steve M.; Sarovar, Mohan

Photodetection plays a key role in basic science and technology, with exquisite performance having been achieved down to the single-photon level. Further improvements in photodetectors would open new possibilities across a broad range of scientific disciplines and enable new types of applications. However, it is still unclear what is possible in terms of ultimate performance and what properties are needed for a photodetector to achieve such performance. Here, we present a general modeling framework for photodetectors whereby the photon field, the absorption process, and the amplification process are all treated as one coupled quantum system. The formalism naturally handles field states with single or multiple photons as well as a variety of detector configurations and includes a mathematical definition of ideal photodetector performance. The framework reveals how specific photodetector architectures introduce limitations and tradeoffs for various performance metrics, providing guidance for optimization and design.

Talk to me: A case study on coordinating expertise in large-scale scientific software projects

Proceedings - IEEE 14th International Conference on eScience, e-Science 2018

Milewicz, Reed M.; Raybourn, Elaine M.

Large-scale collaborative scientific software projects require more knowledge than any one person typically possesses. This makes coordination and communication of knowledge and expertise a key factor in creating and safeguarding software quality, without which we cannot have sustainable software. However, as researchers attempt to scale up the production of software, they are confronted by problems of awareness and understanding. This presents an opportunity to develop better practices and tools that directly address these challenges. To that end, we conducted a case study of developers of the Trilinos project. We surveyed the software development challenges they face and show how those problems are connected to what they know and how they communicate. Based on these data, we provide a series of practicable recommendations and outline a path forward for future research.

Human Factors in Security

Proceedings - International Carnahan Conference on Security Technology

Speed, Ann E.; Woo, Bryana L.; Kouhestani, Camron G.; Stubbs, Jaclynn J.; Birch, Gabriel C.

Physical security systems (PSS) and humans are inescapably tied in the current physical security paradigm. Yet, physical security system evaluations often end at the console that displays information to the human. That is, these evaluations do not account for human-in-the-loop factors that can greatly impact performance of the security system, even though methods for assessing them are well established. This paper highlights two examples of methods for evaluating the human component of the current physical security system. The first is qualitative, focusing on the information the human needs to adequately monitor alarms on a physical site. The second objectively measures the impact of false alarm rates on threat detection. These types of human-centric evaluations are often treated as unnecessary or not cost effective, under the belief that human cognition is straightforward and that errors can be either trained away or mitigated with technology. These assumptions are not always correct; the ways in which they fail are often surprising and can frequently be identified only through objective assessments of human-system performance. Thus, taking the time to perform human element evaluations can identify unintuitive human-system weaknesses and can provide significant cost savings by mitigating vulnerabilities and reducing costly system patches or retrofits to correct an issue after the system has been deployed.

ECP ST Capability Assessment Report (CAR) for VTK-m (FY18)

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on accelerator processors such as GPUs and many-integrated-core devices. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these projects are depending on this ECP effort to be able to make effective use of ECP architectures.

Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

Journal of Computational and Applied Mathematics

Lin, Paul T.; Shadid, John N.; Hu, Jonathan J.; Pawlowski, Roger; Cyr, Eric C.

This work explores the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. This study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
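
The preconditioned Krylov approach described above can be sketched at toy scale with conjugate gradients on a 1D Poisson system, using a simple Jacobi (diagonal) preconditioner as a stand-in for algebraic multigrid (this is only an illustration of the solver structure, not of the paper's FE/VMS systems or its multigrid hierarchy):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n = 200
# 1D Poisson (tridiagonal) matrix: a tiny stand-in for the large FE systems
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner standing in for algebraic multigrid:
# applying M approximates applying A^{-1}, here just by dividing by diag(A)
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

x, info = cg(A, b, M=M)  # info == 0 signals convergence
```

In the paper's setting the Krylov method stays the same while the preconditioner is replaced by an AMG V-cycle, whose scalability dominates the overall solver performance.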

Sparse Data Acquisition on Emerging Memory Architectures

IEEE Access

Quach, Tu T.; Agarwal, Sapan; James, Conrad D.; Marinella, Matthew; Aimone, James B.

Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. Finally, we show that the energy cost associated with our approach is competitive with that of the sequential method.
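
The sparse-sampling idea can be illustrated with a generic compressed-sensing sketch (not the authors' method): random weighted summations y = Φx play the role of the summation kernel, and a standard orthogonal matching pursuit recovers a k-sparse signal from m ≪ n measurements. All names, sizes, and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3  # array size, number of summations, sparsity

# Build a k-sparse signal on a random support (values are illustrative)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = [3.0, -2.0, 5.0]

# Random "summation kernel": each measurement is a weighted sum of entries,
# so only m sub-linear reads are needed instead of n sequential ones
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse vector."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        idx.append(j)
        # Re-fit on the selected columns and update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    xhat = np.zeros(Phi.shape[1])
    xhat[idx] = coef
    return xhat

xhat = omp(Phi, y, k)
```

With incoherent random summations, the k-sparse contents are recovered from far fewer reads than the array size, which is the source of the sub-linear acquisition time claimed in the abstract.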
