Publications

Results 2401–2450 of 9,998

Search results

Global Solution Strategies for the Network-Constrained Unit Commitment Problem with AC Transmission Constraints

IEEE Transactions on Power Systems

Castillo, Anya; Watson, Jean-Paul W.; Laird, Carl D.

We propose a novel global solution algorithm for the network-constrained unit commitment problem that incorporates a nonlinear alternating current (ac) model of the transmission network; the resulting formulation is a nonconvex mixed-integer nonlinear program. Our algorithm is based on the multi-tree global optimization methodology, which iterates between a mixed-integer lower-bounding problem and a nonlinear upper-bounding problem. We exploit the mathematical structure of the unit commitment problem with ac power flow constraints and leverage second-order cone relaxations, piecewise outer approximations, and optimization-based bounds tightening to provide a globally optimal solution at convergence. Numerical results on four benchmark problems illustrate the effectiveness of our algorithm in terms of both convergence rate and solution quality.
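The multi-tree pattern the abstract describes, alternating a cheap lower-bounding relaxation with an exact upper-bounding evaluation until the bounds meet, can be sketched on a toy problem. This is an illustration of the general pattern only, not the paper's algorithm; the objective, the relaxation, and the no-good cut below are stand-ins.

```python
def multi_tree_minimize(g, underestimate, candidates, tol=1e-8):
    """Toy multi-tree loop. `g` is the true (nonconvex, expensive) objective
    over a finite set of discrete candidates; `underestimate` is a cheap
    relaxation with underestimate(y) <= g(y). The lower-bounding problem
    picks the unexplored candidate with the smallest relaxed value; the
    upper-bounding problem evaluates g exactly; a no-good cut then removes
    that candidate. The loop terminates when the bounds cross."""
    explored = set()
    best_ub = float("inf")
    while True:
        remaining = [y for y in candidates if y not in explored]
        if not remaining:
            return best_ub
        y_cand = min(remaining, key=underestimate)
        if underestimate(y_cand) >= best_ub - tol:   # lower bound meets incumbent
            return best_ub
        best_ub = min(best_ub, g(y_cand))            # exact (NLP-style) evaluation
        explored.add(y_cand)                         # no-good cut

# Usage: commitment level y in {0,...,4}, true cost (y - 1.3)^2,
# relaxation offset downward by 0.5
best = multi_tree_minimize(lambda y: (y - 1.3) ** 2,
                           lambda y: (y - 1.3) ** 2 - 0.5,
                           range(5))
```

The loop stops as soon as the smallest remaining relaxed value can no longer beat the incumbent, which is exactly the bound-crossing convergence test described above.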

An adaptive local reduced basis method for solving PDEs with uncertain inputs and evaluating risk

Computer Methods in Applied Mechanics and Engineering

Kouri, Drew P.; Aquino, Wilkins A.; Zou, Zilong

Many physical systems are modeled using partial differential equations (PDEs) with uncertain or random inputs. For such systems, naively propagating a fixed number of samples of the input probability law (or an approximation thereof) through the PDE is often inadequate to accurately quantify the “risk” associated with critical system responses. In this paper, we develop a goal-oriented, adaptive sampling and local reduced basis approximation for PDEs with random inputs. Our method determines a set of samples and an associated (implicit) Voronoi partition of the parameter domain on which we build local reduced basis approximations of the PDE solution. The samples are selected in an adaptive manner using an a posteriori error indicator. A notable advantage of the proposed approach is that the computational cost of the approximation during the adaptive process remains constant. We provide theoretical error bounds for our approximation and numerically demonstrate the performance of our method when compared to widely used adaptive sparse grid techniques. In addition, we tailor our approach to accurately quantify the risk of quantities of interest that depend on the PDE solution. We demonstrate our method on an advection–diffusion example and a Helmholtz example.
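The adaptive sampling idea can be sketched in a few lines, assuming a scalar parameter and using distance-to-nearest-sample as a stand-in for the paper's a posteriori error indicator. The nearest-sample lookup is precisely the implicit Voronoi partition: each parameter is served by the local basis attached to its nearest sample.

```python
def nearest_sample(samples, mu):
    """Implicit Voronoi cell lookup: the local reduced basis used at `mu`
    is the one attached to its nearest sample."""
    return min(samples, key=lambda s: abs(s - mu))

def greedy_adaptive_samples(candidates, error_indicator, n_samples):
    """Greedily grow the sample set: repeatedly add the candidate whose
    error indicator, relative to the current samples, is largest."""
    samples = [candidates[0]]
    while len(samples) < n_samples:
        samples.append(max(candidates, key=lambda mu: error_indicator(mu, samples)))
    return samples

# Stand-in indicator: distance to the nearest existing sample
indicator = lambda mu, samples: abs(mu - nearest_sample(samples, mu))
chosen = greedy_adaptive_samples([0.0, 0.25, 0.5, 0.75, 1.0], indicator, 3)
```

With this distance surrogate the loop reduces to farthest-point sampling; in the paper's setting the indicator would instead be a residual-based estimate of the local reduced basis error.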

Small scale to extreme: Methods for characterizing energy efficiency in supercomputing applications

Sustainable Computing: Informatics and Systems

Younge, Andrew J.

Power measurement capabilities are becoming commonplace on large-scale HPC system deployments. Several different approaches to power measurement are in use today, primarily in-band and out-of-band measurement. Both of these fundamental techniques can be augmented with application-level profiling, and combinations of techniques are also possible. However, it can be difficult to assess the type and detail of measurement needed to gain insight into the power profile of an application. In addition, the heterogeneity of modern hybrid supercomputing platforms requires that different CPU architectures be examined as well. This paper presents a taxonomy for classifying power profiling techniques on modern HPC platforms. Three relevant HPC mini-applications are analyzed across systems of multicore and manycore nodes to examine the level of detail, scope, and complexity of these power profiles. We demonstrate that combining out-of-band measurement with in-band application region profiling can provide an accurate, detailed view of power usage without introducing overhead. Furthermore, we confirm the energy and power profiles of these mini-applications at extreme scale on the Trinity supercomputer. This finding validates extrapolating the power profiling techniques from a testbed scale of just several dozen nodes to extreme-scale petaflops supercomputing systems, and we provide a set of recommendations on how best to profile future HPC workloads.

Spectral risk measures: the risk quadrangle and optimal approximation

Mathematical Programming

Kouri, Drew P.

We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. We prove the consistency of this approximation and demonstrate our results through numerical examples.
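The building block here, the average value-at-risk (superquantile), and the quadrature idea can be sketched from samples. This is a hedged illustration: the discrete AVaR estimator and the levels/weights chosen below are generic, not the paper's optimal quadrature.

```python
import numpy as np

def avar(samples, alpha):
    """Sample-based average value-at-risk at level alpha, via the
    Rockafellar-Uryasev form: VaR + E[(X - VaR)^+] / (1 - alpha)."""
    x = np.sort(np.asarray(samples, dtype=float))
    t = x[int(np.floor(alpha * len(x)))]          # empirical alpha-quantile (VaR)
    return t + np.maximum(x - t, 0.0).mean() / (1.0 - alpha)

def spectral_risk(samples, levels, weights):
    """Quadrature approximation of a spectral risk measure: a weighted
    sum of AVaRs at fixed confidence levels (weights sum to one)."""
    return sum(w * avar(samples, a) for a, w in zip(levels, weights))

x = np.arange(1, 101) / 100.0                     # losses 0.01, ..., 1.00
r = spectral_risk(x, levels=[0.5, 0.9], weights=[0.5, 0.5])
```

Each AVaR term averages the worst (1 - alpha) fraction of the loss distribution, so the weighted sum emphasizes the tail exactly as a spectral risk measure's risk profile dictates.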

Hardware MPI message matching: Insights into MPI matching behavior to inform design

Concurrency and Computation: Practice and Experience

Ferreira, Kurt B.; Grant, Ryan E.; Levenhagen, Michael J.; Levy, Scott L.; Groves, Taylor

This paper explores key differences in MPI match lists for several important United States Department of Energy (DOE) applications and proxy applications. This understanding is critical in determining the most promising hardware matching design for any given high-speed network. We present the results of MPI match list studies for the major open-source MPI implementations, MPICH and Open MPI, and modify an MPI simulator, LogGOPSim, to provide match list statistics. These results are discussed in the context of several potential design approaches to MPI matching-capable hardware. The data illustrate the requirements of different hardware designs in terms of performance and memory capacity. Finally, this paper's contributions are the collection and analysis of data that help inform hardware designers of common MPI requirements and highlight the difficulty of determining these requirements by examining only a single MPI implementation.
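MPI's matching semantics (search the posted receives in order, honoring wildcards) are what make match-list length matter for hardware. A minimal sketch of the search whose depth such studies measure follows; it is a simplified model, not LogGOPSim's implementation.

```python
ANY = None  # stand-in for MPI_ANY_SOURCE / MPI_ANY_TAG

def match_depth(posted_queue, source, tag):
    """Walk the posted-receive queue in order, as MPI matching semantics
    require, and return how many entries were searched before a match
    (None if the message would land on the unexpected-message queue)."""
    for depth, (src, tg) in enumerate(posted_queue, start=1):
        if (src is ANY or src == source) and (tg is ANY or tg == tag):
            return depth
    return None

queue = [(0, 1), (ANY, 2), (3, ANY)]   # (source, tag) of posted receives
```

The average and worst-case values of this depth across an application's messages are what determine how much associative search capacity a hardware matching unit must provide.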

Compressed optimization of device architectures for semiconductor quantum devices

Physical Review Applied

Ward, Daniel R.; Frees, Adam; Gamble, John K.; Blume-Kohout, Robin J.; Eriksson, M.A.; Friesen, Mark; Coppersmith, S.N.

Recent advances in nanotechnology have enabled researchers to manipulate small collections of quantum-mechanical objects with unprecedented accuracy. In semiconductor quantum-dot qubits, this manipulation requires controlling the dot orbital energies, the tunnel couplings, and the electron occupations. These properties all depend on the voltages placed on the metallic electrodes that define the device, the positions of which are fixed once the device is fabricated. While there has been much success with small numbers of dots, as the number of dots grows, it will be increasingly useful to control these systems with as few electrode voltage changes as possible. Here, we introduce a protocol, which we call the "compressed optimization of device architectures" (CODA), in order both to efficiently identify sparse sets of voltage changes that control quantum systems and to introduce a metric that can be used to compare device designs. As an example of the former, we apply this method to simulated devices with up to 100 quantum dots and show that CODA automatically tunes devices more efficiently than other common nonlinear optimizers. To demonstrate the latter, we determine the optimal lateral scale for a triple quantum dot, yielding a simulated device that can be tuned with small voltage changes on a limited number of electrodes.
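The sparsity-seeking idea, achieving a target response while changing as few electrode voltages as possible, can be illustrated with a generic L1-regularized update. This ISTA sketch with a linearized device response `J` is an assumption for illustration only, not the CODA algorithm itself.

```python
import numpy as np

def sparse_voltage_update(J, target, lam=0.1, iters=500):
    """ISTA on 0.5*||J @ dv - target||^2 + lam*||dv||_1: the L1 penalty
    drives most entries of the voltage change dv to exactly zero."""
    J = np.asarray(J, float)
    target = np.asarray(target, float)
    step = 1.0 / np.linalg.norm(J, 2) ** 2       # 1/L for the smooth part
    dv = np.zeros(J.shape[1])
    for _ in range(iters):
        z = dv - step * (J.T @ (J @ dv - target))                  # gradient step
        dv = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return dv

# Toy response matrix: each electrode moves one dot level independently
dv = sparse_voltage_update(np.eye(3), np.array([1.0, 0.05, 0.0]))
```

Small required adjustments (here 0.05) are zeroed out entirely, so only a limited number of electrodes need to be retuned, which is the behavior the abstract describes.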

SST-GPU: An Execution-Driven CUDA Kernel Scheduler and Streaming-Multiprocessor Compute Model

Khairy, Mahmoud; Zhang, Mengchi; Green, Roland; Hammond, Simon D.; Hoekstra, Robert J.; Rogers, Timothy; Hughes, Clayton H.

Programmable accelerators have become commonplace in modern computing systems. Advances in programming models and the availability of massive amounts of data have created a space for massively parallel acceleration where the contexts for thousands of concurrent threads are resident on-chip. These threads are grouped and interleaved on a cycle-by-cycle basis among several massively parallel computing cores. The design of future supercomputers relies on the ability to model the performance of these massively parallel cores at scale. To address the need for a scalable, decentralized GPU model that can represent large GPUs, chiplet-based GPUs, and multi-node GPUs, this report details the first steps in integrating the open-source, execution-driven GPGPU-Sim into the SST framework. The first stage of this project creates two elements: a kernel scheduler SST element that accepts work from SST CPU models and schedules it to an SM-collection element, which performs cycle-by-cycle timing using SST's memHierarchy to model a flexible memory system.

Curvature Based Analysis to Identify and Categorize Trajectory Segments

Schrum Jr., Paul T.; Laros, James H.; Newton, Benjamin D.

Since the attacks carried out against the United States on September 11, 2001, which involved the commandeering of commercial aircraft, interest has increased in performing trajectory analysis of vehicle types not constrained by roadways or railways, i.e., aircraft and watercraft. Anomalous trajectories need to be identified automatically, along with other trajectories of interest, and flagged for further investigation. There is also interest in analyzing trajectories without a focus on anomaly detection. Various approaches to analyzing these trajectories have been undertaken, with useful results to date. In this research, we seek to augment trajectory analysis by analyzing trajectory curvature along with other parameters, including distance and total deflection (change in direction). These parameters are computed at each point triplet in the ordered sequence of points. Adjacent point triplets with similar values are grouped together to form a higher level of semantic categorization. These categorizations are then analyzed to form a still higher level of categorization with more specific semantic meaning. This top level of categorization is then summarized for all trajectories under study, allowing fast identification of trajectories with various semantic characteristics.
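The per-triplet curvature step can be sketched with the Menger curvature, the inverse radius of the circle through three points. The threshold and the "straight"/"turn" labels below are illustrative placeholders for the paper's categories, not its actual taxonomy.

```python
import math

def menger_curvature(p, q, r):
    """Curvature of the circle through points p, q, r:
    4 * triangle_area / (|pq| * |qr| * |rp|); zero for collinear points."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    d = math.dist(p, q) * math.dist(q, r) * math.dist(r, p)
    return 0.0 if d == 0.0 else 2.0 * abs(cross) / d   # 4 * (|cross|/2) / d

def segment_labels(points, threshold=0.1):
    """Label each interior point triplet, then group adjacent triplets
    with the same label into (label, run_length) segments."""
    labels = ["turn" if menger_curvature(points[i - 1], points[i], points[i + 1]) > threshold
              else "straight" for i in range(1, len(points) - 1)]
    runs = []
    for lab in labels:
        if runs and runs[-1][0] == lab:
            runs[-1] = (lab, runs[-1][1] + 1)
        else:
            runs.append((lab, 1))
    return runs

# A straight run followed by a sharp course change
path = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1)]
segments = segment_labels(path)
```

Grouping adjacent like-labeled triplets into runs is the first level of semantic categorization described above; higher levels would classify patterns of runs (e.g., a long straight run followed by sustained turning).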

An optimization-based framework to define the probabilistic design space of pharmaceutical processes with model uncertainty

Processes

Laky, Daniel; Xu, Shu; Rodriguez, Jose S.; Vaidyaraman, Shankar; Munoz, Salvador G.; Laird, Carl D.

To increase manufacturing flexibility and system understanding in pharmaceutical development, the FDA launched the quality by design (QbD) initiative. Within QbD, the design space is the multidimensional region (of the input variables and process parameters) where product quality is assured. Given the high cost of extensive experimentation, there is a need for computational methods to estimate the probabilistic design space, accounting for interactions between critical process parameters and critical quality attributes as well as model uncertainty. In this paper we propose two algorithms that extend the flexibility test and flexibility index formulations to replace simulation-based analysis and identify the probabilistic design space more efficiently. The effectiveness and computational efficiency of these approaches are demonstrated on a small example and an industrial case study.
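For contrast with the paper's optimization-based formulations, the simulation-based (Monte Carlo) baseline they aim to replace can be sketched as follows. The toy quality model, parameter names, and reliability target here are assumptions for illustration.

```python
import random

def probabilistic_design_space(grid, quality_ok, sample_theta,
                               n_samples=200, target=0.95):
    """For each candidate operating point d, estimate the probability over
    sampled uncertain model parameters theta that all critical quality
    attributes are met; keep the points meeting the reliability target."""
    kept = []
    for d in grid:
        hits = sum(quality_ok(d, sample_theta()) for _ in range(n_samples))
        if hits / n_samples >= target:
            kept.append(d)
    return kept

# Toy model: quality holds when d + theta <= 1, theta ~ Normal(0, 0.02)
random.seed(0)
space = probabilistic_design_space(
    [0.0, 0.5, 0.9, 1.2],
    quality_ok=lambda d, th: d + th <= 1.0,
    sample_theta=lambda: random.gauss(0.0, 0.02))
```

The cost of this brute-force sweep grows with the grid resolution and sample count, which is precisely why flexibility-test/index formulations that replace the inner sampling loop with an optimization problem are attractive.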
