Publications

Global sensitivity analysis and estimation of model error, toward uncertainty quantification in scramjet computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
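
As a concrete illustration of the variance-based global sensitivity analysis step described above, the following minimal Python sketch estimates first-order Sobol indices with a pick-and-freeze Monte Carlo estimator. The model function `toy_model`, the sample sizes, and the estimator choice are illustrative assumptions; the paper applies such analysis to expensive large-eddy simulation outputs, not to a closed-form function.

```python
import numpy as np

def first_order_sobol(f, dim, n=4096, rng=None):
    """Estimate first-order Sobol indices of f: [0,1]^dim -> R
    with a Saltelli-style pick-and-freeze Monte Carlo estimator."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # A with only column i taken from B
        fABi = f(ABi)
        # First-order effect of input i (Saltelli et al. 2010 estimator)
        S[i] = np.mean(fB * (fABi - fA)) / var
    return S

# Cheap stand-in for an expensive simulation: only x0 and x1 matter.
def toy_model(x):
    return 3.0 * x[:, 0] + x[:, 1] ** 2 + 0.01 * x[:, 2]

print(first_order_sobol(toy_model, dim=3))
```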

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

IEEE Transactions on Visualization and Computer Graphics

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; Wang, Zhiyuan; Wilson, Andrew T.

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
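
To make the comparison against eye-tracking data concrete, one common approach is to smooth fixation points into a density map and score the predicted saliency map against it with a correlation-based metric. The sketch below is a generic illustration on synthetic data; the specific metrics, smoothing parameters, and data formats used in the paper are not stated in this abstract and are assumed here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density(fixations, shape, sigma=25):
    """Turn (row, col) fixation points into a smoothed density map."""
    density = np.zeros(shape)
    for r, c in fixations:
        density[int(r), int(c)] += 1.0
    return gaussian_filter(density, sigma)

def correlation_coefficient(saliency, density):
    """Pearson correlation (the 'CC' saliency metric) between two maps."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    d = (density - density.mean()) / (density.std() + 1e-12)
    return float(np.mean(s * d))

# Hypothetical example: a random saliency map scored against three fixations.
rng = np.random.default_rng(0)
sal = rng.random((480, 640))
fix = [(100, 320), (240, 200), (300, 500)]
print(correlation_coefficient(sal, fixation_density(fix, sal.shape)))
```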

Nonlocal and mixed-locality multiscale finite element methods

Multiscale Modeling and Simulation

Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

Multifidelity statistical analysis of large eddy simulations in scramjet computations

AIAA Non-Deterministic Approaches Conference, 2018

Huan, Xun H.; Geraci, Gianluca; Safta, Cosmin; Eldred, Michael; Sargsyan, Khachik; Vane, Zachary P.; Oefelein, Joseph C.; Najm, Habib N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress towards optimal engine designs requires accurate and computationally affordable flow simulations, as well as uncertainty quantification (UQ). While traditional UQ techniques can become prohibitive under expensive simulations and high-dimensional parameter spaces, polynomial chaos (PC) surrogate modeling is a useful tool for alleviating some of the computational burden. However, non-intrusive quadrature-based constructions of PC expansions relying on a single high-fidelity model can still be quite expensive. We thus introduce a two-stage numerical procedure for constructing PC surrogates while making use of multiple models of different fidelity. The first stage involves an initial dimension reduction through global sensitivity analysis using compressive sensing. The second stage utilizes adaptive sparse quadrature on a multifidelity expansion to compute PC surrogate coefficients in the reduced parameter space where quadrature methods can be more effective. The overall method is used to produce accurate surrogates and to propagate uncertainty induced by uncertain boundary conditions and turbulence model parameters, for performance quantities of interest from large eddy simulations of supersonic reactive flows inside a scramjet engine.
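
The polynomial chaos surrogate idea can be shown in miniature by regressing a toy quantity of interest onto a tensorized Legendre basis. The sketch below is a plain least-squares construction in two dimensions under assumed inputs; it does not reproduce the paper's two-stage procedure (compressive-sensing-based sensitivity analysis followed by adaptive sparse quadrature on a multifidelity expansion).

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def legendre_basis(X, degree):
    """Total-degree tensorized Legendre basis evaluated at points X in [-1, 1]^d."""
    n, d = X.shape
    V = [legendre.legvander(X[:, j], degree) for j in range(d)]  # 1-D bases
    multi_indices = [m for m in product(range(degree + 1), repeat=d)
                     if sum(m) <= degree]
    Phi = np.column_stack([np.prod([V[j][:, m[j]] for j in range(d)], axis=0)
                           for m in multi_indices])
    return Phi, multi_indices

# Toy quantity of interest standing in for an expensive LES output.
def qoi(X):
    return np.exp(0.5 * X[:, 0]) * np.sin(2.0 * X[:, 1])

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(200, 2))
Phi, _ = legendre_basis(X_train, degree=5)
coeffs, *_ = np.linalg.lstsq(Phi, qoi(X_train), rcond=None)

# Evaluate the surrogate at new points and check the error.
X_test = rng.uniform(-1, 1, size=(5, 2))
Phi_test, _ = legendre_basis(X_test, degree=5)
print(np.abs(Phi_test @ coeffs - qoi(X_test)))
```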

Multilevel-multifidelity approaches for forward UQ in the DARPA SEQUOIA project

AIAA Non-Deterministic Approaches Conference, 2018

Eldred, Michael; Geraci, Gianluca; Gorodetsky, Alex; Jakeman, John D.

Within the SEQUOIA project, funded by the DARPA EQUiPS program, we pursue algorithmic approaches that enable comprehensive design under uncertainty, through inclusion of aleatory/parametric and epistemic/model form uncertainties within scalable forward/inverse UQ approaches. These statistical methods are embedded within design processes that manage computational expense through active subspace, multilevel-multifidelity, and reduced-order modeling approximations. To demonstrate these methods, we focus on the design of devices that involve multi-physics interactions in advanced aerospace vehicles. A particular problem of interest is the shape design of nozzles for advanced vehicles such as the Northrop Grumman UCAS X-47B, involving coupled aero-structural-thermal simulations for nozzle performance. In this paper, we explore a combination of multilevel and multifidelity forward and inverse UQ algorithms to reduce the overall computational cost of the analysis by leveraging hierarchies of model form (i.e., multifidelity hierarchies) and solution discretization (i.e., multilevel hierarchies) in order to exploit trade-offs between solution accuracy and cost. In particular, we seek the most cost-effective fusion of information across complex multi-dimensional modeling hierarchies. Results to date indicate the utility of multiple approaches, including methods that optimally allocate resources when estimator variance varies smoothly across levels, methods that allocate sufficient sampling density based on sparsity estimates, and methods that employ greedy multilevel refinement.
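
The underlying idea of fusing models of different cost and accuracy can be illustrated with a two-model control-variate Monte Carlo estimator, in which paired evaluations of a cheap low-fidelity model reduce the variance of a high-fidelity mean estimate. The sketch below uses analytic stand-in models and a simple fixed sample allocation; the project's actual multilevel-multifidelity estimators, sample-allocation strategies, and inverse-UQ algorithms are considerably more elaborate.

```python
import numpy as np

def control_variate_mean(f_hi, f_lo, n_hi=100, n_lo=10000, rng=None):
    """Two-fidelity control-variate estimate of E[f_hi(X)], X ~ U(0, 1)."""
    rng = np.random.default_rng(rng)
    x_hi = rng.random(n_hi)
    y_hi, y_lo = f_hi(x_hi), f_lo(x_hi)          # paired evaluations
    # Control-variate weight estimated from the paired sample
    alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)
    # A large low-fidelity-only sample pins down E[f_lo] cheaply
    mu_lo = f_lo(rng.random(n_lo)).mean()
    return y_hi.mean() + alpha * (mu_lo - y_lo.mean())

f_hi = lambda x: np.sin(np.pi * x) + 0.05 * x**3      # "expensive" model
f_lo = lambda x: np.sin(np.pi * x)                    # cheap approximation
print(control_variate_mean(f_hi, f_lo, rng=0))
```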

Hierarchical material property representation in finite element analysis: Convergence behavior and the electrostatic response of vertical fracture sets

2018 SEG International Exposition and Annual Meeting, SEG 2018

Weiss, Chester J.; Beskardes, Gungor D.; Van Bloemen Waanders, Bart

Methods for the efficient representation of fracture response in geoelectric models impact an impressively broad range of problems in applied geophysics. We adopt the recently developed hierarchical material property representation in finite element analysis (Weiss, 2017) to model the electrostatic response of a discrete set of vertical fractures in the near surface and compare these results to those from anisotropic continuum models. We also examine the power law behavior of these results and compare to continuum theory. We find that in measurement profiles from a single point source in directions both parallel and perpendicular to the fracture set, the fracture signature persists over all distances. Furthermore, the homogenization limit (the distance at which the individual fracture anomalies are too small to be either measured or of interest) is not strictly a function of the geometric distribution of the fractures but also depends on their conductivity relative to the background. Hence, we show that the definition of “representative elementary volume”, that distance over which the statistics of the underlying heterogeneities are stationary, is incomplete as it pertains to the applicability of an equivalent continuum model. We also show that detailed interrogation of such intrinsically heterogeneous models may reveal power law behavior that appears anomalous, thus suggesting a possible mechanism to reconcile emerging theories in fractional calculus with classical electromagnetic theory.

Solution Approaches to Stochastic Programming Problems under Endogenous and/or Exogenous Uncertainties

Computer Aided Chemical Engineering

Cremaschi, Selen; Siirola, John D.

Optimization problems under uncertainty involve making decisions without the full knowledge of the impact the decisions will have and before all the facts relevant to those decisions are known. These problems are common, for example, in process synthesis and design, planning and scheduling, supply chain management, and generation and distribution of electric power. The sources of uncertainty in optimization problems fall into two broad categories: endogenous and exogenous. Exogenous uncertain parameters are realized at a known stage (e.g., time period or decision point) in the problem irrespective of the values of the decision variables. For example, demand is generally considered to be independent of any capacity expansion decisions in process industries, and hence, is regarded as an exogenous uncertain parameter. In contrast, decisions impact endogenous uncertain parameters. The impact can either be in the resolution or in the distribution of the uncertain parameter. The realized values of a Type-I endogenous uncertain parameter are affected by the decisions. An example of this type of uncertainty would be a facility protection problem, where the likelihood of a facility failing to deliver goods or services after a disruptive event depends on the level of resources allocated as protection to that facility. On the other hand, only the realization times of Type-II endogenous uncertain parameters are affected by decisions. For example, in a clinical trial planning problem, whether a clinical trial is successful or not is only realized after the clinical trial has been completed, and whether the clinical trial is successful or not is not impacted by when the clinical trial is started. There are numerous approaches to modelling and solving optimization problems with exogenous and/or endogenous uncertainty, including (adjustable) robust optimization, (approximate) dynamic programming, model predictive control, and stochastic programming. Stochastic programming is a particularly attractive approach, as there is a straightforward translation from the deterministic model to the stochastic equivalent. The challenge with stochastic programming arises through the rapid, sometimes exponential, growth in the program size as we sample the uncertainty space or increase the number of recourse stages. In this talk, we will give an overview of our research activities developing practical stochastic programming approaches to problems with exogenous and/or endogenous uncertainty. We will highlight several examples from power systems planning and operations, process modelling, synthesis and design optimization, artificial lift infrastructure planning for shale gas production, and clinical trial planning. We will begin by discussing the straightforward case of exogenous uncertainty. In this situation, the stochastic program can be expressed completely by a deterministic model, a scenario tree, and the scenario-specific parameterizations of the deterministic model. Beginning with the deterministic model, modelers create instances of the deterministic model for each scenario using the scenario-specific data. Coupling the scenario models occurs through the addition of nonanticipativity constraints, equating the stage decision variables across all scenarios that pass through the same stage node in the scenario tree.
Modelling tools like PySP (Watson, 2012) greatly simplify the process of composing large stochastic programs by beginning either with an abstract representation of the deterministic model written in Pyomo (Hart, et al., 2017) and scenario data, or with a function that returns the deterministic Pyomo model for a specific scenario. PySP can automatically create the extensive form (deterministic equivalent) model from a general representation of the scenario tree. The challenge with large-scale stochastic programs with exogenous uncertainty arises through managing the growth of the problem size. Fortunately, there are several well-known approaches to decomposing the problem, both stage-wise (e.g., Benders’ decomposition) and scenario-based (e.g., Lagrangian relaxation or Progressive Hedging), enabling the direct solution of stochastic programs with hundreds or thousands of scenarios. We will then discuss developments in modelling and solving stochastic programs with endogenous uncertainty. These problems are significantly more challenging both to pose and to solve, due to the exponential growth in scenarios required to cover the decision-dependent uncertainties relative to the number of stages in the problem. In this situation, standardized frameworks for expressing stochastic programs do not exist, requiring a modeler to explicitly generate the representations and nonanticipativity constraints. Further, the size of the resulting scenario space (frequently exceeding millions of scenarios) precludes the direct solution of the resulting program. In this case, numerous decomposition algorithms and heuristics have been developed (e.g., Lagrangean decomposition-based algorithms (Tarhan, et al. 2013) or knapsack-based decomposition algorithms (Christian and Cremaschi, 2015)).
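
For the exogenous case, the extensive form of a toy two-stage problem can be written directly in Pyomo, as in the sketch below. Nonanticipativity is enforced here by letting all scenarios share a single first-stage variable, which is equivalent to creating per-scenario copies and adding explicit equality constraints. The problem data, costs, and solver choice are illustrative assumptions, and PySP's automated scenario-tree machinery is not used.

```python
import pyomo.environ as pyo

# Toy two-stage problem: choose capacity now, meet uncertain demand later.
scenarios = {"low": 80.0, "mid": 100.0, "high": 130.0}
prob = {"low": 0.3, "mid": 0.5, "high": 0.2}
build_cost, shortfall_cost = 1.0, 10.0

m = pyo.ConcreteModel()
m.S = pyo.Set(initialize=scenarios.keys())
# A single first-stage variable shared by all scenarios enforces
# nonanticipativity in the extensive form (the alternative is one copy
# per scenario plus explicit equality constraints).
m.capacity = pyo.Var(domain=pyo.NonNegativeReals)
m.shortfall = pyo.Var(m.S, domain=pyo.NonNegativeReals)

def shortfall_rule(m, s):
    return m.shortfall[s] >= scenarios[s] - m.capacity
m.meet_demand = pyo.Constraint(m.S, rule=shortfall_rule)

m.obj = pyo.Objective(
    expr=build_cost * m.capacity
    + sum(prob[s] * shortfall_cost * m.shortfall[s] for s in m.S),
    sense=pyo.minimize,
)

# Requires an LP solver such as GLPK to be installed.
pyo.SolverFactory("glpk").solve(m)
print(pyo.value(m.capacity))
```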

Results and correlations from analyses of the ENSA ENUN 32P cask transport tests

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Kalinina, Elena A.; Gordon, Natalie; Ammerman, Douglas; Uncapher, William L.; Saltzstein, Sylvia J.; Wright, Catherine

An ENUN 32P cask supplied by Equipos Nucleares S.A. (ENSA) was transported 9,600 miles by road, sea, and rail in 2017 in order to collect shock and vibration data on the cask system and surrogate spent fuel assemblies within the cask. The task of examining 101,857 ASCII data files – 6.002 terabytes of data (this includes binary and ASCII files) – has begun. Some results of preliminary analyses are presented in this paper. A total of seventy-seven accelerometers and strain gauges were attached by Sandia National Laboratories (SNL) to three surrogate spent fuel assemblies, the cask basket, the cask body, the transport cradle, and the transport platforms. The assemblies were provided by SNL, Empresa Nacional de Residuos Radiactivos, S.A. (ENRESA), and a collaboration of Korean institutions. The cask system was first subjected to cask handling operations at the ENSA facility. The cask was then transported by heavy-haul truck in northern Spain and shipped from Spain to Belgium and subsequently to Baltimore on two roll-on/roll-off ships. From Baltimore, the cask was transported by rail using a 12-axle railcar to the American Association of Railroads’ Transportation Technology Center, Inc. (TTCI) near Pueblo, Colorado, where a series of special rail tests were performed. Data were continuously collected during this entire sequence of multi-modal transportation events. (We did not collect data on the transfer between modes of transportation.) Of particular interest – indeed the original motivation for these tests – are the strains measured on the zirconium-alloy tubes in the assemblies. The strains for each of the transport modes are compared to the yield strength of irradiated Zircaloy to illustrate the margin against rod failure during normal conditions of transport. The accelerometer data provides essential comparisons of the accelerations on the different components of the cask system, exhibiting both amplification and attenuation of the accelerations from the transport platforms through the cradle and cask and up to the interior of the cask. These data are essential for modeling cask systems. This paper concentrates on analyses of the testing of the cask on a 12-axle railcar at TTCI.

Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations

SIAM-ASA Journal on Uncertainty Quantification

Adcock, Ben; Bao, Anyi; Jakeman, John D.; Narayan, Akil

The recovery of approximately sparse or compressible coefficients in a polynomial chaos expansion is a common goal in many modern parametric uncertainty quantification (UQ) problems. However, relatively little effort in UQ has been directed toward theoretical and computational strategies for addressing the sparse corruptions problem, where a small number of measurements are highly corrupted. Such a situation has become pertinent today since modern computational frameworks are sufficiently complex with many interdependent components that may introduce hardware and software failures, some of which can be difficult to detect and result in a highly polluted simulation result. In this paper we present a novel compressive sampling-based theoretical analysis for a regularized ℓ1 minimization algorithm that aims to recover sparse expansion coefficients in the presence of measurement corruptions. Our recovery results are uniform (the theoretical guarantees hold for all compressible signals and compressible corruption vectors) and prescribe algorithmic regularization parameters in terms of a user-defined a priori estimate on the ratio of measurements that are believed to be corrupted. We also propose an iteratively reweighted optimization algorithm that automatically refines the value of the regularization parameter and empirically produces superior results. Our numerical results test our framework on several medium to high dimensional examples of solutions to parameterized differential equations and demonstrate the effectiveness of our approach.
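
A generic form of this kind of recovery problem can be written as a convex program that jointly seeks sparse expansion coefficients and a sparse corruption vector. The sketch below uses cvxpy on synthetic data; the exact regularized functional, the prescription of the regularization parameter from the assumed corruption ratio, and the iteratively reweighted variant analyzed in the paper are not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m_meas, n_coef = 80, 200

# Synthetic sparse coefficient vector and a few grossly corrupted measurements.
A = rng.standard_normal((m_meas, n_coef)) / np.sqrt(m_meas)
x_true = np.zeros(n_coef)
x_true[rng.choice(n_coef, 8, replace=False)] = rng.standard_normal(8)
corruption = np.zeros(m_meas)
corruption[rng.choice(m_meas, 5, replace=False)] = 10 * rng.standard_normal(5)
b = A @ x_true + corruption

# Jointly recover sparse coefficients x and a sparse corruption vector e.
x = cp.Variable(n_coef)
e = cp.Variable(m_meas)
lam = 0.5  # illustrative weight; the paper ties its choice to the assumed corruption fraction
cp.Problem(cp.Minimize(cp.norm1(x) + lam * cp.norm1(e)),
           [A @ x + e == b]).solve()

print(np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```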

A General Framework for Sensitivity-Based Optimal Control and State Estimation

Computer Aided Chemical Engineering

Thierry, David; Nicholson, Bethany L.; Biegler, Lorenz

New modelling and optimization platforms have enabled the creation of frameworks for solution strategies that are based on solving sequences of dynamic optimization problems. This study demonstrates the application of the Python-based Pyomo platform as a basis for formulating and solving Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE) problems, which enables fast on-line computations through large-scale nonlinear optimization and Nonlinear Programming (NLP) sensitivity. We describe these underlying approaches and sensitivity computations, and showcase the implementation of the framework with large DAE case studies including tray-by-tray distillation models and Bubbling Fluidized Bed Reactors (BFB).
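
The kind of dynamic optimization problem that an NMPC loop solves at each step can be posed compactly with pyomo.dae. The sketch below is a toy open-loop problem for a scalar first-order system, discretized by orthogonal collocation and handed to an NLP solver; the paper's NMPC/MHE formulations, NLP sensitivity updates, and DAE case studies are not reproduced, and the model, horizon, and weights are illustrative assumptions.

```python
import pyomo.environ as pyo
from pyomo.dae import ContinuousSet, DerivativeVar

# Toy first-order system dx/dt = -x + u, driven toward a setpoint.
m = pyo.ConcreteModel()
m.t = ContinuousSet(bounds=(0.0, 5.0))
m.x = pyo.Var(m.t)
m.u = pyo.Var(m.t, bounds=(-1.0, 1.0))
m.dxdt = DerivativeVar(m.x, wrt=m.t)

m.ode = pyo.Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
m.x[0.0].fix(2.0)                     # current state estimate

# Discretize by collocation, then build the tracking objective over all points.
pyo.TransformationFactory("dae.collocation").apply_to(m, nfe=20, ncp=3)
setpoint = 0.5
m.obj = pyo.Objective(
    expr=sum((m.x[t] - setpoint) ** 2 + 0.1 * m.u[t] ** 2 for t in m.t),
    sense=pyo.minimize,
)

# Requires an NLP solver such as Ipopt to be installed.
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.x[m.t.last()]))
```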

Pyomo.GDP: Disjunctive Models in Python

Computer Aided Chemical Engineering

Chen, Qi; Johnson, Emma S.; Siirola, John D.; Grossmann, Ignacio E.

In this work, we describe new capabilities for the Pyomo.GDP modeling environment, moving beyond classical reformulation approaches to include non-standard reformulations and a new logic-based solver, GDPopt. Generalized Disjunctive Programs (GDPs) address optimization problems involving both discrete and continuous decision variables. For difficult problems, advanced reformulations such as the disjunctive “basic step” to intersect multiple disjunctions or the use of procedural reformulations may be necessary. Complex nonlinear GDP models may also be tackled using logic-based outer approximation. These expanded capabilities highlight the flexibility that Pyomo.GDP offers modelers in applying novel strategies to solve difficult optimization problems.
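
A minimal Pyomo.GDP model looks like the sketch below: a unit either operates within bounds or is switched off, expressed as a disjunction and reformulated with the classical big-M transformation. The problem data are illustrative assumptions, and the advanced capabilities discussed in the paper (basic steps, procedural reformulations, and the logic-based GDPopt solver) are not shown.

```python
import pyomo.environ as pyo
from pyomo.gdp import Disjunct, Disjunction

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 10))
m.cost = pyo.Var(bounds=(0, 100))

# Disjunct 1: the unit operates, with x in [2, 8] and cost = 5 + 2x.
m.on = Disjunct()
m.on.lb = pyo.Constraint(expr=m.x >= 2)
m.on.ub = pyo.Constraint(expr=m.x <= 8)
m.on.c = pyo.Constraint(expr=m.cost == 5 + 2 * m.x)

# Disjunct 2: the unit is off, so x = 0 and cost = 0.
m.off = Disjunct()
m.off.x0 = pyo.Constraint(expr=m.x == 0)
m.off.c0 = pyo.Constraint(expr=m.cost == 0)

m.on_or_off = Disjunction(expr=[m.on, m.off])

# A demand constraint makes operating the interesting choice.
m.demand = pyo.Constraint(expr=m.x >= 3)
m.obj = pyo.Objective(expr=m.cost, sense=pyo.minimize)

# Classical big-M reformulation to a MIP, then solve (requires e.g. GLPK).
pyo.TransformationFactory("gdp.bigm").apply_to(m)
pyo.SolverFactory("glpk").solve(m)
print(pyo.value(m.x), pyo.value(m.cost))
```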

Assessing task-to-data affinity in the LLVM OpenMP runtime

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Klinkenberg, Jannis; Samfass, Philipp; Terboven, Christian; Duran, Alejandro; Klemm, Michael; Teruel, Xavier; Mateo, Sergi; Olivier, Stephen L.; Muller, Matthias S.

In modern shared-memory NUMA systems, which typically consist of two or more multi-core processor packages with local memory, affinity of data to computation is crucial for achieving high performance with an OpenMP program. OpenMP 3.0 introduced support for task-parallel programs in 2008 and has continued to extend its applicability and expressiveness. However, the ability to support data affinity of tasks is missing. In this paper, we investigate several approaches for task-to-data affinity that combine locality-aware task distribution and task stealing. We introduce the task affinity clause that will be part of OpenMP 5.0 and provide the reasoning behind its design. Evaluation with our experimental implementation in the LLVM OpenMP runtime shows that task affinity improves execution performance by up to 4.5x on an 8-socket NUMA machine and significantly reduces runtime variability of OpenMP tasks. Our results demonstrate that a variety of applications can benefit from task affinity and that the presented clause closes the task-to-data affinity gap in OpenMP 5.0.

Taxonomist: Application Detection Through Rich Monitoring Data

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Ates, Emre; Tuncer, Ozan; Turk, Ata; Leung, Vitus J.; Brandt, James M.; Egele, Manuel; Coskun, Ayse K.

Modern supercomputers are shared among thousands of users running a variety of applications. Knowing which applications are running in the system can bring substantial benefits: knowledge of applications that intensively use shared resources can aid scheduling; unwanted applications such as cryptocurrency mining or password cracking can be blocked; system architects can make design decisions based on system usage. However, identifying applications on supercomputers is challenging because applications are executed using esoteric scripts along with binaries that are compiled and named by users. This paper introduces a novel technique to identify applications running on supercomputers. Our technique, Taxonomist, is based on the empirical evidence that applications have different and characteristic resource utilization patterns. Taxonomist uses machine learning to classify known applications and also detect unknown applications. We test our technique with a variety of benchmarks and cryptocurrency miners, and also with applications that users of a production supercomputer ran during a 6-month period. We show that our technique achieves nearly perfect classification for this challenging data set.
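
The classify-known / flag-unknown pattern on resource-utilization feature vectors can be sketched with scikit-learn as below. The feature construction, classifier choice, and confidence threshold are illustrative assumptions standing in for Taxonomist's actual pipeline, which is not detailed in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "resource utilization" feature vectors for three known applications.
def fake_features(center, n=200):
    return np.asarray(center) + 0.1 * rng.standard_normal((n, 4))

X = np.vstack([fake_features([0.9, 0.1, 0.5, 0.2]),   # app A
               fake_features([0.2, 0.8, 0.1, 0.7]),   # app B
               fake_features([0.5, 0.5, 0.9, 0.1])])  # app C
y = np.repeat(["A", "B", "C"], 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def identify(sample, threshold=0.8):
    """Label a run as a known app, or 'unknown' if classifier confidence is low."""
    proba = clf.predict_proba(sample.reshape(1, -1))[0]
    return clf.classes_[np.argmax(proba)] if proba.max() >= threshold else "unknown"

print(identify(np.array([0.9, 0.1, 0.5, 0.2])))   # resembles app A
print(identify(np.array([0.55, 0.45, 0.5, 0.4]))) # ambiguous; may fall below the threshold
```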

The future of scientific workflows

International Journal of High Performance Computing Applications

Deelman, Ewa; Peterka, Tom; Altintas, Ilkay; Carothers, Christopher D.; Van Dam, Kerstin K.; Moreland, Kenneth D.; Parashar, Manish; Ramakrishnan, Lavanya; Taufer, Michela; Vetter, Jeffrey

Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science and the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, and workflow needs, and conclude by summarizing the remaining challenges that this community sees as inhibiting large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

Measuring Multithreaded Message Matching Misery

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Schonbein, William W.; Dosanjh, Matthew G.; Grant, Ryan; Bridges, Patrick G.

MPI usage patterns are changing as applications move towards fully-multithreaded runtimes. However, the impact of these patterns on MPI message matching is not well-studied. In particular, MPI’s mechanism for receiver-side data placement, message matching, can be impacted by the increased message volume and nondeterminism incurred by multithreading. While there has been significant developer interest and work to provide an efficient MPI interface for multithreaded access, there has not been a study showing how these patterns affect messaging patterns and matching behavior. In this paper, we present a framework for studying the effects of multithreading on MPI message matching. This framework allows us to explore the implications of different common communication patterns and thread-level decompositions. We present a study of these impacts on the architecture of two of the Top 10 supercomputers (NERSC’s Cori and LANL’s Trinity). This data provides a baseline to evaluate reasonable matching engine queue lengths, search depths, and queue drain times under the multithreaded model. Furthermore, the study highlights surprising results on the challenge posed by message matching for multithreaded application performance.

Footprint placement for mosaic imaging by sampling and optimization

Proceedings International Conference on Automated Planning and Scheduling, ICAPS

Mitchell, Scott A.; Valicka, Christopher G.; Rowe, Stephen; Zou, Simon

We consider the problem of selecting a small set (mosaic) of sensor images (footprints) whose union covers a two-dimensional Region Of Interest (ROI) on Earth. We take the approach of modeling the mosaic problem as a Mixed-Integer Linear Program (MILP). This allows solutions to this subproblem to feed into a larger remote-sensor collection-scheduling MILP. This enables the scheduler to dynamically consider alternative mosaics, without having to perform any new geometric computations. Our approach to set up the optimization problem uses maximal disk sampling and point-in-polygon geometric calculations. Footprints may be of any shape, even non-convex, and we show examples using a variety of shapes that may occur in practice. The general integer optimization problem can become computationally expensive for large problems. In practice, the number of placed footprints is within an order of magnitude of ten, making the time to solve to optimality on the order of minutes. This is fast enough to make the approach relevant for near real-time mission applications. We provide open source software for all our methods, "GeoPlace."
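
The mosaic subproblem can be sketched as a small set-cover MILP in Pyomo, given a precomputed coverage relation between candidate footprints and sampled ROI points (as produced by point-in-polygon tests). The data below are illustrative assumptions; the geometric sampling, footprint shapes, and the coupling into the larger collection-scheduling MILP described in the paper are outside this sketch.

```python
import pyomo.environ as pyo

# covers[f] = indices of sampled ROI points inside candidate footprint f
# (illustrative data standing in for point-in-polygon results).
covers = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}, 4: {1, 4}}
points = sorted(set().union(*covers.values()))

m = pyo.ConcreteModel()
m.F = pyo.Set(initialize=covers.keys())
m.P = pyo.Set(initialize=points)
m.pick = pyo.Var(m.F, domain=pyo.Binary)

# Every sampled point must lie inside at least one selected footprint.
m.cover = pyo.Constraint(
    m.P, rule=lambda m, p: sum(m.pick[f] for f in m.F if p in covers[f]) >= 1)

# Smallest mosaic; a weighted objective could prefer particular footprints.
m.obj = pyo.Objective(expr=sum(m.pick[f] for f in m.F), sense=pyo.minimize)

# Requires a MIP solver such as GLPK to be installed.
pyo.SolverFactory("glpk").solve(m)
print([f for f in m.F if pyo.value(m.pick[f]) > 0.5])
```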

Adjoint-enabled multidimensional optimization of satellite electron/proton shields

20th Topical Meeting of the Radiation Protection and Shielding Division, RPSD 2018

Pautz, Shawn D.; Bruss, Donald E.; Adams, Brian M.; Franke, Brian C.; Blansett, Ethan

The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is complicated by the need to protect sensitive electronics from the space radiation environment. There is growing interest in automated design optimization techniques to help achieve that objective. Traditional optimization approaches that rely exclusively on response functions (e.g. dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron/proton shields in one-dimensional slab geometries. In this paper we extend that work to two-dimensional Cartesian geometries. This consists primarily of deriving the sensitivities to geometric changes, given a particular prescription for parametrizing the shield geometry. We incorporate these sensitivities into our optimization process and demonstrate their effectiveness in such design calculations.

Profiling and Debugging Support for the Kokkos Programming Model

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hammond, Simon; Trott, Christian R.; Ibanez-Granados, Daniel A.; Sunderland, Daniel

Supercomputing hardware is undergoing a period of significant change. In order to cope with the rapid pace of hardware and, in many cases, programming model innovation, we have developed the Kokkos Programming Model – a C++-based abstraction that permits performance portability across diverse architectures. Our experience has shown that the abstractions developed can significantly frustrate debugging and profiling activities because they break expected code proximity and layout assumptions. In this paper we present the Kokkos Profiling interface, a lightweight suite of hooks to which debugging and profiling tools can attach to gain deep insights into the execution and data structure behaviors of parallel programs written to the Kokkos interface.

Scalable preconditioners for structure-preserving discretizations of Maxwell equations in first-order form

SIAM Journal on Scientific Computing

Phillips, Edward; Shadid, John N.; Cyr, Eric C.

Multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

Data-driven uncertainty quantification for multisensor analytics

Proceedings of SPIE - The International Society for Optical Engineering

Stracuzzi, David J.; Darling, Michael C.; Chen, Maximillian G.; Peterson, Matthew G.

We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes "interesting" activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.
