Publications

Results 3201–3400 of 9,998


Changing the Engineering Design & Qualification Paradigm in Component Design & Manufacturing (Born Qualified)

Roach, R.A.; Bishop, Joseph E.; Jared, Bradley H.; Keicher, David M.; Cook, Adam W.; Whetten, Shaun R.; Forrest, Eric C.; Stanford, Joshua S.; Boyce, Brad B.; Johnson, Kyle J.; Rodgers, Theron R.; Ford, Kurtis R.; Martinez, Mario J.; Moser, Daniel M.; van Bloemen Waanders, Bart G.; Chandross, M.; Abdeljawad, Fadi F.; Allen, Kyle M.; Stender, Michael S.; Beghini, Lauren L.; Swiler, Laura P.; Lester, Brian T.; Argibay, Nicolas A.; Brown-Shaklee, Harlan J.; Kustas, Andrew K.; Sugar, Joshua D.; Kammler, Daniel K.; Wilson, Mark A.

Abstract not provided.

Simple effective conservative treatment of uncertainty from sparse samples of random functions

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Romero, Vicente J.; Schroeder, Benjamin B.; Dempsey, James F.; Lewis, John R.; Breivik, Nicole L.; Orient, George E.; Antoun, Bonnie R.; Winokur, Justin W.; Glickman, Matthew R.; Red-Horse, John R.

This paper examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a finite element model of a ductile steel can being slowly crushed. Over 140 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) are tracked in the simulations. Each response quantity’s behavior varies according to the particular stress-strain curves used for the materials in the model. We desire to estimate response variability when only a few stress-strain curve samples are available from material testing. Propagation of just a few samples will usually result in significantly underestimated response uncertainty relative to propagation of a much larger population that adequately samples the presiding random-function source. A simple classical statistical method, Tolerance Intervals, is tested for effectively treating sparse stress-strain curve data. The method is found to perform well on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush problem. The results and discussion in this paper support a proposition that the method will apply similarly well for other sparsely sampled random variable or function data, whether from experiments or models. Finally, the simple Tolerance Interval method is also demonstrated to be very economical.
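
As a rough illustration of the tolerance-interval idea, the sketch below computes a two-sided normal tolerance interval from a small sample using Howe's approximation. This is a minimal sketch, not the paper's implementation; the coverage/confidence levels and the synthetic response values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def normal_tolerance_interval(samples, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's approximation.

    Returns bounds that, with the stated confidence, contain at least
    `coverage` of the sampled population -- a conservative spread
    estimate when only a few samples are available.
    """
    n = len(samples)
    mean, std = np.mean(samples), np.std(samples, ddof=1)
    z = stats.norm.ppf(0.5 + coverage / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df=n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
    return mean - k * std, mean + k * std

# Hypothetical: a response quantity from 5 replicate propagated curves.
responses = np.array([412.0, 405.5, 419.3, 408.8, 415.1])
lo, hi = normal_tolerance_interval(responses)
print(f"95%/95% tolerance interval: [{lo:.1f}, {hi:.1f}]")
```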


Formulation and computation of dynamic, interface-compatible Whitney complexes in three dimensions

Journal of Computational Physics

Siefert, Christopher S.; Kramer, Richard M.; Voth, Thomas E.; Bochev, Pavel B.

A discrete De Rham complex enables compatible, structure-preserving discretizations for a broad range of partial differential equation problems. Such discretizations can correctly reproduce the physics of interface problems, provided the grid conforms to the interface. However, large deformations, complex geometries, and evolving interfaces make generation of such grids difficult. We develop and demonstrate two formally equivalent approaches that, for a given background mesh, dynamically construct an interface-conforming discrete De Rham complex. Both approaches start by dividing cut elements into interface-conforming subelements but differ in how they build the finite element basis on these subelements. The first approach discards the existing non-conforming basis of the parent element and replaces it by a dynamic set of degrees of freedom of the same kind. The second approach defines the interface-conforming degrees of freedom on the subelements as superpositions of the basis functions of the parent element. These approaches generalize the Conformal Decomposition Finite Element Method (CDFEM) and the extended finite element method with algebraic constraints (XFEM-AC), respectively, across the De Rham complex.
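
For reference, the three-dimensional De Rham complex underlying such compatible discretizations chains the standard function spaces; this is the textbook statement, not the paper's specific construction:

```latex
% The 3D De Rham complex: each operator maps into the kernel of the next,
% and compatible (Whitney) elements reproduce this structure discretely.
H^1(\Omega) \xrightarrow{\ \nabla\ } H(\mathrm{curl},\Omega)
            \xrightarrow{\ \nabla\times\ } H(\mathrm{div},\Omega)
            \xrightarrow{\ \nabla\cdot\ } L^2(\Omega)
```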


Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments

Deveci, Mehmet D.; Hammond, Simon D.; Wolf, Michael W.; Rajamanickam, Sivasankaran R.

Architectures with multiple classes of memory media are becoming a common part of mainstream supercomputer deployments. So-called multi-level memories offer differing characteristics for each memory component, including variation in bandwidth, latency, and capacity. This paper investigates the performance of sparse matrix-matrix multiplication kernels on two leading high-performance computing architectures: Intel's Knights Landing processor and NVIDIA's Pascal GPU. We describe a data placement method and a chunking-based algorithm for our kernels that exploit the multiple memory spaces in each hardware platform. We evaluate the performance of these methods against standard algorithms that use the auto-caching mechanisms. Our results show that standard algorithms that exploit cache reuse perform as well as multi-memory-aware algorithms on architectures such as Knights Landing, where the memory subsystems have similar latencies. However, on architectures such as GPUs, where memory subsystems differ significantly in both bandwidth and latency, multi-memory-aware methods are crucial for good performance. In addition, our new approaches permit the user to run problems that require larger capacities than the fastest memory of each compute node, without depending on software-managed cache mechanisms.
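
The chunking idea can be sketched as follows, assuming a row-wise partition of the first operand so that each partial product fits in the fast memory tier; the function name, chunk size, and matrices are illustrative, not the paper's kernel:

```python
import numpy as np
import scipy.sparse as sp

def chunked_spgemm(A, B, rows_per_chunk=1024):
    """Multiply sparse A @ B one row-chunk at a time.

    Processing a band of A's rows at a time bounds the size of the
    intermediate product, so each partial result can live in a fast
    memory tier while A and B stream from capacity memory.
    """
    chunks = []
    for start in range(0, A.shape[0], rows_per_chunk):
        stop = min(start + rows_per_chunk, A.shape[0])
        chunks.append(A[start:stop, :] @ B)  # partial product kept small
    return sp.vstack(chunks).tocsr()

# Synthetic inputs for demonstration only.
A = sp.random(4096, 4096, density=1e-3, format="csr", random_state=0)
B = sp.random(4096, 4096, density=1e-3, format="csr", random_state=1)
C = chunked_spgemm(A, B)
```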


Rebooting Computers to Avoid Meltdown and Spectre

Computer

Conte, Thomas M.; DeBenedictis, Erik; Mendelson, Avi; Milojicic, Dejan

Security vulnerabilities such as Meltdown and Spectre demonstrate how chip complexity grew faster than our ability to manage unintended consequences. Attention to security from the outset should be part of the remedy, yet complexity must be controlled at a more fundamental level.


Exploiting Geometric Partitioning in Task Mapping for Parallel Computers

Deveci, Mehmet D.; Devine, Karen D.; Laros, James H.; Taylor, Mark A.; Rajamanickam, Sivasankaran R.; Catalyurek, Umit V.

We present a new method for mapping applications' MPI tasks to cores of a parallel computer such that applications' communication time is reduced. We address the case of sparse node allocation, where the nodes assigned to a job are not necessarily located in a contiguous block nor within close proximity to each other in the network, although our methods generalize to contiguous allocations as well. The goal is to assign tasks to cores so that interdependent tasks are performed by "nearby" cores, thus lowering the distance messages must travel, the amount of congestion in the network, and the overall cost of communication. Our new method applies a geometric partitioning algorithm to both the tasks and the processors, and assigns task parts to the corresponding processor parts. We also present a number of algorithmic optimizations that exploit specific features of the network or application. We show that, for the structured finite difference mini-application MiniGhost, our mapping methods reduced communication time up to 75% relative to MiniGhost's default mapping on 128K cores of a Cray XK7 with sparse allocation. For the atmospheric modeling code E3SM/HOMME, our methods reduced communication time up to 31% on 32K cores of an IBM BlueGene/Q with contiguous allocation.
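
A toy sketch of the core idea follows: partition the tasks and the processors with the same geometric splitter (recursive coordinate bisection) and pair up the resulting parts. The 2-D coordinates, part counts, and sizes are assumptions for illustration; the paper's method additionally handles network topology and several optimizations.

```python
import numpy as np

def rcb(points, ids, num_parts):
    """Recursive coordinate bisection: split along the widest axis."""
    if num_parts == 1:
        return [ids]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))
    order = np.argsort(points[:, axis])
    half = len(ids) // 2
    lo, hi = order[:half], order[half:]
    return (rcb(points[lo], ids[lo], num_parts // 2) +
            rcb(points[hi], ids[hi], num_parts // 2))

# Hypothetical coordinates: tasks in the application's geometric domain,
# processors at their physical network coordinates.
rng = np.random.default_rng(0)
task_xy, proc_xy = rng.random((64, 2)), rng.random((64, 2))
task_parts = rcb(task_xy, np.arange(64), 8)
proc_parts = rcb(proc_xy, np.arange(64), 8)

# Matching parts are assigned to each other, so interdependent
# (geometrically nearby) tasks land on physically nearby processors.
mapping = {int(t): int(p)
           for t_part, p_part in zip(task_parts, proc_parts)
           for t, p in zip(t_part, p_part)}
```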


ECP ST Capability Assessment Report VTK-m

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on exascale architectures. The project fills the critical feature gap of performing visualization and analysis on accelerator processors such as GPUs and many-integrated-core devices. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent, as well as in stand-alone form. Moreover, these projects depend on this ECP effort to be able to make effective use of ECP architectures.


Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations

Adcock, Ben; Bao, Anyi; Jakeman, John D.; Narayan, Akil

The recovery of approximately sparse or compressible coefficients in a polynomial chaos expansion is a common goal in many modern parametric uncertainty quantification (UQ) problems. However, relatively little effort in UQ has been directed toward theoretical and computational strategies for addressing the sparse corruptions problem, where a small number of measurements are highly corrupted. Such a situation has become pertinent today since modern computational frameworks are sufficiently complex, with many interdependent components that may introduce hardware and software failures, some of which can be difficult to detect and result in a highly polluted simulation result. In this paper we present a novel compressive sampling-based theoretical analysis for a regularized ℓ1 minimization algorithm that aims to recover sparse expansion coefficients in the presence of measurement corruptions. Our recovery results are uniform (the theoretical guarantees hold for all compressible signals and compressible corruption vectors), and prescribe algorithmic regularization parameters in terms of a user-defined a priori estimate on the ratio of measurements that are believed to be corrupted. We also propose an iteratively reweighted optimization algorithm that automatically refines the value of the regularization parameter, and empirically produces superior results. Our numerical results test our framework on several medium-to-high dimensional examples of solutions to parameterized differential equations, and demonstrate the effectiveness of our approach.
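
The regularized formulation can be prototyped directly with a convex-optimization package. The sketch below is a minimal version of the joint recovery problem, not the paper's algorithm or its iteratively reweighted refinement; the sampling matrix, sparsity levels, and the value of lam are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8                      # measurements, coefficients, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)  # synthetic sampling matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true
b[rng.choice(m, 5, replace=False)] += rng.normal(scale=10.0, size=5)  # corruptions

# Jointly recover sparse coefficients x and sparse corruptions s:
#   minimize ||x||_1 + lam * ||s||_1  subject to  A x + s = b
x, s = cp.Variable(n), cp.Variable(m)
lam = 1.0  # in practice tied to an a priori estimate of the corruption ratio
cp.Problem(cp.Minimize(cp.norm1(x) + lam * cp.norm1(s)),
           [A @ x + s == b]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```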


Fundamental limits to single-photon detection determined by quantum coherence and backaction

Physical Review A

Young, Steve M.; Sarovar, Mohan S.; Leonard, Francois L.

Single-photon detectors have achieved impressive performance and have led to a number of new scientific discoveries and technological applications. Existing models of photodetectors are semiclassical in that the field-matter interaction is treated perturbatively and time-separated from physical processes in the absorbing matter. An open question is whether a fully quantum detector, whereby the optical field, the optical absorption, and the amplification are considered as one quantum system, could have improved performance. Here we develop a theoretical model of such photodetectors and employ simulations to reveal the critical role played by quantum coherence and amplification backaction in dictating the performance. We show that coherence and backaction lead to trade-offs between detector metrics and also determine optimal system designs through control of the quantum-classical interface. Importantly, we establish the design parameters that result in an ideal photodetector with 100% efficiency, no dark counts, and minimal jitter, thus paving the route for next-generation detectors.


An overview of methods to identify and manage uncertainty for modelling problems in the water-environment-agriculture cross-sector

Mathematics for Industry

Jakeman, Anthony J.; Jakeman, John D.

Uncertainty pervades the representation of systems in the water–environment–agriculture cross-sector. Successful methods to address uncertainties have largely focused on standard mathematical formulations of biophysical processes in a single sector, such as partial or ordinary differential equations. More attention to integrated models of such systems is warranted. Model components representing the different sectors of an integrated model can have less standard, and different, formulations to one another, as well as different levels of epistemic knowledge and data informativeness. Thus, uncertainty is not only pervasive but also crosses boundaries and propagates between system components. Uncertainty assessment (UA) cries out for more eclectic treatment in these circumstances, some of it being more qualitative and empirical. In this paper we discuss the various sources of uncertainty in such a cross-sectoral setting and ways to assess and manage them. We outline a fast-growing set of methodologies, particularly in the computational mathematics literature on uncertainty quantification (UQ), that seem highly pertinent for uncertainty assessment. There appears to be considerable scope for advancing UA by integrating relevant UQ techniques into cross-sectoral problem applications. Of course, this will entail considerable collaboration between domain specialists who often take first ownership of the problem and computational methods experts.


Shock compression of strongly correlated oxides: A liquid-regime equation of state for cerium(IV) oxide

Physical Review B

Weck, Philippe F.; Cochrane, Kyle C.; Root, Seth R.; Lane, James M.; Shulenburger, Luke N.; Carpenter, John H.; Mattsson, Thomas M.; Vogler, Tracy V.

The shock Hugoniot for full-density and porous CeO2 was investigated in the liquid regime using ab initio molecular dynamics (AIMD) simulations with Erpenbeck's approach based on the Rankine-Hugoniot jump conditions. The phase space was sampled by carrying out NVT simulations for isotherms between 6,000 and 100,000 K and densities ranging from ρ = 2.5 to 20 g/cm³. The impact of on-site Coulomb interaction corrections +U on the equation of state (EOS) obtained from AIMD simulations was assessed by direct comparison with results from standard density functional theory simulations. Classical molecular dynamics (CMD) simulations were also performed to model atomic-scale shock compression of larger porous CeO2 models. Results from AIMD and CMD compression simulations compare favorably with Z-machine shock data to 525 GPa and gas-gun data to 109 GPa for porous CeO2 samples. Using results from AIMD simulations, an accurate liquid-regime Mie-Grüneisen EOS was built for CeO2. In addition, a revised multiphase SESAME-type EOS was constrained using AIMD results and experimental data generated in this work. This study demonstrates the necessity of acquiring data in the porous regime to increase the reliability of existing analytical EOS models.
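
For context, the jump condition and EOS form referenced above are the standard textbook expressions, not the specific fit constructed in the paper:

```latex
% Rankine-Hugoniot energy jump across a shock:
E - E_0 = \tfrac{1}{2}\,(P + P_0)\,(V_0 - V)

% Mie-Gr\"uneisen EOS relative to a reference curve:
P(\rho, E) = P_{\mathrm{ref}}(\rho)
           + \Gamma(\rho)\,\rho\,\bigl(E - E_{\mathrm{ref}}(\rho)\bigr)
```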


Sierra/SolidMechanics 4.46 Example Problems Manual

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.

Presented in this document are tests that exist in the Sierra/SolidMechanics example problem suite, which is a subset of the Sierra/SM regression and performance test suite. These examples showcase common and advanced code capabilities. A wide variety of other regression and verification tests exist in the Sierra/SM test suite that are not included in this manual.


Special issue on uncertainty quantification in multiscale system design and simulation

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Swiler, Laura P.; Wang, Yan

The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.


Optimal experimental design using a consistent Bayesian approach

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Walsh, Scott N.; Wildey, Timothy M.; Jakeman, John D.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
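
In outline, the consistent Bayesian update rescales the prior by the ratio of the observed density to the push-forward of the prior, and the design criterion is the expected information gain; the notation below is a schematic rendering, not the paper's precise statement:

```latex
% \pi_Q denotes the push-forward of the prior through the map Q.
\pi_{\mathrm{post}}(\lambda)
  = \pi_{\mathrm{prior}}(\lambda)\,
    \frac{\pi_{\mathrm{obs}}\bigl(Q(\lambda)\bigr)}
         {\pi_{Q}\bigl(Q(\lambda)\bigr)},
\qquad
\mathrm{EIG}
  = \mathbb{E}\Bigl[D_{\mathrm{KL}}\bigl(\pi_{\mathrm{post}} \,\|\, \pi_{\mathrm{prior}}\bigr)\Bigr].
```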


Decrease time-to-solution through improved linear-system setup and solve

Hu, Jonathan J.; Thomas, Stephen; Dohrmann, Clark R.; Ananthan, Shreyas; Domino, Stefan P.; Williams, Alan B.; Sprague, Michael

The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many MW-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.


Eyes On the Ground: Year 2 Assessment

Brost, Randolph B.; Little, Charles; McDaniel, Michael M.; McLendon, William C.; Wade, James R.

The goal of the Eyes On the Ground project is to develop tools to aid IAEA inspectors. Our original vision was to produce a tool that would take three-dimensional measurements of an unknown piece of equipment, construct a semantic representation of the measured object, and then use the resulting data to infer possible explanations of equipment function. We report our tests of a 3-d laser scanner to obtain 3-d point cloud data, and subsequent tests of software to convert the resulting point clouds into primitive geometric objects such as planes and cylinders. These tests successfully identified pipes of moderate diameter and planar surfaces, but also incurred significant noise. We also investigated the IAEA inspector task context, and learned that task constraints may present significant obstacles to using 3-d laser scanners. We further learned that equipment scale and enclosing cases may confound our original goal of equipment diagnosis. Meanwhile, we also surveyed the rapidly evolving field of 3-d measurement technology, and identified alternative sensor modalities that may prove more suitable for inspector use in a safeguards context. We conclude with a detailed discussion of lessons learned and the resulting implications for project goals. Approved for public release; further dissemination unlimited.
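
As a small illustration of the primitive-fitting step described above (not the project's software), a least-squares plane can be fit to a point cloud with one SVD; the synthetic plane and noise level are assumptions:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3-D point cloud.

    Returns (centroid, unit normal); the normal is the right singular
    vector for the smallest singular value of the centered points.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Hypothetical scan: noisy samples of the plane z = 0.1x + 0.2y + 1.
rng = np.random.default_rng(0)
xy = rng.random((500, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(scale=0.01, size=500)
centroid, normal = fit_plane(np.column_stack([xy, z]))
```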


Sierra/SolidMechanics 4.48 Capabilities in Development

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Xavier, Patrick G.

This document is a user's guide for capabilities that are not considered mature but are available in Sierra/SolidMechanics (Sierra/SM) for early adopters. The maturity of a capability is determined by many aspects: regression- and verification-level testing, documentation of functionality and syntax, and usability are such considerations. Capabilities in this document are lacking in one or more of these aspects.


Library of Advanced Materials for Engineering (LAME) 4.48

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Xavier, Patrick G.

Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAME advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, comes at the cost of complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAME library in application, this effort seeks to document and verify the various models in the LAME library. Specifically, the broader strategy, organization, and interface of the library itself are first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model to not only build confidence in the model itself but also highlight important response characteristics and features that may be of interest to end-users. Finally, looking ahead, approaches to add material models to this library and further expand its capabilities are presented.


Sierra/Solid Mechanics 4.48 User's Guide

Merewether, Mark T.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Gampert, Scott G.; Xavier, Patrick G.; Plews, Julia A.

Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.


Sierra/SolidMechanics 4.48 Goodyear Specific

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Xavier, Patrick G.

This document covers Sierra/SolidMechanics capabilities specific to Goodyear use cases. Some information may be duplicated directly from the Sierra/SolidMechanics User's Guide but is reproduced here to provide context for Goodyear-specific options.


Sierra/SolidMechanics 4.48 User's Guide: Addendum for Shock Capabilities

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Xavier, Patrick G.

This is an addendum to the Sierra/SolidMechanics 4.48 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export-control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export-control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. Since this is an addendum to the standard Sierra/SM user's guide, please refer to that document first for general descriptions of code capability and use.


Challenges in Visual Analysis of Ensembles

IEEE Computer Graphics and Applications

Crossno, Patricia J.

Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. This article explores the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K-run material fracturing study using Slycat, our ensemble analysis system.


Sierra/SolidMechanics 4.48 Verification Tests Manual

Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel J.; Le, San L.; Littlewood, David J.; Merewether, Mark T.; Mosby, Matthew D.; Pierson, Kendall H.; Porter, V.L.; Shelton, Timothy S.; Thomas, Jesse D.; Tupek, Michael R.; Veilleux, Michael V.; Xavier, Patrick G.

Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.


Kokkos User Support Infrastructure WBS STPM12 Milestone 2

Trott, Christian R.; Lopez, Graham; Shipman, Galen

This report documents the completion of milestone STPM12-2, Kokkos User Support Infrastructure. The goal of this milestone was to develop and deploy an initial Kokkos support infrastructure that facilitates communication and growth of the user community, adds a central place for user documentation, and manages access to technical experts. Multiple possible support infrastructure venues were considered, and a solution was put into place by Q1 of FY18 consisting of (1) a Wiki programming guide, (2) GitHub issues and projects for development planning and bug tracking, and (3) a Slack channel for low-latency support communications with the Kokkos user community. Furthermore, the desirability of a cloud-based training infrastructure was recognized, and such an infrastructure was put in place to support training events.


Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota

Portone, Teresa; Niederhaus, John H.; Sanchez, Jason J.; Swiler, Laura P.

This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
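
The quantities underlying such a selection are standard: each candidate model is scored by its marginal likelihood (evidence), and models are compared through posterior odds. This is the generic form, not the report's specific yield-model setup:

```latex
p(D \mid M_i) = \int p(D \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i,
\qquad
\frac{p(M_1 \mid D)}{p(M_2 \mid D)}
  = \underbrace{\frac{p(D \mid M_1)}{p(D \mid M_2)}}_{\text{Bayes factor}}
    \cdot\,\frac{p(M_1)}{p(M_2)}.
```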


Behavior of the maximum likelihood in quantum state tomography

New Journal of Physics

Blume-Kohout, Robin J.; Scholten, Travis L.

Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
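
For reference, the classical result whose failure motivates the paper: under Wilks' theorem the loglikelihood ratio between nested models is asymptotically chi-squared, with degrees of freedom equal to the difference in parameter counts (standard statement; the paper derives a replacement valid under the positivity constraint):

```latex
\lambda(D) = 2\Bigl[\log L(\hat\theta_{\mathrm{full}})
                  - \log L(\hat\theta_{\mathrm{null}})\Bigr]
\ \xrightarrow{\ d\ }\ \chi^2_k,
\qquad
k = \dim(\Theta_{\mathrm{full}}) - \dim(\Theta_{\mathrm{null}}).
```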


Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin S.; Sargsyan, Khachik S.; Geraci, Gianluca G.; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem M.; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
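
Variance-based (Sobol) indices are a common way to carry out such a global sensitivity analysis; the generic definitions are shown below, though the paper's exact sensitivity measures may differ:

```latex
% First-order and total sensitivity of input x_i for output Y:
S_i = \frac{\mathrm{Var}_{x_i}\!\bigl(\mathbb{E}[\,Y \mid x_i\,]\bigr)}{\mathrm{Var}(Y)},
\qquad
S_i^{\mathrm{tot}} = 1 -
  \frac{\mathrm{Var}_{x_{\sim i}}\!\bigl(\mathbb{E}[\,Y \mid x_{\sim i}\,]\bigr)}{\mathrm{Var}(Y)}.
```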


A meshless Galerkin method for non-local diffusion using localized kernel bases

Mathematics of Computation

Lehoucq, Richard B.; Rowe, Stephen R.; Narcowich, Fran J.; Ward, Joe D.

Here, we introduce a meshless method for solving both continuous and discrete variational formulations of a volume constrained, nonlocal diffusion problem. We use the discrete solution to approximate the continuous solution. Our method is nonconforming and uses a localized Lagrange basis that is constructed out of radial basis functions. By verifying that certain inf-sup conditions hold, we demonstrate that both the continuous and discrete problems are well-posed, and also present numerical and theoretical results for the convergence behavior of the method. The stiffness matrix is assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, symmetric matrix. This then is used to find the discretized solution.
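
The volume-constrained nonlocal diffusion problem referenced above typically takes the following form, with kernel γ and interaction radius δ (standard notation assumed here, not reproduced from the paper):

```latex
\mathcal{L}u(x) = 2\int_{B_\delta(x)} \bigl(u(y) - u(x)\bigr)\,\gamma(x,y)\,dy,
\qquad
-\mathcal{L}u = f \ \text{in } \Omega,
\quad u = g \ \text{on the interaction volume } \Omega_{\mathcal{I}}.
```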



Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

Computational Mechanics

Alleman, Coleman A.; Laros, James H.; Mota, Alejandro M.; Lim, Hojun L.; Littlewood, David J.

The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
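
A toy version of the Schwarz alternating coupling can be written down for a 1-D Poisson problem with two overlapping subdomains: each subdomain is solved with a Dirichlet value interpolated from the other's current solution, and the sweep is repeated to convergence. Everything below (grid, forcing, overlap) is an illustrative assumption; the paper applies the same alternating structure to heterogeneous solid-mechanics models.

```python
import numpy as np

def solve_poisson(x, f, u_left, u_right):
    """Finite-difference solve of -u'' = f on grid x with Dirichlet ends."""
    h = x[1] - x[0]
    n = len(x) - 2                      # interior unknowns
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = f(x[1:-1]).copy()
    b[0] += u_left / h**2               # fold boundary values into the RHS
    b[-1] += u_right / h**2
    u = np.empty(len(x))
    u[0], u[-1] = u_left, u_right
    u[1:-1] = np.linalg.solve(A, b)
    return u

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
x1 = np.linspace(0.0, 0.6, 61)               # subdomain 1
x2 = np.linspace(0.4, 1.0, 61)               # subdomain 2 (overlap on [0.4, 0.6])
u1, u2 = np.zeros_like(x1), np.zeros_like(x2)

for _ in range(20):                          # Schwarz alternating sweeps
    u1 = solve_poisson(x1, f, 0.0, np.interp(0.6, x2, u2))
    u2 = solve_poisson(x2, f, np.interp(0.4, x1, u1), 0.0)

print("error at x=0.5:", abs(np.interp(0.5, x1, u1) - np.sin(0.5 * np.pi)))
```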


A Future with Quantum Machine Learning

Computer

DeBenedictis, Erik

Could combining quantum computing and machine learning with Moore's law produce a true 'rebooted computer'? This article posits that a three-technology hybrid-computing approach might yield sufficiently improved answers to a broad class of problems such that energy efficiency will no longer be the dominant concern.


PIMS: Memristor-Based Processing-in-Memory-and-Storage

Cook, Jeanine C.

Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about a O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.


Geometric Hitting Set for Segments of Few Orientations

Theory of Computing Systems

Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S.B.; Parekh, Ojas D.; Phillips, Cynthia A.

We study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the “hitting points”). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. We give hardness and hardness of approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
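
For intuition, a generic greedy heuristic for hitting set (pick the point hitting the most unhit segments) is sketched below; it gives only a logarithmic approximation factor, whereas the paper exploits the few-orientations structure for stronger guarantees. The candidate point set and segment encoding are assumptions for illustration.

```python
def greedy_hitting_set(segments, candidates, hits):
    """Greedy O(log n)-approximate hitting set.

    segments: iterable of segment ids; candidates: candidate points;
    hits(p, s): True when point p lies on segment s.
    """
    unhit, chosen = set(segments), []
    while unhit:
        best = max(candidates, key=lambda p: sum(hits(p, s) for s in unhit))
        chosen.append(best)
        unhit = {s for s in unhit if not hits(best, s)}
    return chosen

# Hypothetical instance with axis-parallel segments: ('h', y, x_lo, x_hi)
# is horizontal, ('v', x, y_lo, y_hi) is vertical.
segs = {0: ('h', 1.0, 0.0, 2.0), 1: ('h', 2.0, 1.0, 3.0), 2: ('v', 1.5, 0.5, 2.5)}

def hits(p, s):
    axis, fixed, lo, hi = segs[s]
    x, y = p
    return (y == fixed and lo <= x <= hi) if axis == 'h' else (x == fixed and lo <= y <= hi)

points = [(1.5, 1.0), (1.5, 2.0), (0.5, 1.0), (2.5, 2.0)]
print(greedy_hitting_set(list(segs), points, hits))  # two points suffice
```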
