Publications

Results 5751–5775 of 9,998

A hybrid approach for parallel transistor-level full-chip circuit simulation

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Thornquist, Heidi K.; Rajamanickam, Sivasankaran

The computer-aided design (CAD) applications that are fundamental to the electronic design automation industry must harness the available hardware resources to perform full-chip simulation for modern technology nodes (45 nm and below). We present a hybrid (MPI+threads) approach for parallel transistor-level transient circuit simulation that achieves scalable performance for some challenging large-scale integrated circuits. This approach focuses on the computationally expensive part of the simulator: the linear system solve. Hybrid versions of two iterative linear solver strategies are presented: one takes advantage of block triangular form structure, while the other uses a Schur complement technique. Results indicate up to a 27x improvement in total simulation time on 256 cores.
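
As a rough illustration of the Schur complement strategy named above, here is a minimal NumPy sketch of a block solve; the names and block structure are illustrative and do not reflect the paper's implementation.

```python
import numpy as np

def schur_solve(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] @ [x; y] = [f; g] via the Schur complement of A."""
    A_inv_B = np.linalg.solve(A, B)          # A^{-1} B
    A_inv_f = np.linalg.solve(A, f)          # A^{-1} f
    S = D - C @ A_inv_B                      # Schur complement S = D - C A^{-1} B
    y = np.linalg.solve(S, g - C @ A_inv_f)  # reduced solve on the coupling block
    x = A_inv_f - A_inv_B @ y                # back-substitute for the interior
    return x, y

rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
D = rng.standard_normal((m, m)) + m * np.eye(m)
B, C = rng.standard_normal((n, m)), rng.standard_normal((m, n))
f, g = rng.standard_normal(n), rng.standard_normal(m)

x, y = schur_solve(A, B, C, D, f, g)
K = np.block([[A, B], [C, D]])
assert np.allclose(K @ np.concatenate([x, y]), np.concatenate([f, g]))
```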

Preserving Lagrangian structure in nonlinear model reduction with application to structural dynamics

SIAM Journal on Scientific Computing

Carlberg, Kevin; Tuminaro, Raymond S.; Boggs, Paul

This work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's "Lagrangian ingredients" (the Riemannian metric, the potential-energy function, the dissipation function, and the external force) and subsequently derives reduced-order equations of motion by applying the (forced) Euler-Lagrange equation with these quantities. From the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.
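
For context, the (forced) Euler-Lagrange equation referenced above takes the following standard form; the notation is ours, and the four "Lagrangian ingredients" appear explicitly:

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\right)
  - \frac{\partial L}{\partial q}
  + \frac{\partial \mathcal{F}}{\partial \dot q} = f_{\mathrm{ext}}(t),
\qquad
L(q,\dot q) = \tfrac{1}{2}\,\dot q^{\top} M(q)\,\dot q - V(q)
```

Here M(q) is the Riemannian metric (mass matrix), V the potential-energy function, \mathcal{F} the Rayleigh dissipation function, and f_ext the external force; the method approximates exactly these four quantities in reduced form.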

Generalized hypergraph matching via iterated packing and local ratio

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Parekh, Ojas D.; Pritchard, David

In k-hypergraph matching, we are given a collection of sets of size at most k, each with an associated weight, and we seek a maximum-weight subcollection whose sets are pairwise disjoint. More generally, in k-hypergraph b-matching, instead of disjointness we require that every element appears in at most b sets of the subcollection. Our main result is a linear-programming-based (k - 1 + 1/k)-approximation algorithm for k-hypergraph b-matching. This settles the integrality gap when k is one more than a prime power, since it matches a previously known lower bound. When the hypergraph is bipartite, we are able to improve the approximation ratio to k - 1, which is also best possible relative to the natural LP. These results are obtained using a more careful application of the iterated packing method. Using the bipartite algorithmic integrality gap upper bound, we show that for the family of combinatorial auctions in which anyone can win at most t items, there is a truthful-in-expectation polynomial-time auction that t-approximately maximizes social welfare. We also show that our results directly imply new approximations for a generalization of the recently introduced bounded-color matching problem. We also consider the generalization of b-matching to demand matching, where edges have nonuniform demand values. The best known approximation algorithm for this problem has ratio 2k on k-hypergraphs. We give a new algorithm, based on local ratio, that obtains the same approximation ratio in a much simpler way.
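
For readers unfamiliar with the "natural LP" mentioned above, a standard relaxation of k-hypergraph b-matching over a set family S reads as follows (our notation, not necessarily the paper's exact formulation):

```latex
\max \sum_{S \in \mathcal{S}} w_S\, x_S
\quad \text{s.t.} \quad
\sum_{S \ni e} x_S \le b \;\; \forall\, e,
\qquad
0 \le x_S \le 1 \;\; \forall\, S \in \mathcal{S}
```

An integral solution picks each set at most once while covering every element at most b times; the integrality gap compares the best fractional and integral objective values.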

Inverse problems in heterogeneous and fractured media using peridynamics

Journal of Mechanics of Materials and Structures

Turner, D.Z.; Van Bloemen Waanders, Bart; Parks, Michael L.

This work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. This type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
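
The self-adjointness property cited above can be stated compactly as follows, with L denoting the state-based peridynamic operator and the angle brackets the inner product over the body (notation ours, not the paper's):

```latex
\langle \mathcal{L}u,\, v \rangle = \langle u,\, \mathcal{L}v \rangle
\qquad \text{for all admissible fields } u, v
```

This symmetry is what allows the adjoint equations to reuse the forward operator, which keeps the gradient computation in the inverse problem inexpensive.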

Variable horizon in a peridynamic medium

Journal of Mechanics of Materials and Structures

Silling, Stewart; Littlewood, David J.; Seleson, Pablo

A notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.

Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

Journal of Aerospace Information Systems

Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael

In this paper, a series of algorithms is proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
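
A minimal sketch of the nested aleatory-epistemic sampling idea follows; the model, distributions, and sample counts are stand-ins, not the challenge-problem specifics.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(theta_e, theta_a):
    # Stand-in for the actual challenge-problem model.
    return np.sin(theta_e) + 0.1 * theta_a

n_outer, n_inner = 100, 1000
stats = []
for _ in range(n_outer):                          # outer loop: epistemic draws
    theta_e = rng.uniform(0.0, np.pi)
    qoi = np.array([model(theta_e, rng.standard_normal())
                    for _ in range(n_inner)])     # inner loop: aleatory draws
    stats.append((qoi.mean(), qoi.std()))

# Epistemic uncertainty appears as the spread of the inner-loop statistics.
means = np.array([m for m, _ in stats])
print(f"QoI mean ranges over [{means.min():.3f}, {means.max():.3f}]")
```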

Hybrid sparse linear solutions with substituted factorization

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Booth, Joshua D.; Raghavan, Padma

We develop a computationally less expensive alternative to the direct solution of a large sparse symmetric positive definite system arising from the numerical solution of elliptic partial differential equation models. Our method, substituted factorization, replaces the computationally expensive factorization of certain dense submatrices that arise during sparse Cholesky direct solution with one or more triangular-system solves using substitution. These substitutions fit into the tree structure commonly used by parallel sparse Cholesky factorization, and reduce the initial factorization cost at the expense of a slight increase in the cost of solving for a right-hand-side vector. Our analysis shows that substituted factorization reduces the number of floating-point operations for the model k × k 5-point finite-difference problem by 10%, and empirical tests show an average execution-time reduction of 24.4%. On a test suite of three-dimensional problems we observe execution-time reductions as high as 51.7%, and 43.1% on average.

A Signal Processing Approach for Cyber Data Classification with Deep Neural Networks

Procedia Computer Science

James, Conrad D.; Aimone, James B.

Recent cyber security events have demonstrated the need for algorithms that adapt to the rapidly evolving threat landscape of complex network systems. In particular, human analysts often fail to identify data exfiltration when it is encrypted or disguised as innocuous data. Signature-based approaches for identifying data types are easily fooled and analysts can only investigate a small fraction of network events. However, neural networks can learn to identify subtle patterns in a suitably chosen input space. To this end, we have developed a signal processing approach for classifying data files that readily adapts to new data formats. We evaluate the performance for three input spaces consisting of the power spectral density, byte probability distribution, and sliding-window entropy of the byte sequence in a file. By combining all three, we trained a deep neural network to discriminate among nine common data types found on the Internet with 97.4% accuracy.
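
The three input spaces can be sketched in a few lines of NumPy; the window size and normalization choices below are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def byte_features(data: bytes, window: int = 256):
    x = np.frombuffer(data, dtype=np.uint8).astype(float)

    # 1) Power spectral density of the byte sequence (magnitude-squared FFT).
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2

    # 2) Byte probability distribution: normalized 256-bin histogram.
    probs = np.bincount(x.astype(int), minlength=256) / len(x)

    # 3) Sliding-window Shannon entropy over consecutive byte windows.
    ent = []
    for s in range(0, len(x) - window + 1, window):
        p = np.bincount(x[s:s + window].astype(int), minlength=256)
        p = p[p > 0] / window
        ent.append(-np.sum(p * np.log2(p)))
    return psd, probs, np.array(ent)

psd, probs, ent = byte_features(np.random.default_rng(0).bytes(4096))
```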

Through a scanner quickly: Elicitation of P3 in transportation security officers following rapid image presentation and categorization

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Trumbo, Michael C.S.; Matzen, Laura E.; Silva, Austin R.; Haass, Michael J.; Divis, Kristin M.; Speed, Ann E.

Numerous domains, ranging from medical diagnostics to intelligence analysis, involve visual search tasks in which people must find and identify specific items within large sets of imagery. These tasks rely heavily on human judgment, making fully automated systems infeasible in many cases. Researchers have investigated methods for combining human judgment with computational processing to increase the speed at which humans can triage large image sets. One such method is rapid serial visual presentation (RSVP), in which images are presented in rapid succession to a human viewer. While viewing the images and looking for targets of interest, the participant’s brain activity is recorded using electroencephalography (EEG). The EEG signals can be time-locked to the presentation of each image, producing event-related potentials (ERPs) that provide information about the brain’s response to those stimuli. The participants’ judgments about whether or not each set of images contained a target and the ERPs elicited by target and non-target images are used to identify subsets of images that merit close expert scrutiny [1]. Although the RSVP/EEG paradigm holds promise for helping professional visual searchers to triage imagery rapidly, it may be limited by the nature of the target items. Targets that do not vary a great deal in appearance are likely to elicit usable ERPs, but more variable targets may not. In the present study, we sought to extend the RSVP/EEG paradigm to the domain of aviation security screening, and in doing so to explore the limitations of the technique for different types of targets. Professional Transportation Security Officers (TSOs) viewed bag X-rays that were presented using an RSVP paradigm. The TSOs viewed bursts of images containing 50 segments of bag X-rays that were presented for 100 ms each. Following each burst of images, the TSOs indicated whether or not they thought there was a threat item in any of the images in that set. EEG was recorded during each burst of images and ERPs were calculated by time-locking the EEG signal to the presentation of images containing threats and matched images that were identical except for the presence of the threat item. Half of the threat items had a prototypical appearance and half did not. We found that the bag images containing threat items with a prototypical appearance reliably elicited a P300 ERP component, while those without a prototypical appearance did not. These findings have implications for the application of the RSVP/EEG technique to real-world visual search domains.
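
A toy sketch of the time-locking step that turns continuous EEG into ERPs; the sampling rate, epoch window, and synthetic signal are illustrative assumptions.

```python
import numpy as np

fs = 250                                        # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 60)              # one minute of synthetic EEG
onsets = np.arange(fs, fs * 55, int(0.1 * fs))  # one image every 100 ms
window = int(0.8 * fs)                          # 800 ms post-stimulus epoch

epochs = np.stack([eeg[t:t + window] for t in onsets])
erp = epochs.mean(axis=0)                       # averaged epochs form the ERP
# A P300 would appear as a positive deflection ~300 ms after target onsets.
```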

Exploratory analysis of visual search data

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Stracuzzi, David J.; Speed, Ann E.; Silva, Austin R.; Haass, Michael J.; Trumbo, Derek

Visual search data describe people’s performance on the common perceptual problem of identifying target objects in a complex scene. Technological advances in areas such as eye tracking now provide researchers with a wealth of data not previously available. The goal of this work is to support researchers in analyzing this complex and multimodal data and in developing new insights into visual search techniques. We discuss several methods drawn from the statistics and machine learning literature for integrating visual search data derived from multiple sources and performing exploratory data analysis. We ground our discussion in a specific task performed by officers at the Transportation Security Administration and consider the applicability, likely issues, and possible adaptations of several candidate analysis methods.

Design methodology for optimizing optical interconnection networks in high performance systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Rumley, Sebastien; Glick, Madeleine; Hammond, Simon; Rodrigues, Arun; Bergman, Keren

Modern high performance computers connect hundreds of thousands of endpoints and employ thousands of switches. This allows for a great deal of freedom in the design of the network topology. At the same time, due to the sheer numbers and complexity involved, it becomes more challenging to easily distinguish between promising and improper designs. With ever increasing line rates and advances in optical interconnects, there is a need for renewed design methodologies that comprehensively capture the requirements and expose tradeoffs expeditiously in this complex design space. We introduce a systematic approach, based on Generalized Moore Graphs, allowing one to quickly gauge the ideal level of connectivity required for a given number of endpoints and traffic hypothesis, and to collect insight into the role of the switch radix in the topology cost. Based on this approach, we present a methodology for the identification of Pareto-optimal topologies. We apply our method to a practical case with 25,000 nodes and present the results.
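
The Moore bound that underlies Generalized Moore Graphs is simple to compute; the sketch below (ours, not the paper's methodology) shows why the 25,000-node design point forces a network diameter of at least three with radix-64 switches.

```python
def moore_bound(d: int, k: int) -> int:
    """Upper bound on node count for maximum degree d and diameter k."""
    return 1 + d * sum((d - 1) ** i for i in range(k))

print(moore_bound(64, 2))   # 4097: diameter 2 cannot reach 25,000 nodes
print(moore_bound(64, 3))   # 258113: diameter 3 suffices with ample headroom
```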

A quantitative methodology for identifying attributes which contribute to performance for officers at the transportation security administration

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Avina, Glory E.; Kittinger, Robert; Speed, Ann E.

Performance at Transportation Security Administration (TSA) airport checkpoints must be consistently high to skillfully mitigate national security threats and incidents. To accomplish this, Transportation Security Officers (TSOs) must perform exceptionally in threat detection, interaction with passengers, and efficiency. It is difficult to measure the human attributes that contribute to high-performing TSOs because attributes such as memory, personality, and competence are inherently latent variables. Cognitive scientists at Sandia National Laboratories have developed a methodology that links TSOs’ cognitive ability to their performance. This paper discusses how the methodology was developed using a strict quantitative process, its strengths and weaknesses, and how it could be generalized to other non-TSA contexts. The scope of this project is to identify attributes that distinguish high and low TSO performance for duties at the checkpoint that involve direct interaction with people going through the checkpoint.

Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

SIAM Journal on Scientific Computing

Jakeman, John D.; Chen, Yi; Gittelson, Claude; Xiu, Dongbin

In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In this paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
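
As background, a global polynomial chaos expansion and the total-degree basis count that drives its cost are shown below (standard material, our notation); the local method keeps the per-subdomain stochastic dimension d small.

```latex
u(x,\xi) \approx \sum_{i=0}^{P} c_i(x)\,\Psi_i(\xi),
\qquad
P + 1 = \binom{d + p}{p}
```

For dimension d = 100 and order p = 3 the global basis already has 176,851 terms, while a subdomain seeing only d = 5 of those inputs needs just 56.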

Toward an Objective Measure of Automation for the Electric Grid

Procedia Manufacturing

Haass, Michael J.; Warrender, Christina E.; Burnham, Laurie; Jeffers, Robert; Adams, Susan S.; Cole, Kerstan; Forsythe, James C.

The impact of automation on human performance has been studied by human factors researchers for over 35 years. One unresolved facet of this research is measurement of the level of automation across and within engineered systems. Repeatable methods of observing, measuring, and documenting the level of automation are critical to the creation and validation of generalized theories of automation's impact on the reliability and resilience of human-in-the-loop systems. Numerous qualitative scales for measuring automation have been proposed. However, these methods require subjective assessments based on the researcher's knowledge and experience, or expert knowledge elicitation involving highly experienced individuals from each work domain. More recently, quantitative scales have been proposed, but they have yet to be widely adopted, likely due to the difficulty associated with obtaining a sufficient number of empirical measurements from each system component. Our research suggests the need for a quantitative method that enables rapid measurement of a system's level of automation, is applicable across domains, and can be used by human factors practitioners in field studies or by system engineers as part of their technical planning processes. In this paper we present our research methodology and early research results from studies of electricity grid distribution control rooms. Using a system analysis approach based on quantitative measures of the level of automation, we provide an illustrative analysis of select grid modernization efforts. This measure of the level of automation can be displayed as a static, historical view of the system's automation dynamics (the dynamic interplay between human and automation required to maintain system performance) or incorporated into real-time visualization systems already present in control rooms.

FEMA asteroid impact tabletop exercise simulations

Procedia Engineering

Boslough, Mark; Jennings, Barbara J.; Carvey, Bradley J.; Fogleman, William E.

We describe the computational simulations and damage assessments that we provided in support of a tabletop exercise (TTX) at the request of NASA's Near-Earth Objects Program Office. The overall purpose of the exercise was to assess leadership reactions, information requirements, and emergency management responses to a hypothetical asteroid impact with Earth. The scripted exercise consisted of the discovery, tracking, and characterization of a hypothetical asteroid, inclusive of mission planning, mitigation, response, impacts to population, infrastructure, and GDP, and explicit quantification of uncertainty. Participants at the meeting included representatives of NASA, the Department of Defense, the Department of State, the Department of Homeland Security/Federal Emergency Management Agency (FEMA), and the White House. The exercise took place at FEMA headquarters. Sandia's role was to assist the Jet Propulsion Laboratory (JPL) in developing the impact scenario, to predict the physical effects of the impact, and to forecast the infrastructure and economic losses. We ran simulations using Sandia's CTH hydrocode to estimate physical effects on the ground, and to produce contour maps indicating damage assessments that could be used as input for the infrastructure and economic models. We used the FASTMap tool to provide estimates of infrastructure damage over the affected area, and the REAcct tool to estimate the potential economic severity expressed as changes to GDP (by nation, region, or sector) due to damage and short-term business interruptions.

On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets

Procedia Computer Science

Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; Salinger, Andrew G.; Price, Stephen F.

We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. A weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. Here, we show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
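
For readers unfamiliar with ILU preconditioning, the toy SciPy scaffold below illustrates the concept on a 1D Laplacian; the paper itself uses Trilinos-based solvers on ice sheet problems, so this is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                  # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)   # preconditioner action
x, info = spla.gmres(A, b, M=M)                     # info == 0 on convergence
assert info == 0
```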

MapReduce SVM game

Procedia Computer Science

Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.

Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and requires problem decompositions with less stringent data-passing constraints. Instead, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
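
As a generic illustration of MapReduce-style decomposition for SVM training, the sketch below pools support vectors cascade-style; it is not the paper's game-theoretic SVM Game, only an example of the partition-then-recombine pattern.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, random_state=0)
parts = np.array_split(np.arange(len(X)), 4)        # partition the data

# "Map": train one local SVM per partition, independently.
local = [SVC(kernel="linear").fit(X[idx], y[idx]) for idx in parts]

# "Reduce": pool every partition's support vectors and refit once.
sv = np.concatenate([idx[m.support_] for m, idx in zip(local, parts)])
final = SVC(kernel="linear").fit(X[sv], y[sv])
print(f"refit on {len(sv)} pooled support vectors; "
      f"training accuracy {final.score(X, y):.2f}")
```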

Canaries in a coal mine: Using application-level checkpoints to detect memory failures

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Widener, Patrick; Ferreira, Kurt; Levy, Scott L.N.; Fabian, Nathan

Memory failures in future extreme scale applications are a significant concern in the high-performance computing community and have attracted much research attention. We contend in this paper that using application checkpoint data to detect memory failures has potential benefits and is preferable to examining application memory. To support this contention, we describe the application of machine learning techniques to evaluate the veracity of checkpoint data. Our preliminary results indicate that supervised decision tree machine learning approaches can effectively detect corruption in restart files, suggesting that future extreme-scale applications and systems may benefit from incorporating such approaches in order to cope with memory failures.
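
A hedged sketch of the supervised decision tree idea on synthetic checkpoint features follows; the paper's actual features and labeling are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
clean = rng.standard_normal((500, 8))        # stand-in checkpoint features
noise = rng.choice([0.0, 4.0], (250, 8), p=[0.9, 0.1])  # bit-flip-like spikes
corrupt = clean[:250] + noise
X = np.vstack([clean, corrupt])
y = np.array([0] * 500 + [1] * 250)          # 1 = corrupted restart file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```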

Wave speed propagation measurements on highly attenuative heated materials

Physics Procedia

Moore, David G.; Ober, Curtis C.; Rodacy, Philip J.; Nelson, Ciji

Ultrasonic wave propagation speed decreases as a material is heated. Two factors that can characterize material properties are changes in wave speed and energy loss from interactions within the media. Relatively small variations in velocity and attenuation can reveal significant differences in microstructure. This paper presents an overview of experimental techniques that document the changes within a highly attenuative material as it is heated or cooled between 25°C and 90°C. The experimental set-up utilizes ultrasonic probes in a through-transmission configuration. The waveforms are recorded and analyzed during thermal experiments. To complement the ultrasonic data, a Discontinuous-Galerkin Model (DGM) was also created, which uses unstructured meshes and documents how waves travel in these anisotropic media. This numerical method solves the partial differential equations governing particle motion and outputs a wave trace per unit time. Both experimental and analytical data are compared and presented.
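
At its core the through-transmission measurement reduces to a time-of-flight calculation; the numbers below are illustrative, not data from the paper.

```python
thickness_m = 0.025              # assumed specimen thickness between probes
t_25C, t_90C = 9.6e-6, 10.4e-6   # assumed arrival times at 25°C and 90°C (s)

v_25 = thickness_m / t_25C       # wave speed = path length / time of flight
v_90 = thickness_m / t_90C
print(f"25°C: {v_25:.0f} m/s   90°C: {v_90:.0f} m/s "
      f"({100 * (v_25 - v_90) / v_25:.1f}% slower when heated)")
```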

Formal metrics for large-scale parallel performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Moreland, Kenneth D.; Oldfield, Ron

Performance measurement of parallel algorithms is well studied and well understood. However, a flaw in traditional performance metrics is that they rely on comparisons to serial performance with the same input. This comparison is convenient for theoretical complexity analysis but impossible to perform in large-scale empirical studies with data sizes far too large to run on a single serial computer. Consequently, scaling studies currently rely on ad hoc methods that, although effective, have no grounded mathematical models. In this position paper we advocate using a rate-based model that has a concrete meaning relative to speedup and efficiency and that can be used to unify strong and weak scaling studies.
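
One way to make the rate-based view concrete is shown below with made-up timings; the point is that a single throughput definition serves both strong and weak scaling studies without requiring a serial baseline.

```python
def rate(data_size, seconds):
    """Throughput: work processed per unit time (no serial run required)."""
    return data_size / seconds

r1 = rate(1e9, 100.0)   # 1e9 cells in 100 s on p cores
# Strong scaling: same problem on 2p cores; ideal rate doubles.
r2 = rate(1e9, 55.0)
print(f"strong-scaling efficiency: {r2 / (2 * r1):.2f}")   # 0.91
# Weak scaling: doubled problem on 2p cores; ideal rate also doubles.
r3 = rate(2e9, 105.0)
print(f"weak-scaling efficiency: {r3 / (2 * r1):.2f}")     # 0.95
```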

Visual search in operational environments: Balancing operational constraints with experimental control

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Speed, Ann E.

Visual search has been an active area of research – empirically and theoretically – for a number of decades; however, much of that work is based on novice searchers performing basic tasks in a laboratory. This paper summarizes some of the issues associated with quantifying expert, domain-specific visual search behavior in operationally realistic environments.

Towards task-parallel reductions in OpenMP

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Ciesko, Jan; Mateo, Sergi; Teruel, Xavier; Martorell, Xavier; Ayguade, Eduard; Labarta, Jesus; Duran, Alex; De Supinski, Bronis R.; Olivier, Stephen L.; Li, Kelvin; Eichenberger, Alexandre E.

Reductions represent a common algorithmic pattern in many scientific applications. OpenMP has always supported them on parallel and worksharing constructs. OpenMP 3.0’s tasking constructs enable new parallelization opportunities through the annotation of irregular algorithms. Unfortunately, the tasking model does not easily allow the expression of concurrent reductions, which limits the general applicability of the programming model to such algorithms. In this work, we present an extension to OpenMP that supports task-parallel reductions on task and taskgroup constructs to improve productivity and programmability. We present a specification of the feature and explore issues for programmers and software vendors regarding programming transparency as well as the impact on the current standard with respect to nesting, untied task support, and task data dependencies. Our performance evaluation demonstrates results comparable to hand-coded task reductions.

Situation Awareness and Automation in the Electric Grid Control Room

Procedia Manufacturing

Adams, Susan S.; Cole, Kerstan; Haass, Michael J.; Warrender, Christina E.; Jeffers, Robert; Burnham, Laurie; Forsythe, James C.

Electric distribution utilities, the companies that feed electricity to end users, are overseeing a technological transformation of their networks, installing sensors and other automated equipment that are fundamentally changing the way the grid operates. These grid modernization efforts will allow utilities to incorporate some of the newer technology available to the home user – such as solar panels and electric cars – which will result in a bi-directional flow of energy and information. How will this new flow of information affect control room operations? How will the increased automation associated with smart grid technologies influence control room operators’ decisions? And how will changes in control room operations and operator decision making impact grid resilience? These questions have not been thoroughly studied, despite the enormous changes that are taking place. In this study, which involved collaborating with utility companies in the state of Vermont, the authors proposed to advance the science of control-room decision making by understanding the impact of distribution grid modernization on operator performance. Distribution control room operators were interviewed to understand daily tasks and decisions and to gain an understanding of how these impending changes will impact control room operations. Situation awareness was found to be a major contributor to successful control room operations. However, the impact of growing levels of automation due to smart grid technology on operators’ situation awareness is not well understood. Future work includes performing a naturalistic field study in which operator situation awareness will be measured in real-time during normal operations and correlated with the technological changes that are underway. The results of this future study will inform tools and strategies that will help system operators adapt to a changing grid, respond to critical incidents and maintain critical performance skills.

Mechanical properties of zirconium alloys and zirconium hydrides predicted from density functional perturbation theory

Dalton Transactions

Weck, Philippe F.; Kim, Eunja; Tikare, Veena; Mitchell, John A.

The elastic properties and mechanical stability of zirconium alloys and zirconium hydrides have been investigated within the framework of density functional perturbation theory. Results show that the lowest-energy cubic Pn3m polymorph of δ-ZrH1.5 does not satisfy all the Born requirements for mechanical stability, unlike its nearly degenerate tetragonal P42/mcm polymorph. Elastic moduli predicted with the Voigt-Reuss-Hill approximations suggest that the mechanical stability of α-Zr, Zr-alloy and Zr-hydride polycrystalline aggregates is limited by the shear modulus. According to both Pugh's and Poisson's ratios, α-Zr, Zr-alloy and Zr-hydride polycrystalline aggregates can be considered ductile. The Debye temperatures predicted for γ-ZrH, δ-ZrH1.5 and ε-ZrH2 are ΘD = 299.7, 415.6 and 356.9 K, respectively, while ΘD = 273.6, 284.2, 264.1 and 257.1 K for the α-Zr, Zry-4, ZIRLO and M5 matrices, suggesting that Zry-4 possesses the highest micro-hardness among the Zr matrices.
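
For reference, the ductility criteria invoked above are standard textbook thresholds rather than results from the paper:

```latex
\frac{B}{G} > 1.75 \;\; \text{(Pugh's ratio, ductile)},
\qquad
\nu > 0.26 \;\; \text{(Poisson's ratio, ductile)}
```

with B the bulk modulus, G the shear modulus, and ν Poisson's ratio from the Voigt-Reuss-Hill averages.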
