Publications

Ductile failure X-prize

Boyce, Brad B.; Foulk, James W.; Littlewood, David J.; Mota, Alejandro M.; Ostien, Jakob O.; Silling, Stewart A.; Spencer, Benjamin S.; Wellman, Gerald W.; Bishop, Joseph E.; Brown, Arthur B.; Córdova, Theresa E.; Cox, James C.; Crenshaw, Thomas B.; Dion, Kristin D.; Emery, John M.

Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities, including tearing parameters embedded in a conventional finite element code, localization elements, the extended finite element method (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. Once reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.


Keeping checkpoint/restart viable for exascale systems

Ferreira, Kurt; Oldfield, Ron A.; Stearley, Jon S.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Next-generation exascale systems, those capable of performing a quintillion (10^18) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, such as checkpoint/restart, the dominant fault-tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
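
The hash-based incremental scheme lends itself to a compact illustration. The sketch below shows the general idea only, on the CPU rather than a GPU, with an arbitrary block size and SHA-1 standing in for whatever hash the study used: blocks whose hashes match the previous checkpoint are skipped, which is also what makes the scheme probabilistic, since a hash collision could silently skip a changed block.

```python
import hashlib

def incremental_checkpoint(memory: bytes, prev_hashes: list, block_size: int = 4096):
    """Divide memory into blocks and emit only the blocks whose hash changed
    since the last checkpoint (hash-based incremental checkpointing).

    Returns (dirty_blocks, new_hashes): dirty_blocks maps block index -> data
    that must be committed; new_hashes is kept for the next round."""
    dirty, new_hashes = {}, []
    for i in range(0, len(memory), block_size):
        block = memory[i:i + block_size]
        digest = hashlib.sha1(block).digest()
        new_hashes.append(digest)
        idx = i // block_size
        if idx >= len(prev_hashes) or prev_hashes[idx] != digest:
            dirty[idx] = block      # changed (or new) block: must be written
    return dirty, new_hashes

# First checkpoint commits everything; later ones commit only changed blocks.
hashes: list = []
dirty, hashes = incremental_checkpoint(bytes(16384), hashes)   # 4 dirty blocks
dirty, hashes = incremental_checkpoint(bytes(16384), hashes)   # 0 dirty blocks
```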


Investigation of type-I interferon dysregulation by arenaviruses : a multidisciplinary approach

Branda, Catherine B.; James, Conrad D.; Kozina, Carol L.; Manginell, Ronald P.; Misra, Milind; Moorman, Matthew W.; Negrete, Oscar N.; Ricken, James B.; Wu, Meiye W.

This report provides a detailed overview of the work performed for project number 130781, 'A Systems Biology Approach to Understanding Viral Hemorrhagic Fever Pathogenesis.' We report progress in five key areas: single cell isolation devices and control systems, fluorescent cytokine and transcription factor reporters, on-chip viral infection assays, molecular virology analysis of Arenavirus nucleoprotein structure-function, and development of computational tools to predict virus-host protein interactions. Although a great deal of work remains beyond what was begun here, we have developed several novel single cell analysis tools and gained knowledge of Arenavirus biology that will facilitate and inform future publications and funding proposals.


Tracking topic birth and death in LDA

Wilson, Andrew T.; Robinson, David G.

Most topic modeling algorithms that address the evolution of documents over time use the same number of topics at all times. This obscures the common occurrence in the data where new subjects arise and old ones diminish or disappear entirely. We propose an algorithm to model the birth and death of topics within an LDA-like framework. The user selects an initial number of topics, after which new topics are created and retired without further supervision. Our approach also accommodates many of the acceleration and parallelization schemes developed in recent years for standard LDA.

In recent years, topic modeling algorithms such as latent semantic analysis (LSA) [17], latent Dirichlet allocation (LDA) [10], and their descendants have offered a powerful way to explore and interrogate corpora far too large for any human to grasp without assistance. Using such algorithms we are able to search for similar documents, model and track the volume of topics over time, search for correlated topics, or model them with a hierarchy. Most of these algorithms are intended for use with static corpora where the number of documents and the size of the vocabulary are known in advance. Moreover, almost all current topic modeling algorithms fix the number of topics as one of the input parameters and keep it fixed across the entire corpus. While this is appropriate for static corpora, it becomes a serious handicap when analyzing time-varying data sets where topics come and go as a matter of course. This is doubly true for online algorithms that may not have the option of revising earlier results in light of new data. To be sure, these algorithms will account for changing data one way or another, but without the ability to adapt to structural changes such as entirely new topics they may do so in counterintuitive ways.
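
The report describes its algorithm only at a high level. As a concrete illustration of what topic "birth" and "death" mean operationally, the hypothetical sketch below matches topic-word distributions across adjacent time slices by cosine similarity, declaring unmatched new topics born and unmatched old topics dead. The matching rule and threshold are our own illustrative choices, not the paper's method.

```python
import numpy as np

def track_topics(old_topics, new_topics, threshold=0.5):
    """Match topics across two time slices by cosine similarity of their
    topic-word distributions. old_topics: (n_old, vocab), new_topics:
    (n_new, vocab). Unmatched new topics are 'born'; old topics that no
    new topic maps to have 'died'."""
    old = old_topics / np.linalg.norm(old_topics, axis=1, keepdims=True)
    new = new_topics / np.linalg.norm(new_topics, axis=1, keepdims=True)
    sim = new @ old.T                      # (n_new, n_old) cosine similarities
    best = sim.max(axis=1)
    born = [i for i in range(len(new)) if best[i] < threshold]
    survived = {i: int(sim[i].argmax())    # new index -> matched old index
                for i in range(len(new)) if best[i] >= threshold}
    died = sorted(set(range(len(old))) - set(survived.values()))
    return born, died, survived
```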


Peridigm summary report : lessons learned in development with agile components

Parks, Michael L.; Littlewood, David J.; Salinger, Andrew G.; Mitchell, John A.

This report details efforts to deploy Agile Components for rapid development of a peridynamics code, Peridigm. The goal of Agile Components is to enable the efficient development of production-quality software by providing a well-defined, unifying interface to a powerful set of component-based software. Specifically, Agile Components facilitate interoperability among packages within the Trilinos Project, including data management, time integration, uncertainty quantification, and optimization. Development of the Peridigm code served as a testbed for Agile Components and resulted in a number of recommendations for future development. Agile Components successfully enabled rapid integration of Trilinos packages into Peridigm. A cost of this approach, however, was a set of restrictions on Peridigm's architecture that impacted the ability to track history-dependent material data, dynamically modify the model discretization, and interject user-defined routines into the time integration algorithm. These restrictions resulted in modifications to the Agile Components approach, as implemented in Peridigm, and in a set of recommendations for future Agile Components development. Specific recommendations include improved handling of material states, a more flexible flow control model, and improved documentation. A demonstration mini-application, SimpleODE, was developed at the outset of this project and is offered as a potential supplement to Agile Components documentation.


LDRD final report : autotuning for scalable linear algebra

Heroux, Michael A.

This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.


Investigation of advanced UQ for CRUD prediction with VIPRE

Eldred, Michael S.

This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two plant models of differing random dimension, anisotropy, and smoothness.
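
The duality the abstract describes between the two expansions can be written compactly. The notation below is the generic textbook form, with R a response quantity and xi the vector of random inputs, and is not reproduced from the report:

```latex
\text{PCE:}\quad R(\xi) \;\approx\; \sum_{j=0}^{P} \alpha_j\,\Psi_j(\xi)
\qquad\qquad
\text{SC:}\quad R(\xi) \;\approx\; \sum_{k=1}^{N} R(\xi_k)\,L_k(\xi)
```

In PCE the multivariate orthogonal basis polynomials Psi_j are known in advance and the coefficients alpha_j must be computed; in SC the coefficients are known model evaluations R(xi_k) at collocation points and the multivariate interpolation polynomials L_k are constructed through them.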


Final report for LDRD project 11-0029 : high-interest event detection in large-scale multi-modal data sets : proof of concept

Rohrer, Brandon R.

Events of interest to data analysts are sometimes difficult to characterize in detail. Rather, they consist of anomalies, events that are unpredicted, unusual, or otherwise incongruent. The purpose of this LDRD was to test the hypothesis that a biologically-inspired anomaly detection algorithm could be used to detect contextual, multi-modal anomalies. There currently is no other solution to this problem, but the existence of a solution would have a great national security impact. The technical focus of this research was the application of a brain-emulating cognition and control architecture (BECCA) to the problem of anomaly detection. One aspect of BECCA in particular was discovered to be critical to improved anomaly detection capabilities: its feature creator. During the course of this project the feature creator was developed and tested against multiple data types. Development direction was drawn from psychological and neurophysiological measurements. Major technical achievements include hierarchical feature sets created from both audio and imagery data.


Final report for LDRD project 11-0783 : directed robots for increased military manpower effectiveness

Rohrer, Brandon R.; Morrow, James D.; Rothganger, Fredrick R.; Xavier, Patrick G.; Wagner, John S.

The purpose of this LDRD is to develop technology allowing warfighters to provide high-level commands to their unmanned assets, freeing them to command a group of such assets or commit the bulk of their attention elsewhere. To this end, a brain-emulating cognition and control architecture (BECCA) was developed, incorporating novel and uniquely capable feature creation and reinforcement learning algorithms. BECCA was demonstrated on both a mobile manipulator platform and on a seven-degree-of-freedom serial link robot arm. Existing military ground robots are almost universally teleoperated and occupy the complete attention of an operator. They may remove a soldier from harm's way, but they do not necessarily reduce manpower requirements. Current research efforts to solve the problem of autonomous operation in an unstructured, dynamic environment fall short of the desired performance. In order to increase the effectiveness of unmanned vehicle (UV) operators, we proposed to develop robots that can be 'directed' rather than remote-controlled: they are instructed and trained by human operators, rather than driven. The technical approach is modeled closely on psychological and neuroscientific models of human learning. Two Sandia-developed models are utilized in this effort: the Sandia Cognitive Framework (SCF), a cognitive-psychology-based model of human processes, and BECCA, a psychophysics-based model of learning, motor control, and conceptualization. Together, these models span the functional space from perceptuo-motor abilities to high-level motivational and attentional processes.


Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011

Benner, R.E.; Onunkwo, Uzoma O.; Johnson, Joshua A.; Naegle, John H.; Patel, Jay D.; Pearson, David B.; Shelburg, Jeffery S.; Wheeler, Kyle B.; Wright, Brian J.; Zage, David J.

This report documents our first-year efforts to address the use of many-core processors for high-performance cyber protection. As the demands grow for higher bandwidth (beyond 1 Gbit/sec) on network connections, the need for faster and more efficient cyber security solutions grows with them. Fortunately, in recent years the development of many-core network processors has seen increased interest. Prior working experience with many-core processors led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high-performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space, and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.


Control Volume Finite Element Method with Multidimensional Edge Element Scharfetter-Gummel upwinding. Part 2. Computational Study

Peterson, Kara J.; Bochev, Pavel B.

In [3] we proposed a new Control Volume Finite Element Method with multidimensional, edge-based Scharfetter-Gummel upwinding (CVFEM-MDEU). This report follows up with a detailed computational study of the method. The study compares the CVFEM-MDEU method with other CVFEM and FEM formulations for a set of standard scalar advection-diffusion test problems in two dimensions. The first two CVFEM formulations are derived from the CVFEM-MDEU by simplifying the computation of the flux integrals on the sides of the control volumes, the third is the nodal CVFEM [2] without upwinding, and the fourth is the streamline upwind version of CVFEM [10]. The finite element methods in our study are the standard Galerkin, SUPG, and artificial diffusion methods. All studies employ logically Cartesian partitions of the unit square into quadrilateral elements. Both uniform and non-uniform grids are considered. Our results demonstrate that CVFEM-MDEU and its simplified versions perform equally well on rectangular or nearly rectangular grids. However, performance of the simplified versions degrades significantly on non-affine grids, whereas the CVFEM-MDEU remains stable and accurate over a wide range of mesh Peclet numbers and non-affine grids. Compared to the FEM formulations, the CVFEM-MDEU appears to be slightly more dissipative than SUPG, but exhibits far fewer local overshoots and undershoots.
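
For readers outside the advection-diffusion community, the mesh Peclet number referenced above is the standard element-level ratio of advective to diffusive transport. In generic notation (conventions differ by the factor of 2), with velocity v, element size h, and diffusivity D:

```latex
\mathrm{Pe}_h \;=\; \frac{\lVert \mathbf{v} \rVert\, h}{2D}
```

Large Pe_h is precisely the regime where unupwinded schemes develop spurious oscillations, which is why stability across a wide Pe_h range is the headline result here.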


Measuring and tuning energy efficiency on large scale high performance computing platforms

Laros, James H.

Recognition of the importance of power in the field of High Performance Computing, whether as an obstacle, an expense, or a design consideration, has never been greater or more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse measurement methods, such as inserting a power meter between the power source and the platform, or fine-grained measurements using custom-instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large-scale, capability-class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy-efficient performance.


A Model-Based Case for Redundant Computation

Stearley, Jon S.; Robinson, David G.; Ferreira, Kurt

Despite its seemingly nonsensical cost, we show through modeling and simulation that redundant computation merits full consideration as a resilience strategy for next-generation systems. Without revolutionary breakthroughs in failure rates, part counts, or stable-storage bandwidths, it has been shown that the utility of exascale systems will be crushed by the overheads of traditional checkpoint/restart mechanisms. Alternate resilience strategies must be considered, and redundancy is a proven, unrivaled approach in many domains. We develop a distribution-independent model for job interrupts on systems of arbitrary redundancy, adapt Daly's model for total application runtime, and find that his estimate for the optimal checkpoint interval remains valid for redundant systems. We then identify conditions under which redundancy is more cost effective than non-redundancy. These analyses are done in the context of the number-one supercomputers of the last decade, showing that thorough consideration of redundant computation is timely, if not overdue.
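
For context, Daly's widely used first-order estimate of the optimal checkpoint interval, which the authors find carries over to redundant systems, is usually stated as follows (standard notation, not copied from this paper), with delta the checkpoint commit time and M the mean time to interrupt, valid when delta is much smaller than M:

```latex
\tau_{\mathrm{opt}} \;\approx\; \sqrt{2\,\delta M} \;-\; \delta
```

Redundancy enters by stretching the effective M: a replicated job is interrupted only when a replica set fails together, so the same formula applied with the larger effective mean time to interrupt yields much longer intervals between checkpoints.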


Optimizing Tpetra's sparse matrix-matrix multiplication routine

Nusbaum, Kurtis L.

Over the course of the last year, a sparse matrix-matrix multiplication routine has been developed for the Tpetra package. This routine is based on the same algorithm that is used in EpetraExt, with heavy modifications. Since the routine reached a working state, several major optimizations have been made in an effort to speed it up. This report discusses the optimizations made to the routine, its current state, and where future work needs to be done.
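
The report does not reproduce the algorithm itself. For orientation, sparse matrix-matrix kernels of this kind are generally built around the classic serial row-by-row (Gustavson-style) CSR product sketched below; this is illustrative only and is not Tpetra's or EpetraExt's actual parallel implementation.

```python
def spgemm_csr(A_ptr, A_idx, A_val, B_ptr, B_idx, B_val):
    """Row-by-row (Gustavson) sparse product C = A*B, all matrices in CSR form."""
    C_ptr, C_idx, C_val = [0], [], []
    for i in range(len(A_ptr) - 1):
        accum = {}                                    # sparse accumulator for row i of C
        for jj in range(A_ptr[i], A_ptr[i + 1]):
            j, a = A_idx[jj], A_val[jj]
            for kk in range(B_ptr[j], B_ptr[j + 1]):  # scatter a * B[j, :]
                k = B_idx[kk]
                accum[k] = accum.get(k, 0.0) + a * B_val[kk]
        for k in sorted(accum):                       # emit row i in column order
            C_idx.append(k)
            C_val.append(accum[k])
        C_ptr.append(len(C_idx))
    return C_ptr, C_idx, C_val

# 2x2 identity times itself: ptr=[0,1,2], idx=[0,1], val=[1.0,1.0]
print(spgemm_csr([0, 1, 2], [0, 1], [1.0, 1.0], [0, 1, 2], [0, 1], [1.0, 1.0]))
```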


Minimize impact or maximize benefit: The role of objective function in approximately optimizing sensor placement for municipal water distribution networks

World Environmental and Water Resources Congress 2011: Bearing Knowledge for Sustainability - Proceedings of the 2011 World Environmental and Water Resources Congress

Hart, William E.; Murray, Regan; Phillips, Cynthia A.

We consider the design of a sensor network to serve as an early warning system against a potential suite of contamination incidents. Given any measure for evaluating the quality of a sensor placement, there are two ways to model the objective. One is to minimize the impact or damage to the network; the other is to maximize the reduction in impact compared to the network without sensors. These objectives are the same when the problem is solved optimally. But given equally good approximation algorithms for each of these complementary objectives, either one might be the better choice. The choice generally depends upon the quality of the approximation algorithms, the impact when there are no sensors, and the number of sensors available. We examine when each objective is better than the other using multiple real-world networks. When assuming perfect sensors, minimizing impact is frequently superior for virulent contaminants. But when there are long response delays, or it is very difficult to reduce impact, maximizing impact reduction may be better. © 2011 ASCE.
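
A small worked example, with numbers invented purely for illustration, shows why the choice matters. Writing I(S) for the expected impact under sensor placement S and I_0 for the impact with no sensors, the two objectives are

```latex
\min_{S}\; I(S) \qquad \text{versus} \qquad \max_{S}\; \bigl(I_0 - I(S)\bigr)
```

Suppose I_0 = 100 and the optimal placement achieves I = 10 (reduction 90). A 2-approximation to the minimization objective guarantees impact at most 20, while a 1/2-approximation to the maximization objective only guarantees reduction at least 45, i.e., impact at most 55, so minimization wins. If instead the best achievable impact is 90 (impact is very hard to reduce), the minimization guarantee of at most 180 is vacuous, while the maximization guarantee of reduction at least 5 still bounds impact by 95.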


Communications-based automated assessment of team cognitive performance

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Lakkaraju, Kiran; Adams, Susan S.; Abbott, Robert G.; Forsythe, James C.

In this paper we analyzed speech communications in order to determine whether we can differentiate between expert and novice teams based on communication patterns. Two pairs of experts and novices performed numerous test sessions on the E-2 Enhanced Deployable Readiness Trainer (EDRT), a medium-fidelity simulator of the Naval Flight Officer (NFO) stations positioned at the back end of the E-2 Hawkeye. Results indicate that experts and novices can be differentiated based on communication patterns. First, experts and novices differ significantly with regard to the frequency of utterances, with both expert teams making many fewer radio calls than both novice teams. Next, the semantic content of utterances was considered. Using both manual and automated speech-to-text conversion, the resulting text documents were compared. For 7 of 8 subjects, the two most similar subjects (using cosine similarity of term vectors) were in the same category of expertise (novice/expert). This means that the semantic content of utterances by experts was more similar to that of other experts than to that of novices, and vice versa. Finally, using machine learning techniques we constructed a classifier that, given as input the text of the speech of a subject, could identify whether the individual was an expert or novice with a very low error rate. By examining the parameters of the machine learning algorithm we were also able to identify terms that are strongly associated with novices and experts. © 2011 Springer-Verlag.
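
The term-vector comparison is straightforward to make concrete. The sketch below computes cosine similarity over raw term counts and finds each subject's nearest neighbors; the whitespace tokenization and the use of raw counts rather than, say, tf-idf weights are our illustrative choices, since the abstract does not specify them.

```python
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity of two term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in set(u) & set(v))
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_subjects(transcripts: dict, k: int = 2) -> dict:
    """For each subject, return the k most similar other subjects,
    comparing term-frequency vectors of their speech transcripts."""
    vecs = {s: Counter(t.lower().split()) for s, t in transcripts.items()}
    return {s: sorted((o for o in vecs if o != s),
                      key=lambda o: cosine(vecs[s], vecs[o]),
                      reverse=True)[:k]
            for s in vecs}
```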


VM-based slack emulation of large-scale systems

Proceedings of the 1st International Workshop on Runtime and Operating Systems for Supercomputers, ROSS 2011

Bridges, Patrick G.; Arnold, Dorian; Pedretti, Kevin P.

This paper describes the design of a system to enable large-scale testing of new software stacks and prospective high-end computing architectures. The proposed architecture combines system virtualization, time dilation, architectural simulation, and slack simulation to provide scalable emulation of hypothetical systems. We also describe virtualization-based full-system measurement and monitoring tools to aid in using the proposed system for co-design of high-performance computing system software and architectural features for future systems. Finally, we provide a description of the implementation strategy and status of the proposed system. © 2011 ACM.


Improving CSE software through reproducibility requirements

Proceedings - International Conference on Software Engineering

Heroux, Michael A.

It is often observed that software engineering (SE) processes and practices for computational science and engineering (CSE) lag behind other SE areas [7]. This issue has been a concern for funding agencies, since new research increasingly relies upon and produces computational tools. At the same time, CSE research organizations find it difficult to prescribe formal SE practices for funded projects. Theoretical and experimental science rely heavily on independent verification of results as part of the scientific process. Computational science should have the same regard for independent verification but it does not. In this paper, we present an argument for using reproducibility and independent verification requirements as a driver to improve SE processes and practices. We describe existing efforts that support our argument, how these requirements can impact SE, challenges we face, and new opportunities for using reproducibility requirements as a driver for higher quality CSE software. Copyright 2011 ACM.


Minimal-overhead virtualization of a large scale supercomputer

ACM SIGPLAN Notices

Lange, John R.; Pedretti, Kevin P.; Dinda, Peter; Bae, Chang; Bridges, Patrick G.; Soltero, Philip; Merritt, Alexander

Virtualization has the potential to dramatically increase the usability and reliability of high performance computing (HPC) systems. However, this potential will remain unrealized unless overheads can be minimized. This is particularly challenging on large scale machines that run carefully crafted HPC OSes supporting tightly coupled, parallel applications. In this paper, we show how careful use of hardware and VMM features enables the virtualization of a large-scale HPC system, specifically a Cray XT4 machine, with ≤5% overhead on key HPC applications, microbenchmarks, and guests at scales of up to 4096 nodes. We describe three techniques essential for achieving such low overhead: passthrough I/O, workload-sensitive selection of paging mechanisms, and carefully controlled preemption. These techniques are forms of symbiotic virtualization, an approach on which we elaborate. Copyright © 2011 ACM.


Evaporation of Lennard-Jones fluids

Journal of Chemical Physics

Cheng, Shengfeng C.; Lechman, Jeremy B.; Plimpton, Steven J.; Grest, Gary S.

Evaporation and condensation at a liquid/vapor interface are ubiquitous interphase mass and energy transfer phenomena that are still not well understood. We have carried out large scale molecular dynamics simulations of Lennard-Jones (LJ) fluids composed of monomers, dimers, or trimers to investigate these processes with molecular detail. For LJ monomers in contact with a vacuum, the evaporation rate is found to be very high, with significant evaporative cooling and an accompanying density gradient in the liquid domain near the liquid/vapor interface. Increasing the chain length to just dimers significantly reduces the evaporation rate. We confirm that mechanical equilibrium plays a key role in determining the evaporation rate and the density and temperature profiles across the liquid/vapor interface. The velocity distributions of evaporated molecules and the evaporation and condensation coefficients are measured and compared to the predictions of an existing model based on the kinetic theory of gases. Our results indicate that for both monatomic and polyatomic molecules, the evaporation and condensation coefficients are equal when systems are not far from equilibrium, are smaller than one, and decrease with increasing temperature. For the same reduced temperature T/Tc, where Tc is the critical temperature, these two coefficients are higher for LJ dimers and trimers than for monomers, in contrast to the traditional viewpoint that they are close to unity for monatomic molecules and decrease for polyatomic molecules. Furthermore, data for the two coefficients collapse onto a master curve when plotted against a translational length ratio between the liquid and vapor phase. © 2011 American Institute of Physics.
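
The abstract does not name the kinetic-theory model it compares against; the classic starting point for such comparisons is the Hertz-Knudsen expression for the evaporative flux, shown here only for orientation (our assumption; the paper's exact model may differ):

```latex
J_{\mathrm{evap}} \;=\; \alpha_e\, \frac{p_{\mathrm{sat}}(T_\ell)}{\sqrt{2\pi m k_B T_\ell}}
```

where alpha_e is the evaporation coefficient of the kind measured in the paper, p_sat the saturation pressure at liquid temperature T_l, m the molecular mass, and k_B Boltzmann's constant.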


Control volume finite element method with multidimensional edge element Scharfetter-Gummel upwinding. Part 1, formulation

Bochev, Pavel B.

We develop a new formulation of the Control Volume Finite Element Method (CVFEM) with a multidimensional Scharfetter-Gummel (SG) upwinding for the drift-diffusion equations. The formulation uses standard nodal elements for the concentrations and expands the flux in terms of the lowest-order Nedelec H(curl; Ω)-compatible finite element basis. The SG formula is applied to the edges of the elements to express the Nedelec element degree of freedom on this edge in terms of the nodal degrees of freedom associated with the endpoints of the edge. The resulting upwind flux incorporates the upwind effects from all edges and is defined at the interior of the element. This allows for accurate evaluation of integrals on the boundaries of the control volumes for arbitrary quadrilateral elements. The new formulation admits efficient implementation through a standard loop over the elements in the mesh followed by loops over the element nodes (associated with control volume fractions in the element) and element edges (associated with flux degrees of freedom). The quantities required for the SG formula can be precomputed and stored for each edge in the mesh for additional efficiency gains. For clarity the details are presented for two-dimensional quadrilateral grids. Extension to other element shapes and three dimensions is straightforward.
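
For reference, on a single edge of length h with advection speed v and diffusivity D, the classical one-dimensional Scharfetter-Gummel formula expresses the constant edge flux in terms of the endpoint concentrations c_i and c_{i+1}; this is the standard form, not copied from the report:

```latex
J_{i,i+1} \;=\; \frac{D}{h}\left[\, B\!\left(-\frac{vh}{D}\right) c_i \;-\; B\!\left(\frac{vh}{D}\right) c_{i+1} \right],
\qquad
B(x) \;=\; \frac{x}{e^{x}-1}
```

The Bernoulli weights B blend smoothly between centered differencing as vh/D tends to 0 and full upwinding as |vh/D| grows, which is what makes SG-type schemes robust across mesh Peclet numbers.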


Optika : a GUI framework for parameterized applications

Nusbaum, Kurtis L.

In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly-designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameter values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible and thus allow their designers to reach a much wider audience while requiring minimal extra development effort.
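
Optika itself is a C++ framework tied to Trilinos, and its real API is not shown here. Purely to illustrate the idea of generating a form dynamically from a list of parameters, here is a hypothetical sketch in Python/Tkinter; the parameter names are invented.

```python
import tkinter as tk

def build_gui(params: dict) -> dict:
    """Dynamically generate an input form from a parameter dict:
    bools become checkboxes, everything else a labeled entry box."""
    root = tk.Tk()
    root.title("Parameter input")
    vars_, result = {}, {}
    for row, (name, default) in enumerate(params.items()):
        tk.Label(root, text=name).grid(row=row, column=0, sticky="w")
        if isinstance(default, bool):
            var = tk.BooleanVar(value=default)
            tk.Checkbutton(root, variable=var).grid(row=row, column=1)
        else:
            var = tk.StringVar(value=str(default))
            tk.Entry(root, textvariable=var).grid(row=row, column=1)
        vars_[name] = var

    def submit():  # read values while the window still exists, then close it
        result.update({name: var.get() for name, var in vars_.items()})
        root.destroy()

    tk.Button(root, text="Run", command=submit).grid(row=len(params), column=1)
    root.mainloop()
    return result

# Hypothetical parameter list; a real application would pass its own.
values = build_gui({"mesh size": 0.01, "max iterations": 100, "verbose": True})
```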


Assessment of existing Sierra/Fuego capabilities related to grid-to-rod-fretting (GTRF)

Turner, Daniel Z.; Rodriguez, Salvador B.


Xyce parallel electronic simulator : reference guide

Keiter, Eric R.; Warrender, Christina E.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms, but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.


Xyce parallel electronic simulator : users' guide

Keiter, Eric R.; Warrender, Christina E.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an in-house capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.


NEAMS Nuclear Waste Management IPSC : evaluation and selection of tools for the quality environment

Vigil, Dena V.; Edwards, Harold C.; Bouchard, Julie F.; Stubblefield, W.A.

The objective of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation Nuclear Waste Management Integrated Performance and Safety Codes (NEAMS Nuclear Waste Management IPSC) program element is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. This objective will be fulfilled by acquiring and developing M&S capabilities and by establishing a defensible level of confidence in them; the foundation for assessing that level of confidence is the rigor and results of verification, validation, and uncertainty quantification (V&V and UQ) activities. These M&S capabilities are to be managed, verified, and validated within the NEAMS Nuclear Waste Management IPSC quality environment, and both the capabilities and the supporting analysis workflow and simulation data management tools will be distributed to end users from that same environment. The same analysis workflow and simulation data management tools that are distributed to end users will also be used for V&V activities within the quality environment. This strategic decision reduces the number of tools to be supported and increases the quality of the tools distributed to end users, due to their rigorous use in V&V activities. The program's V&V and UQ practices and evidence management goals are documented in the V&V Plan, which includes a description of the quality environment into which M&S capabilities are imported and within which V&V and UQ activities are managed. The first phase of implementing the V&V Plan is to deploy an initial quality environment through the acquisition and integration of a set of software tools. This report documents an evaluation of the needs, options, and tools selected for the NEAMS Nuclear Waste Management IPSC quality environment.
