Publications

Results 76001–76025 of 99,299

Interoperable mesh components for large-scale, distributed-memory simulations

Journal of Physics: Conference Series

Devine, Karen; Diachin, L.; Kraftcheck, J.; Jansen, K.E.; Leung, Vitus J.; Luo, X.; Miller, M.; Ollivier-Gooch, C.; Ovcharenko, A.; Sahni, O.; Shephard, M.S.; Tautges, T.; Xie, T.; Zhou, M.

SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications. © 2009 IOP Publishing Ltd.

Type Ia supernovae: Advances in large scale simulation

Journal of Physics: Conference Series

Woosley, S.E.; Almgren, A.S.; Aspden, A.J.; Bell, J.B.; Kasen, D.; Kerstein, Alan R.; Ma, H.; Nonaka, A.; Zingale, M.

There are two principal scientific objectives in the study of Type Ia supernovae - first, a better understanding of these complex explosions from as near first principles as possible, and second, enabling the more accurate utilization of their emission to measure distances in cosmology. Both tasks lend themselves to large scale numerical simulation, yet take us beyond the current frontiers in astrophysics, combustion science, and radiation transport. Their study requires novel approaches and the creation of new, highly scalable codes. © 2009 IOP Publishing Ltd.

Formation of a fin trailing vortex in undisturbed and interacting flows

39th AIAA Fluid Dynamics Conference

Beresh, Steven J.; Henfling, John F.; Spillers, Russell

An experiment using fins mounted on a wind tunnel wall has examined the proposition that the interaction between axially separated aerodynamic control surfaces fundamentally results from an angle of attack superposed upon the downstream fin by the vortex shed from the upstream fin. Particle Image Velocimetry data captured on the surface of a single fin show the formation of the trailing vortex first as a leading-edge vortex, which then becomes a tip vortex as it propagates to the fin's spanwise edge. From data acquired on the downstream fin surface in the presence of a trailing vortex shed from an upstream fin, the impinging vortex may be removed by subtracting its mean velocity field as measured in single-fin experiments, after which the vortex forming on the downstream fin's leeside becomes evident. The properties of the downstream fin's lifting vortex appear to be determined by the total angle of attack imposed upon it, which is a combination of its physical fin cant and the angle of attack induced by the impinging vortex, and are consistent with those of a single fin at an equivalent angle of attack.
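The vortex-removal step described in this abstract is essentially a mean-field subtraction on gridded PIV data. A minimal sketch of that operation follows; the array names, shapes, and file names are illustrative assumptions, not artifacts of the paper.

    import numpy as np

    # Hypothetical PIV data: n_snapshots x ny x nx arrays of velocity components
    # measured on the downstream fin surface with the upstream fin present.
    u_two_fin = np.load("u_downstream_fin.npy")
    v_two_fin = np.load("v_downstream_fin.npy")

    # Mean velocity field of the impinging vortex, measured in single-fin tests.
    u_impinging_mean = np.load("u_single_fin.npy").mean(axis=0)
    v_impinging_mean = np.load("v_single_fin.npy").mean(axis=0)

    # Subtracting the impinging-vortex mean field exposes the vortex that
    # forms on the downstream fin's leeside.
    u_residual = u_two_fin - u_impinging_mean
    v_residual = v_two_fin - v_impinging_mean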

DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

Journal of Physics: Conference Series

Murphy, Richard C.

This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations combined with application performance characteristics have driven the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging since performance characteristics are highly counter-intuitive; in fact, data movement, rather than FLOPS, dominates. This work discusses some basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures. © 2009 IOP Publishing Ltd.

A rapidly deployable virtual presence extended defense system

2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009

Koch, Mark W.; Giron, Casey; Nguyen, Hung D.

We have developed algorithms for a virtual presence and extended defense (VPED) system that automatically learns the detection map of a deployed sensor field without a priori knowledge of the local terrain. The VPED system is a network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has a limited detection range, but a network of pods can form a virtual perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem, we automatically estimate a network's detection performance as it is being constructed. We demonstrate results using simulated and real data. © 2009 IEEE.
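As a rough illustration of how a detection map could be assembled from individual pods, the sketch below combines range-dependent per-pod detection probabilities under an independence assumption; the pod model, ranges, and combination rule are assumptions for illustration, not the paper's algorithm.

    import numpy as np

    def pod_detection_prob(dist, max_range=50.0):
        # Assumed range-dependent detection probability for a single pod.
        return np.clip(1.0 - dist / max_range, 0.0, 1.0)

    def detection_map(pod_xy, grid_x, grid_y):
        # Combine pods assuming independent detections: P = 1 - prod(1 - p_i).
        gx, gy = np.meshgrid(grid_x, grid_y)
        p_miss = np.ones_like(gx)
        for px, py in pod_xy:
            dist = np.hypot(gx - px, gy - py)
            p_miss *= 1.0 - pod_detection_prob(dist)
        return 1.0 - p_miss

    # Example: three pods spaced along a notional perimeter.
    pods = [(0.0, 0.0), (60.0, 0.0), (120.0, 0.0)]
    pmap = detection_map(pods, np.linspace(-50, 170, 221), np.linspace(-60, 60, 121))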

Causal factors of non-Fickian dispersion explored through measures of aquifer connectivity

IAMG 2009 - Computational Methods for the Earth, Energy and Environmental Sciences

Klise, Katherine A.; Mckenna, Sean A.; Tidwell, Vincent C.; Lane, Jonathan W.; Weissmann, Gary S.; Wawrzyniec, Tim F.; Nichols, Elizabeth M.

While connectivity is an important aspect of heterogeneous media, methods to measure and simulate connectivity are limited. For this study, we use natural aquifer analogs developed through lidar imagery to examine the influence of connectivity on dispersion characteristics. A 221.8 cm by 50 cm section of a braided sand and gravel deposit of the Ceja Formation in Bernalillo County, New Mexico, is selected for the study. Two-point (SISIM) and multipoint (Snesim and Filtersim) stochastic simulation methods are then compared based on their ability to replicate the dispersion characteristics of the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved by each method. Connectivity analysis suggests a strong relationship between the length distribution of sand and gravel facies along streamlines and dispersion characteristics.
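The streamline-based connectivity measure referenced above hinges on the length distribution of facies along streamlines. A minimal sketch of tabulating such run lengths from facies codes sampled along one streamline is shown below; the data layout and facies coding are assumptions for illustration.

    import numpy as np

    def facies_run_lengths(facies_codes, step_length):
        # Lengths of consecutive same-facies runs along one streamline.
        # facies_codes: 1-D sequence of facies labels (e.g., 0 = sand, 1 = gravel);
        # step_length: spacing between samples along the streamline.
        codes = np.asarray(facies_codes)
        breaks = np.flatnonzero(np.diff(codes)) + 1
        return [(int(run[0]), len(run) * step_length)
                for run in np.split(codes, breaks)]

    # Example: a streamline crossing alternating sand (0) and gravel (1) bodies.
    print(facies_run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0], step_length=0.5))
    # -> [(0, 1.5), (1, 1.0), (0, 2.0)]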

Current trends in parallel computation and the implications for modeling and optimization

Computer Aided Chemical Engineering

Siirola, John D.

Microresonant impedance transformers

Proceedings - IEEE Ultrasonics Symposium

Wojciechowski, Kenneth E.; Olsson, Roy H.; Tuck, Melanie R.; Stevens, James E.

Widely applied to RF filtering, AlN microresonators offer the ability to perform additional functions such as impedance matching and single-ended-to-differential conversion. This paper reports microresonators capable of transforming the characteristic impedance from input to output over a wide range while performing low-loss filtering. Microresonant transformer theory of operation and equivalent circuit models are presented and compared with measured 2- and 3-port devices. Impedance transformation ratios as large as 18:1 are realized with insertion losses less than 5.8 dB, limited by parasitic shunt capacitance. These impedance transformers occupy less than 0.052 mm², orders of magnitude smaller than competing technologies in the VHF and UHF frequency bands. © 2009 IEEE.

Analysis of nuclear spectra with non-linear techniques and its implementation in the Cambio software application

Journal of Radioanalytical and Nuclear Chemistry

Lasche, George; Coldwell, Robert L.

Popular nuclear spectral analysis applications typically use either the results of a peak search or the best match of a set of linear templates as the basis for their conclusions. These well-proven methods work well in controlled environments. However, they often fail in cases where the critical information resides in well-masked peaks, where the data is sparse and good statistics cannot be obtained, and where little is known about the detector that was used. These conditions are common in emergency analysis situations, but are also common in radio-assay situations where background radiation is high and time is limited. To address these limitations, non-linear fitting techniques have been introduced into an application called "Cambio" suitable for public use. With this approach, free parameters are varied in iterative steps to converge to values that minimize differences between the actual data and the approximating functions that correspond to the values of the parameters. For each trial nuclide, a single parameter is varied that often has a strongly non-linear dependence on other, simultaneously varied parameters for energy calibration, attenuation by intervening matter, detector resolution, and peak-shape deviations. A brief overview of this technique and its implementation is presented, together with an example of its performance and differences from more common methods of nuclear spectral analysis. © Akadémiai Kiadó, 2009.
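As a heavily simplified sketch of the kind of non-linear fit described here, the snippet below simultaneously varies an energy-calibration gain, a resolution parameter, a nuclide amplitude, and a flat background to minimize the misfit to a measured spectrum; the model function, parameters, and file name are illustrative assumptions, not Cambio's implementation.

    import numpy as np
    from scipy.optimize import least_squares

    channels = np.arange(1024)
    counts = np.load("spectrum.npy")   # hypothetical measured spectrum
    line_energy_keV = 661.7            # e.g., a Cs-137 gamma line

    def model(params):
        gain, sigma, amplitude, background = params
        energy = gain * channels       # simple linear energy calibration
        peak = amplitude * np.exp(-0.5 * ((energy - line_energy_keV) / sigma) ** 2)
        return peak + background

    def residuals(params):
        return model(params) - counts

    fit = least_squares(residuals, x0=[0.7, 2.0, 100.0, 10.0])
    print(fit.x)   # fitted gain, resolution, amplitude, background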

Ten million and one penguins, or, lessons learned from booting millions of virtual machines on HPC systems

Minnich, Ronald G.; Rudish, Donald W.

In this paper we describe Megatux, a set of tools we are developing for rapid provisioning of millions of virtual machines and controlling and monitoring them, as well as what we've learned from booting one million Linux virtual machines on the Thunderbird (4660 nodes) and 550,000 Linux virtual machines on the Hyperion (1024 nodes) clusters. As might be expected, our tools use hierarchical structures. In contrast to existing HPC systems, our tools do not require perfect hardware, do not require that all systems be booted at the same time, and do not rely on static configuration files that define the role of each node. While we believe these tools will be useful for future HPC systems, we are using them today to construct botnets. Botnets have been in the news recently, as discoveries of their scale (millions of infected machines for even a single botnet), their reach (global), and their impact on organizations (devastating in financial costs and time lost to recovery) have become more apparent. A distinguishing feature of botnets is their emergent behavior: fairly simple operational rule sets can result in behavior that cannot be predicted. In general, there is no reducible understanding of how a large network will behave ahead of 'running it'. 'Running it' means observing the actual network in operation or simulating/emulating it. Unfortunately, this behavior is only seen at scale, i.e. when at minimum tens of thousands of machines are infected. To add to the problem, botnets typically change at least 11% of the machines they are using in any given week, and this changing population is an integral part of their behavior. The use of virtual machines to assist in the forensics of malware is not new to the cyber security world. Reverse engineering techniques often use virtual machines in combination with code debuggers. Nevertheless, this task largely remains a manual process to get past code obfuscation and is inherently slow. As part of our cyber security work at Sandia National Laboratories, we are striving to understand the global network behavior of botnets. We are planning to take existing botnets, as found in the wild, and run them on HPC systems. We have turned to HPC systems to support the creation and operation of millions of Linux virtual machines as a means of observing the interaction of the botnet and other noninfected hosts. We started out using traditional HPC tools, but these tools are designed for a much smaller scale, typically topping out at one to ten thousand machines. HPC programming libraries and tools also assume complete connectivity between all nodes, with the attendant configuration files and data structures to match; this assumption holds up very poorly on systems with millions of nodes.

Nonlinear slewing spacecraft control based on exergy, power flow, and static and dynamic stability

Journal of the Astronautical Sciences

Robinett, Rush D.; Wilson, David G.

This paper presents a new nonlinear control methodology for slewing spacecraft, which provides both necessary and sufficient conditions for stability by identifying the stability boundaries, rigid body modes, and limit cycles. Conservative Hamiltonian system concepts, which are equivalent to static stability of airplanes, are used to find and deal with the static stability boundaries: rigid body modes. The application of exergy and entropy thermodynamic concepts to the work-rate principle provides a natural partitioning through the second law of thermodynamics of power flows into exergy generator, dissipator, and storage for Hamiltonian systems that is employed to find the dynamic stability boundaries: limit cycles. This partitioning process enables the control system designer to directly evaluate and enhance the stability and performance of the system by balancing the power flowing into versus the power dissipated within the system subject to the Hamiltonian surface (power storage). Relationships are developed between exergy, power flow, static and dynamic stability, and Lyapunov analysis. The methodology is demonstrated with two illustrative examples: (1) a nonlinear oscillator with sinusoidal damping and (2) a multi-input-multioutput three-axis slewing spacecraft that employs proportional-integral-derivative tracking control with numerical simulation results.

Using detailed maps of science to identify potential collaborations

Scientometrics

Boyack, Kevin W.

Research on the effects of collaboration in scientific research has been increasing in recent years. A variety of studies have been done at the institution and country level, many with an eye toward policy implications. However, the question of how to identify the most fruitful targets for future collaboration in high-performing areas of science has not been addressed. This paper presents a method for identifying targets for future collaboration between two institutions. The utility of the method is shown in two different applications: identifying specific potential collaborations at the author level between two institutions, and generating an index that can be used for strategic planning purposes. Identification of these potential collaborations is based on finding authors that belong to the same small paper-level community (or cluster of papers), using a map of science and technology containing nearly 1 million papers organized into 117,435 communities. The map used here is also unique in that it is the first map to combine the ISI Proceedings database with the Science and Social Science Indexes at the paper level. © 2008 Springer Science+Business Media B.V.
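At its core, the matching step described above amounts to finding author pairs from two institutions whose papers fall in the same paper-level community. A toy sketch of that matching follows; the record format and institution names are assumptions, not the paper's data model.

    from collections import defaultdict
    from itertools import product

    # Hypothetical records: (author, institution, community_id), one per paper.
    records = [
        ("Smith", "Inst A", 42), ("Lee", "Inst B", 42),
        ("Jones", "Inst A", 7),  ("Chen", "Inst B", 99),
    ]

    by_community = defaultdict(lambda: {"Inst A": set(), "Inst B": set()})
    for author, inst, community in records:
        by_community[community][inst].add(author)

    # Author pairs spanning both institutions within the same community.
    potential_collaborations = [
        (community, pair)
        for community, authors in by_community.items()
        for pair in product(authors["Inst A"], authors["Inst B"])
    ]
    print(potential_collaborations)   # [(42, ('Smith', 'Lee'))]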

TrustBuilder2: A reconfigurable framework for trust negotiation

IFIP Advances in Information and Communication Technology

Lee, Adam J.; Winslett, Marianne; Perano, Kenneth J.

To date, research in trust negotiation has focused mainly on the theoretical aspects of the trust negotiation process, and the development of proof of concept implementations. These theoretical works and proofs of concept have been quite successful from a research perspective, and thus researchers must now begin to address the systems constraints that act as barriers to the deployment of these systems. To this end, we present TrustBuilder2, a fully-configurable and extensible framework for prototyping and evaluating trust negotiation systems. TrustBuilder2 leverages a plug-in based architecture, extensible data type hierarchy, and flexible communication protocol to provide a framework within which numerous trust negotiation protocols and system configurations can be quantitatively analyzed. In this paper, we discuss the design and implementation of TrustBuilder2, study its performance, examine the costs associated with flexible authorization systems, and leverage this knowledge to identify potential topics for future research, as well as a novel method for attacking trust negotiation systems.

Cutting Efficiency of a Single PDC Cutter on Hard Rock

Journal of Canadian Petroleum Technology

Hareland, G.; Yan, W.; Nygaard, R.; Wise, Jack L.

Polycrystalline diamond compact (PDC) bits have gained wide popularity in the petroleum industry for drilling soft and moderately firm formations. However, in hard formation applications, the PDC bit still has limitations, even though recent developments in PDC cutter designs and materials steadily improve PDC bit performance. The limitations of PDC bits for drilling hard formations are an important technical obstacle that must be overcome before using the PDC bit to develop competitively priced electricity from enhanced geothermal systems, as well as deep continental gas fields. Enhanced geothermal energy is a very promising source for generating electrical energy and, therefore, there is an urgent need to further enhance PDC bit performance in hard formations. In this paper, the cutting efficiency of the PDC bit has been analyzed based on the development of an analytical single PDC cutter force model. The cutting efficiency of a single PDC cutter is defined as the ratio of the volume removed by a cutter to the force required to remove that volume of rock. The cutting efficiency is found to be a function of the back rake angle, the depth of cut, and rock properties such as the angle of internal friction. The highest cutting efficiency is found to occur at specific back rake angles of the cutter based on the material properties of the rock. The cutting efficiency directly relates to the internal angle of friction of the rock being cut. The results of this analysis can be integrated to study PDC bit performance. They can also provide a guideline for the application and design of PDC bits for specific rocks.
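The cutting-efficiency definition quoted above reduces to a simple ratio; a minimal sketch is given below. The numerical values are placeholders, since the analytical single-cutter force model itself is not reproduced here.

    def cutting_efficiency(volume_removed, cutter_force):
        # Cutting efficiency as defined above: rock volume removed per unit
        # force on the cutter. The force would come from the analytical single
        # PDC cutter force model as a function of back rake angle, depth of
        # cut, and the rock's angle of internal friction.
        return volume_removed / cutter_force

    # Placeholder values for illustration only.
    print(cutting_efficiency(volume_removed=0.8, cutter_force=1200.0))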

An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment

Forester, John A.

Since the Reactor Safety Study in the early 1970s, human reliability analysis (HRA) has been evolving towards a better ability to account for the factors and conditions that can lead humans to take unsafe actions and thereby provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have impacted the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan: ASC software quality engineering practices, Version 3.0

Turgeon, Jennifer; Minana, Molly A.; Pilch, Martin

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

Baseline Ecological Footprint of Sandia National Laboratories, New Mexico

Mizner, Jack H.

The Ecological Footprint Model is a mechanism for measuring the environmental effects of operations at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM). This analysis quantifies environmental impact associated with energy use, transportation, waste, land use, and water consumption at SNL/NM for fiscal year 2005 (FY05). Since SNL/NM's total ecological footprint (96,434 gha) is greater than the waste absorption capacity of its landholdings (338 gha), it created an ecological deficit of 96,096 gha. This deficit is equal to 886,470 ha, or about 3,423 square miles, of Pinyon-Juniper woodlands and desert grassland. Energy use accounts for 89% of the ecological footprint, indicating that efforts to mitigate environmental impact should focus on energy efficiency, energy reduction, and the incorporation of additional renewable energy alternatives at SNL/NM.
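The reported figures can be cross-checked with simple arithmetic; the global-to-local hectare equivalence factor below is back-calculated from the stated numbers rather than taken from the report.

    # Figures reported for fiscal year 2005.
    total_footprint_gha = 96_434
    land_capacity_gha = 338
    deficit_gha = total_footprint_gha - land_capacity_gha      # 96,096 gha

    # The deficit is equated to 886,470 local hectares of Pinyon-Juniper
    # woodland and desert grassland, implying an equivalence factor of
    # roughly 0.11 gha per local hectare for that land type.
    deficit_local_ha = 886_470
    equivalence_factor = deficit_gha / deficit_local_ha         # ~0.108

    HA_PER_SQUARE_MILE = 258.999
    print(deficit_local_ha / HA_PER_SQUARE_MILE)                # ~3,423 sq mi

    # Energy use accounts for 89% of the total footprint.
    print(0.89 * total_footprint_gha)                           # ~85,800 gha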

Graphite oxidation modeling for application in MELCOR

Gelbard, Fred M.

The Arrhenius parameters for graphite oxidation in air are reviewed and compared. One-dimensional models of graphite oxidation coupled with mass transfer of oxidant are presented in dimensionless form for rectangular and spherical geometries. A single dimensionless group is shown to encapsulate the coupled phenomena, and is used to determine the effective reaction rate when mass transfer can impede the oxidation process. For integer reaction order kinetics, analytical expressions are presented for the effective reaction rate. For noninteger reaction orders, a numerical solution is developed and compared to data for oxidation of a graphite sphere in air. Very good agreement is obtained with the data without any adjustable parameters. An analytical model for surface burn-off is also presented, and results from the model are within an order of magnitude of the measurements of burn-off in air and in steam.
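For the first-order limit of the integer-order kinetics mentioned above, coupling surface kinetics with external mass transfer of the oxidant has a familiar resistances-in-series form. The sketch below uses that textbook expression with placeholder Arrhenius parameters; it is not the paper's general noninteger-order model.

    import numpy as np

    def arrhenius(pre_exponential, activation_energy_J_per_mol, temperature_K):
        # Arrhenius rate constant, k = A * exp(-Ea / (R * T)).
        R = 8.314  # J/(mol K)
        return pre_exponential * np.exp(-activation_energy_J_per_mol / (R * temperature_K))

    def effective_rate_first_order(k_surface, k_mass_transfer, oxidant_bulk_conc):
        # Effective oxidation rate per unit area for a first-order surface
        # reaction in series with external mass transfer of the oxidant.
        k_eff = 1.0 / (1.0 / k_surface + 1.0 / k_mass_transfer)
        return k_eff * oxidant_bulk_conc

    # Placeholder values only, not the reviewed Arrhenius parameters.
    k_s = arrhenius(pre_exponential=1.0e4, activation_energy_J_per_mol=2.0e5,
                    temperature_K=1100.0)
    print(effective_rate_first_order(k_s, k_mass_transfer=0.05, oxidant_bulk_conc=2.3))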

Design & development of a 20-MW flywheel-based frequency regulation power plant: a study for the DOE Energy Storage Systems program

Huff, Georgianne

This report describes the successful efforts of Beacon Power to design and develop a 20-MW frequency regulation power plant based solely on flywheels. Beacon's Smart Matrix (Flywheel) Systems regulation power plant, unlike coal or natural gas generators, will not burn fossil fuel or directly produce particulates or other air emissions, and will have the ability to ramp up or down in a matter of seconds. The report describes how data from the scaled Beacon system, deployed in California and New York, proved that flywheel-based systems provide faster-responding regulation services, with advantages in cost-performance and environmental impact. Included in the report are a description of Beacon's design package for a generic, multi-MW flywheel-based regulation power plant, which allows accurate bids from a design/build contractor, and Beacon's recommendations for site requirements that would ensure the fastest possible construction. The paper concludes with a statement about Beacon's plans for a lower cost, modular-style substation based on the 20-MW design.

Modeling leaks from liquid hydrogen storage systems

Winters, William S.

This report documents a series of models for describing intended and unintended discharges from liquid hydrogen storage systems. Typically these systems store hydrogen in the saturated state at approximately five to ten atmospheres. Some of the models discussed here are equilibrium-based models that make use of the NIST thermodynamic models to specify the states of multiphase hydrogen and air-hydrogen mixtures. Two types of discharges are considered: slow leaks where hydrogen enters the ambient at atmospheric pressure and fast leaks where the hydrogen flow is usually choked and expands into the ambient through an underexpanded jet. In order to avoid the complexities of supersonic flow, a single Mach disk model is proposed for fast leaks that are choked. The velocity and state of hydrogen downstream of the Mach disk leads to a more tractable subsonic boundary condition. However, the hydrogen temperature exiting all leaks (fast or slow, from saturated liquid or saturated vapor) is approximately 20.4 K. At these temperatures, any entrained air would likely condense or even freeze leading to an air-hydrogen mixture that cannot be characterized by the REFPROP subroutines. For this reason a plug flow entrainment model is proposed to treat a short zone of initial entrainment and heating. The model predicts the quantity of entrained air required to bring the air-hydrogen mixture to a temperature of approximately 65 K at one atmosphere. At this temperature the mixture can be treated as a mixture of ideal gases and is much more amenable to modeling with Gaussian entrainment models and CFD codes. A Gaussian entrainment model is formulated to predict the trajectory and properties of a cold hydrogen jet leaking into ambient air. The model shows that similarity between two jets depends on the densimetric Froude number, density ratio and initial hydrogen concentration.
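The similarity statement at the end involves the densimetric Froude number; a small sketch under one common convention is given below. The definition and the density values are assumptions for illustration and may differ from those used in the report.

    import numpy as np

    def densimetric_froude(velocity, diameter, rho_jet, rho_ambient, g=9.81):
        # Densimetric Froude number, Fr = U / sqrt(g' * D), with reduced gravity
        # g' = g * |rho_ambient - rho_jet| / rho_jet (one common convention).
        g_reduced = g * abs(rho_ambient - rho_jet) / rho_jet
        return velocity / np.sqrt(g_reduced * diameter)

    # Rough illustrative values for a cold hydrogen jet entering ambient air.
    print(densimetric_froude(velocity=30.0, diameter=0.005,
                             rho_jet=1.2, rho_ambient=1.18))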
