Publications

Results 42201–42400 of 99,299

High Performance Computing: Power Application Programming Interface Specification (V.1.3)

Foulk, James W.; Kelly, Suzanne M.; Grant, Ryan; Olivier, Stephen L.; Levenhagen, Michael; Debonis, David

Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower-level software layers are beginning to emerge in some production systems. To be most effective, however, measurement and control features need a portable interface that facilitates participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
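As a rough illustration of what such a portable measurement/control interface might look like, the sketch below models get/set attributes on objects in a power hierarchy. All class, method, and attribute names here are illustrative assumptions, not the actual Power API bindings.

```python
# Hypothetical sketch of a portable power measurement/control interface.
# Names are illustrative assumptions, not the Power API specification.
class PowerObject:
    """A node in the system's power hierarchy (platform, board, socket)."""

    def __init__(self, name, watts_cap=None):
        self.name = name
        self.watts_cap = watts_cap
        self._watts = 0.0  # would be populated by a hardware backend

    def get_attr(self, attr):
        # Measurement side of the interface.
        if attr == "POWER":
            return self._watts
        if attr == "POWER_CAP":
            return self.watts_cap
        raise KeyError(attr)

    def set_attr(self, attr, value):
        # Control side of the interface (e.g. capping a socket).
        if attr == "POWER_CAP":
            self.watts_cap = value
        else:
            raise KeyError(attr)


socket = PowerObject("socket0")
socket.set_attr("POWER_CAP", 95.0)  # cap this socket at 95 W
```

A runtime system, scheduler, or facility tool would all use the same get/set calls, which is the portability argument the abstract makes.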

More Details

Partitioning of Ionization and Displacement Kerma in Material Response Functions

Hehr, Brian D.

Calculations of total dose (relatable to heating) and ionizing dose (relatable to electron-hole pair formation) typically rely upon material kerma response functions and an assumption of charged particle equilibrium. Traditionally, kerma functions designed for use with the Sandia ASC NuGET code were created via the HEATR module of the NJOY code system, in which a simplifying monoatomic assumption is made. The purpose of this study is to relax that approximation through the use of binary collision simulation techniques, which can take into account the co-existence of multiple elements in a material. Specifically, the total, ionization, and displacement components of kerma are evaluated in silicon, gallium arsenide, gallium nitride, and indium phosphide using the TRIM and MARLOWE codes, and are compared against the equivalent NJOY-based functions. Based on the results, a binary-collision-based methodology is proposed for extracting the partial kerma components of high-importance materials.

More Details

Visualizing Wind Farm Wakes Using SCADA Data

Martin, Shawn; Westergaard, Carsten H.; White, Jonathan R.; Karlson, Benjamin

As wind farms scale to include more and more turbines, questions about turbine wake interactions become increasingly important. Turbine wakes reduce wind speed, and downwind turbines suffer decreased performance; the cumulative effect of the wakes throughout a wind farm therefore decreases the performance of the entire farm. These interactions are dynamic and complicated, and it is difficult to quantify the overall effect of the wakes. This problem has attracted some attention in terms of computational modelling for siting turbines on new farms, but less attention in terms of empirical studies and performance validation of existing farms. In this report, Supervisory Control and Data Acquisition (SCADA) data from an existing wind farm are analyzed in order to explore methods for documenting wake interactions. Visualization techniques are proposed and used to analyze wakes in a 67-turbine farm. The visualizations are based on directional analysis using power measurements, and can be considered normalized capacity factors below rated power. Wind speed measurements are not used in the analysis except for data pre-processing. Four wake effects are observed, including wake deficit, channel speed-up, and two potentially new effects: single and multiple shear-point speed-up. In addition, an attempt is made to quantify wake losses using the same SCADA data. Power losses for the specific wind farm investigated are relatively low, estimated to be in the range of 3-5%. Finally, a simple model based on the wind farm's geometrical layout is proposed. Key parameters for the model have been estimated by comparing wake profiles at different ranges and making some ad hoc assumptions. A preliminary comparison of six selected profiles shows excellent agreement with the model. Where discrepancies are observed, reasonable explanations can be found in multi-turbine speed-up effects and landscape features, which are yet to be modelled.
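The directional analysis described above can be sketched as binning per-turbine power by wind direction and normalizing by rated power, giving a capacity factor per direction sector. The synthetic data, rated power, and sector width below are illustrative assumptions, not the report's actual SCADA values.

```python
import numpy as np

# Hypothetical SCADA records: per-record power (kW) with a farm-level
# wind direction (degrees). Values here are synthetic placeholders.
rated_kw = 1500.0
rng = np.random.default_rng(0)
direction = rng.uniform(0, 360, size=5000)
power = rng.uniform(0, rated_kw, size=5000)

# Directional analysis: mean normalized power in 10-degree sectors,
# i.e. a capacity factor below rated power per direction bin.
bins = np.arange(0, 370, 10)          # 36 sectors of 10 degrees
sector = np.digitize(direction, bins) - 1
norm_cf = np.array([
    power[sector == s].mean() / rated_kw if np.any(sector == s) else np.nan
    for s in range(len(bins) - 1)
])
# A waked sector would show norm_cf dipping relative to its neighbours.
```

With real SCADA data, plotting `norm_cf` per turbine against direction is one way to make wake deficits and speed-up sectors visible without using wind speed measurements.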

More Details

Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging

Dunlavy, Daniel M.; Chew, Peter A.

This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and solve the data fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied as a post-processing step.
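As a minimal sketch of the unconstrained model being extended: two-way DEDICOM approximates an asymmetric matrix X (e.g. bigram frequencies) as A R Aᵀ, and with A held fixed, the least-squares update for R has a closed form via pseudo-inverses. The dimensions and noise-free synthetic data below are assumptions for illustration; the report's constrained alternating solver is not reproduced.

```python
import numpy as np

# Two-way DEDICOM: X ≈ A @ R @ A.T, with A the loading matrix and R an
# asymmetric "directional" core. Sketch of the R-update subproblem only.
rng = np.random.default_rng(1)
n, k = 20, 3
A = rng.random((n, k))          # loading matrix (full column rank)
R_true = rng.random((k, k))     # asymmetric core to recover
X = A @ R_true @ A.T            # synthetic, noise-free data

Ap = np.linalg.pinv(A)          # Moore-Penrose pseudo-inverse
R = Ap @ X @ Ap.T               # least-squares solution for R given A
# With noise-free data and full-rank A, this recovers R_true exactly.
```

A full fit alternates this R-update with updates to A; the report's contribution is adding bound and linear constraints to that optimization rather than post-processing the factors.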

More Details

Analysis of a Full Scale Blowdown Due to a Mechanical Failure of a Pressure Relief Device in a Natural Gas Vehicle Maintenance Facility

Blaylock, Myra L.; Bozinoski, Radoslav; Ekoto, Isaac W.

A computational fluid dynamics (CFD) analysis of a natural gas vehicle experiencing a mechanical failure of a pressure relief device on a full CNG cylinder was completed to determine the resulting amount and location of flammable gas, along with the overpressure that would result if it were to ignite. This study completes the work discussed in Ekoto et al., which covers other related leak scenarios. We do not determine whether this is a credible release; rather, we show the result of a possible worst-case scenario. The Sandia National Laboratories computational tool Netflow was used to calculate the leak velocity and temperature, and the in-house CFD code Fuego was used to determine the flow of the leak into the maintenance garage. A maximum flammable mass of 35 kg collected along the roof of the garage; if ignited at the time of this maximum, the resulting overpressure could do considerable damage. It is up to the code committees to decide whether this would be a credible leak, but if it were, preventive measures should be in place to keep the flammable mass from igniting.

More Details

Path Network Recovery Using Remote Sensing Data and Geospatial-Temporal Semantic Graphs

Mclendon, William; Brost, Randolph

Remote sensing systems produce large volumes of high-resolution images that are difficult to search. The GeoGraphy (pronounced Geo-Graph-y) framework [2, 20] encodes remote sensing imagery into a geospatial-temporal semantic graph representation to enable high-level semantic searches. Typically, scene objects such as buildings and trees tend to be shaped like blocks with few holes, but shapes generated from path networks tend to have a large number of holes and can span a large geographic region due to their connectedness. For example, we have a dataset covering the city of Philadelphia in which a single road network node spans a 6-mile by 8-mile region. Even a simple question such as "find two houses near the same street" might give unexpected results. More generally, nodes arising from networks of paths (roads, sidewalks, trails, etc.) require additional processing to make them useful for searches in GeoGraphy. We have assigned the term Path Network Recovery to this process. Path Network Recovery is a three-step process involving (1) partitioning the network node into segments, (2) repairing broken path segments interrupted by occlusions or sensor noise, and (3) adding path-aware search semantics into GeoQuestions. This report covers the path network recovery process, how it is used, and some example use cases of the current capabilities.

More Details

Digital droplet multiple displacement amplification (DDMDA) for whole genome sequencing of limited DNA samples

PLoS ONE

Meagher, Robert M.; Rhee, Minsoung R.; Light, Yooli K.; Singh, Anup K.

Multiple displacement amplification (MDA) is a widely used technique for amplifying DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram quantities of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique in which partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent to ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly and a more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.

More Details

Aerosol detection efficiency in inductively coupled plasma mass spectrometry

Spectrochimica Acta - Part B Atomic Spectroscopy

Hubbard, Joshua A.; Zigmond, Joseph

An electrostatic size classification technique was used to segregate particles of known composition prior to injection into an inductively coupled plasma mass spectrometer (ICP-MS). Size-segregated particles were counted with a condensation nuclei counter as well as sampled with an ICP-MS. By injecting particles of known size, composition, and aerosol concentration into the ICP-MS, order-of-magnitude aerosol detection efficiencies were calculated, and the particle size dependencies for volatile and refractory species were quantified. As in laser ablation ICP-MS, aerosol detection efficiency was defined as the rate at which atoms were detected in the ICP-MS normalized by the rate at which atoms were injected in the form of particles. This method adds valuable insight into the development of technologies like laser ablation ICP-MS, where aerosol particles (of relatively unknown size and gas concentration) are generated during ablation and then transported into the plasma of an ICP-MS. In this study, we characterized aerosol detection efficiencies of the volatile species gold and silver along with the refractory species aluminum oxide, cerium oxide, and yttrium oxide. Aerosols were generated with electrical mobility diameters ranging from 100 to 1000 nm. In general, refractory species had lower aerosol detection efficiencies than volatile species, and there were strong dependencies on particle size and plasma torch residence time. Volatile species showed a distinct transition point at which aerosol detection efficiency began decreasing with increasing particle size. This critical diameter indicated the largest particle size for which complete particle detection should be expected and agreed with theories published in other works. Aerosol detection efficiencies also displayed power law dependencies on particle size, and ranged from 10⁻⁵ to 10⁻¹¹. Free molecular heat and mass transfer theory was applied, but evaporative phenomena were not sufficient to explain the dependence of aerosol detection on particle diameter. Additional work is needed to correlate experimental data with theory for metal oxides, where thermodynamic property data are sparse relative to pure elements. Lastly, when matrix effects and the diffusion of ions inside the plasma were considered, mass loading was concluded to have had an effect on the dependence of detection efficiency on particle diameter.
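The detection-efficiency definition above (rate of atoms detected divided by rate of atoms injected as particles) can be sketched numerically. The count rates in the example are illustrative assumptions, not measured values from the study; the material constants for gold are standard handbook values.

```python
import math

def atoms_per_particle(diameter_nm, density_g_cm3, molar_mass_g_mol):
    """Atoms in one spherical particle of the given mobility diameter."""
    avogadro = 6.022e23
    radius_cm = diameter_nm * 1e-7 / 2          # nm -> cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm**3
    return volume_cm3 * density_g_cm3 / molar_mass_g_mol * avogadro

def detection_efficiency(counts_per_s, particles_per_s, d_nm, rho, mm):
    """Atoms detected per second / atoms injected per second."""
    injected = particles_per_s * atoms_per_particle(d_nm, rho, mm)
    return counts_per_s / injected

# Example: 500 nm gold particles (rho = 19.3 g/cm3, M = 196.97 g/mol),
# with hypothetical rates of 1e4 detected counts/s and 100 particles/s.
eff = detection_efficiency(1e4, 100.0, 500.0, 19.3, 196.97)
```

The strong size dependence reported in the abstract follows directly from the cubic growth of atoms per particle with diameter: larger particles inject far more atoms, so incomplete vaporization drives the efficiency down.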

More Details

Used fuel extended storage security and safeguards by design roadmap

Durbin, S.; Lindgren, Eric; Jones, Robert; Ketusky, Edward; England, Jeffrey; Scaglione, John; Scherer, Carolynn; Sprinkle, James; Miller, Michael; Rauch, Eric; Dunn, T.

In the United States, spent nuclear fuel (SNF) is safely and securely stored in spent fuel pools and dry storage casks. The available capacity in spent fuel pools across the nuclear fleet has nearly reached a steady-state value, and excess SNF continues to be loaded into dry storage casks. Fuel is expected to remain in dry storage beyond the initial dry cask certification period of 20 years; recent licensing renewals have approved an additional 40 years. This report identifies the current requirements and evaluation techniques associated with the safeguards and security of SNF dry cask storage, identifies knowledge gaps in the current approaches, and provides a research path to deliver the tools and models needed to close those gaps and allow optimization of the security and safeguards approaches for an interim spent fuel facility over the lifetime of the storage site.

More Details

A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

Mackinnon, Robert J.; Kuhlman, Kristopher L.

We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
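A minimal sketch of the control variate idea, using a toy integrand rather than the paper's elliptic model problem: subtracting a correlated quantity with known mean reduces the variance of the Monte Carlo estimate without changing its expected value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy quantity of interest: f(X) = exp(X) for X ~ N(0, 1), whose exact
# mean is exp(0.5). Control variate: g(X) = X, with known mean E[g] = 0.
n = 100_000
x = rng.standard_normal(n)
f = np.exp(x)    # samples of the (expensive) quantity of interest
g = x            # correlated control with exactly known mean 0.0

# Optimal coefficient beta = Cov(f, g) / Var(g).
beta = np.cov(f, g)[0, 1] / np.var(g)

plain = f.mean()                        # ordinary Monte Carlo estimate
cv = (f - beta * (g - 0.0)).mean()      # control variate estimate
```

Both `plain` and `cv` are unbiased for E[exp(X)] ≈ 1.6487, but `cv` averages samples with smaller variance, which is the mechanism behind the paper's reduction in required simulations; there, the control would be a cheap coarse-mesh solution rather than g(X) = X.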

More Details

2014 Zero Waste Strategic Plan Executive Summary

Wrons, Ralph J.

Sandia National Laboratories/New Mexico is located in Albuquerque, New Mexico, primarily on Department of Energy (DOE) permitted land on approximately 2,800 acres of Kirtland Air Force Base. There are approximately 5.5 million square feet of buildings, with a workforce of approximately 9,200 personnel. In 2008, the Sandia National Laboratories Materials Sustainability and Pollution Prevention (MSP2) program adopted an internal team goal of Zero Waste to Landfill by 2025 for New Mexico site operations. Sandia solicited a consultant to assist in the development of a Zero Waste Strategic Plan; the Zero Waste Consultant Team selected is a partnership of SBM Management Services and Gary Liss & Associates. The scope of this Plan is non-hazardous solid waste, covering the life cycle of materials from purchase through use to final disposal at the end of their life.

More Details

Contingency Contractor Optimization Phase 3 Sustainment Requirements Document Contingency Contractor Optimization Tool - Prototype

Bandlow, Alisa; Durfee, Justin D.; Frazier, Christopher R.; Jones, Katherine; Gearhart, Jared L.

This requirements document serves as an addendum to the Contingency Contractor Optimization Phase 2, Requirements Document [1] and Phase 3 Requirements Document [2]. The Phase 2 Requirements document focused on the high-level requirements for the tool. The Phase 3 Requirements document provided more detailed requirements to which the engineering prototype was built in Phase 3. This document will provide detailed requirements for features and enhancements being added to the production pilot in the Phase 3 Sustainment.

More Details

Contingency Contractor Optimization Phase 3 Sustainment Third-Party Software List - Contingency Contractor Optimization Tool - Prototype

Durfee, Justin D.; Frazier, Christopher R.; Bandlow, Alisa

The Contingency Contractor Optimization Tool - Prototype (CCOT-P) requires several third-party software packages. These are documented below for each of the CCOT-P elements: client, web server, database server, solver, web application and polling application.

More Details

Element Verification and Comparison in Sierra/Solid Mechanics Problems

Ohashi, Yuki; Roth, William

The goal of this project was to study the effects of element selection on the Sierra/SM solutions to five common solid mechanics problems. A total of nine element formulations were used for each problem. The models were run multiple times with varying spatial and temporal discretization in order to ensure convergence. The first four problems have been compared to analytical solutions, and all numerical results were found to be sufficiently accurate. The penetration problem was found to have a high mesh dependence in terms of element type, mesh discretization, and meshing scheme. Also, the time to solution is shown for each problem in order to facilitate element selection when computer resources are limited.

More Details

Operational Excellence through Schedule Optimization and Production Simulation of Application Specific Integrated Circuits

Flory, John A.; Foulk, James W.; Gauthier, John H.; Nelson, April M.; Miller, Steven P.

Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.

More Details

Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation

Seuss, John; Reno, Matthew J.; Broderick, Robert J.; Grijalva, Santiago

Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.
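One commonly studied advanced inverter control is a volt-var droop curve, which absorbs reactive power at high voltage and injects it at low voltage. The sketch below uses illustrative deadband, slope, and limit parameters, not the setpoints determined in this report.

```python
# Volt-var droop sketch: reactive power setpoint (per unit of inverter
# rating) as a function of terminal voltage (per unit). The breakpoints
# below are illustrative assumptions, not this report's settings.
def volt_var(v_pu, deadband=0.02, slope=10.0, q_max=0.44):
    """Return Q setpoint in p.u.: negative = absorb, positive = inject."""
    dv = v_pu - 1.0
    if abs(dv) <= deadband:
        return 0.0  # no response inside the deadband around nominal
    # Linear droop beyond the deadband, clipped at the var limit.
    q = -slope * (dv - deadband * (1 if dv > 0 else -1))
    return max(-q_max, min(q_max, q))

# At 1.05 p.u. the inverter absorbs vars; at 0.95 p.u. it injects vars.
```

In a quasi-static time series study like the one described, a curve of this shape would be evaluated at each PV bus at each time step, and the deadband/slope parameters would be the quantities swept in the parametric study.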

More Details