Publications

Results 42201–42300 of 99,299

High Performance Computing: Power Application Programming Interface Specification (V.1.3)

Foulk, James W.; Kelly, Suzanne M.; Grant, Ryan; Olivier, Stephen L.; Levenhagen, Michael; Debonis, David

Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower-level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, however, these features need a portable interface to measurement and control: such an interface would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
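
As a rough illustration of the object/attribute style such a portable interface might take, here is a hypothetical Python sketch. The class and method names (PowerObject, get_attr, set_attr) are invented for illustration and are not the Power API's actual C interface.

```python
# Hypothetical sketch of an object/attribute power interface, loosely modeled
# on the layered, hierarchical design the Power API proposal describes.
# All names here are illustrative, not the specification's API.

class PowerObject:
    """A node in a system hierarchy (platform -> cabinet -> node -> socket)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self._attrs = {}                    # attribute name -> current value
        if parent is not None:
            parent.children.append(self)

    def set_attr(self, attr, value):        # control path (e.g. a power cap)
        self._attrs[attr] = value

    def get_attr(self, attr):               # measurement path
        return self._attrs.get(attr)

    def aggregate(self, attr):
        """Sum a measured attribute over this object's whole subtree."""
        total = self.get_attr(attr) or 0.0
        for child in self.children:
            total += child.aggregate(attr)
        return total

node = PowerObject("node0")
for i in range(2):
    sock = PowerObject(f"socket{i}", parent=node)
    sock.set_attr("power_watts", 95.0)
print(node.aggregate("power_watts"))        # 190.0
```

The point of the hierarchy is that a facility manager and a runtime system can query the same attribute at different levels of the tree without knowing the hardware underneath.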

Partitioning of Ionization and Displacement Kerma in Material Response Functions

Hehr, Brian D.

Calculations of total dose (relatable to heating) and ionizing dose (relatable to electron-hole pair formation) typically rely upon material kerma response functions and an assumption of charged particle equilibrium. Traditionally, kerma functions designed for use with the Sandia ASC NuGET code were created via the HEATR module of the NJOY code system, in which a simplifying monoatomic assumption is made. The purpose of this study is to relax that approximation through the use of binary collision simulation techniques, which can take into account the co-existence of multiple elements in a material. Specifically, the total, ionization, and displacement components of kerma are evaluated in silicon, gallium arsenide, gallium nitride, and indium phosphide using the TRIM and MARLOWE codes, and are compared against the equivalent NJOY-based functions. Based on the results, a binary-collision-based methodology is proposed for extracting the partial kerma components of high-importance materials.
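
The monoatomic treatment this study relaxes can be illustrated with the standard NRT (Robinson) fit to Lindhard partition theory, which estimates the fraction of a primary knock-on atom's energy that goes to displacement damage rather than ionization. This sketch is the textbook monoatomic formula, not the NJOY, TRIM, or MARLOWE implementation.

```python
import math

def nrt_damage_fraction(E_eV, Z, A):
    """Fraction of a PKA's energy E (eV) partitioned to displacement damage
    in a monoatomic target (Z, A), per the Robinson fit to Lindhard theory
    (the NRT standard) -- i.e., the monoatomic assumption the study relaxes."""
    eps = E_eV / (86.931 * Z ** (7.0 / 3.0))             # reduced energy
    k = 0.1337 * Z ** (1.0 / 6.0) * math.sqrt(Z / A)     # electronic stopping factor
    g = eps + 0.40244 * eps ** 0.75 + 3.4008 * eps ** (1.0 / 6.0)
    return 1.0 / (1.0 + k * g)

# Silicon (Z=14, A=28): ionization claims a growing share as PKA energy rises,
# so the displacement (damage) fraction falls with energy.
for E in (1e3, 1e4, 1e5):
    print(f"{E:8.0f} eV -> damage fraction {nrt_damage_fraction(E, 14, 28):.3f}")
```

A multi-element material such as GaAs has no single (Z, A), which is exactly why the binary collision codes are brought in.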

Visualizing Wind Farm Wakes Using SCADA Data

Martin, Shawn; Westergaard, Carsten H.; White, Jonathan R.; Karlson, Benjamin

As wind farms scale to include more and more turbines, questions about turbine wake interactions become increasingly important. Turbine wakes reduce wind speed, and downwind turbines suffer decreased performance. The cumulative effect of the wakes throughout a wind farm will therefore decrease the performance of the entire farm. These interactions are dynamic and complicated, and it is difficult to quantify the overall effect of the wakes. This problem has attracted some attention in terms of computational modelling for siting turbines on new farms, but less attention in terms of empirical studies and performance validation of existing farms. In this report, Supervisory Control and Data Acquisition (SCADA) data from an existing wind farm is analyzed in order to explore methods for documenting wake interactions. Visualization techniques are proposed and used to analyze wakes in a 67-turbine farm. The visualizations are based on directional analysis using power measurements, and can be considered to be normalized capacity factors below rated power. Wind speed measurements are not used in the analysis except for data pre-processing. Four wake effects are observed, including wake deficit, channel speed up, and two potentially new effects, single and multiple shear point speed up. In addition, an attempt is made to quantify wake losses using the same SCADA data. Power losses for the specific wind farm investigated are relatively low, estimated to be in the range of 3-5%. Finally, a simple model based on the wind farm geometrical layout is proposed. Key parameters for the model have been estimated by comparing wake profiles at different ranges and making some ad hoc assumptions. A preliminary comparison of six selected profiles shows excellent agreement with the model. Where discrepancies are observed, reasonable explanations can be found in multi-turbine speedup effects and landscape features, which are yet to be modelled.
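
A minimal sketch of the directional analysis described above: binning one turbine's SCADA power readings by wind direction into normalized capacity factors below rated power. The sector width and sample data here are invented for illustration; this is not the report's actual pipeline.

```python
# Illustrative sketch: per-direction normalized capacity factor for one
# turbine from SCADA (wind_direction_deg, power_kW) samples. A waked
# sector shows up as a dip relative to its neighboring sectors.

def directional_capacity_factor(samples, rated_kw, sector_deg=10.0):
    """samples: iterable of (direction_deg, power_kW) pairs.
    Returns {sector_center_deg: mean power / rated power}."""
    n_sectors = int(round(360.0 / sector_deg))
    sums = [0.0] * n_sectors
    counts = [0] * n_sectors
    for direction, power in samples:
        idx = int(direction % 360.0 // sector_deg) % n_sectors
        sums[idx] += min(power, rated_kw) / rated_kw   # clip at rated power
        counts[idx] += 1
    return {(i + 0.5) * sector_deg: sums[i] / counts[i]
            for i in range(n_sectors) if counts[i] > 0}

# Toy data: the ~45 deg sector is waked, the ~135 deg sector is not.
data = [(45.0, 400.0), (47.0, 420.0), (135.0, 900.0), (137.0, 880.0)]
print(directional_capacity_factor(data, rated_kw=1000.0))
```

Plotting these sector values around the compass for every turbine in the farm is one way to make wake deficits visible without using wind speed measurements.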

Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging

Dunlavy, Daniel M.; Chew, Peter A.

This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts-of-speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and solve the data fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied as a post-processing step.
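
For intuition, a toy projected-gradient fit of the two-way model X ≈ A R Aᵀ under nonnegativity (bound) constraints might look like the following. This is a simplified stand-in for the constrained formulations developed in the report, not its actual solver, and the step size and iteration counts are arbitrary.

```python
import numpy as np

def dedicom_nonneg(X, r, iters=2000, lr=1e-3, seed=0):
    """Fit X ~ A @ R @ A.T by gradient descent on the squared Frobenius
    error, projecting A and R onto the nonnegative orthant each step
    (a crude bound-constrained optimization, for illustration only)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = rng.random((n, r))
    R = rng.random((r, r))
    for _ in range(iters):
        E = A @ R @ A.T - X                        # residual of current model
        gA = 2.0 * (E @ A @ R.T + E.T @ A @ R)     # gradient of ||E||_F^2 in A
        gR = 2.0 * (A.T @ E @ A)                   # gradient of ||E||_F^2 in R
        A = np.maximum(A - lr * gA, 0.0)           # step, then project A >= 0
        R = np.maximum(R - lr * gR, 0.0)           # step, then project R >= 0
    return A, R

# Recover a small synthetic nonnegative model.
rng = np.random.default_rng(1)
A0 = rng.random((6, 2)); R0 = rng.random((2, 2))
X = A0 @ R0 @ A0.T
A, R = dedicom_nonneg(X, r=2)
print(np.linalg.norm(A @ R @ A.T - X) / np.linalg.norm(X))  # relative error
```

An HMM-like model would additionally need linear (e.g. row-stochastic) constraints, which is exactly the direction the report pursues beyond simple bounds.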

Analysis of a Full Scale Blowdown Due to a Mechanical Failure of a Pressure Relief Device in a Natural Gas Vehicle Maintenance Facility

Blaylock, Myra L.; Bozinoski, Radoslav; Ekoto, Isaac W.

A computational fluid dynamics (CFD) analysis of a natural gas vehicle experiencing a mechanical failure of a pressure relief device on a full CNG cylinder was completed to determine the resulting amount and location of flammable gas. The resulting overpressure, were the gas to ignite, was also calculated. This study completes the work discussed in Ekoto et al., which covers other related leak scenarios. We do not determine whether this is a credible release; rather, we show the result of a possible worst-case scenario. The Sandia National Laboratories computational tool Netflow was used to calculate the leak velocity and temperature. The in-house CFD code Fuego was used to determine the flow of the leak into the maintenance garage. A maximum flammable mass of 35 kg collected along the roof of the garage. This mass would result in an overpressure that could do considerable damage if it were to ignite at the time of this maximum volume. It is up to the code committees to decide whether this would be a credible leak, but if it were, preventions should be in place to keep the flammable mass from igniting.
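
A first-order, back-of-the-envelope check on such a release is choked flow of methane through the failed PRD orifice. This is standard isentropic-flow theory, not Netflow, and the orifice diameter and discharge coefficient below are assumed values chosen only to show the order of magnitude.

```python
import math

# Choked-flow estimate of a PRD release rate: ideal-gas, isentropic flow of
# methane from a full CNG cylinder through an orifice. The orifice diameter
# (5 mm) and discharge coefficient (0.9) are assumptions, not study values.
GAMMA = 1.31     # methane ratio of specific heats (approximate)
R_CH4 = 518.3    # methane specific gas constant, J/(kg K)

def choked_mdot(p0_Pa, T0_K, d_orifice_m, Cd=0.9):
    """Sonic (choked) mass flow rate through an orifice, kg/s."""
    area = math.pi * (d_orifice_m / 2.0) ** 2
    term = (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
    return Cd * area * p0_Pa * math.sqrt(GAMMA / (R_CH4 * T0_K)) * term

mdot = choked_mdot(p0_Pa=25e6, T0_K=300.0, d_orifice_m=5e-3)
print(round(mdot, 2), "kg/s")   # order 1 kg/s for these assumed parameters
```

A sustained release at this rate fills a garage with tens of kilograms of gas in under a minute, which is consistent with the scale of flammable mass the CFD analysis reports accumulating at the ceiling.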

Path Network Recovery Using Remote Sensing Data and Geospatial-Temporal Semantic Graphs

Mclendon, William; Brost, Randolph

Remote sensing systems produce large volumes of high-resolution images that are difficult to search. The GeoGraphy (pronounced Geo-Graph-y) framework [2, 20] encodes remote sensing imagery into a geospatial-temporal semantic graph representation to enable high-level semantic searches to be performed. Typically, scene objects such as buildings and trees tend to be shaped like blocks with few holes, but other shapes generated from path networks tend to have a large number of holes and can span a large geographic region due to their connectedness. For example, we have a dataset covering the city of Philadelphia in which there is a single road network node spanning a 6 mile x 8 mile region. Even a simple question such as "find two houses near the same street" might give unexpected results. More generally, nodes arising from networks of paths (roads, sidewalks, trails, etc.) require additional processing to make them useful for searches in GeoGraphy. We have assigned the term Path Network Recovery to this process. Path Network Recovery is a three-step process involving (1) partitioning the network node into segments, (2) repairing broken path segments interrupted by occlusions or sensor noise, and (3) adding path-aware search semantics into GeoQuestions. This report covers the path network recovery process, how it is used, and some example use cases of the current capabilities.
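
Step (1), partitioning a network node into segments, can be sketched on a toy graph: maximal chains of degree-2 vertices become segments, broken at junctions and endpoints. This is an illustrative sketch, not the GeoGraphy implementation.

```python
from collections import defaultdict

def split_into_segments(edges):
    """Partition an undirected path network into maximal chains whose
    interior vertices have degree 2; junctions (degree != 2) and endpoints
    break the chains. Pure cycles with no junction are skipped here."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    breakpoints = {v for v in adj if len(adj[v]) != 2}
    segments, seen = [], set()
    for start in sorted(breakpoints):
        for nxt in sorted(adj[start]):
            if (start, nxt) in seen:
                continue                       # this chain was already walked
            seen.add((start, nxt))
            seg, prev, cur = [start], start, nxt
            while cur not in breakpoints:
                seg.append(cur)
                (step,) = adj[cur] - {prev}    # unique continuation at degree 2
                prev, cur = cur, step
                seen.add((prev, cur))
            seg.append(cur)
            seen.add((cur, prev))              # block re-entry from the far end
            segments.append(seg)
    return segments

# A T-junction: node 2 joins three chains, so three segments come out.
print(split_into_segments([(0, 1), (1, 2), (2, 3), (2, 4)]))
```

On a real road-network node the vertices would be pixel or polygon junctions rather than small integers, but the decomposition logic is the same.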

Digital droplet multiple displacement amplification (DDMDA) for whole genome sequencing of limited DNA samples

PLoS ONE

Meagher, Robert M.; Rhee, Minsoung R.; Light, Yooli K.; Singh, Anup K.

Multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram quantities of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent to ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and more than 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.
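
The ~3/1000-of-a-genome-per-droplet figure can be reproduced with a short Poisson-loading calculation. The droplet volume below (0.15 nL, within the "sub-nanoliter" range stated above) is an assumed value; the genome size and base-pair molar mass are standard approximations.

```python
import math

BP_ECOLI = 4.64e6        # E. coli genome length in base pairs (approximate)
G_PER_MOL_BP = 650.0     # average molar mass of one DNA base pair, g/mol
AVOGADRO = 6.022e23

genome_fg = BP_ECOLI * G_PER_MOL_BP / AVOGADRO * 1e15   # ~5 fg per genome
conc_fg_per_ul = 0.1 * 1e3                              # 0.1 pg/uL in fg/uL
droplet_ul = 0.15e-3                                    # assumed 0.15 nL droplet

lam = conc_fg_per_ul * droplet_ul / genome_fg   # mean genome equivalents/droplet
p_occupied = 1.0 - math.exp(-lam)               # Poisson P(droplet has template)
print(round(lam, 4))                            # ~ 0.003, i.e. ~3/1000 genome
```

At this loading, almost every occupied droplet holds fragments from a single genome-equivalent, which is the mechanism behind the reduced competition and bias described above.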

Aerosol detection efficiency in inductively coupled plasma mass spectrometry

Spectrochimica Acta - Part B Atomic Spectroscopy

Hubbard, Joshua A.; Zigmond, Joseph

An electrostatic size classification technique was used to segregate particles of known composition prior to being injected into an inductively coupled plasma mass spectrometer (ICP-MS). Size-segregated particles were counted with a condensation nuclei counter as well as sampled with an ICP-MS. By injecting particles of known size, composition, and aerosol concentration into the ICP-MS, order-of-magnitude aerosol detection efficiencies were calculated, and the particle size dependencies for volatile and refractory species were quantified. Similar to laser ablation ICP-MS, aerosol detection efficiency was defined as the rate at which atoms were detected in the ICP-MS normalized by the rate at which atoms were injected in the form of particles. This method adds valuable insight into the development of technologies like laser ablation ICP-MS, where aerosol particles (of relatively unknown size and gas concentration) are generated during ablation and then transported into the plasma of an ICP-MS. In this study, we characterized aerosol detection efficiencies of the volatile species gold and silver along with the refractory species aluminum oxide, cerium oxide, and yttrium oxide. Aerosols were generated with electrical mobility diameters ranging from 100 to 1000 nm. In general, it was observed that refractory species had lower aerosol detection efficiencies than volatile species, and there were strong dependencies on particle size and plasma torch residence time. Volatile species showed a distinct transition point at which aerosol detection efficiency began decreasing with increasing particle size. This critical diameter indicated the largest particle size for which complete particle detection should be expected and agreed with theories published in other works. Aerosol detection efficiencies also displayed power-law dependencies on particle size. Aerosol detection efficiencies ranged from 10⁻⁵ to 10⁻¹¹. Free molecular heat and mass transfer theory was applied, but evaporative phenomena were not sufficient to explain the dependence of aerosol detection efficiency on particle diameter. Additional work is needed to correlate experimental data with theory for metal oxides, where thermodynamic property data are sparse relative to pure elements. Lastly, when matrix effects and the diffusion of ions inside the plasma were considered, mass loading was concluded to have had an effect on the dependence of detection efficiency on particle diameter.
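
A power-law dependence of detection efficiency on particle size can be extracted from efficiency-versus-diameter data by a least-squares fit in log-log space. The sketch below uses synthetic data, not the study's measurements; the exponent of -2 is invented for the demonstration.

```python
import math

def power_law_fit(diam_nm, eff):
    """Least-squares fit of eff ~ c * d^b, done linearly in log-log space.
    Returns (c, b)."""
    xs = [math.log(d) for d in diam_nm]
    ys = [math.log(e) for e in eff]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))      # slope = power-law exponent
    c = math.exp(my - b * mx)                   # intercept -> prefactor
    return c, b

# Synthetic data following eff = 1e-3 * d^-2 exactly.
diams = [100, 200, 500, 1000]
effs = [1e-3 * d ** -2 for d in diams]
c, b = power_law_fit(diams, effs)
print(round(b, 3))   # -2.0
```

With real measurements, the fitted exponent would differ between volatile and refractory species, and a break in the fitted line would locate the critical diameter described above.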
