Publications

Results 52201–52400 of 99,299

Kokkos: Enabling manycore performance portability through polymorphic memory access patterns

Journal of Parallel and Distributed Computing

Trott, Christian R.

The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos’ abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.

Computer Science Research Institute (CSRI) Summer Proceedings 2013

Rajamanickam, Sivasankaran; Parks, Michael L.; Collis, Samuel S.

The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia National Laboratories for focused collaborative research on computer science, computational science, and mathematics problems that are critical to the mission of the laboratories, the Department of Energy, and the United States. The CSRI provides a mechanism by which university researchers learn about and impact national- and global-scale problems while simultaneously bringing new ideas from the academic research community to bear on these important problems. A key component of CSRI programs over the last decade has been an active and productive summer program where students from around the country conduct internships at CSRI. Each student is paired with a Sandia staff member who serves as technical advisor and mentor. The goals of the summer program are to expose the students to research in mathematical and computer sciences at Sandia and to conduct a meaningful and impactful summer research project with their Sandia mentor. Every effort is made to align summer projects with the student's research objectives, and all work is coordinated with the ongoing research activities of the Sandia mentor in alignment with Sandia technical thrusts. For the 2013 CSRI Proceedings, research articles have been organized into the following broad technical focus areas, which are well aligned with Sandia's strategic thrusts in computer and information sciences: Computational Mathematics and Algorithms; Combinatorial Algorithms and Visualization; Advanced Architectures and Systems Software; and Computational Applications.

Extracting hidden messages in steganographic images

Digital Investigation

Quach, Tu T.

The eventual goal of steganalytic forensics is to extract the hidden messages embedded in steganographic images. A promising technique that partially addresses this problem is steganographic payload location, an approach that reveals the message bits, but not their logical order. It works by finding modified pixels, or residuals, that are an artifact of the embedding process. This technique is successful against simple least-significant-bit steganography and group-parity steganography. The actual messages, however, remain hidden, as no logical order can be inferred from the located payload. This paper establishes an important result addressing this shortcoming: we show that the expected mean residuals contain enough information to logically order the located payload, provided that the size of the payload in each stego image is not fixed. The located payload can be ordered as prescribed by the mean residuals to obtain the hidden messages without knowledge of the embedding key, exposing the vulnerability of these embedding algorithms. We provide experimental results to support our analysis.
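The location step the abstract builds on can be illustrated with a toy simulation. The sketch below is not the paper's method: it assumes known covers and a fixed, hypothetical embedding path (`PAYLOAD_POS`), and shows only why accumulated residuals expose payload positions; the paper's contribution, ordering the located bits from mean residuals, is not reproduced here.

```python
import random

random.seed(1)

N_PIXELS = 64
PAYLOAD_POS = [5, 12, 23, 40, 57]   # hypothetical embedding path (assumption)

def embed(cover, bits):
    """LSB replacement at the fixed payload positions."""
    stego = cover[:]
    for pos, b in zip(PAYLOAD_POS, bits):
        stego[pos] = (stego[pos] & ~1) | b
    return stego

# Accumulate residuals (pixels whose LSB changed) over many cover/stego pairs.
TRIALS = 2000
counts = [0] * N_PIXELS
for _ in range(TRIALS):
    cover = [random.randrange(256) for _ in range(N_PIXELS)]
    bits = [random.randrange(2) for _ in PAYLOAD_POS]
    stego = embed(cover, bits)
    for i in range(N_PIXELS):
        counts[i] += (cover[i] ^ stego[i]) & 1

# Payload pixels are modified about half the time; the rest are never touched.
located = [i for i, c in enumerate(counts) if c / TRIALS > 0.25]
print(located)   # recovers the payload positions
```

In practice the cover images are unknown, so residual estimation is statistical rather than exact; the toy's point is only that the location signal accumulates across many stego images.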

Data free inference with processed data products

Statistics and Computing

Najm, Habib N.; Chowdhary, Kenny

Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
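A minimal sketch of the pipeline the abstract describes, under strong simplifying assumptions: a location model y = theta + noise with a flat prior, hypothetical reported statistics (`YBAR`, `S`, `N`), and exact moment matching in place of the paper's maximum-entropy and approximate-Bayesian-computation machinery.

```python
import random
import statistics

random.seed(0)

# Hypothetical reported summary statistics for the unavailable data (assumption).
YBAR, S, N = 2.0, 0.5, 10

def consistent_dataset():
    """Draw a dataset, then rescale it to match the reported mean and std exactly."""
    y = [random.gauss(0.0, 1.0) for _ in range(N)]
    m, sd = statistics.mean(y), statistics.stdev(y)
    return [YBAR + S * (v - m) / sd for v in y]

# Model y_i = theta + noise with a flat prior: each consistent dataset yields a
# Gaussian posterior centered at its sample mean. Pool the posterior centers.
post_means = [statistics.mean(consistent_dataset()) for _ in range(200)]
pooled = statistics.mean(post_means)
print(round(pooled, 6))
```

Because each generated dataset matches the reported mean exactly, the pooled posterior center reproduces it; in the paper's setting the generated datasets vary and the pooled density carries the propagated uncertainty.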

Supercritical Carbon Dioxide Brayton Cycle Energy Conversion Research and Development Program

Bonano, Evaristo J.; Tillman, Jack; Meacham, Paul

A closed Brayton cycle recirculates the working fluid, and the turbine exhaust is used in a recuperating heat exchanger to heat the turbine feed. A "supercritical cycle" is a closed Brayton cycle in which the working fluid, such as supercritical carbon dioxide (s-CO2), is maintained near the critical point during the compression phase of the cycle. The key property of the fluid near its critical point is its higher gas density, closer to that of a liquid than of a gas, allowing the pumping power in the compressor to be significantly reduced, which results in thermal efficiency that is significantly improved over the efficiency attainable in an ideal-gas Brayton cycle. Another advantage of using a supercritical cycle is that the overall footprint of the power-conversion system can be significantly reduced, as compared to a steam-Rankine cycle of the same power output, due to the high pressure in the system and the resulting low volumetric flow rate. This allows the heat-rejection heat exchanger and turbine to be orders of magnitude smaller than for steam-Rankine systems of similar power output. Other potential advantages are the reduced use of water, due not only to the increased efficiency but also to the fact that the heat-rejection temperature is significantly higher than for steam-Rankine systems, allowing for significant heat rejection directly to air. In 2006, Sandia National Laboratories (SNL), recognizing these potentially significant advantages of a higher-efficiency power cycle, used internal funds to establish a testing capability and began partnering with the U.S. Department of Energy Office of Nuclear Energy to develop a laboratory-scale test assembly to show the viability of the underlying science and demonstrate system performance. Since that time, SNL has generated over 100 kW-hours of energy, verified cycle performance, and developed cycle controls and maintenance procedures.
The test assembly has successfully operated in different configurations (simple Brayton, waste heat cycle, and recompression) and tested additives to the s-CO2 working fluid. However, challenges remain to confirm viability of existing components and suitability of materials, demonstrate that theoretical efficiencies are achievable, and integrate and scale up existing technologies to be suitable for a range of applications.
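The efficiency argument above can be made concrete with round, illustrative numbers (assumptions, not measured cycle data): holding turbine work and heat input fixed, cutting the compression work raises the net-work thermal efficiency eta = (w_turbine - w_compressor) / q_in.

```python
# Illustrative energies per unit mass flow, kJ/kg (assumed round numbers).
def cycle_efficiency(w_turbine, w_compressor, q_in):
    """Net-work thermal efficiency of a closed Brayton cycle."""
    return (w_turbine - w_compressor) / q_in

# Same turbine work and heat input; only the compression work differs, mimicking
# the liquid-like density of s-CO2 near the critical point.
eta_ideal_gas = cycle_efficiency(w_turbine=500.0, w_compressor=250.0, q_in=700.0)
eta_supercritical = cycle_efficiency(w_turbine=500.0, w_compressor=80.0, q_in=700.0)
print(round(eta_ideal_gas, 2), round(eta_supercritical, 2))
```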

AlGaN composition dependence of the band offsets for epitaxial Gd2O3/AlxGa1-xN (0 ≤ x ≤ 0.67) heterostructures

Applied Physics Letters

Ihlefeld, Jon F.; Brumbach, Michael T.; Allerman, A.A.; Wheeler, David R.; Atcitty, Stanley

Gd2O3 films were prepared on (0001)-oriented AlxGa1-xN (0 ≤ x ≤ 0.67) thin film substrates via reactive molecular-beam epitaxy. X-ray diffraction revealed that these films possessed the cubic bixbyite structure regardless of substrate composition and were all 111-oriented, with in-plane rotations to account for the symmetry difference between the oxide film and nitride epilayer. Valence band offsets were characterized by X-ray photoelectron spectroscopy and were determined to be 0.41 ± 0.02 eV, 0.17 ± 0.02 eV, and 0.06 ± 0.03 eV at the Gd2O3/AlxGa1-xN interfaces for x = 0, 0.28, and 0.67, respectively.

Theoretical and experimental quantification of doubly and singly differential cross sections for electron-induced ionization of isolated tetrahydrofuran molecules

European Physical Journal D

Champion, Christophe; Quinto, Michele A.; Bug, Marion U.; Baek, Woon Y.; Weck, Philippe F.

Electron-induced ionization of the tetrahydrofuran molecule, the commonly used surrogate of the DNA sugar-phosphate backbone, is theoretically described in this study within the first Born approximation. Comparisons between theory and recent experiments are reported in terms of doubly and singly differential cross sections.

Albuquerque Regional Training: The Third Seminar on Surface Metrology for the Americas, May 12-13, 2014

Metrologist: NCSLI Worldwide News

Tran, Sophie M.; Tran, Hy

The Third Seminar on Surface Metrology for the Americas (SSMA) took place in Albuquerque, New Mexico May 12-13, 2014. The conference was at the Marriott Hotel, in the heart of Albuquerque Uptown, within walking distance of many fantastic restaurants. Why surface metrology? Ask Professor Chris Brown of Worcester Polytechnic Institute (WPI), the chair of the first two SSMAs in 2011 and 2012 and the chair of the ASME B46 committee on classification and designation of surface qualities, and Professor Brown responds: “Because surfaces cover everything.”

Experimental investigation of two-phase flow in rock salt

Malama, Bwalya; Howard, Clifford L.

This Test Plan describes procedures for conducting laboratory-scale flow tests on intact, damaged, crushed, and consolidated crushed salt to measure the capillary pressure and relative permeability functions. The primary focus of the tests will be on samples of bedded geologic salt from the WIPP underground. However, the tests described herein are directly applicable to domal salt. Samples being tested will be confined by triaxial stress states ranging from atmospheric pressure up to those approximating lithostatic. Initially these tests will be conducted at room temperature, but testing procedures and equipment will be evaluated to determine their adaptability to conducting similar tests at elevated temperatures.

SDOE 650: System Architecture and Design

George, Colin B.

The proposed system is a test system that verifies the cable's functionality in the expected environments defined in the ES. Verification methods include test, inspect, demonstrate, and analyze. Since we are defining the architecture for a test system, we will focus on the customer expectations and requirements that will be satisfied or verified via testing.

Technique for the estimation of surface temperatures from embedded temperature sensing for rapid, high energy surface deposition

Roberts, Scott A.; Watkins, Tyson R.; Schunk, Peter R.

Temperature histories on the surface of a body that has been subjected to a rapid, high-energy surface deposition process can be difficult to determine, especially if it is impossible to directly observe the surface or attach a temperature sensor to it. In this report, we explore two methods for estimating the temperature history of the surface through the use of a sensor embedded within the body very near to the surface. First, the maximum sensor temperature is directly correlated with the peak surface temperature. However, it is observed that the sensor data is both delayed in time and greatly attenuated in magnitude, making this approach unfeasible. Secondly, we propose an algorithm that involves fitting the solution to a one-dimensional instantaneous energy deposition problem to both the sensor data and to the results of a one-dimensional CVFEM code. This algorithm is shown to be able to estimate the surface temperature to within 20 °C.
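A sketch of the second approach under stated assumptions: the model below is the classical 1-D instantaneous surface-source form for a semi-infinite solid, T(x,t) = A/sqrt(t) * exp(-x^2/(4*alpha*t)), with a hypothetical diffusivity and sensor depth, noise-free synthetic sensor data, and a closed-form linear least-squares fit for the amplitude A. The report's CVFEM-based fitting is not reproduced here.

```python
import math

ALPHA = 1.0e-5    # thermal diffusivity, m^2/s (assumed value)
DEPTH = 1.0e-3    # embedded-sensor depth below the surface, m (assumed value)

def temp(x, t, amp):
    """1-D instantaneous surface-source form: T = amp/sqrt(t) * exp(-x^2/(4*alpha*t))."""
    return amp / math.sqrt(t) * math.exp(-x * x / (4.0 * ALPHA * t))

# Synthetic, noise-free "sensor" history generated from a known amplitude.
AMP_TRUE = 3.0
times = [0.01 * k for k in range(1, 50)]
sensor = [temp(DEPTH, t, AMP_TRUE) for t in times]

# The model is linear in amp, so least squares has the closed form
# amp = sum(d*b) / sum(b*b), with b the unit-amplitude model at the sensor depth.
basis = [temp(DEPTH, t, 1.0) for t in times]
amp_fit = sum(d * b for d, b in zip(sensor, basis)) / sum(b * b for b in basis)

# Reconstruct the surface (x = 0) temperature history from the fitted amplitude.
surface_at_1s = temp(0.0, 1.0, amp_fit)
print(round(amp_fit, 3), round(surface_at_1s, 3))
```

With noise-free data the fit recovers the amplitude exactly; the report's problem is harder because the real sensor signal is delayed, attenuated, and noisy.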

Sandia National Laboratories Small-Scale Sensitivity Testing (SSST) Report: Calcium Nitrate Mixtures with Various Fuels

Phillips, Jason J.

Based upon the presented sensitivity data for the examined calcium nitrate mixtures using sugar and sawdust, contact handling/mixing of these materials does not present hazards greater than those occurring during handling of dry PETN powder. The aluminized calcium nitrate mixtures present a known ESD fire hazard due to the fine aluminum powder fuel. These mixtures may yet present an ESD explosion hazard, though this has not been investigated at this time. The detonability of these mixtures will be investigated during Phase III testing.

Topology for Statistical Modeling of Petascale Data

Bennett, Janine C.; Pebay, Philippe P.; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Maurice

This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

Guidelines for effective radiation transport for cable SGEMP modeling

Drumm, Clifton R.; Fan, Wesley C.; Turner, C.D.

This report describes experiences gained in performing radiation transport computations with the SCEPTRE radiation transport code for System Generated ElectroMagnetic Pulse (SGEMP) applications. SCEPTRE is a complex code requiring a fairly sophisticated user to run the code effectively, so this report provides guidance for analysts interested in performing these types of calculations. One challenge in modeling coupled photon/electron transport for SGEMP is to provide a spatial mesh that is sufficiently resolved to accurately model surface charge emission and charge deposition near material interfaces. The method that has been most commonly used to date to compute cable SGEMP typically requires a sub-micron mesh size near material interfaces, which may be difficult for meshing software to provide for complex geometries. We present here an alternative method for computing cable SGEMP that appears to substantially relax this requirement. The report also investigates the effect of refining the energy mesh and increasing the order of the angular approximation to provide some guidance on determining reasonable parameters for the energy/angular approximation needed for x-ray environments. Conclusions for γ-ray environments may be quite different and will be treated in a subsequent report. In the course of the energy-mesh refinement studies, a bug in the cross-section generation software was discovered that may cause underprediction of the result by as much as an order of magnitude for the test problem studied here, when the electron energy group widths are much smaller than those for the photons. Results will be presented and compared using cross sections generated before and after the fix. 
We also describe adjoint modeling, which provides the sensitivity of the total charge drive to the source energy and angle of incidence; this is quite useful for comparing the effect of changing the source environment and for determining the most stressing angle of incidence and source energy. This report focuses on cable SGEMP applications, but many of the conclusions will be directly applicable to box Internal ElectroMagnetic Pulse (IEMP) modeling as well.

III-Nitride Nanowire Lasers

Wright, Jeremy B.

In recent years there has been a tremendous interest in nanoscale optoelectronic devices. Among these devices are semiconductor nanowires whose diameters range from 10-100 nm. To date, nanowires have been grown using many semiconducting material systems and have been utilized as light emitting diodes, photodetectors, and solar cells. Nanowires possess a relatively large index contrast relative to their dielectric environment and can be used as lasers. A key figure of merit that allows for nanowire lasing is the relatively high optical confinement factor. In this work, I discuss the optical characterization of three types of III-nitride nanowire laser devices. Two devices were designed to reduce the number of lasing modes to achieve single-mode operation. The third device implements low-group-velocity mode lasing with a photonic crystal constructed of an array of nanowires. Single-mode operation is necessary in any application where high beam quality and single-frequency operation is required. III-nitride nanowire lasers typically operate in a combined multi-longitudinal and multi-transverse mode state. Two schemes are introduced here for controlling the optical modes and achieving single-mode operation. The first method involves reducing the diameter of individual nanowires to the cut-off condition, where only one optical mode propagates in the wire. The second method employs distributed feedback (DFB) to achieve single-mode lasing by placing individual GaN nanowires onto substrates with etched gratings. The nanowire-grating substrate acted as a distributed feedback mirror producing single-mode operation at 370 nm with a mode suppression ratio (MSR) of 17 dB. The usage of lasers for solid-state lighting has the potential to further reduce U.S. lighting energy usage through an increase in emitter efficiency.
Advances in nanowire fabrication, specifically a two-step top-down approach, have allowed for the demonstration of a multi-color array of lasers on a single chip that emit vertically. By tuning the geometrical properties of the individual lasers across the array, each individual nanowire laser produced a different emission wavelength, yielding a near continuum of laser wavelengths. I successfully fabricated an array of emitters spanning a bandwidth of 60 nm on a single chip. This was achieved in the blue-violet using III-nitride photonic crystal nanowire lasers.

Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

Griffith, Daniel; Myrent, Noah J.; Barrett, Natalie C.; Adams, Douglas E.

Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. In the present report, this methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system.
The integration of the health monitoring information and O&M cost versus damage/fault severity information provides the initial steps to identify processes to reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability, revenue, and overall profit.

SCEPTRE 1.4 Quick Start Guide

Drumm, Clifton R.; Bohnhoff, William J.; Fan, Wesley C.; Pautz, Shawn D.; Valdez, Greg D.

This report provides a summary of notes for building and running the Sandia Computational Engine for Particle Transport for Radiation Effects (SCEPTRE) code. SCEPTRE is a general-purpose C++ code for solving the Boltzmann transport equation in serial or parallel using unstructured spatial finite elements, multigroup energy treatment, and a variety of angular treatments including discrete ordinates. Either the first-order form of the Boltzmann equation or one of the second-order forms may be solved. SCEPTRE requires a small number of open-source Third Party Libraries (TPLs) to be available, and example scripts for building these TPLs are provided. The TPLs needed by SCEPTRE are Trilinos, Boost, and NetCDF. SCEPTRE uses an autoconf build system, and a sample configure script is provided. Running the SCEPTRE code requires that the user provide a spatial finite-element mesh in Exodus format and a cross-section library in a format that will be described. SCEPTRE uses an XML-based input, and several examples are provided.

A comparison of parallelization strategies for the Material Point Method

11th World Congress on Computational Mechanics, WCCM 2014, 5th European Conference on Computational Mechanics, ECCM 2014 and 6th European Conference on Computational Fluid Dynamics, ECFD 2014

Ruggirello, Kevin P.; Schumacher, Shane C.

Recently, the Lagrangian Material Point Method (MPM) [1] has been integrated into the Eulerian finite-volume shock physics code CTH [2] at Sandia National Laboratories. CTH has the capabilities of adaptive mesh refinement (AMR), multiple materials, and numerous material models for equation of state, strength, and failure. In order to parallelize the MPM in CTH, two different approaches were tested. The first was a ghost-particle concept, where the MPM particles are mirrored onto neighboring processors in order to correctly assemble the mesh boundary values on the grid. The second approach exchanges the summed mesh values at processor boundaries without the use of ghost particles. Both methods have distinct advantages for parallelization. These parallelization approaches were tested for both strong and weak scaling. This paper will compare the parallel scaling efficiency and memory requirements of both approaches for parallelizing the MPM.
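The second parallelization strategy, exchanging summed mesh values at processor boundaries instead of mirroring ghost particles, can be sketched in a toy 1-D setting (linear shape functions and a two-rank partition are illustrative assumptions; this is not CTH code):

```python
# Toy 1-D sketch: two ranks share grid node 1; each rank scatters only its own
# particles, then the summed values at the shared node are exchanged and added.
grid_nodes = 3
particles = [(0.4, 2.0), (0.9, 1.0), (1.1, 3.0), (1.6, 2.5)]   # (position, mass)

def scatter(parts):
    """Linear shape functions: scatter each particle's mass to its two nearest nodes."""
    local = [0.0] * grid_nodes
    for x, m in parts:
        i = int(x)
        w = x - i
        local[i] += (1.0 - w) * m
        local[i + 1] += w * m
    return local

# Rank 0 owns cell [0, 1); rank 1 owns cell [1, 2). Partition particles by position.
rank0 = scatter([p for p in particles if p[0] < 1.0])
rank1 = scatter([p for p in particles if p[0] >= 1.0])

# Exchange and add the summed values at the shared boundary node.
summed = [a + b for a, b in zip(rank0, rank1)]
reference = scatter(particles)               # single-rank result for comparison
match = all(abs(a - b) < 1e-9 for a, b in zip(summed, reference))
print(match)   # the exchange reproduces the serial assembly
```

The ghost-particle alternative would instead copy the particles near the boundary to the neighboring rank before scattering; both routes assemble the same nodal sums, trading communication volume against bookkeeping.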

Aging Assessment of an Oak Ridge National Laboratory High Flux Isotope Reactor (HFIR) Service Cable

Bernstein, Robert; Celina, Mathew C.; Redline, Erica; Von White II, Gregory

Nuclear energy is one industry where aging of safety-related materials and components is of great concern. Many U.S. nuclear power plants are approaching, or have already exceeded, 40 years of age. Analysis comparing the cost of new plant construction versus long-term operation under extended plant licensing through 60 years strongly favors the latter option. To ensure the safe, reliable, and cost-effective long-term operation of nuclear power plants, many systems, structures, and components must be evaluated. Furthermore, as new analytical techniques and testing approaches are developed, it is imperative that we also validate, and if necessary, improve upon the previously employed Institute of Electrical and Electronic Engineers (IEEE) qualification standards originally written in 1974. Fortunately, this daunting task has global support, particularly in light of the new social and political climate surrounding nuclear energy in a post-Fukushima era.

Dish Stirling High Performance Thermal Storage FY14Q3 Quad Chart

Andraka, Charles E.

The project goals are: demonstrate the feasibility of significant thermal storage for dish Stirling systems to leverage their existing high performance to greater capacity; demonstrate key components of a latent storage and transport system enabling on-dish storage with low exergy losses; and provide a technology path to a 25 kWe system with 6 hours of storage.

Thermal boundary conductance accumulation and spectral phonon transmission across interfaces: experimental measurements across metal/native oxide/Si and metal/sapphire interfaces

Nature Communications

Ihlefeld, Jon F.; Brown-Shaklee, Harlan J.; Cheaito, Ramez; Gaskins, John T.; Caplan, Matthew E.; Donovan, Brian F.; Foley, Brian M.; Giri, Ashutosh; Duda, John C.; Szwejkowski, Chester J.; Constantin, Costel; Hopkins, Patrick E.

Abstract not provided.

Choreographer Pre-Testing Code Analysis and Operational Testing

Fritz, David J.; Harrison, Christopher B.; Perr, C.W.; Hurd, Steven A.

Choreographer is a "moving target defense system", designed to protect against attacks aimed at IP addresses without corresponding domain name system (DNS) lookups. It coordinates actions between a DNS server and a Network Address Translation (NAT) device to regularly change which publicly available IP addresses' traffic will be routed to the protected device versus routed to a honeypot. More details about how Choreographer operates can be found in Section 2: Introducing Choreographer. Operational considerations for the successful deployment of Choreographer can be found in Section 3. The Testing & Evaluation (T&E) for Choreographer involved three phases: pre-testing, code analysis, and operational testing. Pre-testing, described in Section 4, involved installing and configuring an instance of Choreographer and verifying it would operate as expected for a simple use case. Our findings were that it was simple and straightforward to prepare a system for a Choreographer installation as well as to configure Choreographer to work in a representative environment. Code analysis, described in Section 5, consisted of running a static code analyzer (HP Fortify) and conducting dynamic analysis tests using the Valgrind instrumentation framework. Choreographer performed well, such that only a few errors that might possibly be problematic in a given operating situation were identified. Operational testing, described in Section 6, involved operating Choreographer in a representative environment created through Emulytics™. Depending upon the amount of server resources dedicated to Choreographer vis-à-vis the amount of client traffic handled, Choreographer had varying degrees of operational success. In an environment with a poorly resourced Choreographer server and as few as 50-100 clients, Choreographer failed to properly route traffic over half the time. Yet, with a well-resourced server, Choreographer handled over 1000 clients without misrouting.
Choreographer demonstrated sensitivity to low-latency connections as well as high volumes of traffic. In addition, depending upon the frequency of new connection requests and the size of the address range that Choreographer has to work with, it is possible for all benefits of Choreographer to be negated by its need to allow DNS servers rather than the end client to make DNS requests. Conclusions and recommendations, listed in Section 7, address the need to understand the specific use case where Choreographer would be deployed, to assess whether there would be problems resulting from the operational considerations described in Section 3 or performance concerns arising from the results of operational testing in Section 6. Deployed in an appropriate architecture with sufficiently light traffic volumes and a well-provisioned server, it is quite likely that Choreographer would perform satisfactorily. We therefore recommend further detailed testing, potentially including Red Team testing, once a specific use case is identified.
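The DNS/NAT coordination idea described above can be sketched as a toy model (the class, method names, and RFC 5737 example addresses are illustrative assumptions, not Choreographer's actual interfaces). It also shows the failure mode the report observes: a client holding a stale DNS answer lands on the honeypot after a rotation.

```python
import random

random.seed(7)

PUBLIC_IPS = ["192.0.2.%d" % i for i in range(1, 6)]   # RFC 5737 example range

class ChoreographerLike:
    """Toy model of the DNS/NAT coordination idea (assumption, not the real API)."""
    def __init__(self):
        self.live_ip = random.choice(PUBLIC_IPS)

    def rotate(self):
        # NAT side: pick a new public IP to route to the protected host.
        self.live_ip = random.choice([ip for ip in PUBLIC_IPS if ip != self.live_ip])

    def dns_lookup(self):
        # DNS side: a client resolving the name right now gets the live IP.
        return self.live_ip

    def route(self, dst_ip):
        # NAT side: only traffic to the current live IP reaches the protected device.
        return "protected" if dst_ip == self.live_ip else "honeypot"

c = ChoreographerLike()
fresh = c.route(c.dns_lookup())   # client resolves, then connects immediately
stale_answer = c.dns_lookup()
c.rotate()                        # addresses rotate before the next connection
stale = c.route(stale_answer)     # a cached answer now lands on the honeypot
print(fresh, stale)
```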

Proposing an Abstracted Interface and Protocol for Computer Systems

Resnick, David R.; Ignatowski, Mike

While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU-to-CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.

Evaluation of Glare at the Ivanpah Solar Electric Generating System

Ho, Clifford K.; Sims, Cianan; Christian, Josh

The Ivanpah Solar Electric Generating System (ISEGS), located on I-15 about 40 miles (60 km) south of Las Vegas, NV, consists of three power towers 459 ft (140 m) tall and over 170,000 reflective heliostats with a rated capacity of 390 MW. Reports of glare from the plant have been submitted by pilots and air traffic controllers and recorded by the Aviation Safety Reporting System and the California Energy Commission since 2013. Aerial and ground-based surveys of the glare were conducted in April 2014 to identify the cause and to quantify the irradiance and potential ocular impacts of the glare. Results showed that the intense glare viewed from the airspace above ISEGS was caused by heliostats in standby mode that were aimed to the side of the receiver. Evaluation of the glare showed that the retinal irradiance and subtended source angle of the glare from the heliostats in standby were sufficient to cause significant ocular impact (potential for after-image) up to a distance of ~6 miles (10 km), but the values were below the threshold for permanent eye damage. Glare from the receivers had a low potential for after-image at all ground-based monitoring locations outside of the site boundaries. A Letter to Airmen has been issued by the Federal Aviation Administration to notify pilots of the potential glare hazards. Additional measures to mitigate the potential impacts of glare from ISEGS are also presented and discussed.
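The subtended source angle used in the ocular-impact evaluation can be illustrated with a small helper (the 10 m source size and the viewing distances below are hypothetical, not ISEGS survey values; the retinal-irradiance thresholds themselves are not reproduced here):

```python
import math

def subtended_angle_mrad(source_diameter_m, distance_m):
    """Angle subtended by a glare source at the observer, in milliradians."""
    return 1000.0 * 2.0 * math.atan(source_diameter_m / (2.0 * distance_m))

# Hypothetical numbers (assumptions): a 10 m glare patch viewed from 1 km and
# from 10 km. In the small-angle regime the subtended angle, and with it the
# retinal image size, shrinks linearly with distance.
near = subtended_angle_mrad(10.0, 1000.0)
far = subtended_angle_mrad(10.0, 10000.0)
print(round(near, 2), round(far, 2))
```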

Memorandum of Understanding

Siple, Bud H.

A Memorandum of Understanding establishes a clear understanding of how an agreement is going to be implemented. The Memorandum of Understanding allows all involved to understand that they are agreeing to the same thing, with the terms clearly identified. It also includes the clear distinction of functions and the level of involvement of the agencies involved. Specifically, a Memorandum of Understanding gives everyone involved in the agreement a chance to see on paper exactly what they have agreed to.

Examples Performance Testing Templates

Siple, Bud H.

The purpose of this Performance Testing Program Plan is to identify the process and phased approach that will be implemented at Site XYZ. The testing program at Site XYZ is specifically designed to evaluate the effectiveness of the systems employed at the site. This plan defines tasks to be accomplished to ensure that performance testing is conducted as effectively and efficiently as possible.


Experimental Datasets for Release to The Technology Cooperation Program CP 5-2-2012 from Sandia National Laboratories

Arunajatesan, Srinivasan

The datasets being released consist of cavity configurations for which measurements were made in the Sandia Trisonic Wind Tunnel (TWT) facility. The cavities were mounted on the walls (ceiling/floor) of the wind tunnel, with the approach-flow boundary layer thickness dictated by the run length from the settling chamber of the tunnel. No measurements of the boundary layer for the different cases were made explicitly. However, prior measurements of the boundary layer have been made, and simulations of the tunnel from the settling chamber onward have shown that this method yields the correct boundary layer thickness at the leading edge of the cavity. The measurements focused on the cavity flow field itself and the cavity wall pressures. For each of the cases, the stagnation conditions are prescribed in order to obtain the correct inflow conditions upstream of the cavity. The wind tunnel contours have been approved for public release and will also be made available.
