A streamline upwind Petrov-Galerkin finite element method is presented for a reacting mixture of thermally perfect gases in chemical non-equilibrium. Details of the stabilization scheme and nonlinear solution are presented. The authors have independently implemented the proposed algorithm in two separate codes, for both single-temperature and two-temperature models. Example problems involving a cylinder in Mach 20 crossflow, as well as a three-dimensional blunt nosetip, are shown and compared to results from established codes.
The peridynamic theory of mechanics attempts to unite the mathematical modeling of continuous media, cracks, and particles within a single framework. It does this by replacing the partial differential equations of the classical theory of solid mechanics with integral or integro-differential equations. These equations are based on a model of internal forces within a body in which material points interact with each other directly over finite distances. The classical theory of solid mechanics is based on the assumption of a continuous distribution of mass within a body. It further assumes that all internal forces are contact forces that act across zero distance. The mathematical description of a solid that follows from these assumptions relies on partial differential equations that additionally assume sufficient smoothness of the deformation for the PDEs to make sense in either their strong or weak forms. The classical theory has been demonstrated to provide a good approximation to the response of real materials down to small length scales, particularly in single crystals, provided these assumptions are met. Nevertheless, technology increasingly involves the design and fabrication of devices at smaller and smaller length scales, even interatomic dimensions. Therefore, it is worthwhile to investigate whether the classical theory can be extended to permit relaxed assumptions of continuity, to include the modeling of discrete particles such as atoms, and to allow the explicit modeling of nonlocal forces that are known to strongly influence the behavior of real materials.
This paper applies a pragmatic approach to validation of a fire-dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. The validation problem involves experimental and predicted steady-state temperatures of a calorimeter in a wind-driven hydrocarbon pool fire. Significant aleatory and epistemic sources of uncertainty in the experiments and simulations exist and are transformed to a common basis of interval uncertainty for aggregation and comparison purposes. The validation comparison of experimental and simulation results, and the corresponding criteria and procedures for model substantiation or refutation, take place in "real space" as opposed to "transform space" where various transform measures of discrepancy between experiment and simulation results are calculated and assessed. The versatile model validation approach handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties (discrete and continuous) from multiple correlated and uncorrelated source types, including 1) experimental variability from multiple repeat experiments, 2) uncertainty of experimental inputs, 3) experimental output measurement uncertainties, 4) uncertainties that arise in data processing and inference from raw simulation and experiment outputs, 5) parameter and model-form uncertainties intrinsic to the model, and 6) numerical solution uncertainty from model discretization effects.
In this work we describe a new parallel lattice (PL) filter topology for electrically coupled AlN microresonator-based filters. While 4th-order, narrow percent-bandwidth (0.03%) parallel filters based on high-impedance (11 kΩ) resonators have been previously demonstrated at 20 MHz [1], in this work we realize low insertion loss PL filters at 400-500 MHz with termination impedances from 50 to 150 Ω and much wider percent bandwidths, up to 5.3%. Obtaining high percent bandwidth is a major challenge in microresonator-based filters given the relatively low piezoelectric coupling coefficients, kt², when compared to bulk (BAW) and surface (SAW) acoustic wave filter materials.
We are concerned with transportation accidents and the subsequent fire. Progress is currently being made on a unique capability to model these very challenging events. We have identified Smoothed Particle Hydrodynamics (SPH) as a good method to employ for the impact dynamics of the fluid. SPH is capable of modeling viscous and inertial effects for these impacts for short times. We have also identified our fire code's Lagrangian/Eulerian (L/E) particle capability as an excellent method for fuel transport and spray modeling. This fire code can also model the subsequent fire, including details of the heat and mass transfer necessary for thermal environment predictions. These two methods (SPH and L/E) employ disparate but complementary length scales and timescales for the calculation, and are suited for coupling given adequate attention to relevant details. Length and timescale interactions are important considerations when joining the two capabilities. Coupling methodologies have been shown to be important to the model accuracy. Focusing on the transfer methods and spatial resolution, a notional impact problem is examined. The outcome helps to quantify the importance of various methods and to better understand the behavior of these modeling methods in a representative environment.
Motivated by the needs of seismic inversion and building on our prior experience with fluid-dynamics systems, we present a high-order discontinuous Galerkin (DG) Runge-Kutta method applied to isotropic, linearized elastodynamics. Unlike other DG methods recently presented in the literature, our method allows for inhomogeneous material variations within each element, which enables representation of realistic earth models — a feature critical for future use in seismic inversion. Likewise, our method supports curved elements and hybrid meshes that include both simplicial and nonsimplicial elements. We demonstrate the capabilities of this method through a series of numerical experiments, including hybrid mesh discretizations of the Marmousi2 model as well as a modified Marmousi2 model with an oscillatory ocean bottom that is exactly captured by our discretization.
Archimedes’ genius was derived in no small part from his ability to effortlessly interpret problems in both geometric and mechanical ways. We explore, in a modern context, the application of mechanical reasoning to geometric problem solving. The general form of this inherently Archimedean approach is described, and its specific use is demonstrated with regard to the problem of finding the geodesics of a surface. Archimedes’ approach to thinking about problems may be his greatest contribution, and in that spirit we present some work related to teaching Archimedes’ ideas at an elementary level. The aim is to cultivate, in young students with nascent mechanical reasoning skills, the same sort of creative problem solving employed by Archimedes.
To study the rebound of a sphere colliding against a flat wall, a test setup was developed in which the sphere is suspended with strings as a pendulum, elevated, and gravity-released to impact the wall. The motion of the sphere was recorded with a high-speed camera and traced with an image-processing program. From the speed of the sphere before and after each collision, the coefficient of restitution was computed and shown to be a function of impact speed, as predicted analytically.
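As a hedged illustration of the data reduction described above, the following Python sketch computes a coefficient of restitution from tracked sphere positions before and after impact; the frame rate, tracking data, and finite-difference speed estimate are hypothetical and are not the authors' image-processing code.

    import numpy as np

    def speed_from_track(positions, fps):
        # Estimate speed (m/s) from successive tracked positions (m) at a known frame rate.
        positions = np.asarray(positions, dtype=float)
        return float(np.abs(np.diff(positions)).mean() * fps)

    def coefficient_of_restitution(pre_impact_positions, post_impact_positions, fps):
        # e = rebound speed / incoming speed, using the frames nearest the impact.
        return speed_from_track(post_impact_positions, fps) / speed_from_track(pre_impact_positions, fps)

    # Hypothetical tracking data (meters) at 2000 frames per second.
    pre = [0.1000, 0.1015, 0.1030, 0.1045]   # sphere approaching the wall
    post = [0.1045, 0.1033, 0.1021, 0.1009]  # sphere rebounding
    print(coefficient_of_restitution(pre, post, fps=2000.0))  # ~0.8 for this toy data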
The working principles of induced voltage alteration (IVA) techniques and their major developments in the areas of analysis hardware, electrical biasing, detection advances, resolution improvements, and future possibilities are discussed. The IVA technique uses either a scanning electron microscope's (SEM) electron beam or a scanning optical microscope's (SOM) laser beam as the external stimulus. Other IVA techniques were developed using different localized stimuli with the same sensitive biasing approach. The IVA techniques take advantage of the strong signal response of CMOS devices when operated as current-to-voltage converters. To improve the biasing approach, externally induced voltage alteration (XIVA) was introduced, in which an ac choke circuit acts as a constant-voltage source. Synchronization with device operation also allows specific vectors to be analyzed using local photocurrent and thermal stimulus.
This paper presents object-oriented design patterns in the context of object construction and destruction. The examples leverage the newly supported object-oriented features of Fortran 2003. We describe from the client perspective two patterns articulated by Gamma et al. [1]: ABSTRACT FACTORY and FACTORY METHOD. We also describe from the implementation perspective one new pattern: the OBJECT pattern. We apply the Gamma et al. patterns to solve a partial differential equation, and we discuss applying the new pattern to a quantum vortex dynamics code. Finally, we address consequences and describe the use of the patterns in two open-source software projects: ForTrilinos and Morfeus.
Sandia collects environmental data to determine and report the impact of existing SNL/NM operations on the environment. Sandia’s environmental programs include air and water quality, environmental monitoring and surveillance, and activities associated with the National Environmental Policy Act (NEPA). Sandia’s objective is to maintain compliance with federal, state, and local requirements, and to affect the corporate culture so that environmental compliance practices continue to be an integral part of operations.
Quasi-static experimental techniques for fracture toughness are well developed, and the end-notched flexure (ENF) technique has become a standard method for determining mode-II fracture toughness. The ENF technique has also been extended to high-rate testing using the split Hopkinson pressure bar (SHPB) technique for dynamic fracture characterization of composites. In general, the loading condition in dynamic characterization needs to be carefully verified to confirm that forces are balanced if the same equations are used to calculate the fracture toughness. In this study, we employed highly sensitive polyvinylidene fluoride (PVDF) force transducers to measure the forces on the front wedge and back spans of the three-point bending setup. High-rate digital image correlation (DIC) was also conducted to investigate stress wave propagation during the dynamic loading. After careful calibration, the PVDF film transducer was cut into small square pieces that were mounted on the front loading wedge and the back supporting spans. Outputs from the three PVDF transducers, as well as the strain gage on the transmission bar, were recorded. The DIC results show that the transverse wave front propagates from the wedge towards the supports. If the crack starts to propagate before force balance is reached, numerical simulation, such as finite element analysis, should be used together with the dynamic experimental data to determine the mode-II fracture toughness.
In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring, including: 1. estimating rotations and moments at connection points; 2. providing substructure Ritz vectors that adequately span the connection motion space; and 3. handling multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: 1. designating response sensor locations; 2. designating force input locations; 3. physical design of the transmission simulator; and 4. modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.
This abstract explores the potential advantages of discontinuous Galerkin (DG) methods for the time-domain inversion of media parameters within the earth’s interior. In particular, DG methods enable local polynomial refinement to better capture localized geological features within an area of interest while also allowing the use of unstructured meshes that can accurately capture discontinuous material interfaces. This abstract describes our initial findings when using DG methods combined with Runge-Kutta time integration and adjoint-based optimization algorithms for full-waveform inversion. Our initial results suggest that DG methods allow great flexibility in matching the media characteristics (faults, ocean bottom and salt structures) while also providing higher fidelity representations in target regions.
The interaction of light with nanostructured metal leads to a number of fascinating phenomena, including plasmon oscillations that can be harnessed for a variety of cutting-edge applications. Plasmon oscillation modes are the collective oscillation of free electrons in metals under incident light. Previously, surface plasmon modes have been used for communication, sensing, nonlinear optics and novel physics studies. In this report, we describe the scientific research completed on metal-dielectric plasmonic films accomplished during a multi-year Purdue Excellence in Science and Engineering Graduate Fellowship sponsored by Sandia National Laboratories. A variety of plasmonic structures, from random 2D metal-dielectric films to 3D composite metal-dielectric films, have been studied in this research for applications such as surface-enhanced Raman sensing, tunable superlenses with resolutions beyond the diffraction limit, enhanced molecular absorption, infrared obscurants, and other real-world applications.
As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets or staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole and to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.
Sandia National Laboratories Wind Technology Department is investigating the feasibility of using local wind resources to meet the requirements of Executive Order 13423 and DOE Order 430.2B. These Orders, along with the DOE TEAM initiative, identify the use of on-site renewable energy projects to meet specified renewable energy goals over the next 3 to 5 years. A temporary 30-meter meteorological tower was used to perform interim monitoring while the National Environmental Policy Act (NEPA) process for the larger Wind Feasibility Project ensued. This report presents the analysis of the data collected from the 30-meter meteorological tower.
This document is the final SAND Report for LDRD Project 105877, 'Novel Diagnostic for Advanced Measurements of Semiconductor Devices Exposed to Adverse Environments', funded through the Nanoscience to Microsystems investment area. Along with the continuous decrease in the feature size of semiconductor device structures comes a growing need for inspection tools with high spatial resolution and high sample throughput. Ideally, such tools should be able to characterize both the surface morphology and the local conductivity associated with the structures. The imaging capabilities and wide availability of scanning electron microscopes (SEMs) make them an obvious choice for imaging device structures. Dopant contrast from pn junctions using secondary electrons in the SEM was first reported in 1967 and more recently starting in the mid-1990s. However, the serial acquisition process associated with scanning techniques places limits on the sample throughput. Significantly improved throughput is possible with the use of a parallel imaging scheme such as that found in photoelectron emission microscopy (PEEM) and low energy electron microscopy (LEEM). The application of PEEM and LEEM to device structures relies on contrast mechanisms that distinguish differences in dopant type and concentration. Interestingly, one of the first applications of PEEM was a study of the doping of semiconductors, which showed that the PEEM contrast was very sensitive to the doping level and that dopant concentrations as low as 10^16 cm^-3 could be detected. More recent PEEM investigations of Schottky contacts were reported in the late 1990s by Giesen et al., followed by a series of papers in the early 2000s addressing doping contrast in PEEM by Ballarotto and co-workers and Frank and co-workers. In contrast to PEEM, comparatively little has been done to identify contrast mechanisms and assess the capabilities of LEEM for imaging semiconductor device structures. The one exception is the work of Mankos et al., who evaluated the impact of high-throughput requirements on LEEM designs and demonstrated new applications of imaging modes with a tilted electron beam. To assess its potential as a semiconductor device imaging tool and to identify contrast mechanisms, we used LEEM to investigate doped Si test structures. In Section 2, Imaging Oxide-Covered Doped Si Structures Using LEEM, we show that the LEEM technique is able to provide reasonably high contrast images across lateral pn junctions. The observed contrast is attributed to a work function difference (Δφ) between the p- and n-type regions. However, because the doped regions were buried under a thermal oxide (~3.5 nm thick), e-beam charging during imaging prevented quantitative measurements of Δφ. As part of this project, we also investigated a series of similar test structures in which the thermal oxide was removed by a chemical etch. With the oxide removed, we obtained intensity-versus-voltage (I-V) curves through the transition from mirror to LEEM mode and determined the relative positions of the vacuum cutoffs for the differently doped regions. Although the details are not discussed in this report, the relative positions in voltage of the vacuum cutoffs are a direct measure of the work function difference (Δφ) between the p- and n-doped regions.
The next generation of capability-class, massively parallel processing (MPP) systems is expected to have hundreds of thousands to millions of processors. In such environments, it is critical to have fault-tolerance mechanisms, including checkpoint/restart, that scale with the size of applications and the percentage of the system on which the applications execute. For application-driven, periodic checkpoint operations, the state of the art does not provide a scalable solution. For example, on today's massive-scale systems that execute applications which consume most of the memory of the employed compute nodes, checkpoint operations generate I/O that consumes nearly 80% of the total I/O usage. Motivated by this observation, this project aims to improve I/O performance for application-directed checkpoints through the use of lightweight storage architectures and overlay networks. Lightweight storage provides direct access to underlying storage devices. Overlay networks provide caching and processing capabilities in the compute-node fabric. The combination has the potential to significantly reduce I/O overhead for large-scale applications. This report describes our combined efforts to model and understand overheads for application-directed checkpoints, as well as the implementation and performance analysis of a checkpoint service that uses available compute nodes as a network cache for checkpoint operations.
Phononic crystals (or acoustic crystals) are the acoustic-wave analogue of photonic crystals. Here a periodic array of scattering inclusions located in a homogeneous host material forbids certain ranges of acoustic frequencies from existence within the crystal, thus creating what are known as acoustic (or phononic) bandgaps. The vast majority of phononic crystal devices reported prior to this LDRD were constructed by hand-assembling scattering inclusions in a lossy viscoelastic medium, predominantly air, water or epoxy, resulting in large structures limited to frequencies below 1 MHz. Under this LDRD, phononic crystals and devices were scaled to very high (VHF: 30-300 MHz) and ultra high (UHF: 300-3000 MHz) frequencies utilizing finite difference time domain (FDTD) modeling, microfabrication and micromachining technologies. This LDRD developed key breakthroughs in the areas of micro-phononic crystals, including the physical origins of phononic crystals, advanced FDTD modeling and design techniques, material considerations, microfabrication processes, characterization methods and device structures. Micro-phononic crystal devices realized in low-loss solid materials were emphasized in this work due to their potential applications in radio frequency communications and acoustic imaging for medical ultrasound and nondestructive testing. As a result of the advanced modeling, fabrication and integrated transducer designs, this LDRD produced the first measured phononic crystals and phononic crystal devices (waveguides) operating in the VHF (67 MHz) and UHF (937 MHz) frequency bands and established Sandia as a world leader in the area of micro-phononic crystals.
The need to improve the radiation detection architecture has given rise to increased concern over the potential of equipment or procedures to violate the Fourth Amendment. Protecting the rights guaranteed by the Constitution is a foremost value of every government agency. However, protecting U.S. residents and assets from potentially catastrophic threats is also a crucial role of government. In the absence of clear precedent, the fear of potentially violating rights could lead to the rejection of effective and reasonable means that could reduce risks, possibly saving lives and assets. The goal of this document is not to apply case law to determine what the precedent may be if it exists, but rather to provide a detailed outline that defines searches and seizures, identifies what precedent does and does not exist, and explores what the existing (and non-existing) precedent means for the use of radiation detection inside the nation's borders.
A gradient array apparatus was constructed for the study of porous polymers produced using the process of chemically induced phase separation (CIPS). The apparatus consisted of a 60-element, two-dimensional array in which a temperature gradient was placed in the y-direction and composition was varied in the x-direction. The apparatus allowed changes in opacity of blends to be monitored as a function of temperature and cure time by taking images of the array over time. The apparatus was validated by dispensing a single blend composition into all 60 wells of the array, curing them for 24 hours, and performing the experiment in triplicate. Variations in micron-scale phase separation were readily observed as a function of both curing time and temperature, and there was very good well-to-well consistency as well as trial-to-trial consistency. The poragen was removed from samples varying in cure temperature, and SEM images were obtained. The results showed that cure temperature had a dramatic effect on sample morphology, and that combining data obtained from visual observations made during the curing process with SEM data can enable a much better understanding of the CIPS process and provide predictive capability through the relatively facile generation of composition-process-morphology relationships. Data quality could be greatly enhanced by making further improvements in the apparatus. The primary improvements contemplated include the use of a more uniform light source, an optical table, and a CCD camera with data analysis software. These improvements would enable quantification of the amount of scattered light generated from individual elements as a function of cure time. In addition to the gradient array development, porous composites were produced by incorporating metal particles into a blend of poragen, epoxy resin, and crosslinker. The variables involved in the experiment were metal particle composition, primary metal particle size, metal concentration, and poragen composition. A total of 16 different porous composites were produced and characterized using SEM. In general, the results showed that pore morphology and the distribution of metal particles were dependent on multiple factors. For example, the use of silver nanoparticles did not significantly affect pore morphology for composites derived from decanol as the poragen, but exceptionally large pores were obtained with the use of decane as the poragen. With regard to the effect of metal particle size, silver nanoparticles were essentially exclusively dispersed in the polymer matrix while silver microparticles were found in pores. For nickel particles, both nanoparticles and microparticles were largely dispersed in the polymer matrix and not in the pores.
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed to be parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.
A permeability model for hydrogen transport in a porous material is successfully applied to both laboratory-scale and vehicle-scale sodium alanate hydrogen storage systems. The use of a Knudsen number dependent relationship for permeability of the material in conjunction with a constant area fraction channeling model is shown to accurately predict hydrogen flow through the reactors. Generally applicable model parameters were obtained by numerically fitting experimental measurements from reactors of different sizes and aspect ratios. The degree of channeling was experimentally determined from the measurements and found to be 2.08% of total cross-sectional area. Use of this constant area channeling model and the Knudsen dependent Young & Todd permeability model allows for accurate prediction of the hydrogen uptake performance of full-scale sodium alanate and similar metal hydride systems.
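The Python sketch below is a generic illustration of the two ingredients named above, not the Young & Todd correlation or the authors' implementation: a Knudsen number from gas kinetic theory, a simple slip-type correction applied to a baseline permeability, and an area-weighted parallel combination of the packed bed with the experimentally determined 2.08% channeling fraction. All functional forms and values other than that area fraction are assumptions for illustration.

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def mean_free_path(temperature_K, pressure_Pa, molecule_diameter_m=2.9e-10):
        # Hard-sphere mean free path; the hydrogen molecule diameter is an assumed value.
        return K_B * temperature_K / (math.sqrt(2.0) * math.pi * molecule_diameter_m**2 * pressure_Pa)

    def knudsen_number(temperature_K, pressure_Pa, pore_size_m):
        return mean_free_path(temperature_K, pressure_Pa) / pore_size_m

    def knudsen_corrected_permeability(k_continuum, kn, slip_coefficient=4.0):
        # Illustrative slip correction: apparent permeability rises as the Knudsen number grows.
        return k_continuum * (1.0 + slip_coefficient * kn)

    def bed_plus_channel_permeability(k_bed, k_channel, channel_area_fraction=0.0208):
        # Area-weighted parallel combination: channels and bed see the same pressure gradient.
        f = channel_area_fraction
        return (1.0 - f) * k_bed + f * k_channel

    kn = knudsen_number(temperature_K=300.0, pressure_Pa=1.0e5, pore_size_m=1.0e-6)
    k_bed = knudsen_corrected_permeability(1.0e-14, kn)
    print(kn, bed_plus_channel_permeability(k_bed, k_channel=1.0e-10))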
We describe breakthrough results obtained in a feasibility study of a fundamentally new architecture for air-cooled heat exchangers. A longstanding but largely unrealized opportunity in energy efficiency concerns the performance of air-cooled heat exchangers used in air conditioners, heat pumps, and refrigeration equipment. In the case of residential air conditioners, for example, the typical performance of the air-cooled heat exchangers used for condensers and evaporators is at best marginal from the standpoint of achieving the maximum possible coefficient of performance (COP). If by some means it were possible to reduce the thermal resistance of these heat exchangers to a negligible level, a typical energy savings on the order of 30% could be immediately realized. It has long been known that a several-fold increase in heat exchanger size, in conjunction with the use of much higher volumetric flow rates, provides a straightforward path to this goal but is not practical from the standpoint of real-world applications. The tension in the marketplace between the need for energy efficiency and logistical considerations such as equipment size, cost, and operating noise has resulted in a compromise that is far from ideal. This is the reason that a typical residential air conditioner exhibits significant sensitivity to reductions in fan speed and/or fouling of the heat exchanger surface. The prevailing wisdom is that little can be done to improve this situation; the 'fan-plus-finned-heat-sink' heat exchanger architecture used throughout the energy sector represents an extremely mature technology for which there is little opportunity for further optimization. But the fact remains that conventional fan-plus-finned-heat-sink technology simply does not work that well. The primary physical limitation to performance (i.e., low thermal resistance) is the boundary layer of motionless air that adheres to and envelops all surfaces of the heat exchanger. Within this boundary layer region, diffusive transport is the dominant mechanism for heat transfer. The resulting thermal bottleneck largely determines the thermal resistance of the heat exchanger. No one has yet devised a practical solution to the boundary layer problem. Another longstanding problem is the inevitable fouling of the heat exchanger surface over time by particulate matter and other airborne contaminants. This problem is especially important in residential air conditioner systems, where often little or no preventative maintenance is practiced. The heat sink fouling problem also remains unsolved. The third major problem (alluded to earlier) concerns inadequate airflow to the heat exchanger resulting from restrictions on fan noise. The air-cooled heat exchanger described here solves all three of these problems simultaneously. The 'Air Bearing Heat Exchanger' provides a several-fold reduction in boundary layer thickness, intrinsic immunity to heat sink fouling, and drastic reductions in noise. It is also very practical from the standpoint of cost, complexity, ruggedness, etc. Successful development of this technology is also expected to have far-reaching impact in the IT sector from the standpoint of solving the 'Thermal Brick Wall' problem (which currently limits CPU clock speeds to ~3 GHz) and addressing the increasing concern about the electrical power consumption of our nation's information technology infrastructure.
This report examines the interactions involved with flashover along a surface in high-density electronegative gases. The focus is on fast ionization processes rather than the later-time ionic drift or thermalization of the discharge. A kinetic simulation of the gas and surface is used to examine electron multiplication and includes gas collision, excitation and ionization, and attachment processes, gas photoionization and surface photoemission processes, as well as surface attachment. These rates are then used in a 1.5D fluid ionization wave (streamer) model to study streamer propagation with and without the surface in air and in SF6; the 1.5D model therefore includes rates for all these processes. To get a better estimate for the behavior of the radius, we have studied radial expansion of the streamer in air and in SF6. The focus of the modeling is on voltage and field level changes (with and without a surface) rather than secondary effects such as velocities or changes in discharge path. An experiment has been set up to carry out measurements of threshold voltages, streamer velocities, and other discharge characteristics. This setup includes both electrical and photographic diagnostics (streak and framing cameras). We have observed little change in critical field levels (where avalanche multiplication sets in) in the gas alone versus with the surface; comparisons between model calculations and experimental measurements are in agreement with this. We have examined streamer sustaining fields (the field which maintains ionization wave propagation) in the gas and on the surface. Agreement of the gas levels with the available literature is good, and agreement between experiment and calculation is also good. Model calculations do not indicate much difference between the gas-alone and surface levels. Experiments have identified differences in velocity between streamers on the surface and in the gas alone (the surface values being larger).
Understanding the physics of phonon transport at small length scales is increasingly important for basic research in nanoelectronics, optoelectronics, nanomechanics, and thermoelectrics. We conducted several studies to develop an understanding of phonon behavior in very small structures. This report describes the modeling, experimental, and fabrication activities used to explore phonon transport across and along material interfaces and through nanopatterned structures. Toward the understanding of phonon transport across interfaces, we computed the Kapitza conductance for Σ29(001) and Σ3(111) interfaces in silicon, fabricated the interfaces in single-crystal silicon substrates, and used picosecond laser pulses to image the thermal waves crossing the interfaces. Toward the understanding of phonon transport along interfaces, we designed and fabricated a unique differential test structure that can measure the proportion of specular to diffuse thermal phonon scattering from silicon surfaces. Phonon-scale simulation of the test ligaments, as well as continuum scale modeling of the complete experiment, confirmed its sensitivity to surface scattering. To further our understanding of phonon transport through nanostructures, we fabricated microscale-patterned structures in diamond thin films.
In a globalized world, dramatic changes within any one nation cause ripple or even tsunami-like effects within neighboring nations and nations geographically far removed. Multinational interventions to prevent or mitigate detrimental changes can easily cause secondary unintended consequences more detrimental and enduring than the feared change instigating the intervention. This LDRD research developed the foundations for a flexible geopolitical and socioeconomic simulation capability that focuses on the dynamic national security implications of natural and man-made trauma for a nation-state and the states linked to it through trade or treaty. The model contains a database for simulating all 229 recognized nation-states and sovereignties with the detail of 30 economic sectors, including consumers and natural resources. The model explicitly simulates the interactions among the countries and their governments. Decisions among governments and populations are based on expectation formation. In the simulation model, failed expectations are used as a key metric for tension across states, among ethnic groups, and between population factions. This document provides the foundational documentation for the model.
While climate-change models have done a reasonable job of forecasting changes in global climate conditions over the past decades, recent data indicate that actual climate change may be much more severe. To better understand some of the potential economic impacts of these severe climate changes, Sandia economists estimated the impacts to the U.S. economy of climate change-induced impacts to U.S. precipitation over the 2010 to 2050 time period. The economists developed an impact methodology that converts changes in precipitation and water availability to changes in economic activity, and conducted simulations of economic impacts using a large-scale macroeconomic model of the U.S. economy.
This report gives an overview of the types of economic methodologies and models used by Sandia economists in their consequence analysis work for the National Infrastructure Simulation & Analysis Center and other DHS programs. It describes the three primary resolutions at which analysis is conducted (microeconomic, mesoeconomic, and macroeconomic), the tools used at these three levels (from data analysis to internally developed and publicly available tools), and how they are used individually and in concert with each other and other infrastructure tools.
A global partnership between nuclear energy supplier nations and user nations could enable the safe and secure expansion of nuclear power throughout the world. Although it is likely that supplier nations and their industries would be anxious to sell reactors and fuel services as part of this partnership, their commitment to close the fuel cycle (i.e., permanently take back fuel and high-level waste) remains unclear. At the 2007 Waste Management Symposia in Tucson, Arizona, USA, a distinguished international panel explored fuel take back and waste disposal from the perspective of current and prospective user nations. This paper reports on the findings of that panel and presents a path for policy makers to move forward with the partnership vision.
This research utilizes a method for calculating an atomic-scale deformation gradient within the framework of continuum mechanics using atomistic simulations to examine bicrystal grain boundaries subjected to shear loading. We calculate the deformation gradient, its rotation tensor from polar decomposition, and estimates of lattice curvature and vorticity for thin equilibrium bicrystal geometries deformed at low temperature. These simulations reveal pronounced deformation fields that exist in small regions surrounding the grain boundary, and demonstrate the influence of interfacial structure on mechanical behavior for the thin models investigated. Our results also show that more profound insight is gained concerning inelastic grain boundary phenomena by analyzing the deformed structures with regard to these continuum mechanical metrics.
This work explores how the high-load limits of HCCI are affected by fuel autoignition reactivity, EGR quality/composition, and EGR unmixedness for naturally aspirated conditions. This is done for PRF80 and PRF60. The experiments were conducted in a single-cylinder HCCI research engine (0.98 liters) with a CR = 14 piston installed. By operating at successively higher engine loads, five load-limiting factors were identified for these fuels: 1) residual-NOx-induced run-away advancement of the combustion phasing, 2) EGR-NOx-induced run-away, 3) EGR-NOx/wall-heating-induced run-away, 4) EGR-induced oxygen deprivation, and 5) excessive partial-burn occurrence due to EGR unmixedness. The actual load-limiting factor depends on the autoignition reactivity of the fuel, the EGR quality level (where high quality refers to the absence of trace species like NO, HC and CO, i.e. simulated EGR), the level of EGR unmixedness, and the selected pressure-rise rate (PRR). For a reactive fuel like PRF60, large amounts of EGR are required to control the combustion phasing. Therefore, for operation with simulated EGR, the maximum IMEP becomes limited by the available oxygen. When real EGR (with trace species) is used instead of the simulated EGR, the maximum IMEP becomes limited by EGR-NOx/wall-heating-induced run-away. For the moderately reactive PRF80 operated with simulated EGR, the maximum IMEP becomes limited by residual-NOx-induced run-away. Furthermore, operation with real EGR lowers the maximum steady IMEP because of EGR-NOx-induced run-away, similar to PRF60. Finally, the data show that EGR/fresh-gas unmixedness can lead to a substantial reduction of the maximum stable IMEP for operation with a low PRR. This happens because the EGR unmixedness causes occasional partial-burn cycles due to excessive combustion-phasing retard for cycles that induct substantially higher-than-average levels of EGR gases.
A planar temperature imaging diagnostic has been developed and applied to an investigation of naturally occurring thermal stratification in an HCCI engine. Natural thermal stratification is critical for high-load HCCI operation because it slows the combustion heat release; however, little is known about its development or distribution. A tracer-based single-line PLIF imaging technique was selected for its good precision and simplicity. Temperature-map images were derived from the PLIF images, based on the temperature sensitivity of the fluorescence signal of the toluene tracer added to the fuel. A well-premixed intake charge assured that variations in the fuel/air mixture did not affect the signal. Measurements were made in a single-cylinder optically accessible HCCI research engine (displacement = 0.98 liters) at a typical 1200 rpm operating condition. Since natural thermal stratification develops prior to autoignition, all measurements were made for motored operation. Calibrations were performed in situ by varying the intake temperature and pressure over a wide range. Although the absolute accuracy is limited by the pressure-derived temperatures used for calibration, an uncertainty analysis shows that the precision of the diagnostic for determining temperature variations at a given condition is very good. Application of the diagnostic provided temperature-map images that showed a progressive development of natural thermal stratification in the bulk gas through the latter part of the compression stroke and the early expansion stroke. Applying a PDF analysis with corrections for measurement uncertainties provided additional quantitative results. The data show a clear trend from virtually no stratification at 305° CA (55° bTDC) to significant inhomogeneities at TDC. Near TDC, the images show distinct hotter and colder pockets with a turbulent structure. Images were also acquired across the charge from the mid-plane to the outer boundary layer at 330° CA and TDC. They show an increase in thermal stratification and a change of its structure in the outer boundary layer, and they provide a measure of the boundary-layer thickness. Where possible, results were compared with previous fired-engine and modeling data, and good agreement was found.
Shadowgraph/schlieren imaging techniques have often been used for flow visualization of reacting and non-reacting systems. In this paper we show that high-speed shadowgraph visualization in a high-pressure chamber can also be used to identify cool-flame and high-temperature combustion regions of diesel sprays, thereby providing insight into the time sequence of diesel ignition and combustion. When coupled to simultaneous high-speed Mie-scatter imaging, chemiluminescence imaging, pressure measurement, and spatially-integrated jet luminosity measurements by photodiode, the shadowgraph visualization provides further information about spray penetration after vaporization, spatial location of ignition and high-temperature combustion, and inactive combustion regions where problematic unburned hydrocarbons exist. Examples of the joint application of high-speed diagnostics include transient non-reacting and reacting injections, as well as multiple injections. Shadowgraph and schlieren image processing steps required to account for variations of refractive index within the high-temperature combustion vessel gases are also shown.
An order-of-convergence (with respect to a path-length parameter) verification study is undertaken for an implementation of the condensed-history algorithm in a Monte Carlo electron transport code. "Condensed history" refers to simulating the cumulative effects of the electron without modeling each individual collision. A 1992 paper by Larsen derived the expected order of convergence for a few mathematical models of this type of algorithm. We examine the order of convergence of a condensed-history algorithm based on that used in the Integrated TIGER Series (as applied to electron albedo problems) in the presence of Monte Carlo uncertainty.
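A minimal Python sketch of the kind of observed-order estimate used in such a verification study is shown below. The step sizes and error values are placeholders rather than results from the TIGER-based implementation; the observed order follows from errors at two path-length parameters via p = log(e1/e2) / log(s1/s2).

    import math

    def observed_order(step_coarse, err_coarse, step_fine, err_fine):
        # Observed order p, assuming err ~ C * step**p.
        # For a Monte Carlo code, the statistical uncertainty in each error estimate
        # must be small compared to the discretization error for p to be meaningful.
        return math.log(err_coarse / err_fine) / math.log(step_coarse / step_fine)

    # Hypothetical albedo errors (relative to a reference solution) at three step sizes.
    steps = [0.04, 0.02, 0.01]
    errors = [3.2e-3, 1.7e-3, 0.9e-3]
    for (s1, e1), (s2, e2) in zip(zip(steps, errors), zip(steps[1:], errors[1:])):
        print(f"step {s1} -> {s2}: observed order ~ {observed_order(s1, e1, s2, e2):.2f}")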
Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, SC '09
León, Edgar A.; Riesen, Rolf; Maccabe, Arthur B.; Bridges, Patrick G.
Instruction-level simulation is necessary to evaluate new architectures. However, single-node simulation cannot predict the behavior of a parallel application on a supercomputer. We present a scalable simulator that couples a cycle-accurate node simulator with a supercomputer network model. Our simulator executes individual instances of IBM's Mambo PowerPC simulator on hundreds of cores. We integrated a NIC emulator into Mambo and model the network instead of fully simulating it. This decouples the individual node simulators and makes our design scalable. Our simulator runs unmodified parallel message-passing applications on hundreds of nodes. We can change network and detailed node parameters, inject network traffic directly into caches, and use different policies to decide when that is an advantage. This paper describes our simulator in detail, evaluates it, and demonstrates its scalability. We show its suitability for architecture research by evaluating the impact of cache injection on parallel application performance.
Negative valve overlap (NVO) is a valve strategy employed to retain and recompress residual burned gases to assist HCCI combustion, particularly in the difficult regime of low-load operation. NVO allows the retention of large quantities of hot residual burned gases as well as the possibility of fuel addition for combustion control purposes. Reaction of fuel injected during NVO increases charge temperature, but in addition could produce reformed fuel species that may affect main combustion phasing. The strategy holds potential for controlling and extending low-load HCCI combustion. The goal of this work is to demonstrate the feasibility of applying two-wavelength PLIF of 3-pentanone to obtain simultaneous, in-cylinder temperature and composition images during different parts of the HCCI/NVO cycle. Measurements are recorded during the intake and main compression strokes, as well as during the more challenging periods of NVO recompression and re-expansion. To improve measurement quality, effects of diagnostic uncertainty and fluorescence interference are quantified. Temperature, fuel, and EGR images are captured for a range of NVO operating conditions, including main and NVO fuel-injection timings as well as total load. The results demonstrate that the diagnostic is capable of providing information useful for the study of HCCI/NVO engine operation.
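To illustrate the principle behind a two-wavelength tracer-PLIF measurement, the Python sketch below inverts a hypothetical calibration curve relating the two-signal ratio to temperature; in practice, one of the signals can then be used to infer relative tracer/EGR concentration once the temperature is known. The exponential calibration form and all constants are placeholders, not the calibration used in this study.

    import numpy as np

    # Hypothetical calibration: two-wavelength signal ratio versus temperature (K).
    cal_temperature = np.linspace(400.0, 1100.0, 200)
    cal_ratio = 0.6 * np.exp(-(cal_temperature - 400.0) / 500.0)  # assumed monotonic decay

    def temperature_from_ratio(ratio_image):
        # Invert the monotonically decreasing calibration; np.interp needs increasing x,
        # so the calibration arrays are flipped.
        return np.interp(ratio_image, cal_ratio[::-1], cal_temperature[::-1])

    # Example: a uniform ratio "image" maps to a uniform temperature map (~544 K here).
    ratio = np.full((4, 4), 0.45)  # would be signal_2 / signal_1 from the two PLIF images
    print(temperature_from_ratio(ratio))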
Polymer foams are used as encapsulants to provide mechanical, electrical, and thermal isolation for engineered systems. In fire environments, the incident heat flux to a system or structure can cause foams to decompose. Commonly used foams, such as polyurethanes, often liquefy and flow during decomposition, and evolved gases can cause pressurization and ultimately failure of sealed containers. In systems safety and hazard analyses, numerical models are used to predict heat transfer to encapsulated objects or through structures. The thermo-mechanical response of systems involving coupled foam decomposition, liquefaction, and flow can be difficult to predict. Predicting pressurization of sealed systems is particularly challenging. To mitigate the issues caused by liquefaction and flow, hybrid polyurethane cyanate ester foams have been developed that have good adhesion and mechanical properties similar to currently used polyurethane and epoxy foams. The hybrid foam decomposes predictably, forming approximately 50 percent char by weight during decomposition in nitrogen, and it does not liquefy. The charring nature of the hybrid foam has several advantages with respect to modeling heat transfer and pressurization. Those advantages are illustrated by results from recent radiant heat transfer experiments involving encapsulated objects, as well as results from numerical simulations of those experiments.
Results from an experimental study of the aerodynamic and aeroacoustic properties of a flatback version of the TU Delft DU97-W-300 airfoil are presented for a chord Reynolds number of 3 × 10^6. The data were gathered in the Virginia Tech Stability Wind Tunnel, which uses a special aeroacoustic test section to enable measurements of airfoil self-noise. Corrected wind tunnel aerodynamic measurements for the DU97-W-300 are compared to previous solid-wall wind tunnel data and are shown to give good agreement. Aeroacoustic data are presented for the flatback airfoil, with a focus on the amplitude and frequency of noise associated with the vortex-shedding tone from the blunt trailing edge wake. The effect of a splitter plate attachment on both drag and noise is also presented. Computational Fluid Dynamics predictions of the aerodynamic properties of both the unmodified DU97-W-300 and the flatback version are compared to the experimental data.
The well-known "sweep" algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the "forward" and "symmetric" solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems.
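The prefix-form idea can be illustrated on the one-dimensional case, where attenuation of the angular flux along a characteristic is the first-order linear recurrence psi[i+1] = a[i]*psi[i] + b[i]. The affine maps (a, b) compose associatively, so the whole recurrence can be evaluated with a parallel prefix (scan) or cyclic reduction rather than a serial sweep. The Python sketch below, written serially for clarity, demonstrates only this algebra; it is not the radlib/SCEPTRE implementation.

    def compose(first, second):
        # Associative composition of affine maps x -> a*x + b (apply 'first', then 'second').
        a1, b1 = first
        a2, b2 = second
        return (a2 * a1, a2 * b1 + b2)

    def prefix_solve(a, b, psi0):
        # Evaluate psi[i+1] = a[i]*psi[i] + b[i] by scanning composed affine maps.
        # Because 'compose' is associative, the scan can be done in O(log n) parallel
        # steps (e.g., by cyclic reduction); here it is written as a serial loop.
        psi = [psi0]
        acc = (1.0, 0.0)  # identity map
        for ai, bi in zip(a, b):
            acc = compose(acc, (ai, bi))
            psi.append(acc[0] * psi0 + acc[1])
        return psi

    # Toy attenuation factors and scattering/source contributions along one characteristic.
    a = [0.9, 0.8, 0.95, 0.7]
    b = [0.05, 0.2, 0.1, 0.3]
    print(prefix_solve(a, b, psi0=1.0))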