This report summarizes a brief and unsuccessful attempt to grow indium nitride via the electrochemical solution growth method and a modification thereof. The effort, funded by a $50,000 LDRD award, explored the possibility of applying the Electrochemical Solution Growth (ESG) technique to the growth of indium nitride (InN). The ability to grow bulk InN would be exciting from a scientific perspective, and a commercial incentive lies in the potential of extending the ESG technique to grow homogeneous, bulk alloys of In{sub x}Ga{sub 1-x}N for light emitting diodes (LEDs) operating in the green region of the spectrum. Indium nitride is the most difficult of the III-nitrides to grow due to its very high equilibrium vapor pressure of nitrogen, which is several orders of magnitude higher than that of gallium nitride or aluminum nitride. InN has a bandgap energy of 0.7 eV, and achieving its growth as large-area, high-quality bulk substrates would permit the fabrication of LEDs operating in the infrared. By alloying with GaN and AlN, bulk material used as substrates would enable high-efficiency emitters with wavelengths tailored from the infrared all the way through the deep ultraviolet. In addition, InN has been shown to have very high electron mobilities (2700 cm{sup 2}/V s), making it a promising material for transistors and even terahertz emitters. Several groups have attempted to synthesize InN, and it has been shown that metallic indium does not react with unactivated nitrogen even at very high temperatures. This sets up an incompatibility between the precursors in all growth methods: a tradeoff between thermally activating the nitrogen-containing precursor and the low decomposition temperature of solid InN. We have been working to develop a novel growth technique that circumvents the difficulties of other bulk growth techniques by precipitating the column III nitrides from a solvent, such as a molten chloride salt, that provides an excellent host environment for the gallium nitride and indium nitride precursors. In particular, we have found that molten halide salts can solubilize both gallium (Ga{sup 3+}) and nitride (N{sup 3-}) ions without reacting with them to the extent that they become unavailable for reaction with each other. Literature reports indicate measured nitride ion concentrations in LiCl at 650 C as high as 10 mol%, a concentration sufficient to yield growth rates on the order of 0.1 to {approx}1 mm/hr under diffusion-limited growth conditions. Molten salts are also compatible with the 400-1200 C temperatures likely to be necessary for growth of high-quality single-crystal III-nitrides. Since they can be worked with at (or close to) atmospheric pressure, scalability is not a problem and manufacturability issues, including capital equipment costs, are minimized. Although the III-nitrides cannot be float-zone refined to remove impurities due to their high melting temperatures and vapor pressures, the salts can be, thus reducing sources of impurities before growth begins. Finally, the molten salts offer a number of pathways to improve the solubility and control the growth of the III-nitrides by functioning as an electrolyte in electrochemical processes. We have already demonstrated growth of wurtzite GaN particles ranging from 0.2 to 0.9 mm in two hours in our laboratory using these techniques. It was the goal of this work to extend this ESG approach to the growth of indium nitride.
The hope was that the abundance of the activated form of nitrogen, namely the triply-charged nitride ion (N{sup 3-}), would enable the facile growth of InN in solution at low temperatures.
2-Chloroethyl phenyl sulfide (CEPS), a surrogate compound of the chemical warfare agent sulfur mustard, was examined using thermal desorption coupled gas chromatography-mass spectrometry (TD/GC-MS) and multivariate analysis. This work describes a novel method of producing multiway data using a stepped thermal desorption. Various multivariate analysis schemes were employed to analyze the data. These methods may be able to discern different sources of CEPS. In addition, CEPS was applied to cotton, nylon, polyester, and silk swatches. These swatches were placed in controlled humidity chambers maintained at 23%, 56%, and 85% relative humidity. At regular intervals, samples were removed from each test swatch and analyzed using TD/GC-MS. The results were compared across fabric substrates and humidity levels.
Rates of reactions can be expressed as dn/dt = kcf(n), where n is moles of reaction, k is a rate constant, c is a proportionality constant, and f(n) is a function of the properties of the sample. When the instrument time constant, {tau}, and k are sufficiently comparable that measured rates are significantly affected by instrument response, the measured rates must be corrected for instrument response to obtain accurate reaction kinetics. Correction for instrument response has previously been done by truncating early data or by use of the Tian equation. Both methods can lead to significant errors. We describe a method for simultaneous determination of {tau}, k, and c by fitting equations describing the combined instrument response and rate law to rates observed as a function of time. The method was tested with data on the heat rate from acid-catalyzed hydrolysis of sucrose.
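As an illustration of the fitting approach (a minimal sketch, not the report's implementation), the measured rate can be modeled by integrating the rate law together with a first-order, Tian-type instrument response and fitting the result to the observed trace. For the assumed first-order form f(n) = n{sub 0} - n used here, k and c enter only as the product kc, so the sketch fits that lumped constant along with {tau}; the reported method resolves k and c separately.

```python
# Hedged sketch: fit the instrument time constant tau and the lumped
# rate constant kc by integrating dn/dt = kc*(N0 - n) together with a
# first-order (Tian-type) instrument response dm/dt = (r - m)/tau and
# matching the lagged rate m(t) to the measured trace. The first-order
# form of f(n), N0, and the noise level are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

N0 = 1.0e-3  # assumed initial moles of reactant

def lagged_rate(t, tau, kc):
    def rhs(_t, y):
        n, m = y
        r = kc * (N0 - n)          # true reaction rate from the rate law
        return [r, (r - m) / tau]  # measured rate lags the true rate
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)
    return sol.y[1]

t = np.linspace(0.0, 3600.0, 200)
data = lagged_rate(t, 300.0, 2.0e-3) + np.random.normal(0.0, 1e-8, t.size)
(tau_fit, kc_fit), _ = curve_fit(lagged_rate, t, data,
                                 p0=[100.0, 1.0e-3], bounds=(1e-6, np.inf))
```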
Near-field scanning microwave microscopy is employed for quantitative imaging at 4 GHz of the local impedance for monolayer and few-layer graphene. The microwave response of graphene is found to be thickness dependent and determined by the local sheet resistance of the graphene flake. Calibration of the measurement system and knowledge of the probe geometry allow evaluation of the AC impedance for monolayer and few-layer graphene, which is found to be predominantly active. The use of localized evanescent electromagnetic fields in our experiment provides a promising tool for investigations of plasma waves in graphene with wave numbers determined by the spatial spectrum of the near-field. By using near-field microwave microscopy one can perform simultaneous imaging of the location, geometry, thickness, and distribution of electrical properties of graphene without a need for device fabrication.
The annual program report provides detailed information about all aspects of the Sandia National Laboratories, California (SNL/CA) Waste Management Program. It functions as supporting documentation to the SNL/CA Environmental Management System Program Manual. This annual program report describes the activities undertaken during the past year, and activities planned in future years, to implement the Waste Management (WM) Program, one of six programs that support environmental management at SNL/CA.
The Arquin Corporation has developed a new method of constructing CMU (concrete masonry unit) walls. This new method uses polymer spacers connected to steel wires that serve as reinforcement as well as a means of accurately placing the spacers so that the concrete block can be dry-stacked. The hollows of the concrete block are then filled with grout. As part of the New Mexico Small Business Assistance (NMSBA) Program, Sandia National Laboratories conducted a series of tests that dynamically loaded wall segments to compare the performance of walls constructed using the Arquin method to a more traditional method of constructing CMU walls. A total of four walls were built, two with traditional methods and two with the Arquin method. Two of the walls, one traditional and one Arquin, had every third cell filled with grout. The remaining two walls, one traditional and one Arquin, had every cell filled with grout. The walls were dynamically loaded with explosive forces. No significant difference was noted between the performance of the walls constructed by the Arquin method and the walls constructed by the traditional method.
In this report, we examine the propagation of tensile waves of finite deformation in rubbers through experiments and analysis. Attention is focused on the propagation of one-dimensional dispersive and shock waves in strips of latex and nitrile rubber. Tensile wave propagation experiments were conducted at high strain-rates by holding one end fixed and displacing the other end at a constant velocity. A high-speed video camera was used to monitor the motion and to determine the evolution of strain and particle velocity in the rubber strips. Analysis of the response through the theory of finite waves and quantitative matching between the experimental observations and analytical predictions was used to determine an appropriate instantaneous elastic response for the rubbers. This analysis also yields the tensile shock adiabat for rubber. Dispersive waves as well as shock waves are also observed in free-retraction experiments; these are used to quantify hysteretic effects in rubber.
The formation of silica scale is a problem for thermoelectric power generating facilities, and this study investigated the potential for removal of silica by chemical coagulation from source water before it is subjected to mineral concentration in cooling towers. In Phase I, a screening of many typical as well as novel coagulants was carried out using concentrated cooling tower water, with and without flocculation aids, at concentrations typical for water purification, with limited results. In Phase II, it was decided that treatment of source or make-up water was more appropriate, and higher dosing with coagulants delivered promising results. In fact, the less exotic coagulants proved to be more efficacious, for reasons not yet fully determined. Some analysis was made of the molecular nature of the precipitated floc, which may aid in process improvements. In Phase III, a more detailed study of process conditions for aluminum chloride coagulation was undertaken. Lime-soda water softening and the precipitation of magnesium hydroxide were shown to be too limited in terms of effectiveness, speed, and energy consumption to be considered further for the present application. In Phase IV, sodium aluminate emerged as an effective coagulant for silica, and the most attractive of those tested to date because of its availability, ease of use, and low requirement for additional chemicals. Some process optimization was performed for coagulant concentration and operational pH. It is concluded that silica coagulation with simple aluminum-based agents is effective, simple, and compatible with other industrial processes.
The Technical Area V (TA-V) Seismic Assessment Report was commissioned as part of the Sandia National Laboratories (SNL) self-assessment requirement per DOE O 414.1, Quality Assurance, for seismic impact on existing facilities at Technical Area V (TA-V). SNL TA-V facilities are located on an existing Uniform Building Code (UBC) Seismic Zone IIB site within the physical boundary of Kirtland Air Force Base (KAFB). The document summarizes the existing facilities and their safety-significant structures, systems, and components, and identifies DOE guidance, the conceptual framework, past assessments, and present geological and seismic conditions. Building upon past information and the evolution of the new seismic design criteria, the document discusses the potential impact of the new standards and provides recommendations based upon the current International Building Code (IBC) per DOE O 420.1B, Facility Safety, and DOE G 420.1-2, Guide for the Mitigation of Natural Phenomena Hazards for DOE Nuclear Facilities and Non-Nuclear Facilities.
We describe a method that enables Monte Carlo calculations to automatically achieve a user-prescribed error of representation for numerical results. Our approach is to iteratively adapt Monte Carlo functional-expansion tallies (FETs). The adaptivity is based on assessing the cellwise 2-norm of error due to both functional-expansion truncation and statistical uncertainty. These error metrics have been detailed by others for one-dimensional distributions. We extend their previous work to three-dimensional distributions and demonstrate the use of these error metrics for adaptivity. The method examines Monte Carlo FET results, estimates truncation and uncertainty error, and suggests a minimum required expansion order and run time to achieve the desired level of error. Iteration is required for results to converge to the desired error. Our implementation of adaptive FETs is observed to converge to reasonable levels of desired error for the representation of four distributions. In practice, some distributions and desired error levels may require prohibitively large expansion orders and/or Monte Carlo run times.
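A minimal sketch of the adaptivity decision is given below under simplifying assumptions: a one-dimensional expansion, truncation error taken as the 2-norm of the discarded coefficients, and statistical error assumed to scale as one over the square root of run time. The function and error models are illustrative, not the implementation described here.

```python
# Hedged sketch: given estimated FET coefficients `a`, their variances
# `var`, and the run time T that produced them, suggest the minimum
# expansion order and run time that meet a target 2-norm error. The
# truncation estimate (norm of discarded coefficients) and the 1/sqrt(T)
# scaling of statistical error are simplifying assumptions.
import numpy as np

def suggest_order_and_time(a, var, T, target):
    a, var = np.asarray(a, float), np.asarray(var, float)
    for order in range(1, a.size):
        trunc = np.linalg.norm(a[order:])      # truncation contribution
        stat = np.sqrt(var[:order].sum())      # statistical contribution
        if trunc < target:
            # scale run time so sqrt(trunc**2 + stat_new**2) <= target
            stat_allowed = np.sqrt(max(target**2 - trunc**2, 1e-30))
            return order, T * (stat / stat_allowed) ** 2
    return None  # even the highest available order is too coarse
```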
The interaction of light with nanostructured metal leads to a number of fascinating phenomena, including plasmon oscillations that can be harnessed for a variety of cutting-edge applications. Plasmon oscillation modes are the collective oscillation of free electrons in metals under incident light. Previously, surface plasmon modes have been used for communication, sensing, nonlinear optics and novel physics studies. In this report, we describe the scientific research completed on metal-dielectric plasmonic films accomplished during a multi-year Purdue Excellence in Science and Engineering Graduate Fellowship sponsored by Sandia National Laboratories. A variety of plasmonic structures, from random 2D metal-dielectric films to 3D composite metal-dielectric films, have been studied in this research for applications such as surface-enhanced Raman sensing, tunable superlenses with resolutions beyond the diffraction limit, enhanced molecular absorption, infrared obscurants, and other real-world applications.
A multi-group cross section collapsing code, YGROUP, has been developed to speed up deterministic particle transport simulations by reducing the number of discrete energy groups while maintaining computational transport accuracy. The YGROUP code leverages previous studies based on the "contributon" approach to automate group selection. First, forward and adjoint deterministic transport calculations are performed on a smaller problem model, or on one section of a large problem model representative of the problem physics, using a fine group structure. Then, the calculated forward flux and adjoint function moments are used by YGROUP to collapse the fine group cross section library and generate a problem-dependent broad group cross section library. Finally, the broad group library is used for new transport calculations on the full-scale/refined problem model. YGROUP provides several weighting options to collapse the cross section library, including flat, flux, and contributon (the product of forward flux and scalar adjoint moments) weighting. Users can also specify fine groups in specific energy ranges of interest to be preserved after collapsing. YGROUP can also be used to evaluate the Feynman-Y asymptote characterizing neutron multiplicity.
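As a rough sketch of the contributon weighting option (not the YGROUP source), broad-group cross sections can be formed as weighted averages of the fine-group values with weights equal to the product of the forward and adjoint fluxes; the array names and scalar-only treatment (no scattering-matrix or moment handling) are illustrative.

```python
# Hedged sketch: collapse a fine-group cross section set to broad groups
# using contributon weights w_g = phi_g * phi_adj_g. Scattering matrices,
# higher moments, and the reserved-group option are omitted here.
import numpy as np

def collapse(sigma_fine, phi_fwd, phi_adj, broad_edges):
    """broad_edges: fine-group indices delimiting each broad group,
    e.g. [0, 50, 120, 238] for three broad groups."""
    w = np.asarray(phi_fwd) * np.asarray(phi_adj)   # contributon weight
    sigma_fine = np.asarray(sigma_fine)
    sigma_broad = []
    for lo, hi in zip(broad_edges[:-1], broad_edges[1:]):
        sigma_broad.append((w[lo:hi] * sigma_fine[lo:hi]).sum()
                           / w[lo:hi].sum())
    return np.array(sigma_broad)
```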
As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets or staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, and to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.
Sandia National Laboratories Wind Technology Department is investigating the feasibility of using local wind resources to meet the requirements of Executive Order 13423 and DOE Order 430.2B. These Orders, along with the DOE TEAM initiative, identify the use of on-site renewable energy projects to meet specified renewable energy goals over the next 3 to 5 years. A temporary 30-meter meteorological tower was used to perform interim monitoring while the National Environmental Policy Act (NEPA) process for the larger Wind Feasibility Project ensued. This report presents the analysis of the data collected from the 30-meter meteorological tower.
This document is the final SAND Report for the LDRD Project 105877 - 'Novel Diagnostic for Advanced Measurements of Semiconductor Devices Exposed to Adverse Environments' - funded through the Nanoscience to Microsystems investment area. Along with the continuous decrease in the feature size of semiconductor device structures comes a growing need for inspection tools with high spatial resolution and high sample throughput. Ideally, such tools should be able to characterize both the surface morphology and local conductivity associated with the structures. The imaging capabilities and wide availability of scanning electron microscopes (SEMs) make them an obvious choice for imaging device structures. Dopant contrast from pn junctions using secondary electrons in the SEM was first reported in 1967 and more recently starting in the mid-1990s. However, the serial acquisition process associated with scanning techniques places limits on the sample throughput. Significantly improved throughput is possible with the use of a parallel imaging scheme such as that found in photoelectron emission microscopy (PEEM) and low energy electron microscopy (LEEM). The application of PEEM and LEEM to device structures relies on contrast mechanisms that distinguish differences in dopant type and concentration. Interestingly, one of the first applications of PEEM was a study of the doping of semiconductors, which showed that the PEEM contrast was very sensitive to the doping level and that dopant concentrations as low as 10{sup 16} cm{sup -3} could be detected. More recent PEEM investigations of Schottky contacts were reported in the late 1990s by Giesen et al., followed by a series of papers in the early 2000s addressing doping contrast in PEEM by Ballarotto and co-workers and Frank and co-workers. In contrast to PEEM, comparatively little has been done to identify contrast mechanisms and assess the capabilities of LEEM for imaging semiconductor device structures. The one exception is the work of Mankos et al., who evaluated the impact of high-throughput requirements on LEEM designs and demonstrated new applications of imaging modes with a tilted electron beam. To assess its potential as a semiconductor device imaging tool and to identify contrast mechanisms, we used LEEM to investigate doped Si test structures. In section 2, Imaging Oxide-Covered Doped Si Structures Using LEEM, we show that the LEEM technique is able to provide reasonably high contrast images across lateral pn junctions. The observed contrast is attributed to a work function difference ({Delta}{phi}) between the p- and n-type regions. However, because the doped regions were buried under a thermal oxide ({approx}3.5 nm thick), e-beam charging during imaging prevented quantitative measurements of {Delta}{phi}. As part of this project, we also investigated a series of similar test structures in which the thermal oxide was removed by a chemical etch. With the oxide removed, we obtained intensity-versus-voltage (I-V) curves through the transition from mirror to LEEM mode and determined the relative positions of the vacuum cutoffs for the differently doped regions. Although the details are not discussed in this report, the relative positions in voltage of the vacuum cutoffs are a direct measure of the work function difference ({Delta}{phi}) between the p- and n-doped regions.
The next generation of capability-class, massively parallel processing (MPP) systems is expected to have hundreds of thousands to millions of processors. In such environments, it is critical to have fault-tolerance mechanisms, including checkpoint/restart, that scale with the size of applications and the percentage of the system on which the applications execute. For application-driven, periodic checkpoint operations, the state of the art does not provide a scalable solution. For example, on today's massive-scale systems that execute applications which consume most of the memory of the employed compute nodes, checkpoint operations generate I/O that consumes nearly 80% of the total I/O usage. Motivated by this observation, this project aims to improve I/O performance for application-directed checkpoints through the use of lightweight storage architectures and overlay networks. Lightweight storage provides direct access to underlying storage devices. Overlay networks provide caching and processing capabilities in the compute-node fabric. The combination has potential to significantly reduce I/O overhead for large-scale applications. This report describes our combined efforts to model and understand overheads for application-directed checkpoints, as well as the implementation and performance analysis of a checkpoint service that uses available compute nodes as a network cache for checkpoint operations.
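To illustrate the kind of overhead being modeled (a back-of-the-envelope sketch, not the project's model), the fraction of wall-clock time spent checkpointing can be estimated from the checkpointed memory, the effective aggregate bandwidth, and the checkpoint interval; all numbers below are assumptions.

```python
# Hedged sketch: simple checkpoint-overhead estimate. One checkpoint takes
# (nodes * memory per node) / aggregate bandwidth seconds; the overhead is
# that time as a fraction of the wall clock per checkpoint period. Values
# are illustrative, not measurements from this project.
def checkpoint_overhead(nodes, mem_per_node_gb, bw_gb_s, interval_s):
    ckpt_time = nodes * mem_per_node_gb / bw_gb_s
    return ckpt_time / (interval_s + ckpt_time)

# 10,000 nodes dumping 16 GB each hourly to a 100 GB/s parallel file
# system, versus to a compute-node overlay cache with 10x the bandwidth
print(checkpoint_overhead(10_000, 16, 100, 3600))    # ~0.31
print(checkpoint_overhead(10_000, 16, 1000, 3600))   # ~0.04
```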
Phononic crystals (or acoustic crystals) are the acoustic wave analogue of photonic crystals. Here a periodic array of scattering inclusions located in a homogeneous host material forbids certain ranges of acoustic frequencies from existence within the crystal, thus creating what are known as acoustic (or phononic) bandgaps. The vast majority of phononic crystal devices reported prior to this LDRD were constructed by hand assembling scattering inclusions in a lossy viscoelastic medium, predominantly air, water or epoxy, resulting in large structures limited to frequencies below 1 MHz. Under this LDRD, phononic crystals and devices were scaled to very high (VHF: 30-300 MHz) and ultra high (UHF: 300-3000 MHz) frequencies utilizing finite difference time domain (FDTD) modeling, microfabrication and micromachining technologies. This LDRD developed key breakthroughs in the areas of micro-phononic crystals including physical origins of phononic crystals, advanced FDTD modeling and design techniques, material considerations, microfabrication processes, characterization methods and device structures. Micro-phononic crystal devices realized in low-loss solid materials were emphasized in this work due to their potential applications in radio frequency communications and acoustic imaging for medical ultrasound and nondestructive testing. As a result of the advanced modeling, fabrication and integrated transducer designs, this LDRD produced the first measured phononic crystals and phononic crystal devices (waveguides) operating in the VHF (67 MHz) and UHF (937 MHz) frequency bands and established Sandia as a world leader in the area of micro-phononic crystals.
The need to improve the radiation detection architecture has given rise to increased concern over the potential of equipment or procedures to violate the Fourth Amendment. Protecting the rights guaranteed by the Constitution is a foremost value of every government agency. However, protecting U.S. residents and assets from potentially catastrophic threats is also a crucial role of government. In the absence of clear precedent, the fear of potentially violating rights could lead to the rejection of effective and reasonable means that could reduce risks, possibly saving lives and assets. The goal of this document is not to apply case law to determine what the precedent may be if it exists, but rather to provide a detailed outline that defines searches and seizures, identifies what precedent exists and what does not, and explores what the existing (and non-existing) precedent means for the use of radiation detection inside the nation's borders.
A gradient array apparatus was constructed for the study of porous polymers produced using the process of chemically-induced phase separation (CIPS). The apparatus consisted of a 60-element, two-dimensional array in which a temperature gradient was placed in the y-direction and composition was varied in the x-direction. The apparatus allowed changes in the opacity of blends to be monitored as a function of temperature and cure time by imaging the array over time. The apparatus was validated by dispensing a single blend composition into all 60 wells of the array, curing for 24 hours, and performing the experiment in triplicate. Variations in micron-scale phase separation were readily observed as a function of both curing time and temperature, and there was very good well-to-well consistency as well as trial-to-trial consistency. The poragen was removed from samples cured at different temperatures and SEM images were obtained. The results showed that cure temperature had a dramatic effect on sample morphology, and that combining visual observations made during the curing process with SEM data can enable a much better understanding of the CIPS process and provide predictive capability through the relatively facile generation of composition-process-morphology relationships. Data quality could be greatly enhanced by further improvements in the apparatus. The primary improvements contemplated include the use of a more uniform light source, an optical table, and a CCD camera with data analysis software. These improvements would enable quantification of the amount of scattered light generated from individual elements as a function of cure time. In addition to the gradient array development, porous composites were produced by incorporating metal particles into a blend of poragen, epoxy resin, and crosslinker. The variables examined were metal particle composition, primary metal particle size, metal concentration, and poragen composition. A total of 16 different porous composites were produced and characterized using SEM. In general, the results showed that pore morphology and the distribution of metal particles depended on multiple factors. For example, the use of silver nanoparticles did not significantly affect pore morphology for composites derived from decanol as the poragen, but exceptionally large pores were obtained with the use of decane as the poragen. With regard to the effect of metal particle size, silver nanoparticles were dispersed essentially exclusively in the polymer matrix while silver microparticles were found in pores. For nickel particles, both nanoparticles and microparticles were largely dispersed in the polymer matrix and not in the pores.
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely-available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications (small, self-contained proxies for real applications) is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.
A permeability model for hydrogen transport in a porous material is successfully applied to both laboratory-scale and vehicle-scale sodium alanate hydrogen storage systems. The use of a Knudsen number dependent relationship for permeability of the material in conjunction with a constant area fraction channeling model is shown to accurately predict hydrogen flow through the reactors. Generally applicable model parameters were obtained by numerically fitting experimental measurements from reactors of different sizes and aspect ratios. The degree of channeling was experimentally determined from the measurements and found to be 2.08% of total cross-sectional area. Use of this constant area channeling model and the Knudsen dependent Young & Todd permeability model allows for accurate prediction of the hydrogen uptake performance of full-scale sodium alanate and similar metal hydride systems.
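The sketch below shows how such a combined model can be evaluated: an assumed Knudsen-corrected matrix permeability acting in parallel with open channels occupying 2.08% of the cross-sectional area. The slip-correction form and every numerical value are placeholders; the actual Young & Todd formulation and the fitted parameters are not reproduced here.

```python
# Hedged sketch: effective permeability of a hydride bed treated as a
# porous matrix (with an assumed Knudsen/slip correction) in parallel
# with open channels of fixed area fraction. All constants are
# illustrative placeholders, not the report's fitted values.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(T, p, d_pore, d_mol=2.9e-10):
    mean_free_path = KB * T / (np.sqrt(2.0) * np.pi * d_mol**2 * p)
    return mean_free_path / d_pore

def effective_permeability(T, p, k0=1e-14, d_pore=1e-6,
                           a=4.0, f_channel=0.0208, k_channel=1e-10):
    kn = knudsen_number(T, p, d_pore)
    k_matrix = k0 * (1.0 + a * kn)   # assumed Knudsen-dependent correction
    return (1.0 - f_channel) * k_matrix + f_channel * k_channel

# Darcy estimate of flow through a bed of area A and length L:
# mdot = rho * A * k_eff * dP / (mu * L)
```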
We describe breakthrough results obtained in a feasibility study of a fundamentally new architecture for air-cooled heat exchangers. A longstanding but largely unrealized opportunity in energy efficiency concerns the performance of air-cooled heat exchangers used in air conditioners, heat pumps, and refrigeration equipment. In the case of residential air conditioners, for example, the typical performance of the air-cooled heat exchangers used for condensers and evaporators is at best marginal from the standpoint of achieving the maximum possible coefficient of performance (COP). If by some means it were possible to reduce the thermal resistance of these heat exchangers to a negligible level, a typical energy savings of order 30% could be immediately realized. It has long been known that a several-fold increase in heat exchanger size, in conjunction with the use of much higher volumetric flow rates, provides a straightforward path to this goal, but this is not practical from the standpoint of real-world applications. The tension in the marketplace between the need for energy efficiency and logistical considerations such as equipment size, cost, and operating noise has resulted in a compromise that is far from ideal. This is the reason that a typical residential air conditioner exhibits significant sensitivity to reductions in fan speed and/or fouling of the heat exchanger surface. The prevailing wisdom is that little can be done to improve this situation; the 'fan-plus-finned-heat-sink' heat exchanger architecture used throughout the energy sector represents an extremely mature technology for which there is little opportunity for further optimization. But the fact remains that conventional fan-plus-finned-heat-sink technology simply doesn't work that well. The primary physical barrier to achieving low thermal resistance is the boundary layer of motionless air that adheres to and envelops all surfaces of the heat exchanger. Within this boundary layer region, diffusive transport is the dominant mechanism for heat transfer. The resulting thermal bottleneck largely determines the thermal resistance of the heat exchanger. No one has yet devised a practical solution to the boundary layer problem. Another longstanding problem is inevitable fouling of the heat exchanger surface over time by particulate matter and other airborne contaminants. This problem is especially important in residential air conditioner systems, where often little or no preventative maintenance is practiced. The heat sink fouling problem also remains unsolved. The third major problem (alluded to earlier) concerns inadequate airflow to the heat exchanger resulting from restrictions on fan noise. The air-cooled heat exchanger described here solves all three of these problems simultaneously. The 'Air Bearing Heat Exchanger' provides a several-fold reduction in boundary layer thickness, intrinsic immunity to heat sink fouling, and drastic reductions in noise. It is also very practical from the standpoint of cost, complexity, ruggedness, etc. Successful development of this technology is also expected to have far-reaching impact in the IT sector from the standpoint of solving the 'Thermal Brick Wall' problem (which currently limits CPU clock speeds to {approx}3 GHz) and of addressing increasing concern about the electrical power consumption of our nation's information technology infrastructure.
This report examines the interactions involved with flashover along a surface in high density electronegative gases. The focus is on fast ionization processes rather than the later-time ionic drift or thermalization of the discharge. A kinetic simulation of the gas and surface is used to examine electron multiplication and includes gas collision, excitation, ionization, and attachment processes, gas photoionization and surface photoemission processes, as well as surface attachment. These rates are then used in a 1.5D fluid ionization wave (streamer) model to study streamer propagation with and without the surface in air and in SF6; the 1.5D model therefore includes rates for all these processes. To get a better estimate for the behavior of the radius we have studied radial expansion of the streamer in air and in SF6. The focus of the modeling is on voltage and field level changes (with and without a surface) rather than secondary effects, such as velocities or changes in discharge path. An experiment has been set up to carry out measurements of threshold voltages, streamer velocities, and other discharge characteristics. This setup includes both electrical and photographic diagnostics (streak and framing cameras). We have observed little change in critical field levels (where avalanche multiplication sets in) in the gas alone versus with the surface; comparisons between model calculations and experimental measurements are in agreement with this. We have examined streamer sustaining fields (the field which maintains ionization wave propagation) in the gas and on the surface. Agreement of the gas levels with available literature is good, and agreement between experiment and calculation is also good. Model calculations do not indicate much difference between the gas alone and the surface levels. Experiments have identified differences in velocity between streamers on the surface and in the gas alone (the surface values being larger).
Understanding the physics of phonon transport at small length scales is increasingly important for basic research in nanoelectronics, optoelectronics, nanomechanics, and thermoelectrics. We conducted several studies to develop an understanding of phonon behavior in very small structures. This report describes the modeling, experimental, and fabrication activities used to explore phonon transport across and along material interfaces and through nanopatterned structures. Toward the understanding of phonon transport across interfaces, we computed the Kapitza conductance for {Sigma}29(001) and {Sigma}3(111) interfaces in silicon, fabricated the interfaces in single-crystal silicon substrates, and used picosecond laser pulses to image the thermal waves crossing the interfaces. Toward the understanding of phonon transport along interfaces, we designed and fabricated a unique differential test structure that can measure the proportion of specular to diffuse thermal phonon scattering from silicon surfaces. Phonon-scale simulation of the test ligaments, as well as continuum scale modeling of the complete experiment, confirmed its sensitivity to surface scattering. To further our understanding of phonon transport through nanostructures, we fabricated microscale-patterned structures in diamond thin films.
In a globalized world, dramatic changes within any one nation cause ripple or even tsunamic effects within neighboring nations and nations geographically far removed. Multinational interventions to prevent or mitigate detrimental changes can easily cause secondary unintended consequences more detrimental and enduring than the feared change instigating the intervention. This LDRD research developed the foundations for a flexible geopolitical and socioeconomic simulation capability that focuses on the dynamic national security implications of natural and man-made trauma for a nation-state and the states linked to it through trade or treaty. The model developed contains a database for simulating all 229 recognized nation-states and sovereignties with the detail of 30 economic sectors, including consumers and natural resources. The model explicitly simulates the interactions among the countries and their governments. Decision making among governments and populations is based on expectation formation. In the simulation model, failed expectations are used as a key metric for tension across states, among ethnic groups, and between population factions. This document provides the foundational documentation for the model.
While climate-change models have done a reasonable job of forecasting changes in global climate conditions over the past decades, recent data indicate that actual climate change may be much more severe. To better understand some of the potential economic impacts of these severe climate changes, Sandia economists estimated the impacts to the U.S. economy of climate change-induced impacts to U.S. precipitation over the 2010 to 2050 time period. The economists developed an impact methodology that converts changes in precipitation and water availability to changes in economic activity, and conducted simulations of economic impacts using a large-scale macroeconomic model of the U.S. economy.
A streamline upwind Petrov-Galerkin finite element method is presented for the case of a reacting mixture of thermally-perfect gases in chemical non-equilibrium. Details of the stabilization scheme and nonlinear solution are presented. The authors have independently implemented the proposed algorithm in two separate codes, for both single-temperature and two-temperature models. Example problems involving a cylinder in Mach 20 crossflow, as well as a three-dimensional blunt nosetip, are shown and compared to established codes.
The peridynamic theory of mechanics attempts to unite the mathematical modeling of continuous media, cracks, and particles within a single framework. It does this by replacing the partial differential equations of the classical theory of solid mechanics with integral or integro-differential equations. These equations are based on a model of internal forces within a body in which material points interact with each other directly over finite distances. The classical theory of solid mechanics is based on the assumption of a continuous distribution of mass within a body. It further assumes that all internal forces are contact forces that act across zero distance. The mathematical description of a solid that follows from these assumptions relies on partial differential equations that additionally assume sufficient smoothness of the deformation for the PDEs to make sense in either their strong or weak forms. The classical theory has been demonstrated to provide a good approximation to the response of real materials down to small length scales, particularly in single crystals, provided these assumptions are met. Nevertheless, technology increasingly involves the design and fabrication of devices at smaller and smaller length scales, even interatomic dimensions. Therefore, it is worthwhile to investigate whether the classical theory can be extended to permit relaxed assumptions of continuity, to include the modeling of discrete particles such as atoms, and to allow the explicit modeling of nonlocal forces that are known to strongly influence the behavior of real materials.
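To make the bond-based picture concrete, here is a minimal one-dimensional sketch (not drawn from the report): material points interact through pairwise bonds within a horizon, and the internal force density at a point is the sum of a linear microelastic bond force over its neighborhood. The micromodulus, horizon, and discretization are illustrative.

```python
# Hedged sketch: 1-D bond-based peridynamic internal force. Each point
# interacts with all neighbors within the horizon `delta`; bonds carry a
# linear microelastic force proportional to bond stretch. Constants and
# the uniform grid are illustrative only.
import numpy as np

def peridynamic_force(x, u, delta, c, dx):
    n = x.size
    force = np.zeros(n)
    for i in range(n):
        for j in range(n):
            xi = x[j] - x[i]                 # reference bond
            if i == j or abs(xi) > delta:
                continue
            eta = u[j] - u[i]                # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            force[i] += c * stretch * np.sign(xi + eta) * dx
    return force

# Example: uniform 0.1% stretch of a bar; interior forces are ~zero,
# nonzero forces appear only near the ends (the peridynamic "skin").
x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
u = 1.0e-3 * x
f = peridynamic_force(x, u, delta=3.0 * dx, c=1.0e9, dx=dx)
```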
This paper applies a pragmatic approach to validation of a fire-dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. The validation problem involves experimental and predicted steady-state temperatures of a calorimeter in a wind-driven hydrocarbon pool fire. Significant aleatory and epistemic sources of uncertainty in the experiments and simulations exist and are transformed to a common basis of interval uncertainty for aggregation and comparison purposes. The validation comparison of experimental and simulation results, and the corresponding criteria and procedures for model substantiation or refutation, take place in "real space" as opposed to "transform space", where various transform measures of discrepancy between experiment and simulation results are calculated and assessed. The versatile model validation approach handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties (discrete and continuous) from multiple correlated and uncorrelated source types, including 1) experimental variability from multiple repeat experiments, 2) uncertainty of experimental inputs, 3) experimental output measurement uncertainties, 4) uncertainties that arise in data processing and inference from raw simulation and experiment outputs, 5) parameter and model-form uncertainties intrinsic to the model, and 6) numerical solution uncertainty from model discretization effects.
In this work we describe a new parallel lattice (PL) filter topology for electrically coupled AlN microresonator based filters. While 4th-order, narrow percent-bandwidth (0.03%) parallel filters based on high-impedance (11 kΩ) resonators have previously been demonstrated at 20 MHz [1], here we realize low insertion loss PL filters at 400-500 MHz with termination impedances from 50 to 150 Ω and much wider percent bandwidths, up to 5.3%. Obtaining high percent bandwidth is a major challenge in microresonator based filters given the relatively low piezoelectric coupling coefficients, k{sub t}{sup 2}, when compared to bulk (BAW) and surface (SAW) acoustic wave filter materials.
We are concerned with transportation accidents and the subsequent fire. Progress is currently being made on a unique capability to model these very challenging events. We have identified Smoothed Particle Hydrodynamics (SPH) as a good method to employ for the impact dynamics of the fluid. SPH is capable of modeling viscous and inertial effects for these impacts for short times. We have also identified our fire code's Lagrangian/Eulerian (L/E) particle capability as an excellent method for fuel transport and spray modeling. This fire code can also model the subsequent fire, including details of the heat and mass transfer necessary for thermal environment predictions. These two methods (SPH and L/E) employ disparate but complementary length and time scales for the calculation, and are suited for coupling given adequate attention to relevant details. Length and time scale interactions are important considerations when joining the two capabilities. Coupling methodologies have been shown to be important to the model accuracy. Focusing on the transfer methods and spatial resolution, a notional impact problem is examined. The outcome helps to quantify the importance of various methods and to better understand the behavior of these modeling methods in a representative environment.
Single-cell analysis offers a promising method of studying cellular functions including investigation of mechanisms of host-pathogen interaction. We are developing a microfluidic platform that integrates single-cell capture along with an optimized interface for high-resolution fluorescence microscopy. The goal is to monitor, using fluorescent reporter constructs and labeled antibodies, the early events in signal transduction in innate immunity pathways of macrophages and other immune cells. The work presented discusses the development of the single-cell capture device, the iCellator chip, that isolates, captures, and exposes cells to pathogenic insults. We have successfully monitored the translocation of NF-κB, a transcription factor, from the cytoplasm to the nucleus after lipopolysaccharide (LPS) stimulation of RAW264.7 macrophages.
Motivated by the needs of seismic inversion and building on our prior experience for fluid-dynamics systems, we present a high-order discontinuous Galerkin (DG) Runge-Kutta method applied to isotropic, linearized elasto-dynamics. Unlike other DG methods recently presented in the literature, our method allows for inhomogeneous material variations within each element, which enables representation of realistic earth models, a feature critical for future use in seismic inversion. Likewise, our method supports curved elements and hybrid meshes that include both simplicial and nonsimplicial elements. We demonstrate the capabilities of this method through a series of numerical experiments including hybrid mesh discretizations of the Marmousi2 model as well as a modified Marmousi2 model with an oscillatory ocean bottom that is exactly captured by our discretization.
Archimedes’ genius was derived in no small part from his ability to effortlessly interpret problems in both geometric and mechanical ways. We explore, in a modern context, the application of mechanical reasoning to geometric problem solving. The general form of this inherently Archimedean approach is described and its specific use is demonstrated with regard to the problem of finding the geodesics of a surface. Archimedes’ approach to thinking about problems may be his greatest contribution, and in that spirit we present some work related to teaching Archimedes’ ideas at an elementary level. The aim is to cultivate the same sort of creative problem solving employed by Archimedes in young students with nascent mechanical reasoning skills.
To study the rebound of a sphere colliding against a flat wall, a test setup was developed in which the sphere is suspended with strings as a pendulum, elevated, and gravity-released to impact the wall. The motion of the sphere was recorded with a high-speed camera and traced with an image-processing program. From the speed of the sphere before and after each collision, the coefficient of restitution was computed and shown to be a function of impact speed, as predicted analytically.
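As a simple illustration of the data reduction described (not the authors' processing code), the speeds just before and after impact can be estimated from the tracked trajectory by short straight-line fits, and the coefficient of restitution taken as their ratio; the impact indexing and window size below are assumptions.

```python
# Hedged sketch: coefficient of restitution e = v_out / v_in from a
# tracked position-versus-time trace, using linear fits over short
# windows on either side of the impact frame. Window size and the way
# the impact frame is located are illustrative choices.
import numpy as np

def restitution(t, x, impact_index, window=5):
    pre = slice(impact_index - window, impact_index)
    post = slice(impact_index + 1, impact_index + 1 + window)
    v_in = abs(np.polyfit(t[pre], x[pre], 1)[0])
    v_out = abs(np.polyfit(t[post], x[post], 1)[0])
    return v_out / v_in

# e.g. take the impact frame as the point of closest approach to the wall:
# e = restitution(t, x, impact_index=int(np.argmin(x)))
```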
The working of induced voltage alteration (IVA) techniques and their major developments in areas of analysis hardware, electrical biasing, detection advances, resolution improvements, and future possibilities are discussed. The IVA technique uses either a scanning electron microscope's (SEM) electron beam or a scanning optical microscope's (SOM) laser beam as the external stimulus. Other IVA techniques were developed using different localized stimuli with the same sensitive biasing approach. The IVA techniques take advantage of the strong signal response of CMOS devices when operated as current-to-voltage converters. To improve the biasing approach, externally induced voltage alteration (XIVA) was introduced, in which an ac choke circuit acts as a constant-voltage source. Synchronization with device operation also allows specific vectors to be analyzed using local photocurrent and thermal stimulus.
This paper presents object-oriented design patterns in the context of object construction and destruction. The examples leverage the newly supported object-oriented features of Fortran 2003. We describe from the client perspective two patterns articulated by Gamma et al. [1]: ABSTRACT FACTORY and FACTORY METHOD. We also describe from the implementation perspective one new pattern: the OBJECT pattern. We apply the Gamma et al. patterns to solve a partial differential equation, and we discuss applying the new pattern to a quantum vortex dynamics code. Finally, we address consequences and describe the use of the patterns in two open-source software projects: ForTrilinos and Morfeus.
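For readers unfamiliar with the pattern names, the sketch below illustrates the FACTORY METHOD idea in Python; the paper's own examples are written in Fortran 2003 and differ in detail, and every class and method name here is invented for illustration.

```python
# Illustrative FACTORY METHOD sketch (Python stand-in for the Fortran 2003
# examples discussed above): the client asks an abstract creator for a
# field object without naming the concrete type; each concrete creator
# decides what to construct. All names are invented.
from abc import ABC, abstractmethod

class Field(ABC):
    @abstractmethod
    def laplacian(self) -> str: ...

class SpectralField(Field):
    def laplacian(self) -> str:
        return "laplacian via FFT"

class FiniteDifferenceField(Field):
    def laplacian(self) -> str:
        return "laplacian via stencils"

class FieldFactory(ABC):
    @abstractmethod
    def create_field(self) -> Field: ...

class SpectralFactory(FieldFactory):
    def create_field(self) -> Field:
        return SpectralField()

def solve_pde(factory: FieldFactory) -> str:
    field = factory.create_field()   # client never names the concrete type
    return field.laplacian()

print(solve_pde(SpectralFactory()))
```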
This report gives an overview of the types of economic methodologies and models used by Sandia economists in their consequence analysis work for the National Infrastructure Simulation & Analysis Center and other DHS programs. It describes the three primary resolutions at which analysis is conducted (microeconomic, mesoeconomic, and macroeconomic), the tools used at these three levels (from data analysis to internally developed and publicly available tools), and how they are used individually and in concert with each other and other infrastructure tools.
Sandia collects environmental data to determine and report the impact of existing SNL/NM operations on the environment. Sandia’s environmental programs include air and water quality, environmental monitoring and surveillance, and activities associated with the National Environmental Policy Act (NEPA). Sandia’s objective is to maintain compliance with federal, state, and local requirements, and to affect the corporate culture so that environmental compliance practices continue to be an integral part of operations.
Quasi-static experimental techniques for fracture toughness have been well developed, and the end notched flexure (ENF) technique has become a typical method to determine mode-II fracture toughness. The ENF technique has also been extended to high-rate testing using the SHPB (Split Hopkinson Pressure Bar) technique for dynamic fracture characterization of composites. In general, the loading condition in dynamic characterization needs to be carefully verified to confirm that the forces are balanced if the same equations are used to calculate the fracture toughness. In this study, we employed highly sensitive polyvinylidene fluoride (PVDF) force transducers to measure the forces on the front wedge and back spans of the three-point bending setup. High-rate digital image correlation (DIC) was also conducted to investigate the stress wave propagation during the dynamic loading. After careful calibration, the PVDF film transducer was made into small square pieces that were embedded on the front loading wedge and back supporting spans. Outputs from the three PVDF transducers as well as the strain gage on the transmission bar were recorded. The DIC result shows that the transverse wave front propagates from the wedge towards the supports. If the crack starts to propagate before force balance is reached, numerical simulation, such as finite element analysis, should be implemented together with the dynamic experimental data to determine the mode-II fracture toughness.
In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: 1. rotation and moment estimation at connection points; 2. providing substructure Ritz vectors that adequately span the connection motion space; and 3. adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: 1. designating response sensor locations; 2. designating force input locations; 3. physical design of the transmission simulator; and 4. modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.
This research utilizes a method for calculating an atomic-scale deformation gradient within the framework of continuum mechanics using atomistic simulations to examine bicrystal grain boundaries subjected to shear loading. We calculate the deformation gradient, its rotation tensor from polar decomposition, and estimates of lattice curvature and vorticity for thin equilibrium bicrystal geometries deformed at low temperature. These simulations reveal pronounced deformation fields that exist in small regions surrounding the grain boundary, and demonstrate the influence of interfacial structure on mechanical behavior for the thin models investigated. Our results also show that more profound insight is gained concerning inelastic grain boundary phenomena by analyzing the deformed structures with regard to these continuum mechanical metrics.
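A minimal numerical sketch of the two core operations named above, a least-squares atomic deformation gradient and its polar decomposition, follows; the neighbor selection, cutoffs, and weighting used in the actual analysis are not reproduced.

```python
# Hedged sketch: per-atom deformation gradient F from reference and
# deformed bond vectors by least squares (dx ~ F dX), followed by the
# polar decomposition F = R U into a rotation R and right stretch U.
# The random bond set and strain level are illustrative.
import numpy as np
from scipy.linalg import polar

def deformation_gradient(dX, dx):
    """dX, dx: (n_neighbors, 3) arrays of reference and deformed bonds."""
    # Least-squares solution of dx_i = F dX_i:  F = (dx^T dX)(dX^T dX)^-1
    return (dx.T @ dX) @ np.linalg.inv(dX.T @ dX)

rng = np.random.default_rng(0)
dX = rng.normal(size=(12, 3))                 # reference neighbor bonds
F_true = np.eye(3) + 0.02 * rng.normal(size=(3, 3))
dx = dX @ F_true.T                            # deformed neighbor bonds
F = deformation_gradient(dX, dx)
R, U = polar(F)                               # rotation and right stretch
```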
This work explores how the high-load limits of HCCI are affected by fuel autoignition reactivity, EGR quality/composition, and EGR unmixedness for naturally aspirated conditions. This is done for PRF80 and PRF60. The experiments were conducted in a single-cylinder HCCI research engine (0.98 liters) with a CR = 14 piston installed. By operating at successively higher engine loads, five load-limiting factors were identified for these fuels: 1) residual-NOx-induced run-away advancement of the combustion phasing, 2) EGR-NOx-induced run-away, 3) EGR-NOx/wall-heating-induced run-away, 4) EGR-induced oxygen deprivation, and 5) excessive partial-burn occurrence due to EGR unmixedness. The actual load-limiting factor depends on the autoignition reactivity of the fuel, the EGR quality level (where high quality refers to the absence of trace species like NO, HC, and CO, i.e., simulated EGR), the level of EGR unmixedness, and the selected pressure-rise rate (PRR). For a reactive fuel like PRF60, large amounts of EGR are required to control the combustion phasing. Therefore, for operation with simulated EGR, the maximum IMEP becomes limited by the available oxygen. When real EGR (with trace species) is used instead of the simulated EGR, the maximum IMEP becomes limited by EGR-NOx/wall-heating-induced run-away. For the moderately reactive PRF80 operated with simulated EGR, the maximum IMEP becomes limited by residual-NOx-induced run-away. Furthermore, operation with real EGR lowers the maximum steady IMEP because of EGR-NOx-induced run-away, similar to PRF60. Finally, the data show that EGR/fresh-gas unmixedness can lead to a substantial reduction of the maximum stable IMEP for operation with a low PRR. This happens because the EGR unmixedness causes occasional partial-burn cycles due to excessive combustion-phasing retard for cycles that induct a substantially higher-than-average level of EGR gases.
A planar temperature imaging diagnostic has been developed and applied to an investigation of naturally occurring thermal stratification in an HCCI engine. Natural thermal stratification is critical for high-load HCCI operation because it slows the combustion heat release; however, little is known about its development or distribution. A tracer-based single-line PLIF imaging technique was selected for its good precision and simplicity. Temperature-map images were derived from the PLIF images based on the temperature sensitivity of the fluorescence signal of the toluene tracer added to the fuel. A well-premixed intake charge ensured that variations in the fuel/air mixture did not affect the signal. Measurements were made in a single-cylinder, optically accessible HCCI research engine (displacement = 0.98 liters) at a typical 1200 rpm operating condition. Since natural thermal stratification develops prior to autoignition, all measurements were made for motored operation. Calibrations were performed in situ by varying the intake temperature and pressure over a wide range. Although the absolute accuracy is limited by the pressure-derived temperatures used for calibration, an uncertainty analysis shows that the precision of the diagnostic for determining temperature variations at a given condition is very good. Application of the diagnostic provided temperature-map images that showed a progressive development of natural thermal stratification in the bulk gas through the latter part of the compression stroke and the early expansion stroke. Applying a PDF analysis with corrections for measurement uncertainties provided additional quantitative results. The data show a clear trend from virtually no stratification at 305° CA (55° bTDC) to significant inhomogeneities at TDC. Near TDC, the images show distinct hotter and colder pockets with a turbulent structure. Images were also acquired across the charge from the mid-plane to the outer boundary layer at 330° CA and TDC. They show an increase in thermal stratification and a change in its structure in the outer boundary layer, and they provide a measure of the boundary-layer thickness. Where possible, results were compared with previous fired-engine and modeling data, and good agreement was found.
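As an illustration of how a single-line tracer-PLIF signal can be converted to a temperature map through an in-cylinder calibration, the sketch below is a generic example (not the paper's processing chain; the calibration points, functional form, and variable names are assumptions).

```python
import numpy as np

# Hypothetical calibration data: mean normalized signal measured at known
# (pressure-derived) bulk temperatures over a range of intake conditions.
T_cal = np.array([700.0, 800.0, 900.0, 1000.0])   # K
S_cal = np.array([1.00, 0.62, 0.41, 0.28])        # normalized signal

# Fit a smooth calibration curve of T versus ln(S), then invert per pixel.
coeffs = np.polyfit(np.log(S_cal), T_cal, deg=2)

def signal_to_temperature(S_image):
    """Convert a normalized PLIF signal image to a temperature map (K)."""
    return np.polyval(coeffs, np.log(S_image))

# Synthetic example image: small fluctuations around the mean signal.
rng = np.random.default_rng(0)
S_image = 0.41 * (1.0 + 0.02 * rng.standard_normal((256, 256)))
T_image = signal_to_temperature(S_image)

# Simple stratification statistics of the kind a PDF analysis summarizes.
print("mean T (K):", T_image.mean())
print("std  T (K):", T_image.std())
```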
Shadowgraph/schlieren imaging techniques have often been used for flow visualization of reacting and non-reacting systems. In this paper we show that high-speed shadowgraph visualization in a high-pressure chamber can also be used to identify cool-flame and high-temperature combustion regions of diesel sprays, thereby providing insight into the time sequence of diesel ignition and combustion. When coupled to simultaneous high-speed Mie-scatter imaging, chemiluminescence imaging, pressure measurement, and spatially-integrated jet luminosity measurements by photodiode, the shadowgraph visualization provides further information about spray penetration after vaporization, spatial location of ignition and high-temperature combustion, and inactive combustion regions where problematic unburned hydrocarbons exist. Examples of the joint application of high-speed diagnostics include transient non-reacting and reacting injections, as well as multiple injections. Shadowgraph and schlieren image processing steps required to account for variations of refractive index within the high-temperature combustion vessel gases are also shown.
Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, SC '09
León, Edgar A.; Riesen, Rolf; Maccabe, Arthur B.; Bridges, Patrick G.
Instruction-level simulation is necessary to evaluate new architectures. However, single-node simulation cannot predict the behavior of a parallel application on a supercomputer. We present a scalable simulator that couples a cycle-accurate node simulator with a supercomputer network model. Our simulator executes individual instances of IBM's Mambo PowerPC simulator on hundreds of cores. We integrated a NIC emulator into Mambo and model the network instead of fully simulating it. This decouples the individual node simulators and makes our design scalable. Our simulator runs unmodified parallel message-passing applications on hundreds of nodes. We can change network and detailed node parameters, inject network traffic directly into caches, and apply different policies to decide when such injection is advantageous. This paper describes our simulator in detail, evaluates it, and demonstrates its scalability. We show its suitability for architecture research by evaluating the impact of cache injection on parallel application performance.
Negative valve overlap (NVO) is a valve strategy employed to retain and recompress residual burned gases to assist HCCI combustion, particularly in the difficult regime of low-load operation. NVO allows the retention of large quantities of hot residual burned gases as well as the possibility of fuel addition for combustion control purposes. Reaction of fuel injected during NVO increases the charge temperature and could also produce reformed fuel species that may affect main combustion phasing. The strategy holds potential for controlling and extending low-load HCCI combustion. The goal of this work is to demonstrate the feasibility of applying two-wavelength PLIF of 3-pentanone to obtain simultaneous, in-cylinder temperature and composition images during different parts of the HCCI/NVO cycle. Measurements are recorded during the intake and main compression strokes, as well as during the more challenging periods of NVO recompression and re-expansion. To improve measurement quality, the effects of diagnostic uncertainty and fluorescence interference are quantified. Temperature, fuel, and EGR images are captured for a range of NVO operating conditions, including main and NVO fuel-injection timings as well as total load. The results demonstrate that the diagnostic is capable of providing information useful for the study of HCCI/NVO engine operation.
Polymer foams are used as encapsulants to provide mechanical, electrical, and thermal isolation for engineered systems. In fire environments, the incident heat flux to a system or structure can cause foams to decompose. Commonly used foams, such as polyurethanes, often liquefy and flow during decomposition, and evolved gases can cause pressurization and ultimately failure of sealed containers. In systems safety and hazard analyses, numerical models are used to predict heat transfer to encapsulated objects or through structures. The thermo-mechanical response of systems involving coupled foam decomposition, liquefaction, and flow can be difficult to predict, and predicting pressurization of sealed systems is particularly challenging. To mitigate the issues caused by liquefaction and flow, hybrid polyurethane cyanate ester foams have been developed that have good adhesion and mechanical properties similar to currently used polyurethane and epoxy foams. The hybrid foam decomposes predictably, forming approximately 50 percent char by weight during decomposition in nitrogen, and it does not liquefy. The charring nature of the hybrid foam has several advantages with respect to modeling heat transfer and pressurization. Those advantages are illustrated by results from recent radiant heat transfer experiments involving encapsulated objects, as well as by results from numerical simulations of those experiments.
Results from an experimental study of the aerodynamic and aeroacoustic properties of a flatback version of the TU Delft DU97-W-300 airfoil are presented for a chord Reynolds number of 3 × 10{sup 6}. The data were gathered in the Virginia Tech Stability Wind Tunnel, which uses a special aeroacoustic test section to enable measurements of airfoil self-noise. Corrected wind tunnel aerodynamic measurements for the DU97-W-300 are compared to previous solid wall wind tunnel data and are shown to give good agreement. Aeroacoustic data are presented for the flatback airfoil, with a focus on the amplitude and frequency of noise associated with the vortex-shedding tone from the blunt trailing edge wake. The effect of a splitter plate attachment on both drag and noise is also presented. Computational Fluid Dynamics predictions of the aerodynamic properties of both the unmodified DU97-W-300 and the flatback version are compared to the experimental data.
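For context on the vortex-shedding tone, a back-of-the-envelope Strouhal estimate (not from the paper; the Strouhal number, velocity, and trailing-edge thickness below are assumptions) indicates where such a tone would be expected to fall.

```python
# Rough estimate of a blunt trailing-edge shedding frequency, f = St * U / h.
# All values are assumed for illustration; they are not the test conditions.

St = 0.22        # typical bluff-body Strouhal number (assumed)
U = 60.0         # free-stream velocity, m/s (assumed)
h = 0.03         # trailing-edge thickness, m (assumed)

f_shed = St * U / h
print(f"estimated shedding frequency: {f_shed:.0f} Hz")  # ~440 Hz
```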
An order-of-convergence (with respect to a path-length parameter) verification study is undertaken for an implementation of the condensed-history algorithm in a Monte Carlo electron transport code. "Condensed-history" refers to simulating the cumulative effects of the electron without modeling each individual collision. A 1992 paper by Larsen derived the expected order of convergence for a few mathematical models of this type of algorithm. We examine the order of convergence of a condensed-history algorithm based on that used in the Integrated TIGER Series (as applied to electron albedo problems) in the presence of Monte Carlo uncertainty.
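For reference, the observed order of convergence in such a study is typically extracted from errors at successively refined path-length parameters via e {approx} C s{sup p}; the sketch below is a generic illustration with made-up numbers, not the code or data from this work.

```python
import numpy as np

# Errors e_i of a quantity of interest (e.g., an albedo) computed at
# path-length parameters s_i.  Values are illustrative only.
s = np.array([0.4, 0.2, 0.1, 0.05])              # path-length parameter
e = np.array([8.0e-3, 4.1e-3, 2.0e-3, 1.0e-3])   # |computed - reference|

# Pairwise estimates: p = log(e_i / e_{i+1}) / log(s_i / s_{i+1})
p_pairwise = np.log(e[:-1] / e[1:]) / np.log(s[:-1] / s[1:])

# Least-squares fit of log(e) = log(C) + p*log(s) uses all levels at once.
p_fit, logC = np.polyfit(np.log(s), np.log(e), 1)

print("pairwise orders:", p_pairwise)
print("fitted order:   ", p_fit)
```

When Monte Carlo uncertainty is present, the errors must be resolved well above the statistical noise at each level for these estimates to be meaningful.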
The well-known "sweep" algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the "forward" and "symmetric" solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems.
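To illustrate what "prefix form" means here, the following sketch (a generic illustration under assumed notation, not the radlib/SCEPTRE implementation) writes the along-a-characteristic recurrence psi{sub i} = a{sub i} psi{sub i-1} + b{sub i} as a composition of affine maps, which is associative and therefore amenable to a parallel scan or cyclic reduction; the reference implementation below evaluates it sequentially.

```python
import numpy as np

def combine(p, q):
    """Compose two affine maps: apply (a1,b1) first, then (a2,b2),
    giving (a2*a1, a2*b1 + b2).  This operator is associative."""
    a1, b1 = p
    a2, b2 = q
    return (a2 * a1, a2 * b1 + b2)

def solve_recurrence(a, b, psi0):
    """Return psi_i for all i via an inclusive prefix scan of (a_i, b_i).
    A parallel version would apply 'combine' in a tree (cyclic reduction
    or a Blelloch scan) instead of this sequential loop."""
    acc = (1.0, 0.0)                 # identity of the combine operator
    psi = np.empty(len(a))
    for i in range(len(a)):
        acc = combine(acc, (a[i], b[i]))
        psi[i] = acc[0] * psi0 + acc[1]
    return psi

# Made-up attenuation factors and sources along one characteristic.
a = np.full(8, 0.9)
b = np.linspace(0.1, 0.05, 8)
print(solve_recurrence(a, b, psi0=1.0))
```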
Electromagnetic shielding (EMS) requirements become more demanding as isolation requirements exceed 100 dB in advanced S-band transceiver designs. Via-hole fences have served such designs well in low temperature cofired ceramic (LTCC) modules when used in two to three rows, depending on requirements. Replacing these vias with slots through the full thickness of a tape layer has been modeled and shown to improve isolation. We expand on a technique for replacing these rows of full tape thickness features (FTTF) with a single row of stacked walls which, by sequential punching, can be made continuous, providing a solid Faraday-cage board element with leak-free seams. We discuss the material incompatibilities and manufacturing considerations that must be addressed for such structures and show preliminary implementations. We also compare the construction of multilayer and single-layer designs.
Recent papers have argued for the benefit of a tighter integration of the disciplines of human factors (HF) and human reliability analysis (HRA). Both disciplines are concerned with human performance, but HF uses performance data to prescribe optimal human-machine interface (HMI) design, whereas HRA applies human performance principles and data to model the probabilistic risk of human activities. Overlap between the two disciplines is hindered by the seeming incompatibility of their respective data needs. For example, although HF studies produce data, especially about the efficacy of particular system designs, these efficacy data are rarely framed in such a way as to provide the magnitude of the performance effect in terms of human error. While qualitative insights for HRA result from HF studies, the HF studies often fail to produce data that inform the quantification of human error. In this paper, the author presents a review of the data requirements for HRA and offers suggestions on how to piggyback HRA data collection on existing HF studies. HRA data requirements include specific parameters such as the effect size of the human performance increment or degradation observed, and classification of the human performance according to a simple set of performance shaping factors.
Recent advances in nanoparticle inks have enabled inkjet printing of metal traces and interconnects with very low (100-200°C) process temperatures. This has enabled integration of printable electronics, such as antennas and radio frequency identification (RFID) tags, with polyimide, Teflon, printed circuit boards (PCBs), and other low-temperature substrates. We discuss here the printing of nanoparticle inks for three-dimensional interconnects and the apparent mechanism of nanoparticle-ink conductivity development at these low process temperatures.
This paper describes a set of critical experiments that were done to gather benchmark data on the effects of rhodium in critical systems. Approach-to-critical experiments with arrays of low-enriched water-moderated and -reflected fuel were performed with rhodium foils sandwiched between the fuel pellets in some of the fuel elements. The results of the experiments are compared with results from two Monte Carlo codes using cross sections from ENDF/B-V, ENDF/B-VI, and ENDF/B-VII.
This paper applies a pragmatic interval-based approach to validation of a fire dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. Significant aleatory and epistemic sources of uncertainty exist in the experiments and simulations. The validation comparison of experimental and simulation results, and the corresponding criteria and procedures for model affirmation or refutation, take place in "real space" as opposed to "difference space," where subtractive differences between experiments and simulations are assessed. The versatile model validation framework handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties from multiple correlated and uncorrelated source types, including:
• experimental variability from multiple repeat experiments
• uncertainty of experimental inputs
• experimental output measurement uncertainties
• uncertainties that arise in data processing and inference from raw simulation and experiment outputs
• parameter and model-form uncertainties intrinsic to the model
• numerical solution uncertainty from model discretization effects
The framework and procedures of the model validation methodology are here applied to a difficult validation problem involving experimental and predicted calorimeter temperatures in a wind-driven hydrocarbon pool fire.
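As a toy illustration of an interval-based comparison in "real space" (a generic sketch, not the paper's framework; all values and the simple aggregation rule are assumptions), the following builds an experimental interval and a prediction interval for a calorimeter temperature and checks their overlap.

```python
# Generic interval-based validation check with made-up numbers.

def interval(center, half_widths):
    """Combine independent uncertainty half-widths by simple summation
    (a deliberately conservative choice; other aggregation rules exist)."""
    hw = sum(half_widths)
    return (center - hw, center + hw)

# Experiment: mean of repeat experiments +/- repeat-to-repeat variability
# and measurement uncertainty (all values assumed, in K).
exp_lo, exp_hi = interval(820.0, [25.0, 15.0])

# Simulation: nominal prediction +/- parameter and discretization uncertainty.
sim_lo, sim_hi = interval(865.0, [30.0, 10.0])

overlap = not (exp_hi < sim_lo or sim_hi < exp_lo)
max_discrepancy = max(abs(sim_hi - exp_lo), abs(exp_hi - sim_lo))

print("intervals overlap:", overlap)
print("largest possible discrepancy (K):", max_discrepancy)
```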
The significant growth in wind turbine installations in the past few years has fueled new scenarios that envision even larger expansion of U.S. wind electricity generation from the current 1.5% to 20% by 2030. Such goals are achievable and would reduce carbon dioxide emissions and energy dependency on foreign sources. In conjunction with such growth are the enhanced opportunities for manufacturers, developers, and researchers to participate in this renewable energy sector. Ongoing research activities at the National Renewable Energy Laboratory and Sandia National Laboratories will continue to contribute to these opportunities. This paper focuses on describing the current research efforts at Sandia's wind energy department, which are primarily aimed at developing large rotors that are lighter and more reliable and that produce more energy.