Distributed systems programming for HPC system management
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Previous wind tunnel experiments up to Mach 3 have provided fluctuating wall-pressure spectra beneath a supersonic turbulent boundary layer, which are essentially flat at low frequency and do not exhibit the theorized ω² dependence. The flat portion of the spectrum extends over two orders of magnitude and represents structures reaching at least 100δ in scale, raising questions about their physical origin. The spatial coherence required over these long lengths may arise from the very-large-scale structures that have been detected in turbulent boundary layers due to groupings of hairpin vortices. To address this hypothesis, data were acquired from a dense spanwise array of fluctuating wall-pressure sensors; invoking Taylor's hypothesis and low-pass filtering the data then allows the temporal signals to be converted into a spatial map of the wall-pressure field. This map reveals streaks of instantaneously correlated pressure fluctuations that are elongated in the streamwise direction and exhibit spanwise alternation of positive and negative events that meander somewhat in tandem. As the low-pass filter cutoff is lowered, the fluctuating pressure magnitude of the coherent structures diminishes while their length increases.
Abstract not provided.
A novel multiphase shock tube has recently been developed to study particle dynamics in gas-solid flows having particle volume fractions between the dilute and granular regimes. Particles are introduced into the tube by a gravity-fed, contoured particle seeder capable of producing dense fields of spatially isotropic particles. The facility produces planar shocks with a maximum shock Mach number of about 2.1 that propagate into air at initially ambient conditions. The primary purpose of this new facility is to provide high-fidelity data on shock-particle interactions in flows having particle volume fractions of about 1 to 50%. To achieve this goal, the facility drives a planar shock into a spatially isotropic field, or curtain, of particles. Experiments are conducted for two configurations, in which the particle curtain is aligned with either the spanwise or the streamwise direction. Arrays of high-frequency-response pressure transducers placed near the particle curtain measure the attenuation and shape change of the shock owing to its interaction with the dense gas-particle field. In addition, simultaneous high-speed imaging is used to visualize the impact of the shock on the particle curtain and to measure the particle motion induced downstream of the shock.
Design calculations for NIF convergent ablator experiments will be described. The convergent ablator experiments measure the implosion trajectory, velocity, and ablation rate of an x-ray-driven capsule and are an important component of the U.S. National Ignition Campaign at NIF. The design calculations are post-processed to provide simulations of the key diagnostics: (1) Dante measurements of hohlraum x-ray flux and spectrum, (2) streaked radiographs of the imploding ablator shell, (3) wedge range filter measurements of D-³He proton output spectra, and (4) GXD measurements of the imploded core. The simulated diagnostics will be compared to the experimental measurements to assess the accuracy of the design code predictions of hohlraum radiation temperature, capsule ablation rate, implosion velocity, shock flash areal density, and x-ray bang time. Post-shot versions of the design calculations are used to enhance understanding of the experimental measurements and will assist in choosing parameters for subsequent shots and the path toward optimal ignition capsule tuning.
We report on the use of thin (~30 µm) photopatterned polymer membranes for on-line preconcentration of single- or double-stranded DNA samples prior to electrophoretic analysis. Shaped UV laser light is used to quickly (~10 s) polymerize a highly crosslinked polyacrylamide plug. By applying an electric field across the membrane, DNA from a dilute sample can be concentrated into a narrow zone (<100 µm wide) at the outside edge of the membrane. The field at the membrane can then be reversed, allowing the narrow plug to be cleanly injected into a separation channel filled with a sieving polymer for analysis. Concentration factors >100 are possible, increasing the sensitivity of analysis for dilute samples. We have fabricated both neutral membranes (purely size-based exclusion) and anionic membranes (size and charge exclusion), and characterized the rate of preconcentration as well as the efficiency of injection from both types of membrane for DNA ranging from a 20-base ssDNA oligonucleotide to >14 kbp dsDNA. We have also investigated the effects of concentration polarization on device performance for the charged membrane. Advantages of the membrane preconcentration approach include the simplicity of device fabrication and operation, and the generic (non-sequence-specific) nature of DNA capture, which is useful for complex or poorly characterized samples where a specific capture sequence is not present. The membrane preconcentration approach is well suited to simple single-level-etch glass chips, with no need for patterned electrodes, integrated heaters, valves, or other elements requiring more complex chip fabrication. Additionally, the ability to concentrate multiple charged analytes into a narrow zone enables a variety of assay functionalities, including enzyme-based and hybridization-based analyses.
The emerging field of metagenomics seeks to assess the genetic diversity of complex mixed populations of bacteria, such as those found at different sites within the human body. A single person's mouth typically harbors up to 100 bacterial species, while surveys of many people have found more than 700 different species, of which ~50% have never been cultivated. In typical metagenomics studies, the cells themselves are destroyed in the process of gathering sequence information, and thus the connection between genotype and phenotype is lost. A great deal of sequence information may be generated, but it is impossible to assign any given sequence to a specific cell. We seek non-destructive, culture-independent means of gathering sequence information from selected individual cells from mixed populations. As a first step, we have developed a microfluidic device for concentrating and specifically labeling bacteria from a mixed population. Bacteria are electrophoretically concentrated against a photopolymerized membrane element, and then incubated with a specific fluorescent label, which can include antibodies as well as specific or non-specific nucleic acid stains. Unbound stain is washed away, and the labeled bacteria are released from the membrane. The stained cells can then be observed via epifluorescence microscopy, or counted via flow cytometry. We have tested our device with three representative bacteria from the human microbiome: E. coli (gut, Gram-negative), Lactobacillus acidophilus (mouth, Gram-positive), and Streptococcus mutans (mouth, Gram-positive), with results comparable to off-chip labeling techniques.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
A bubble in an acoustic field experiences a net 'Bjerknes' force from the nonlinear coupling of its radial oscillations with the oscillating buoyancy force. It is typically assumed that the bubble's net terminal velocity can be found by considering a spherical bubble with the imposed 'Bjerknes stresses'. We have analyzed the motion of such a bubble using a rigorous perturbation approach and found that one must include a term involving an effective mass flux through the bubble that arises from the time average of the second-order nonlinear terms in the kinematic boundary condition. The importance of this term is governed by the dimensionless parameter α = R²φ/ν, where R is the bubble radius, φ is the driving frequency, and ν is the liquid kinematic viscosity. If α is large, this term is unimportant, but if α is small, this term is the dominant factor in determining the terminal velocity.
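As a quick numerical illustration of this criterion (not part of the analysis above), the sketch below evaluates α = R²φ/ν for two assumed bubble sizes; the radius, frequency, and viscosity values are placeholders chosen only to show typical orders of magnitude.

```python
# Illustrative evaluation of alpha = R^2 * phi / nu for a bubble in an
# acoustic field. All parameter values are assumptions for illustration.

NU_WATER = 1.0e-6  # kinematic viscosity of water, m^2/s (room temperature)

def alpha(radius_m, phi_rad_s, nu=NU_WATER):
    """Dimensionless parameter alpha = R^2 * phi / nu."""
    return radius_m**2 * phi_rad_s / nu

phi = 2 * 3.141592653589793 * 20e3   # assumed 20 kHz acoustic drive, rad/s

# A 10-micron bubble: alpha ~ 13, so the mass-flux term is unimportant.
print(alpha(10e-6, phi))
# A 1-micron bubble: alpha ~ 0.13, so the mass-flux term dominates.
print(alpha(1e-6, phi))
```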
Understanding charge transport processes at a molecular level using computational techniques is currently hindered by a lack of appropriate models for incorporating anisotropic electric fields, as occur at charged fluid/solid interfaces, in molecular dynamics (MD) simulations. In this work, we develop a model for including electric fields in MD using an atomistic-to-continuum framework. Our model represents the electric potential on a finite element mesh satisfying a Poisson equation with source terms determined by the distribution of the atomic charges. The method is verified using simulations where analytical solutions are known or comparisons can be made to existing techniques. A calculation of a saltwater solution in a silicon nanochannel is performed to demonstrate the method in a target scientific application.
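To make the coupling idea concrete, here is a minimal 1-D sketch, assuming linear finite elements and an illustrative pair of ion charges: MD point charges are distributed to mesh nodes through the hat shape functions and become the load vector of the Poisson solve. All names and values are assumptions, not the authors' implementation.

```python
# Minimal 1-D sketch of atomistic-to-continuum coupling: point charges
# from an (assumed) MD snapshot become the source term of a Poisson
# equation discretized with linear finite elements.
import numpy as np

L, n = 1.0e-8, 100                 # domain length (m) and number of elements
h = L / n
x = np.linspace(0.0, L, n + 1)     # node coordinates
eps = 80.0 * 8.854e-12             # permittivity of water (F/m), assumed

# Assumed MD charges: positions (m) and charges (C), e.g. a Na+/Cl- pair.
q_pos = np.array([0.3e-8, 0.7e-8])
q_val = np.array([1.602e-19, -1.602e-19])

# Distribute each point charge to the two surrounding nodes via the linear
# (hat) shape functions -- the "source terms determined by the distribution
# of the atomic charges" step.
b = np.zeros(n + 1)
for xq, q in zip(q_pos, q_val):
    i = min(int(xq / h), n - 1)
    w = (xq - x[i]) / h
    b[i] += q * (1.0 - w)
    b[i + 1] += q * w

# Standard stiffness matrix for -eps * phi'' = rho with phi(0) = phi(L) = 0.
A = np.zeros((n + 1, n + 1))
for e in range(n):
    A[e:e + 2, e:e + 2] += (eps / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
A[0, :] = A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0

phi = np.linalg.solve(A, b)        # nodal electric potential
```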
ACM Computer Communication Review
Abstract not provided.
Abstract not provided.
The structure of turbulence in an oscillating channel flow with near-sinusoidal fluctuations in bulk velocity is investigated. Phase-locked particle-image velocimetry data in the streamwise/wall-normal plane are interrogated to reveal the phase modulation of two-point velocity correlation functions and of linear stochastic estimates of the velocity fluctuation field given the presence of a vortex in the logarithmic region of the boundary layer. The results reveal the periodic modulation of turbulence structure between large-scale residual disturbances, relaminarization during periods of strong acceleration, and a quasi-steady flow, with evidence of hairpin vortices, that is established late in the acceleration phase and persists through much of the deceleration period.
Abstract not provided.
Abstract not provided.
The main result we will present is a 2k-approximation algorithm for the following 'k-hypergraph demand matching' problem: given a set system with sets of size ≤ k, where sets have profits and demands and vertices have capacities, find a max-profit subsystem whose demands do not exceed the capacities. The main tool is an iterative way to explicitly build a decomposition of the fractional optimum as 2k times a convex combination of integral solutions. If time permits, we'll also show how the approach can be extended to a 3-approximation for 2-column-sparse packing. The second result is tight with respect to the integrality gap, and the first is near-tight, as a gap lower bound of 2(k-1+1/k) is known.
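To pin down the problem statement (not the 2k-approximation itself, which builds a convex decomposition of the LP optimum), here is a brute-force reference solver over a made-up toy instance; all instance data are assumptions.

```python
# Brute-force reference solver for tiny instances of demand matching:
# choose a max-profit subcollection of sets whose total demand at each
# vertex stays within that vertex's capacity. Illustrates the problem
# definition only -- this is NOT the 2k-approximation algorithm.
from itertools import combinations

# Assumed toy instance with sets of size <= k = 2: (vertices, profit, demand).
sets = [
    ({"a", "b"}, 5, 2),
    ({"b", "c"}, 4, 1),
    ({"a", "c"}, 3, 1),
]
capacity = {"a": 2, "b": 2, "c": 1}

def feasible(chosen):
    load = {v: 0 for v in capacity}
    for verts, _, demand in chosen:
        for v in verts:
            load[v] += demand
    return all(load[v] <= capacity[v] for v in capacity)

best = max(
    (c for r in range(len(sets) + 1) for c in combinations(sets, r) if feasible(c)),
    key=lambda c: sum(p for _, p, _ in c),
)
print([s for s, _, _ in best], "profit:", sum(p for _, p, _ in best))
```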
Journal of Applied Physics
Abstract not provided.
Canadian Journal of Physics
Abstract not provided.
The Z Pulsed Power Facility at Sandia National Laboratories in Albuquerque, New Mexico, USA is one of the world's premier high-energy-density physics facilities. The Z Facility derives its name from the z-pinch phenomenon, a type of plasma confinement in which the electrical current in the plasma generates a magnetic field that compresses it; Z refers to the direction of current flow, the z axis in a three-dimensional Cartesian coordinate system. The multiterawatt, multimegajoule electrical pulse the Facility produces is 100-400 nanoseconds in duration. Research and development programs currently being conducted on the Z Facility include inertial confinement fusion, dynamic material properties, laboratory astrophysics, and radiation effects. The Z Facility vacuum system consists of two subsystems: the center section and load diagnostics. Dry roughing pumps and cryogenic high-vacuum pumps are used to evacuate the 40,000-liter, 200-square-meter center section of the facility, where the experimental load is located. Pumping times on the order of two hours are required to reduce the pressure from atmospheric to 10⁻⁵ Torr. The center section is cycled from atmosphere to high vacuum for each experiment, and the facility is capable of conducting one to two experiments per day. Numerous smaller vacuum pumping systems are used to evacuate the load diagnostics. The megajoules of energy released during an experiment cause damage to the Facility that presents numerous challenges for reliable operation of the vacuum system.
This presentation briefly describes the ongoing study of fuel cell systems on board a commercial airplane. Sandia's current project is focused on Proton Exchange Membrane (PEM) fuel cells applied to specific on-board electrical power needs. We are trying to understand how having a fuel cell on an airplane would affect overall performance, using the fuel required to accomplish a mission to quantify that performance. Our analysis shows the differences between the base airplane and the airplane with the fuel cell. There are many ways of designing such a system, depending on what is done with the waste heat; a system that requires ram-air cooling has a large mass penalty due to increased drag. The bottom-line impact can be expressed as additional fuel required to complete the mission. Early results suggest PEM fuel cells can be used on airplanes with manageable performance impact if heat is rejected properly. For PEMs on aircraft, we are continuing to perform: (1) thermodynamic analysis (investigating configurations); (2) integrated electrical design (with dynamic modeling of the microgrid); (3) hardware assessment (performance, weight, and volume); and (4) galley and peaker applications.
We present the bandwidth enhancement of an EAM monolithically integrated with two mutually injection-locked lasers. Improvements in the modulation efficiency and bandwidth are shown with mutual injection locking.
Advances in electrochemical energy storage science require the development of new or the refinement of existing in situ probes that can be used to establish structure-activity relationships for technologically relevant materials. The drive to develop reversible, high capacity electrodes from nanoscale building blocks creates an additional requirement for high spatial resolution probes to yield information on local structural, compositional, and electronic property changes as a function of the storage state of a material. In this paper, we describe a method for deconstructing a lithium ion battery positive electrode into its basic constituents of ion insertion host particles and a carbon current collector. This model system is then probed in an electrochemical environment using a combination of atomic force microscopy and tunneling spectroscopy to correlate local activity with morphological and electronic configurational changes. Cubic spinel Li₁₊ₓMn₂₋ₓO₄ nanoparticles are grown on graphite surfaces using vacuum deposition methods. The structure and composition of these particles are determined using transmission electron microscopy and Auger microprobe analysis. The response of these particles to initial de-lithiation, along with subsequent electrochemical cycling, is tracked using scanning probe microscopy techniques in polar aprotic electrolytes (lithium hexafluorophosphate in ethylene carbonate:diethylcarbonate). The relationship between nanoparticle size and reversible ion insertion activity will be a specific focus of this paper.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We have found computationally that, at sufficiently high currents, half of the neutrons produced by a deuterium z pinch are thermonuclear in origin. Early experiments below 1-MA current found that essentially none of the neutrons produced by a deuterium pinch are thermonuclear; rather, they are initiated by an instability that creates beam-target neutrons. Many subsequent authors have supported this result, while others have claimed that pinch neutrons are thermonuclear. To resolve this issue, we have conducted fully kinetic, collisional, and electromagnetic simulations of the complete time evolution of a deuterium pinch. We find that at 1-MA pinch currents, most of the neutrons are, indeed, beam-target in origin. At much higher current, half of the neutrons are thermonuclear and half are beam-target, driven by instabilities that produce a power-law fall-off in the ion energy distribution function at large energy. The implications for fusion energy production with such pinches are discussed.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review B
Abstract not provided.
Recent copper wire array shots on Z, when spectroscopically analyzed on a spatially-averaged basis, appear to have achieved ion densities near 10²¹ cm⁻³, electron temperatures of 1.25 keV, and K-shell radiating participation of 70-85% of the load mass. However, pinhole images of the shots reveal considerable structure, including several well-defined intensely radiating 'bright spots', which may be due to enhanced density, temperature, or some combination of the two. We have analyzed these individual spots on selected shots, using line-outs of their spectrum and inferred powers based on their images. We compare the properties of these spots (are they dense, hot, or both?), and examine their effect on inferring the radiating mass.
Abstract not provided.
The parameterization of the fluxes of heat and salt across double-diffusive interfaces is of interest in geophysics, astrophysics, and engineering. The present work is a parametric study of these fluxes using one-dimensional-turbulence (ODT) simulations. Its main distinction is that it considers a parameter space larger than previous studies. Specifically, this work considers the effect on the fluxes of the stability parameter R_ρ, Rayleigh number Ra, Prandtl number, Lewis number, and Richardson number. The ratio Ra/R_ρ is found to be a dominant parameter. Here Ra/R_ρ can be seen as a ratio of destabilizing and stabilizing effects. Trends predicted by the simulations are in good agreement with previous models and available measurements.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In modeling thermal transport in nanoscale systems, classical molecular dynamics (MD) explicitly represents phonon modes and scattering mechanisms, but electrons and their role in energy transport are missing. Furthermore, the assumption of local equilibrium between ions and electrons often fails at the nanoscale. We have coupled MD (implemented in the LAMMPS MD package) with a partial-differential-equation-based representation of the electrons (implemented using finite elements). The coupling between the subsystems occurs via a local version of the two-temperature model. Key parameters of the model are calculated using time-dependent density functional theory with either explicit or implicit energy flow. We will discuss application of this work in the context of the US DOE Center for Integrated Nanotechnologies (CINT).
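A schematic of the local two-temperature coupling is sketched below: an electron temperature field diffuses heat and exchanges energy with the ionic subsystem through a coupling constant g. This is a 1-D continuum illustration with placeholder coefficients, not the LAMMPS/finite-element implementation (in the actual coupled scheme the exchange term enters the MD thermostat rather than a continuum ionic field).

```python
# Schematic 1-D two-temperature model:
#   C_e dT_e/dt = d/dx(kappa_e dT_e/dx) - g (T_e - T_i)
# All coefficients are placeholder values for illustration only.
import numpy as np

n, dx, dt = 100, 1.0e-8, 1.0e-15          # nodes, spacing (m), time step (s)
C_e, kappa_e, g = 3.0e4, 100.0, 1.0e17    # J/(m^3 K), W/(m K), W/(m^3 K)
C_i = 2.0e6                               # ionic heat capacity, J/(m^3 K)

T_e = np.full(n, 1000.0)                  # hot electrons, e.g. after a pulse
T_i = np.full(n, 300.0)                   # ions at room temperature

for _ in range(1000):                     # 1 ps of explicit time stepping
    lap = (np.roll(T_e, 1) - 2 * T_e + np.roll(T_e, -1)) / dx**2  # periodic
    exchange = g * (T_e - T_i)            # local electron-ion energy exchange
    T_e += dt * (kappa_e * lap - exchange) / C_e
    T_i += dt * exchange / C_i

print(T_e.mean(), T_i.mean())             # relaxing toward a common temperature
```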
A MELCOR model has been developed to simulate a pressurized water reactor (PWR) 17 x 17 assembly in a spent fuel pool rack cell undergoing severe accident conditions. To the extent possible, the MELCOR model reflects the actual geometry, materials, and masses present in the experimental arrangement for the Sandia Fuel Project (SFP). The report presents an overview of the SFP experimental arrangement, the MELCOR model specifications, demonstration calculation results, and the input model listing.
Most regulatory agencies worldwide require that containers used for the transportation of natural UF6 and depleted UF6 survive a fully engulfing fire environment for 30 minutes, as described in 10CFR71 and in TS-R-1. The primary objective of this project is to examine the thermo-mechanical performance of 48Y transportation cylinders when exposed to the regulatory hypothetical fire environment without the thermal protection that is currently used for shipments in those countries where it is required. Several studies have analyzed UF6 cylinders to determine if the thermal protection currently used on type 48Y cylinders is necessary for transport; however, none of them could clearly confirm either the survival or the failure of the 48Y cylinder when exposed to the regulatory fire environment without the additional thermal protection. A consortium of five companies that move UF6 is interested in determining if 48Y cylinders can be shipped without the thermal protection that is currently used. Sandia National Laboratories has outlined a comprehensive testing and analysis project to determine if these shipping cylinders are capable of withstanding the regulatory thermal environment without additional thermal protection. Sandia-developed coupled-physics codes will be used for the planned analyses. A series of destructive and non-destructive tests will be performed to acquire the necessary material and behavior information to benchmark the models and to answer the question of whether these containers can survive the fire environment. Both the testing and the analysis phases of this project will consider the state of UF6 under thermal and pressure loads as well as the weakening of the steel container due to the thermal load. Experiments with UF6 are also planned to collect temperature- and pressure-dependent thermophysical properties of this material.
Abstract not provided.
We report on the host-guest interactions between metal-organic frameworks (MOFs) with various profiles and highly polarizable molecules (iodine), with emphasis on identifying preferential sorption sites in these systems. Radioactive iodine-129 (¹²⁹I), along with other volatile radionuclides (³H, ¹⁴C, Xe, and Kr), is a relevant component of the off-gas produced during nuclear fuel reprocessing. Due to its very long half-life, 15.7 × 10⁶ years, and potential health risks in humans, its efficient capture and long-term storage are of great importance. The leading iodine capture technology to date is based on trapping iodine in silver-exchanged mordenite. Our interests are directed towards improving existing capture technologies and developing novel materials and alternative waste forms. Herein we report the first study that systematically monitors iodine loading onto MOFs, an emerging class of porous solid-state materials. In this context, MOFs are of particular interest because (i) they serve as ideal high-capacity storage media, and (ii) they hold potential for selective adsorption from complex streams, owing to their high versatility and tunability. This work highlights studies on highly porous MOFs, both newly developed in our lab and previously known, that possess distinct characteristics (specific surface area, pore volume, pore size, and dimension of the window access to the pore). The materials were loaded to saturation, with elemental iodine introduced from solution as well as from the vapor phase. Uptakes in the range of ~125-150 wt% I2 sorbed were achieved, indicating that these materials outperform all other solid adsorbents to date in terms of overall capacity. Additionally, the loaded materials can be efficiently encapsulated in stable waste forms, including low-temperature sintering glasses. Ongoing studies are focused on gathering qualitative information on the location of the physisorbed iodine molecules within the frameworks: X-ray single-crystal analyses, in conjunction with high-pressure differential pair distribution function (d-PDF) studies, aim to identify preferential sites in the pores and improve MOF robustness. Furthermore, durability studies on the iodine-loaded MOFs and subsequent waste forms include thermal analyses, SEM/EDS elemental mapping, and leach-durability testing. We anticipate that this in-depth analysis will further aid the design of advanced materials capable of addressing the major hallmarks: safe capture, stability, and durability over extended timeframes.
Gas puff z-pinch experiments have been proposed for the refurbished Z (ZR) facility for CY2011. Previous gas puff experiments [Coverdale et al., Phys. Plasmas 14, 056309 (2007)] on pre-refurbishment Z established a world record for laboratory fusion neutron yield. New experiments would establish ZR gas puff capability for X-ray and neutron production and could surpass previous yields. We present validation of ALEGRA simulations against previous Z experiments, including X-ray and neutron yield, modeling of gas puff implosion dynamics for new gas puff nozzle designs, and predictions of X-ray and neutron yields for the proposed gas puff experiments.
Advanced Functional Materials
Abstract not provided.
We are interested in utilizing the thermo-switchable properties of precursor poly(p-phenylene vinylene) (PPV) polymers to develop capacitor dielectrics that will fail at specific temperatures due to the material irreversibly switching from an insulator to a conducting polymer. By utilizing different leaving groups on the polymer main chain, the temperature at which the polymer transforms into a conductor can be varied over a range of temperatures. Electrical characterization of thin-film capacitors prepared from several precursor PPV polymers indicates that these materials have good dielectric properties until they reach elevated temperatures, at which point conjugation of the polymer backbone effectively disables the device. Here, we present the synthesis, dielectric processing, and electrical characterization of a new thermo-switchable polymer dielectric.
High-frequency irradiance variability measured on the ground is caused by the formation, dissipation, and passage of clouds in the sky. If we can identify and associate different cloud types and patterns from satellite imagery, we may be able to predict irradiance variability in areas lacking sensors; because satellite imagery covers the entire U.S., this would allow more accurate integration planning and power-flow modeling over wide areas. Satellite imagery from southern Nevada was analyzed at 15-minute intervals over a year. Methods for image stabilization, cloud detection, and textural classification of clouds were developed and tested. High-performance-computing parallel processing algorithms were also investigated and tested. Artificial neural networks using imagery as inputs were trained on ground-based measurements of irradiance to model the variability, and testing showed some promise as a means for predicting irradiance variability.
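A minimal sketch of the final modeling step follows, assuming synthetic stand-ins for the image-derived features and the ground-measured variability target (the actual study used satellite texture classes and irradiance sensor data).

```python
# Hedged sketch: train a small neural network to map per-image cloud
# features to an irradiance variability index. Feature names, shapes, and
# the synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed features per 15-minute image: [cloud_fraction, texture_energy,
# texture_contrast, mean_brightness]; target: variability index.
X = rng.random((500, 4))
y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.05 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out images:", model.score(X_te, y_te))
```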
Abstract not provided.
Abstract not provided.
Antineutrino detection using inverse beta decay conversion has demonstrated the capability to measure nuclear reactor power and fissile material content for nuclear safeguards. Current efforts focus on aboveground deployment scenarios, for which highly efficient capture and identification of neutrons is needed to measure the anticipated antineutrino event rates in an elevated background environment. In this submission, we report on initial characterization of a new scintillation-based segmented design that uses layers of ZnS:Ag/⁶LiF and an integrated readout technique to capture and identify neutrons created in the inverse beta decay reaction. Laboratory studies with multiple organic scintillator and ZnS:Ag/⁶LiF configurations reliably identify ⁶Li neutron captures in 60 cm-long segments using pulse shape discrimination.
Abstract not provided.
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
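For readers unfamiliar with Pyomo, a minimal concrete model looks like the following; the toy data and the choice of GLPK as solver are illustrative.

```python
# Minimal Pyomo concrete model: a two-variable LP with one capacity
# constraint. Any installed LP solver can replace 'glpk'.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)
model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
model.cap = Constraint(expr=model.x + model.y <= 4)

SolverFactory('glpk').solve(model)   # assumes GLPK is installed
print(model.x.value, model.y.value)
```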
Abstract not provided.
Measure
Abstract not provided.
Abstract not provided.
The Office of Secretary of Defense (OSD) Power Surety Task Force was officially created in early 2008, after nearly two years of work in demand reduction and renewable energy technologies to support the Warfighter in Theater. The OSD Power Surety Task Force is tasked with identifying efficient energy solutions that support mission requirements. Spray foam insulation demonstrations were recently expanded beyond field structures to include military housing at Ft. Belvoir. Initial results from using the foam in both applications are favorable. This project will address the remaining key questions: (1) Can this technology help to reduce utility costs for the Installation Commander? (2) Is the foam cost effective? (3) What application differences in housing affect those key metrics? The critical need for energy solutions in Hawaii and the existing relationships among Sandia, the Department of Defense (DOD), the Department of Energy (DOE), and Forest City, make this location a logical choice for a foam demonstration. This project includes application and analysis of foam to a residential duplex at the Waikulu military community on Oahu, Hawaii, as well as reference to spray foam applied to a PACOM facility and additional foamed units on Maui, conducted during this project phase. This report concludes the analysis and describes the utilization of foam insulation at military housing in Hawaii and the subsequent data gathering and analysis.
A new method for including electrode plasma effects in particle-in-cell simulation of high power devices is presented. It is not possible to resolve the plasma Debye length, λ_D ≈ 1 μm, but using an explicit, second-order, energy-conserving particle pusher avoids numerical heating at large Δx/λ_D ≫ 1. Non-physical plasma oscillations are mitigated with Coulomb collisions and a damped particle pusher. A series of 1-D simulations shows how plasma expansion varies with cell size. This reveals another important scale length, λ_E = T/(eE), where E is the normal electric field in the first vacuum cell in front of the plasma, and T is the plasma temperature. For Δx/λ_E ≲ 1, smooth, physical plasma expansion is observed. However, if Δx/λ_E ≫ 1, the plasma 'expands' in abrupt steps, driven by a numerical instability. For parameters of interest, λ_E ≪ 100 μm. It is not feasible to use cell sizes small enough to avoid this instability in large 3-D simulations.
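To see why this constraint is severe, the sketch below evaluates λ_E = T/(eE) for assumed sheath parameters; with T expressed in eV the elementary charge cancels, so λ_E in meters is simply T_eV/E.

```python
# Illustrative evaluation of the scale length lambda_E = T/(eE).
# Temperature and field values are assumptions, not from the paper.

def lambda_E(T_eV, E_V_per_m):
    """lambda_E in meters; with T in eV, the elementary charge cancels."""
    return T_eV / E_V_per_m

# A 3 eV electrode plasma facing a 1 MV/cm (1e8 V/m) surface field:
print(lambda_E(3.0, 1.0e8))  # 3e-8 m = 0.03 micrometers, far below 100 um
```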
This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.
The objective of this project, which was supported by the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) Chemical and Biological Division (CBD), was to investigate options for the decontamination of the exteriors and interiors of vehicles in the civilian setting in order to restore those vehicles to normal use following the release of a highly toxic chemical. The decontamination of vehicles is especially challenging because they often contain sensitive electronic equipment, multiple materials some of which strongly adsorb chemical agents, and in the case of aircraft, have very rigid material compatibility requirements (i.e., they cannot be exposed to reagents that may cause even minor corrosion). A systems analysis approach was taken to examine existing and future civilian vehicle decontamination capabilities.
Abstract not provided.
We use a recently developed hybrid numerical technique [MacMeccan et al. (2009)] that combines a lattice-Boltzmann (LB) fluid solver with a finite element (FE) solid-phase solver to study suspensions of elastic capsules. The LB method recovers the Navier-Stokes hydrodynamics, while the linear FE method models the deformation of fluid-filled elastic capsules for moderate levels of deformation. The simulation results focus on accurately describing the suspension rheology, including the particle pressure, and relating these changes to changes in the microstructure. Simulations are performed with hundreds of particles in unbounded shear allowing an accurate description of the bulk suspension rheology and microstructure. In contrast to rigid spherical particles, elastic capsules are capable of producing normal stresses in the dilute limit. For dense suspensions, the first normal stress difference is of particular interest. The first normal stress difference, which is negative for dense rigid spherical suspensions, undergoes a sign change at moderate levels of deformation of the suspended capsules.
The first part of this talk provides a basic introduction to the building blocks of domain decomposition solvers. Specific details are given for both the classical overlapping Schwarz (OS) algorithm and a recent iterative substructuring (IS) approach called balancing domain decomposition by constraints (BDDC). A more recent hybrid OS-IS approach is also described. The success of domain decomposition solvers depends critically on the coarse space. Similarities and differences between the coarse spaces for OS and BDDC approaches are discussed, along with how they can be obtained from discrete harmonic extensions. Connections are also made between coarse spaces and multiscale modeling approaches from computational mechanics. As a specific example, details are provided on constructing coarse spaces for incompressible fluid problems. The next part of the talk deals with a variety of implementation details for domain decomposition solvers. These include mesh partitioning options, local and global solver options, reducing the coarse space dimension, dealing with constraint equations, residual weighting to accelerate the convergence of OS methods, and recycling of Krylov spaces to efficiently solve problems with multiple right hand sides. Some potential bottlenecks and remedies for domain decomposition solvers are also discussed. The final part of the talk concerns some recent theoretical advances, new algorithms, and open questions in the analysis of domain decomposition solvers. The focus will be primarily on the work of the speaker and his colleagues on elasticity, fluid mechanics, problems in H(curl), and the analysis of subdomains with irregular boundaries.
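As a concrete illustration of the classical OS building block described above, below is a toy alternating Schwarz iteration for a 1-D Poisson problem with two overlapping subdomains; it is a minimal sketch with no coarse space, and all problem data are illustrative.

```python
# Toy multiplicative (alternating) overlapping Schwarz iteration for
# -u'' = 1 on (0,1) with u(0) = u(1) = 0, two subdomains overlapping
# on [0.4, 0.6]. Convergence rate depends on the overlap width.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)

def solve_sub(lo, hi, u):
    """Dirichlet solve of -u'' = f on nodes lo..hi, boundary data from u."""
    m = hi - lo - 1                      # interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2                 # boundary values enter the rhs
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

left_hi, right_lo = 60, 40               # subdomains [0, 0.6] and [0.4, 1]
for it in range(50):
    solve_sub(0, left_hi, u)
    solve_sub(right_lo, n - 1, u)

exact = 0.5 * x * (1.0 - x)              # analytic solution of -u'' = 1
print("max error:", np.abs(u - exact).max())
```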
Abstract not provided.
Abstract not provided.
Abstract not provided.
SIAM Journal on Numerical Analysis
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present the results of the first stage of a two-stage evaluation of open source visual analytics packages. This stage is a broad feature comparison over a range of open source toolkits. Although we had originally intended to restrict ourselves to comparing visual analytics toolkits, we quickly found that very few were available. So we expanded our study to include information visualization, graph analysis, and statistical packages. We examine three aspects of each toolkit: visualization functions, analysis capabilities, and development environments. With respect to development environments, we look at platforms, language bindings, multi-threading/parallelism, user interface frameworks, ease of installation, documentation, and whether the package is still being actively developed.
The shear webs and laminates of core panels of wind turbine blades must be designed to avoid panel buckling while minimizing blade weight. Typically, buckling resistance is evaluated by consideration of the load-deflection behavior of a blade using finite element analysis (FEA) or full-scale static loading of a blade to failure under a simulated extreme loading condition. This paper examines an alternative means for evaluating blade buckling resistance using non-destructive modal tests or FEA. In addition, panel resonances can be utilized for structural health monitoring by observing changes in the modal parameters of these panel resonances, which are only active in a portion of the blade that is susceptible to failure. Additionally, panel resonances are considered for updating of panel laminate model parameters by correlation with test data. During blade modal tests conducted at Sandia Labs, a series of panel modes with increasing complexity was observed. This paper reports on the findings of these tests, describes potential ways to utilize panel resonances for blade evaluation, health monitoring, and design, and reports recent numerical results to evaluate panel resonances for use in blade structural health assessment.
Abstract not provided.
Abstract not provided.
Silver nanomaterials have significant applications resulting from their optical properties related to surface-enhanced Raman spectroscopy, high electrical conductivity, and anti-microbial impact. A 'green chemistry' synthetic approach for silver nanomaterials minimizes the environmental impact of silver synthesis, as well as lowering the toxicity of the reactive agents. Biopolymers have long been used for stabilization of silver nanomaterials during synthesis, and include gum Arabic, heparin, and common starch. Maltodextrin is a processed derivative of starch with lower molecular weight and an increased number of reactive reducing aldehyde groups, and serves as a suitable single reactant for the formation of metallic silver. Silver nanomaterials can be formed either under a thermal route at neutral pH in water or by reaction at room temperature under more alkaline conditions. Deposited silver materials are formed on substrates from near-neutral-pH solutions at low temperatures near 50 °C. Experimental conditions based on material concentrations, pH, and reaction time are investigated for development of deposited films. Deposit morphology and optical properties are characterized using SEM and UV-vis techniques. Silver nanoparticles are generated under alkaline conditions by a dissolution-reduction method from precipitated silver(II) oxide. Synthesis conditions were explored for the rapid development of stable silver nanoparticle dispersions. UV-vis absorption spectra, powder X-ray diffraction (PXRD), dynamic light scattering (DLS), and transmission electron microscopy (TEM) techniques were used to characterize the nanoparticle formation kinetics and the influence of reaction conditions. The adsorbed content of the maltodextrin was characterized using thermogravimetric analysis (TGA).
Surfpack is a library of multidimensional function approximation methods useful for efficient surrogate-based sensitivity/uncertainty analysis or calibration/optimization. I will survey current Surfpack meta-modeling capabilities for continuous variables and describe recent progress generalizing to both continuous and categorical factors, including relevant test problems and analysis comparisons.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Clinical Infectious Diseases
Abstract not provided.
Journal of Applied Physics
Abstract not provided.
Abstract not provided.
The development of functionalized polymer dielectrics based on poly(norbornene) and poly(PhONDI) (PhONDI = N-phenyl-7-oxanorbornene-5,6-dicarboximide) is presented. Functionalization of the polymer backbones by the thiol-ene reaction was examined to determine if thiol addition improved dielectric properties. Poly(norbornene) was not amenable to functionalization due to the propensity to crosslink under the reaction conditions studied. Poly(PhONDI) could be successfully functionalized, and the functionalized polymer was found to have increased breakdown strength as well as improved solution stability. Initial studies on the development of thiol-functionalized silica/poly(PhONDI) nanocomposites and their dielectric properties will also be discussed.
Journal of Materials Processing Technology
Abstract not provided.
Novel low-loss photopatternable matrix materials for IR metamaterial applications were synthesized using the ring-opening metathesis polymerization (ROMP) of norbornene followed by a partial hydrogenation to remove most of the IR-absorbing olefin groups, which absorb in the 8-12 µm range. Photopatterning was achieved via crosslinking of the remaining olefin groups with α,ω-dithiols via the thiol-ene coupling reaction. Since ROMP is a living polymerization, the molecular weight of the polymer can be controlled simply by varying the ratio of catalyst to monomer. In order to determine the optimum photopatternable IR matrix material, we varied the amount of olefin remaining after the partial hydrogenation. Hydrogenation was accomplished using tosyl hydrazide. The degree of hydrogenation can be controlled by altering the reaction time or reaction stoichiometry, and the by-products can be easily removed during workup by precipitation into ethanol. Several polymers have been prepared using this reduction scheme, including two polymers with 54% and 68% olefin remaining. Free-standing films (approx. 12 µm) were prepared from the 68% olefin material using a draw-down technique and subsequently irradiated with a UV lamp (365 nm) for thirty minutes to induce crosslinking via the thiol-ene reaction. After crosslinking, the olefin IR absorption band disappeared and the Tg of the matrix material increased; both are desirable properties for IR metamaterial applications. The polymer system has inherent photopatternable behavior primarily because of solubility differences between the pre-polymer and the crosslinked matrix. Photopatterned structures using the 54% as well as the 68% olefin material were easily obtained. The synthesis, processing, and IR absorption data and the ramifications for dielectric metamaterials will be discussed.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Silicon microfabrication has seen many decades of development, yet the structural reliability of microelectromechanical systems (MEMS) is far from optimized. The fracture strength of Si MEMS is limited by a combination of poor toughness and nanoscale etch-induced defects. A MEMS-based microtensile technique has been used to characterize the fracture strength distributions of both standard and custom microfabrication processes. Recent improvements permit thousands of test replicates, revealing subtle but important deviations from the commonly assumed two-parameter Weibull statistical model. Subsequent failure analysis through a combination of microscopy and numerical simulation reveals salient aspects of nanoscale flaw control. Grain boundaries, for example, suffer from preferential attack during etch-release, thereby forming failure-critical grain-boundary grooves. We will discuss ongoing efforts to quantify the various factors that affect the strength of polycrystalline silicon, and how weakest-link theory can be used to make worst-case estimates for design.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The International Perspectives on Mitigating Laboratory Biorisks workshop, held at the Renaissance Polat Istanbul Hotel in Istanbul, Republic of Turkey, from October 25 to 27, 2010, sought to promote discussion between experts and stakeholders from around the world on issues related to the management of biological risk in laboratories. The event was organized by Sandia National Laboratories' International Biological Threat Reduction program, on behalf of the US Department of State Biosecurity Engagement Program and the US Department of Defense Cooperative Biological Engagement Program. The workshop came about as a response to US Under Secretary of State Ellen O. Tauscher's statements in Geneva on December 9, 2009, during the Annual Meeting of the States Parties to the Biological Weapons Convention (BWC). Pursuant to those remarks, the workshop was intended to provide a forum for interested countries to share information on biorisk management training, standards, and needs. Over the course of the meeting's three days, participants discussed diverse topics such as the role of risk assessment in laboratory biorisk management, strategies for mitigating risk, measurement of performance and upkeep, international standards, training and building workforce competence, and the important role of government and regulation. The meeting concluded with affirmations of the utility of international cooperation in this sphere and recognition of positive prospects for the future. The workshop was organized as a series of short presentations by international experts on the field of biorisk management, followed by breakout sessions in which participants were divided into four groups and urged to discuss a particular topic with the aid of a facilitator and a set of guiding questions. Rapporteurs were present during the plenary session as well as the breakout sessions, and in particular were tasked with taking notes during discussions and reporting back to the assembled participants a brief summary of points discussed. The presentations and breakout sessions were divided into five topic areas: 'Challenges in Biorisk Management,' 'Risk Assessment and Mitigation Measures,' 'Biorisk Management System Performance,' 'Training,' and 'National Oversight and Regulations.' The topics and questions were chosen by the organizers through consultation with US Government sponsors. The Chatham House Rule on non-attribution was in effect during question-and-answer periods and breakout session discussions.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Ongoing simulations of low-altitude airbursts from hypervelocity asteroid impacts have led to a re-evaluation of the impact hazard that accounts for the enhanced damage potential relative to the standard point-source approximations. Computational models demonstrate that the altitude of maximum energy deposition is not a good estimate of the equivalent height of a point explosion, because the center of mass of an exploding projectile maintains a significant fraction of its initial momentum and is transported downward in the form of a high-temperature jet of expanding gas. This 'fireball' descends to a depth well beneath the burst altitude before its velocity becomes subsonic. The time scale of this descent is similar to the time scale of the explosion itself, so the jet simultaneously couples both its translational and its radial kinetic energy to the atmosphere. Because of this downward flow, larger blast waves and stronger thermal radiation pulses are experienced at the surface than would be predicted for a nuclear explosion of the same yield at the same burst height. For impacts with a kinetic energy below some threshold value, the hot jet of vaporized projectile loses its momentum before it can make contact with the Earth's surface. The 1908 Tunguska explosion is the largest observed example of this first type of airburst. For impacts above the threshold, the fireball descends all the way to the ground, where it expands radially, driving supersonic winds and radiating thermal energy at temperatures that can melt silicate surface materials. The Libyan Desert Glass event, 29 million years ago, may be an example of this second, larger, and more destructive type of airburst. The kinetic energy threshold that demarcates these two airburst types depends on asteroid velocity, density, strength, and impact angle. Airburst models, combined with a reexamination of the surface conditions at Tunguska in 1908, have revealed that several assumptions from the earlier analyses led to erroneous conclusions, resulting in an overestimate of the size of the Tunguska event. Because there is no evidence that the Tunguska fireball descended to the surface, the yield must have been about 5 megatons or lower. Better understanding of airbursts, combined with the diminishing number of undiscovered large asteroids, leads to the conclusion that airbursts represent a large and growing fraction of the total impact threat.
To extend the backlighting capabilities for Sandia's Z-Accelerator, Z-Petawatt, a laser that can provide pulses of 500 fs duration and up to 120 J (100 TW target area) or up to 450 J (Z/Petawatt target area), has been built over the last several years. The main mission of this facility is the generation of high-energy X-rays, such as tin Kα at 25 keV, in ultra-short bursts. Achieving 25 keV radiographs with decent resolution and contrast required addressing multiple problems, such as blocking of hot electrons, minimization of the source size, development of suitable filters, and optimization of laser intensity. Due to the violent environment inside of Z, an additional and very challenging task is devising massive debris and radiation protection measures without losing the functionality of the backlighting system. We will present the first experiments on 25 keV backlighting, including an analysis of image quality and X-ray efficiency.
Numerical simulations [S.A. Slutz et al., Phys. Plasmas 17, 056303 (2010)] indicate that fuel magnetization and preheat could enable cylindrical liner implosions to become an efficient means of generating fusion conditions. A series of simulations has been performed to study the stability of magnetically driven liner implosions. These simulations exhibit the initial growth and saturation of an electrothermal instability. The Rayleigh-Taylor instability further amplifies the resultant density perturbations, developing a spectrum of modes initially peaked at short wavelengths. With time, the spectrum of modes evolves towards longer wavelengths, developing an inverse cascade. The effects of mode coupling, the radial dependence of the magnetic pressure, and the initial surface roughness will be discussed.
Abstract not provided.
This report is the final summation of Sandia's Grand Challenge LDRD project No. 119351, 'Network Discovery, Characterization and Prediction' (the 'NGC'), which ran from FY08 to FY10. The aim of the NGC, in a nutshell, was to research, develop, and evaluate relevant analysis capabilities that address adversarial networks. Unlike some Grand Challenge efforts, that ambition created cultural subgoals, as well as technical and programmatic ones, as the insistence on 'relevancy' required that the Sandia informatics research communities and the analyst user communities come to appreciate each other's needs and capabilities in a very deep and concrete way. The NGC generated a number of technical, programmatic, and cultural advances, detailed in this report. There were new algorithmic insights and research that resulted in fifty-three refereed publications and presentations; this report concludes with an abstract-annotated bibliography pointing to them all. The NGC generated three substantial prototypes that not only achieved their intended goals of testing our algorithmic integration, but also served as vehicles for customer education and program development. The NGC, as intended, has catalyzed future work in this domain; by the end it had already brought in as much new funding as had been invested in it. Finally, the NGC knit together previously disparate research staff and user expertise in a fashion that not only addressed our immediate research goals, but promises to have created an enduring cultural legacy of mutual understanding, in service of Sandia's national security responsibilities in cybersecurity and counterproliferation.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation, and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA. © 2010 Springer-Verlag Berlin Heidelberg.
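These three entities map naturally onto a nested sampling loop. The sketch below is a generic illustration of that structure with placeholder distributions and a placeholder consequence function; it is not the YM PA model.

```python
# Schematic nested Monte Carlo: an outer loop over epistemic parameter
# values e, an inner loop over aleatory sample elements a, and a
# consequence function c(a, e). All models here are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def consequence(a, e):
    """Placeholder consequence model: dose-like quantity from (a, e)."""
    return e * np.exp(-a)

n_epistemic, n_aleatory = 50, 1000
expected_dose = np.empty(n_epistemic)
for i in range(n_epistemic):
    e = rng.uniform(0.5, 2.0)             # epistemic: uncertain-but-fixed value
    a = rng.exponential(1.0, n_aleatory)  # aleatory: random future occurrences
    expected_dose[i] = consequence(a, e).mean()

# The spread over the outer loop displays epistemic uncertainty in the
# aleatory expectation (e.g., plotted as a family of CCDFs).
print(np.percentile(expected_dose, [5, 50, 95]))
```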
Small
Multilayered polymer capsules attract significant research attention and are proposed as candidate materials for diverse biomedical applications, from targeted drug delivery to microencapsulated catalysis and sensors. Despite tremendous efforts, the studies which extend beyond proof of concept and report on the use of polymer capsules in drug delivery are few, as are the developments in encapsulated catalysis with the use of these carriers. In this Concept article, the recent successes of poly(methacrylic acid) hydrogel capsules as carrier vessels for delivery of therapeutic cargo, creation of microreactors, and assembly of sub-compartmentalized cell mimics are discussed. The developed technologies are outlined, successful applications of these capsules are highlighted, capsules properties which contribute to their performance in diverse applications are discussed, and further directions and plausible developments in the field are suggested. © Copyright 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
Time-resolved picosecond pure-rotational coherent anti-Stokes Raman spectroscopy is demonstrated for thermometry and species concentration determination in flames. Time-delaying the probe pulse enables successful suppression of unwanted signals. A theoretical model is under development. ©2010 Optical Society of America.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
The first demonstration of a silicon microring modulator with both an integrated resistive heater and a diode-based temperature sensor is shown. The temperature sensor exhibits a linear response over an external temperature range of more than 85 °C. ©2010 Optical Society of America.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
A novel silicon microdisk modulator with "error-free" ∼3 femtojoule/bit modulation at 12.5 Gb/s has been demonstrated. Modulation with a 1-volt swing allows for compatibility with current and future digital-logic CMOS electronics. ©2010 IEEE.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
We have successfully designed, built, and operated a microlaser based on an AlGaInAs multiple quantum well (MQW) semiconductor saturable absorber (SESA). Optical characterization of the semiconductor absorber, as well as of the microlaser output, is presented. © 2010 Optical Society of America.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
A 1.550 μm OPCPA utilizing a dual-wavelength pumping scheme has been constructed. The system incorporates LBO and KTA for the first- and second-stage amplifiers. Peak powers >310 GW (60 mJ, 180 fs) at 10 Hz have been achieved. ©2010 Optical Society of America.
Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference: 2010 Laser Science to Photonic Applications, CLEO/QELS 2010
We review three photofragmentation detection approaches, describing the detection of (1) vapor-phase mercuric chloride by photofragment emission, (2) vapor-phase nitro-containing compounds by photofragmentation-ionization, and (3) surface-bound organophosphonate compounds by photofragmentation-laser-induced fluorescence. © 2010 Optical Society of America.
Optics Express
In addition to fiber nonlinearity, fiber dispersion plays a significant role in the spectral broadening of incoherent continuous-wave light. In this paper we perform a numerical analysis of the spectral broadening of incoherent light based on a fully stochastic model. Under a wide range of operating conditions, these numerical simulations exhibit striking features such as damped oscillatory spectral broadening (during the initial stages of propagation) and eventual convergence to a stationary, steady-state spectral distribution at sufficiently long propagation distances. In this study we analyze the important role of fiber dispersion in these phenomena. We also derive an analytical rate-equation expression for the spectral broadening. © 2010 Optical Society of America.
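As a rough illustration of how such stochastic simulations can be set up, the sketch below propagates an incoherent field (random spectral phases) through a symmetric split-step model with dispersion and Kerr nonlinearity; all fiber parameters and the input spectrum are illustrative assumptions, not the paper's operating conditions.

```python
# Toy stochastic split-step propagation of incoherent CW-like light.
import numpy as np

n, dt = 4096, 0.05e-12                  # grid size, time step [s]
w = 2 * np.pi * np.fft.fftfreq(n, dt)   # angular frequency grid [rad/s]

beta2 = -20e-27                         # group-velocity dispersion [s^2/m]
gamma = 1.3e-3                          # Kerr coefficient [1/(W m)]
dz, n_steps = 1.0, 500                  # spatial step [m], number of steps

def rms_width(a):
    """RMS spectral width of a time-domain field [rad/s]."""
    p = np.abs(np.fft.fft(a)) ** 2
    return np.sqrt(np.sum(w ** 2 * p) / np.sum(p))

# Incoherent input: Gaussian power spectrum with random spectral phases.
rng = np.random.default_rng(1)
spec0 = np.exp(-(w / 2e12) ** 2) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
field = np.fft.ifft(spec0)
field /= np.sqrt(np.mean(np.abs(field) ** 2))   # normalize to 1 W average power

print("initial RMS spectral width:", rms_width(field))
half_disp = np.exp(0.5j * beta2 * w ** 2 * (dz / 2))  # half-step dispersion
for _ in range(n_steps):
    field = np.fft.ifft(half_disp * np.fft.fft(field))
    field *= np.exp(1j * gamma * np.abs(field) ** 2 * dz)  # nonlinear step
    field = np.fft.ifft(half_disp * np.fft.fft(field))
print("final RMS spectral width:  ", rms_width(field))
```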
Dalton Transactions
An iron-based ionic liquid, Fe((OHCH2CH2)2NH)6(CF3SO3)3, is synthesized in a single-step complexation reaction. Infrared and Raman data suggest NH(CH2CH2OH)2 primarily coordinates to Fe(III) through its alcohol groups. The compound has Tg and Td values of -64 °C and 260 °C, respectively. Cyclic voltammetry reveals quasi-reversible Fe(III)/Fe(II) reduction waves. © 2010 The Royal Society of Chemistry.
Optimization Methods and Software
In this paper, we explore hybrid parallel global optimization using Dividing Rectangles (DIRECT) and asynchronous generating set search (GSS). Both DIRECT and GSS are derivative-free and so require only objective function values; this makes these methods applicable to a wide variety of science and engineering problems. DIRECT is a global search method that strategically divides the search space into ever-smaller rectangles, sampling the objective function at the centre point for each rectangle. GSS is a local search method that samples the objective function at trial points around the current best point, i.e. the point with the lowest function value. Latin hypercube sampling can be used to seed GSS with a good starting point. Using a set of global optimization test problems, we compare the parallel performance of DIRECT and GSS with hybrids that combine the two methods. Our experiments suggest that the hybrid methods are much faster than DIRECT and scale better when more processors are added. This improvement in performance is achieved without any sacrifice in the quality of the solution - the hybrid methods find the global optimum whenever DIRECT does. © 2010 Taylor & Francis.
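The division of labor in such a hybrid can be suggested by the toy sketch below, where a coarse random sample of the box stands in for DIRECT's global phase and a serial compass search stands in for GSS; the paper's actual implementation uses true DIRECT rectangle division and asynchronous parallel GSS, neither of which is reproduced here.

```python
# Toy "global then local" hybrid: global sampling seeds a compass search.
import numpy as np

def objective(x):                    # Rastrigin, a standard multimodal test
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

rng = np.random.default_rng(0)
lo, hi, dim = -5.12, 5.12, 4

# "Global" phase (DIRECT stand-in): sample the box, keep the best point.
cands = rng.uniform(lo, hi, size=(2000, dim))
best = min(cands, key=objective)

# "Local" phase (GSS stand-in): poll along +/- coordinate directions.
step, tol = 1.0, 1e-6
dirs = np.vstack([np.eye(dim), -np.eye(dim)])
while step > tol:
    trials = best + step * dirs
    f_trials = [objective(t) for t in trials]
    i = int(np.argmin(f_trials))
    if f_trials[i] < objective(best):
        best = trials[i]             # successful poll: move
    else:
        step *= 0.5                  # unsuccessful poll: contract step
print(best, objective(best))
```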
Mechanical Systems and Signal Processing
This work presents time-frequency signal processing methods for detecting and characterizing nonlinearity in transient response measurements. The methods are intended for systems whose response becomes increasingly linear as the response amplitude decays. The discrete Fourier transform of the response data is found with various sections of the initial response set to zero. These frequency responses, dubbed zeroed early-time fast Fourier transforms (ZEFFTs), acquire the usual shape of linear frequency response functions (FRFs) as more of the initial nonlinear response is nullified. Hence, nonlinearity is evidenced by a qualitative change in the shape of the ZEFFT as the length of the initial nullified section is varied. These spectra are shown to be sensitive to nonlinearity, revealing its presence even if it is active in only the first few cycles of a response, as may be the case with macro-slip in mechanical joints. They also give insight into the character of the nonlinearity, potentially revealing nonlinear energy transfer between modes or the modal amplitudes below which a system behaves linearly. In some cases one can identify a linear model from the late-time, linear response and use it to reconstruct the response that the system would have executed at earlier times had it been linear. This gives an indication of the severity of the nonlinearity and its effect on the measured response. The methods are demonstrated on both analytical and experimental data from systems with slip and impact nonlinearities. © 2010 Elsevier Ltd. All rights reserved.
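A minimal sketch of the ZEFFT computation on synthetic data might look like the following; the signal, with a fast-decaying harmonic standing in for early-time nonlinearity, is an invented example rather than the paper's measurements.

```python
# ZEFFT sketch: zero progressively longer initial sections, then FFT.
import numpy as np

fs, T = 1000.0, 10.0
t = np.arange(0, T, 1 / fs)
# Linear decaying mode plus a nonlinear harmonic active only early on.
resp = np.exp(-0.5 * t) * np.sin(2 * np.pi * 20 * t)
resp += 0.5 * np.exp(-5.0 * t) * np.sin(2 * np.pi * 60 * t)

def zefft(x, n_zero):
    """Zeroed early-time FFT: null the first n_zero samples, then FFT."""
    y = x.copy()
    y[:n_zero] = 0.0
    return np.fft.rfft(y)

freqs = np.fft.rfftfreq(len(resp), 1 / fs)
for n_zero in (0, 250, 500, 1000):       # 0, 0.25, 0.5, 1.0 seconds nulled
    Z = np.abs(zefft(resp, n_zero))
    k = np.argmin(np.abs(freqs - 60.0))  # track the "nonlinear" harmonic
    print(f"zeroed {n_zero/fs:4.2f} s: |Z(60 Hz)| = {Z[k]:.3f}")
```

As the nullified section lengthens past the nonlinear transient, the 60 Hz content collapses while the linear 20 Hz mode persists, which is the qualitative shape change the method exploits.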
IEEE Transactions on Plasma Science
Prior to this research, we developed high-gain GaAs photoconductive semiconductor switches (PCSSs) to trigger 50-300 kV high-voltage switches (HVSs). We have demonstrated that PCSSs can trigger a variety of pulsed-power switches operating at 50-300 kV by locating the trigger generator (TG) directly at the HVS. This was demonstrated for two types of dc-charged trigatrons and two types of field-distortion midplane switches, including a ±100 kV dc switch produced by the High Current Electronics Institute and used in the linear transformer driver. The lowest rms jitter obtained from triggering an HVS with a PCSS was 100 ps from a 300 kV pulse-charged trigatron. PCSSs are the key component in these independently timed, fiber-optically controlled, low-jitter TGs for HVSs. TGs are critical subsystems for reliable and efficient pulsed-power facilities because they control the timing synchronization and amplitude variation of the multiple pulse-forming lines that combine to produce the total system output. Future facility-scale pulsed-power systems are even more dependent on triggering, as they are composed of many more triggered HVSs and produce shaped pulses by independent timing of the HVSs. As pulsed-power systems become more complex, the complexity of the associated trigger systems also increases. One means of reducing this complexity is to charge the trigger system directly from the voltage appearing across the HVS. However, for slow or dc-charged pulsed-power systems this can be particularly challenging, as the dc hold-off of the PCSS declines dramatically. This paper presents results that seek to address HVS performance requirements over large operating ranges by triggering with a pulse-charged PCSS-based TG. Switch operating conditions as low as 45% of self-break were achieved. A dc-charged PCSS-based TG is also introduced and demonstrated over a 39-61 kV operating range. A dc-charged PCSS allows the TG to be charged directly from slow or dc-charged pulsed-power systems. GaAs and neutron-irradiated GaAs (n-GaAs) PCSSs were used to investigate dc-charged operation. © 2010 IEEE.
Abstract not provided.
Review of Scientific Instruments
We are attempting to measure the transmission of iron on Z at plasma temperatures and densities relevant to the boundary between the solar radiation and convection zones. The opacity data we have published to date were taken at an electron density about a factor of 10 below the 9 × 10{sup 22}/cm{sup 3} electron density of this boundary. We present results of two-dimensional (2D) simulations of the heating and expansion of an opacity sample driven by the dynamic hohlraum radiation source on Z. The aim of the simulations is to design foil samples that provide opacity data at increased density. The inputs, or source terms, for the simulations are spatially and temporally varying radiation temperatures with a Lambertian angular distribution. These temperature profiles were inferred on Z with on-axis time-resolved pinhole cameras, x-ray diodes, and bolometers. A typical sample is 0.3 μm of magnesium and 0.078 μm of iron sandwiched between 10 μm layers of plastic. The 2D LASNEX simulations indicate that to increase the density of the sample one should increase the thickness of the plastic backing. © 2010 American Institute of Physics.
Research Evaluation
Current policy and program rationale, objectives, and evaluation rely on a fragmented picture of the innovation process. This is a challenge because officials in both the executive and legislative branches of the United States government see innovation, whether new products, processes, or business models, as the solution to many of the problems the country faces. The logic model is a popular tool for developing and describing the rationale for a policy or program and its context. This article describes generic logic models of both the R&D process and the diffusion process, building on existing theory-based frameworks. A combined, theory-based logic model for the innovation process is then presented. Examples of the elements of the logic, each a possible leverage point or intervention, are provided, along with a discussion of how this comprehensive but simple model might be useful for both evaluation and policy development. © Beech Tree Publishing 2010.
Plasma Sources Science and Technology
We discuss the application of the laser-collisional induced fluorescence (LCIF) technique to produce two-dimensional maps of both electron density and electron temperature in a helium plasma. A collisional-radiative model (CRM) is used to describe the evolution of the electronic states after laser excitation. We discuss generalizations of the time-dependent results which are useful for simplifying data acquisition and analysis. LCIF measurements are performed in plasmas with electron densities ranging from ∼10{sup 9} cm{sup -3} up to nearly 10{sup 11} cm{sup -3}, and comparisons are made between the CRM predictions and the measurements. Finally, the spatial and temporal evolution of an ion sheath formed during a pulsed bias is measured to demonstrate the technique. © 2010 IOP Publishing Ltd.
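The principle behind LCIF can be suggested by a toy rate-equation model: a laser-populated state transfers population to a neighboring state at a rate proportional to electron density, so the ratio of the two fluorescence channels encodes n{sub e}. The three-state system and all rates below are invented round numbers, not the helium CRM of the paper.

```python
# Toy collisional-radiative sketch of the LCIF density dependence.
import numpy as np
from scipy.integrate import solve_ivp

A2, A3 = 1.0e7, 8.0e6        # radiative decay rates [1/s] (invented)
k23 = 1.0e-6                 # electron-impact transfer coeff [cm^3/s] (invented)

def rates(t, y, ne):
    n2, n3 = y
    return [-(A2 + k23 * ne) * n2,        # laser-populated state: decay + transfer
            k23 * ne * n2 - A3 * n3]      # neighbor state: fed by collisions

for ne in (1e9, 1e10, 1e11):              # electron densities [cm^-3]
    sol = solve_ivp(rates, (0, 2e-6), [1.0, 0.0], args=(ne,), max_step=1e-9)
    dt = np.diff(sol.t)
    photons2 = np.sum(A2 * sol.y[0][:-1] * dt)   # time-integrated emission
    photons3 = np.sum(A3 * sol.y[1][:-1] * dt)
    print(f"n_e = {ne:.0e}: fluorescence ratio = {photons3 / photons2:.4f}")
```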
Recent work on eigenvalues and eigenvectors for tensors of order m {>=} 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax{sup m-1} = {lambda}x subject to {parallel}x{parallel} = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
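A minimal sketch of the SS-HOPM iteration for a real symmetric order-3 tensor is given below; the conservative fixed shift and the random test tensor are illustrative choices (the paper develops the convergence theory and principled shift selection).

```python
# SS-HOPM sketch for a symmetric order-3 tensor: x <- normalize(A x^2 + alpha x).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n, n))
A = (A + A.transpose(0, 2, 1) + A.transpose(1, 0, 2) +      # symmetrize over
     A.transpose(1, 2, 0) + A.transpose(2, 0, 1) +          # all 6 index
     A.transpose(2, 1, 0)) / 6                              # permutations

def Axm1(A, x):
    """Contract A with x along m-1 = 2 modes: (A x^{m-1})_i."""
    return np.einsum('ijk,j,k->i', A, x, x)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
alpha = 1.0 + 2.0 * np.linalg.norm(A)    # crude, safely large shift
for _ in range(1000):
    y = Axm1(A, x) + alpha * x           # shifted power step
    x = y / np.linalg.norm(y)

lam = x @ Axm1(A, x)                     # eigenvalue from the converged x
print("eigenvalue:", lam, " residual:", np.linalg.norm(Axm1(A, x) - lam * x))
```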
Abstract not provided.
This paper describes techniques for determining impact deformation and the subsequent reactivity change for a space reactor impacting the ground following a potential launch accident or for large fuel bundles in a shipping container following an accident. This technique could be used to determine the margin of subcriticality for such potential accidents. Specifically, the approach couples a finite element continuum mechanics model (Pronto3D or Presto) with a neutronics code (MCNP). DAGMC, developed at the University of Wisconsin-Madison, is used to enable MCNP geometric queries to be performed using Pronto3D output. This paper summarizes what has been done historically for reactor launch analysis, describes the impact criticality analysis methodology, and presents preliminary results using representative reactor designs.
Laboratories that work with biological agents need to manage the safety risks to persons working in the laboratories and to the human and animal communities in the surrounding areas. Biosafety guidance defines a wide variety of risk mitigation measures, which fall into the following categories: engineering controls, procedural and administrative controls, and the use of personal protective equipment. The determination of which mitigation measures should be used to address specific laboratory risks depends upon a risk assessment. Ideally, a risk assessment should be conducted in a standardized and systematic manner that makes it repeatable and comparable. A risk assessment should clearly define the risk being assessed and avoid overcomplication.
Composite materials behave differently from conventional fuel sources and have the potential to smolder and burn for extended time periods. As the amount of composite material on modern aircraft continues to increase, understanding the response of composites in fire environments becomes increasingly important. An effort is ongoing to enhance the capability to simulate composite material response in fires, including the decomposition of the composite and its interaction with a fire. To adequately model composite material in a fire, two physical models must be developed: first, a decomposition model for the composite material, and second, a model of the interaction with a fire. A porous-media approach for the decomposition model, with a time-dependent formulation including heat, mass, species, and momentum transfer in the porous solid and gas phases, is being implemented in an engineering code, ARIA. ARIA is a Sandia National Laboratories multiphysics code with a range of capabilities including the incompressible Navier-Stokes equations, energy transport equations, species transport equations, non-Newtonian fluid rheology, linear elastic solid mechanics, and electrostatics. To simulate the fire, FUEGO, also a Sandia National Laboratories code, is coupled to ARIA. FUEGO represents the turbulent, buoyantly driven incompressible flow, heat transfer, mass transfer, and combustion. FUEGO and ARIA are uniquely suited to this problem because they were designed using a common architecture (SIERRA) that enhances multiphysics coupling, and both codes are capable of massively parallel calculations, enhancing performance. The decomposition reaction model is developed from small-scale experimental data, including thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) in both nitrogen and air over a range of heating rates, and from available data in the literature. The response of the composite material subject to a radiant heat flux boundary condition is examined to study the propagation of the decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experiments to obtain data for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of this capability for a typical composite material and emphasize the path forward.
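The kind of decomposition kinetics underlying such a model can be illustrated with a single-step Arrhenius reaction integrated over a constant TGA heating ramp, as in the sketch below; the kinetic parameters are placeholders, not the fitted epoxy/carbon-fiber values.

```python
# Single-step Arrhenius decomposition under a linear TGA temperature ramp:
# d(alpha)/dt = A * exp(-E / (R T)) * (1 - alpha)^n
import numpy as np
from scipy.integrate import solve_ivp

A, E, n = 1.0e12, 1.8e5, 1.0   # pre-exponential [1/s], activation [J/mol], order
R = 8.314                      # gas constant [J/(mol K)]
beta = 10.0 / 60.0             # heating rate: 10 K/min expressed in K/s
T0 = 300.0

def dalpha_dt(t, alpha):
    T = T0 + beta * t          # linear TGA temperature ramp
    return A * np.exp(-E / (R * T)) * (1.0 - alpha) ** n

sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [0.0], method="LSODA", max_step=5.0)
T = T0 + beta * sol.t
# Residual mass fraction vs. temperature, the quantity TGA measures.
for Ti in (500, 600, 700):
    print(f"T = {Ti} K: mass fraction = {1 - np.interp(Ti, T, sol.y[0]):.3f}")
```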
Abstract not provided.
Abstract not provided.
A new high-fidelity integrated system method and analysis approach was developed and implemented for consistent and comprehensive evaluations of advanced fuel cycles leading to minimized transuranic (TRU) inventories. The method has been implemented in a code system integrating the capabilities of Monte Carlo N-Particle eXtended (MCNPX) for high-fidelity fuel cycle component simulations. In this report, a Nuclear Energy System (NES) configuration was developed to take advantage of used-fuel recycling and transmutation capabilities in waste management scenarios leading to minimized TRU waste inventories, long-term activities, and radiotoxicities. The reactor systems and fuel cycle components that make up the NES were selected for their ability to perform in tandem to produce clean, safe, and dependable energy in an environmentally conscious manner. The diversity in performance and spectral characteristics was used to enhance TRU waste elimination while efficiently utilizing uranium resources and providing an abundant energy source. A computational modeling approach was developed for integrating the individual models of the NES. A general approach was utilized, allowing the Integrated System Model (ISM) to be modified to simulate other systems with similar attributes. By utilizing this approach, the ISM is capable of performing system evaluations under many different design parameter options. Additionally, the predictive capabilities of the ISM and its computational time efficiency allow for system sensitivity/uncertainty analysis and the implementation of optimization techniques.
Abstract not provided.
The energy spectrum of an H{sup +} beam generated within the HERMES III accelerator is calculated from dosimetry data to refine future experiments. Multiple layers of radiochromic film are exposed to the beam. A graphical user interface was written in MATLAB to align the film images and calculate the beam's dose-depth profile. Singular value regularization is used to stabilize the unfolding and provide the H{sup +} beam's energy spectrum. The beam was found to have major contributions from 1 MeV and 8.5 MeV protons. The HERMES III accelerator is typically used as a pulsed photon source to experimentally obtain the photon impulse response of systems. A series of experiments was performed to explore the use of HERMES III to generate an intense pulsed proton beam. Knowing the beam energy spectrum allows for greater precision in experiment predictions and beam model verification.
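The unfolding step can be illustrated with a truncated-SVD sketch: given a response matrix mapping an energy spectrum to a film-stack dose-depth profile, the spectrum is recovered from noisy dose data by discarding small singular components. The response kernel and grids below are invented, not the HERMES III response functions.

```python
# Truncated-SVD unfolding sketch: recover spectrum s from dose profile d = R s.
import numpy as np

rng = np.random.default_rng(0)
depth = np.linspace(0.5, 6.0, 12)          # film layer depths [mm] (invented)
energy = np.linspace(0.5, 10.0, 20)        # proton energies [MeV]

# Toy response: each energy deposits dose peaked near its range ~ E^1.8.
prange = 0.05 * energy ** 1.8
R = np.exp(-((depth[:, None] - prange[None, :]) ** 2) / 0.8)

s_true = np.exp(-((energy - 1.0) ** 2)) + 0.5 * np.exp(-((energy - 8.5) ** 2))
d = R @ s_true + 0.01 * rng.standard_normal(len(depth))   # noisy dose profile

# Keep only singular components above a simple noise-floor threshold.
U, sig, Vt = np.linalg.svd(R, full_matrices=False)
k = int(np.sum(sig > 0.05 * sig[0]))
s_rec = Vt[:k].T @ ((U[:, :k].T @ d) / sig[:k])

print("kept", k, "of", len(sig), "singular values;",
      "recovered peak near E =", energy[np.argmax(s_rec)], "MeV")
```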
Abstract not provided.
Abstract not provided.
Over the past decade, optical approaches have been introduced that effectively break the diffraction barrier. Of particular note were the introductions of Stimulated Emission Depletion (STED) microscopy, Photo-Activated Localization Microscopy (PALM), and the closely related Stochastic Optical Reconstruction Microscopy (STORM). STORM represents an attractive method for researchers, as it does not require highly specialized optical setups, can be implemented using commercially available dyes, and is more easily amenable to multicolor imaging. We implemented a simultaneous dual-color, direct-STORM imaging system using an objective-based TIRF microscope and a filter-based image splitter. This system allows for excitation and detection of two fluorophores simultaneously, via projection of each fluorophore's signal onto separate regions of a detector. We imaged the sub-resolution organization of the TLR4 receptor, a key mediator of innate immune response, after challenge with lipopolysaccharide (LPS), a bacteria-specific antigen. While distinct forms of LPS have evolved among various bacteria, only some LPS variants (such as that derived from E. coli) typically result in significant cellular immune response. Others (such as that from the plague bacterium Y. pestis) do not, despite affinity to TLR4. We will show that challenge with LPS antigens produces a statistically significant increase in TLR4 receptor clusters on the cell membrane, presumably due to recruitment of receptors to lipid rafts. These changes, however, are only detectable below the diffraction limit and are not evident using conventional imaging methods. Furthermore, we will compare the spatiotemporal behavior of TLR4 receptors in response to different LPS chemotypes in order to elucidate possible routes by which pathogens such as Y. pestis are able to circumvent the innate immune system. Finally, we will exploit the dual-color STORM capabilities to simultaneously image LPS and TLR4 receptors in the cellular membrane at resolutions at or below 40 nm.
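Cluster statistics of the kind reported here are commonly extracted from localization coordinates with density-based clustering; the sketch below applies DBSCAN to synthetic (x, y) localizations as a stand-in for the actual TLR4 analysis pipeline, whose parameters are not specified here.

```python
# Density-based cluster counting on synthetic localization data [nm].
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic field: 20 clusters of ~40 nm extent plus diffuse background.
centers = rng.uniform(0, 5000, size=(20, 2))
clustered = np.vstack([c + rng.normal(0, 20, size=(30, 2)) for c in centers])
background = rng.uniform(0, 5000, size=(400, 2))
locs = np.vstack([clustered, background])

# eps ~ localization-precision scale; min_samples rejects diffuse background.
labels = DBSCAN(eps=40.0, min_samples=10).fit_predict(locs)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"detected {n_clusters} clusters among {len(locs)} localizations")
```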
Hafnium oxide-based MOS capacitors were investigated to determine the response of their electrical properties to radiation environments. In situ capacitance-versus-voltage measurements were analyzed to identify voltage shifts resulting from changes in trapped charge with increasing dose of gamma, neutron, and ion radiation. In situ measurements required investigation and optimization of capacitor fabrication, including dicing, cleaning, metallization, packaging, and wire bonding. A top metal contact of 200 angstroms of titanium followed by 2800 angstroms of gold allowed for repeatable wire bonding and proper electrical response. Gamma and ion irradiations of atomic-layer-deposited hafnium oxide on silicon devices both resulted in a midgap voltage shift of no more than 0.2 V toward less positive voltages. This shift indicates recombination of radiation-induced positive charge with negative trapped charge in the bulk oxide. Silicon ion irradiation caused interface effects in addition to oxide trap effects, resulting in a flatband voltage shift of approximately 0.6 V, also toward less positive voltages. Additionally, no bias-dependent voltage shifts were observed under gamma irradiation, and strong room-temperature annealing of the oxide capacitance was observed after ion irradiation. These characteristics, in addition to the small voltage shifts observed, demonstrate the radiation hardness of hafnium oxide and its applicability for use in space systems.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
A series of experiments was performed at the MEDUSA linear accelerator radiation test facility to evaluate differences in dose measured using different methods. Significant differences in measured radiation dose were observed among the dosimeter types for the same radiation environments; the results are compared and discussed in this report.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Transportation for each step of a closed fuel cycle is analyzed in consideration of the availability of appropriate transportation infrastructure. The United States has both experience and certified casks for transportation that may be required by this cycle, except for the transport of fresh and used MOX fuel and fresh and used Advanced Burner Reactor (ABR) fuel. Packaging that had been used for other fuel with somewhat similar characteristics may be appropriate for these fuels, but would be inefficient. Therefore, the required neutron and gamma shielding, heat dissipation, and criticality were calculated for MOX and ABR fresh and spent fuel. Criticality would not be an issue, but the packaging design would need to balance neutron shielding and regulatory heat dissipation requirements.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. It also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team and highlighted applications successfully using the ITAPS tool suite.
Abstract not provided.
Three large open pool fire experiments involving a calorimeter the size of a spent fuel rail cask were conducted at Sandia National Laboratories' Lurance Canyon Burn Site. These experiments were performed to study the heat transfer between a very large fire and a large cask-like object. In all of the tests, the calorimeter was located at the center of a 7.93-meter diameter fuel pan, elevated 1 meter above the fuel pool. The relative pool size and positioning of the calorimeter conformed to the required positioning of a package undergoing certification fire testing. Approximately 2000 gallons of JP-8 aviation fuel were used in each test. The first two tests had relatively light winds and lasted 40 minutes, while the third had stronger winds and consumed the fuel in 25 minutes. Wind speed and direction, calorimeter temperature, fire envelope temperature, vertical gas plume speed, and radiant heat flux near the calorimeter were measured at several locations in all tests. Fuel regression rate data were also acquired. The experimental setup and certain fire characteristics observed during the tests are described in this paper. Results from three-dimensional fire simulations performed with the Container Analysis Fire Environment (CAFE) fire code are also presented. Comparisons of the thermal response of the calorimeter as measured in each test to the results obtained from the CAFE simulations are presented and discussed.
For certification, packages used for the transportation of plutonium by air must survive the hypothetical thermal environment specified in 10CFR71.74(a)(5). This regulation specifies that 'the package must be exposed to luminous flames from a pool fire of JP-4 or JP-5 aviation fuel for a period of at least 60 minutes.' The regulation was developed when jet propellants (JP) 4 and 5 were the standard jet fuels; however, both are now of limited availability in the United States. JP-4 is very difficult to obtain because it is largely out of use, and JP-5 is available only through military suppliers. The purpose of this paper is to show that readily available JP-8 fuel is a possible substitute for the aforementioned certification test. Comparisons between the properties of the three fuels are given. Results from computer simulations comparing large JP-4 and JP-8 pool fires using Sandia's VULCAN fire model are shown and discussed. Additionally, the Container Analysis Fire Environment (CAFE) code was used to compare the thermal response of a large calorimeter exposed to engulfing fires fueled by each of these three jet propellants. The paper concludes by recommending JP-8 as an alternate fuel that complies with the thermal environment implied in 10CFR71.74.
Sandia National Laboratories has constructed an unyielding target at the end of its 2000-foot rocket sled track. This target is made up of approximately 5 million pounds of concrete, an embedded steel load-spreading structure, and a steel armor plate face that varies from 10 inches thick at the center to 4 inches thick at the left and right edges. The target/track combination allows horizontal impacts of very large objects, such as a full-scale rail cask, at regulatory speeds, or high-speed impacts of smaller packages. The load-spreading mechanism in the target is based upon the proven design that has been in use for over 20 years at Sandia's aerial cable facility. That target, with a weight of 2 million pounds, has successfully withstood impact forces of up to 25 million pounds. The new target is expected to be capable of withstanding impact forces of more than 70 million pounds. During construction, various instrumentation was placed in the target so that its response during severe impacts can be monitored. This paper discusses the construction of the target and provides insights on the testing capabilities at the sled track with this new target.
The ASME Task Group on Computational Mechanics for Explicit Dynamics is investigating the types of finite element models needed to accurately solve various problems that occur frequently in cask design. One such problem is the 1-meter impact onto a puncture spike. The work described in this paper considers this impact for a relatively thin-walled shell, represented as a flat plate. The effects of mesh refinement, friction coefficient, material models, and finite element code are discussed. The actual punch, as defined in the transport regulations, is 15 cm in diameter with a corner radius of no more than 6 mm. The punch used in the initial part of this study has the same diameter but a corner radius of 25 mm. This more rounded punch was used to allow convergence of the solution with a coarser mesh. A future task will be to investigate the effect of a punch with a smaller corner radius. The 25-cm thick type 304 stainless steel plate that represents the cask wall is 1 meter in diameter and has added mass on its edge to represent the remainder of the cask. The amount of added mass was calculated using Nelms' equation, an empirically derived relationship between weight, wall thickness, and ultimate strength that prevents punch-through. The outer edge of the plate is restrained so that it can only move in the direction parallel to the axis of the punch. Results that are compared include the deflection at the edge of the plate, the deflection at the center of the plate, the plastic strains at radius r=50 cm and r=100 cm, and, qualitatively, the distribution of plastic strains. The strains of interest are those on the surface of the plate, not the integration-point strains. Because cask designers are using analyses of this type to determine whether the shell will puncture, a failure theory, including the effect of the triaxial nature of the stress state, is also discussed. The results of this study will help determine what constitutes an adequate finite element model for analyzing the puncture hypothetical accident.
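A failure check of the kind referred to above can be sketched as follows: compute the stress triaxiality from a stress tensor and compare accumulated plastic strain against a Johnson-Cook-style, triaxiality-dependent failure strain. The damage constants below are illustrative literature-style values, not calibrated type 304 stainless steel parameters.

```python
# Triaxiality-dependent failure check (Johnson-Cook-style), a hedged sketch.
import numpy as np

def triaxiality(sigma):
    """Stress triaxiality eta = sigma_mean / sigma_vonMises."""
    mean = np.trace(sigma) / 3.0
    dev = sigma - mean * np.eye(3)
    vm = np.sqrt(1.5 * np.sum(dev * dev))
    return mean / vm

def failure_strain(eta, d1=0.05, d2=3.44, d3=2.12):
    """Failure strain decreasing with triaxiality (illustrative constants)."""
    return d1 + d2 * np.exp(-d3 * eta)

# Example: an equibiaxial membrane-stretching state under the punch [MPa].
sigma = np.array([[400.0, 0, 0], [0, 400.0, 0], [0, 0, 0.0]])
eta = triaxiality(sigma)
eps_f = failure_strain(eta)
eps_p = 0.35                               # accumulated plastic strain (assumed)
print(f"eta = {eta:.2f}, eps_f = {eps_f:.2f}, damage = {eps_p / eps_f:.2f}")
```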
Abstract not provided.
Abstract not provided.
In an effort to address the potential to scale up carbon dioxide (CO{sub 2}) capture and sequestration in United States saline formations, an assessment model is being developed using a national database and modeling tool. This tool builds upon the existing NatCarb database, as well as supplemental geological information, to address the scale-up potential for carbon dioxide storage within these formations. The focus of the assessment model is to address the question, 'Where are opportunities to couple CO{sub 2} storage and extracted water use for existing and expanding power plants, and what are the economic impacts of these systems relative to traditional power systems?' Initial findings, based on the updated NatCarb data, indicate that less than 20% of the existing complete saline formation well data points meet the working depth, salinity, and formation-intersection criteria for combined CO{sub 2} storage and extracted water treatment systems. This finding, while preliminary, suggests that the combined use of saline formations for CO{sub 2} storage and extracted water use may be limited by the selection criteria chosen. A second preliminary finding is that some of the data required for this analysis are not present in all of the NatCarb records. This analysis represents the beginning of a larger, in-depth study of all existing coal and natural gas power plants and saline formations in the U.S. for potential CO{sub 2} storage and water reuse for supplemental cooling. Additionally, it allows for policy insight into the difficult combination of institutional (regulatory) and physical (engineered geological sequestration and extracted water system) constraints across the United States. Finally, a representative scenario for a 1,800 MW subcritical coal-fired power plant (among other types, including supercritical coal, integrated gasification combined cycle, natural gas turbine, and natural gas combined cycle) can draw on existing and new carbon capture, transportation, compression, and sequestration technologies, along with a suite of water extraction and treatment technologies, to assess the system's overall physical and economic viability. Such a plant with 90% capture will reduce net CO{sub 2} emissions (the original emissions less those attributable to the energy required to power the carbon capture and water treatment systems) by less than 90%, and its water demands will increase by approximately 50%. These systems may increase the plant's levelized cost of electricity (LCOE) by approximately 50% or more. This representative example suggests that scaling up these CO{sub 2} capture and sequestration technologies to many plants throughout the country could substantially increase water demands at the regional, and possibly national, level. These scenarios for all power plants and saline formations throughout the U.S. can incorporate new information as it becomes available for planning potential new plant build-out.
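The arithmetic behind the less-than-90% net reduction can be made explicit with a small worked example; the emission factor and parasitic-load fraction below are illustrative assumptions, not results of the assessment model.

```python
# Why 90% capture yields <90% *net* reduction: the capture and water-treatment
# systems consume power, so more fuel is burned per delivered MWh.
base_emissions = 0.9   # t CO2 per net MWh before capture (assumed)
penalty = 0.25         # fraction of gross output consumed by capture/treatment (assumed)
capture = 0.90         # fraction of flue-gas CO2 captured

# Per net MWh delivered, fuel burn (and gross CO2) scales by 1/(1 - penalty).
gross = base_emissions / (1 - penalty)
net = gross * (1 - capture)
print(f"net emissions: {net:.3f} t/MWh "
      f"({(1 - net / base_emissions) * 100:.1f}% net reduction)")
# -> ~0.120 t/MWh, an 86.7% net reduction rather than 90%.
```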
Decision trees, algorithms, software code, risk management, reports, plans, drawings, change control, presentations, and analysis: all are useful tools and efforts, but they are time-consuming, resource-intensive, and potentially costly for projects that have absolute schedule and budget constraints. What efforts are necessary and prudent when a customer calls with a major security problem that must be fixed with a proven, off-the-approval-list, multi-layered integrated system, with high visibility and limited funding that expires at the end of the fiscal year? Whether driven by budget cycles, safety, or management decree, many such projects begin with generic scopes and funding allocated on the basis of a rapid management 'guestimate.' A Project Manager (PM) is then assigned a project with a predefined and potentially limited scope, a compressed schedule, and potentially insufficient funding. The PM is tasked to rapidly and cost-effectively coordinate a requirements-based design, implementation, test, and turnover of a fully operational system to the customer, all while the customer is operating and maintaining an existing security system. Many project management manuals would call this an impossible project that should not be attempted. However, security is serious business, and the reality is that rapid deployment of proven systems via an 'Extreme Project' is sometimes necessary. Extreme Projects can be wildly successful, but they require a dedicated team of security professionals led by an experienced project manager using a highly tailored and agile project management process, with management support at all levels, combined with significant customer interface. This paper does not advocate such projects or condone eliminating valuable analysis and project management techniques; indeed, having worked on a well-planned project provides the basis for experienced team members to complete Extreme Projects. It does, however, provide insight into what it takes for projects to be successfully implemented and accepted when completed under extreme conditions.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The U.S. capability to monitor foreign underground nuclear test activities relies heavily on measurement of explosion phenomena, including characteristic seismic, infrasound, radionuclide, and acoustic signals. Despite recent advances in each of these fields, empirical, rather than physics-based, approaches are used to predict and explain observations. Seismologists rely on prior knowledge of the variations of teleseismic and regional seismic parameters such as P- and S-wave arrivals, from simple one-dimensional models in the teleseismic case to somewhat more complicated enhanced two-dimensional models in the regional case. Likewise, radionuclide experts rely on empirical results from a handful of limited experiments to determine the radiological source terms present at the surface after an underground test. To take the next step in advancing the science of monitoring, we need to transform these fields to enable predictive, physics-based modeling and analysis. The Nevada Test Site Source Physics Experiments (N-SPE) provide a unique opportunity to gather precise data from well-designed experiments to improve physics-based modeling capability. In the seismic experiments, data collection will include time-domain reflectometry to measure explosive performance and yield, free-field accelerometers, extensive seismic arrays, and infrasound and acoustic measurements. The improved modeling capability developed from these data should enable important advances in our ability to monitor worldwide for nuclear testing. The first of a series of source physics experiments will be conducted in the granite of Climax Stock at the NTS, near the locations of the HARD HAT and PILE DRIVER nuclear tests. This site not only provides a fairly homogeneous and well-documented geology, but also an opportunity to improve our understanding of how fractures, joints, and faults affect seismic wave generation and propagation. The Climax Stock experiments will consist of a 220 lb (TNT equivalent) calibration shot and a 2200 lb (TNT equivalent) over-buried shot conducted in the same emplacement hole. An identical 2200 lb shot at the same location will follow to investigate the effects of pre-conditioning. These experiments also provide an opportunity to advance capabilities for near-field monitoring and on-site inspections (OSIs) of suspected testing sites. In particular, geologic, physical, and cultural signatures of underground testing can be evaluated using the N-SPE activities as case studies. Furthermore, experiments to measure the migration of radioactive noble gases to the surface from underground explosions will enable development of higher-fidelity radiological source term models that can predict migration through a variety of geologic conditions. Because the detection of short-lived radionuclides is essential to determining whether an explosion was nuclear or conventional, a better understanding of the gaseous and particulate radionuclide source terms that reach the surface from underground testing is critical to development of OSI capability.
IEEE Transactions on Nuclear Science
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
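The smoothing effect of spatial aggregation can be illustrated with a toy simulation in which each sensor sees the same cloud-shading signal after a transit delay; the cloud model and plant geometry below are invented, not measured plant data.

```python
# Toy spatial-smoothing demo: plant-average irradiance ramps are gentler
# than point-sensor ramps when cloud shading arrives with transit delays.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_steps, lag = 16, 3600, 30   # sensors across the plant, 1 s steps

# One cloud-shading signal; each sensor sees it delayed by transit time.
base = np.clip(np.cumsum(rng.normal(0, 0.02, n_steps + n_sensors * lag)), -1, 0)
shading = np.stack([base[i * lag : i * lag + n_steps] for i in range(n_sensors)])
irr = 1000.0 * (1.0 + 0.5 * shading)     # W/m^2: clear-sky value scaled by clouds

point = irr[0]                           # what a single reference cell sees
plant = irr.mean(axis=0)                 # what the whole array actually sees

ramp = lambda x: np.abs(np.diff(x)).max()
print(f"max 1 s ramp: point sensor {ramp(point):.0f} W/m^2, "
      f"plant average {ramp(plant):.0f} W/m^2")
```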