Singular Stress Fields at the Intersection of a Grain Boundary and a Stress-free Edge in a Columnar Polycrystal
Journal of Applied Mechanics
Abstract not provided.
Journal of Applied Mechanics
Abstract not provided.
Abstract not provided.
Abstract not provided.
A recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates is assessed for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary non-equilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological nonequilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, significant differences can be found. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.
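For context on the comparison above, Park's phenomenological model evaluates a modified Arrhenius rate at an effective temperature that blends the translational and vibrational temperatures. A schematic statement of the commonly used form (the constants C, η, and θ_d are reaction-specific fits, and q = 1/2 is the usual choice):

\[
k_f(T_a) = C\,T_a^{\eta}\exp\!\left(-\theta_d/T_a\right),
\qquad
T_a = T^{\,q}\,T_v^{\,1-q},
\]

so that at equilibrium (T = T_v) the expression reduces to the standard Arrhenius rate, consistent with the near-equilibrium agreement reported above.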
Abstract not provided.
The mission is to provide the highest quality materiel solutions for the requirements of our consequence management DoD responders by applying rapid acquisition; to interface with customers to assess needs; to deliver products that exceed expectations; and to optimize safety, performance, cost, schedule, and sustainability throughout the life cycle of each COTS item. On order, support other federal and state agencies and local first responders.
Abstract not provided.
There is a long history of testing crushed salt as backfill for the Waste Isolation Pilot Plant program, but testing was typically done at 100 °C or less. Future applications may involve backfilling crushed salt around heat-generating waste packages, where near-field temperatures could reach 250 °C or hotter. A series of experiments was conducted to investigate the effects of hydrostatic stress on run-of-mine salt at temperatures up to 250 °C and pressures up to 20 MPa. The results of these tests were compared with analogous modeling results. By comparing the modeling results at elevated temperatures to the experimental results, the adequacy of the current crushed salt reconsolidation model was evaluated. The model and experimental results both show an increase in the reconsolidation rate with temperature. The current crushed salt model predicts the experimental results well at a temperature of 100 °C and matches the overall trends, but over-predicts the temperature dependence of the reconsolidation. Further development of the deformation mechanism activation energies would lead to a better prediction of the temperature dependence by the crushed salt reconsolidation model.
Large diameter nested wire array z-pinches imploded on the Z-generator at Sandia National Laboratories have been used extensively to generate high intensity K-shell radiation. Large initial radii are required to obtain the high implosion velocities needed to efficiently radiate in the K-shell. This necessitates low wire numbers and large inter-wire gaps, which introduce large azimuthal non-uniformities. Furthermore, the development of magneto-Rayleigh-Taylor instabilities during the implosion is known to generate large axial non-uniformity. These effects motivate the complete, full circumference 3-dimensional modeling of these systems. Such high velocity implosions also generate large voltages, which increase current losses in the power feed and limit the current delivery to these loads. Accurate representation of the generator coupling is therefore required to reliably represent the energy delivered to, and the power radiated from, these sources. We present 3D resistive MHD calculations of the implosion and stagnation of a variety of large diameter stainless steel wire arrays (hν ≈ 6.7 keV), imploded on the Z-generator both before and after its refurbishment. Use of a tabulated K-shell emission model allows us to compare total and K-shell radiated powers to available experimental measurements. Further comparison to electrical voltage and current measurements allows us to accurately assess the power delivered to these loads. These data allow us to begin to constrain and validate our 3D MHD calculations, providing insight into ways in which these sources may be further optimized.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This paper presents initial designs of multiple-shell gas puff imploding loads for the refurbished Z generator. The nozzle has three independent drivers for three independent plena. The outer and middle plena may be charged to 250 psia whilst the central jet can be charged to 1000 psia. 8-cm and 12-cm outer diameter nozzles have been built and tested on the bench. The unique valve design provides a very fast opening, hence the amount of stray gas outside the core nozzle flow is minimized. A similar 8-cm nozzle was characterized earlier using a fiber optic interferometer, but at lower pressures and without the central jet. Those data have been scaled to the higher pressures required for refurbished Z and used to estimate performance. The use of three independent plena allows variation of the pressure (hence mass distribution) in the nozzle flow, allowing optimization of implosion stability and the on-axis mass that most contributes to K-shell emission. Varying the outer/middle mass ratios influences the implosion time and should affect the details of the assembly on axis as well as the radiation physics. Varying the central jet pressure will have a minor effect on implosion dynamics, but a strong effect on pinch conditions and radiation physics. Optimum mass distributions for planned initial Ar shots on refurbished Z are described. Additional interferometer data including the central jet and at higher pressures will also be presented.
Large diameter (50-70 mm) wire array z pinches are fielded on the refurbished Z machine to generate 1-10 keV K-shell x-ray radiation. Imploding with velocities approaching 100 cm/µs, these loads create large dL/dt, which generates a high voltage, stresses the convolute, and leads to current loss. High velocities are required to reach the few-keV electron temperatures required to strip moderate-atomic-number plasmas to the K shell; thus, there is an inherent trade-off between achieving high velocity and stressing the pulsed power driver via the large dL/dt. Here, we present experiments in which the length of stagnated Cu and stainless steel z pinches was varied from 12-24 mm. The motivation in reducing the pinch height is to lower the final inductance and improve coupling to the generator. Shortening a Cu pinch from 20 to 12 mm by angling the anode glide plane reduced the final L and dL/dt, enhancing the feed current by 1.4 MA, nearly doubling the K-shell power per unit length, and increasing the net K-shell yield by 20%. X-ray spectroscopy is employed to assess differences in plasma conditions between the loads. Lengthening the pinch could lead to yield enhancements by increasing the mass participating in the implosion, provided the increased inductance is not overly detrimental to the current coupling. In addition to the experimental results, these scenarios are studied via thin-shell 0D and also magneto-hydrodynamic modeling with a coupled driver circuit model.
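As a rough illustration of the thin-shell 0D modeling mentioned above, the trade-off between implosion velocity and dL/dt can be explored by integrating the magnetic force per unit length on a cylindrical current shell. This is a minimal Python sketch, not the coupled driver-circuit model used in the paper; the sin² current ramp and the mass, radius, and peak-current values are illustrative placeholders, not the experimental parameters.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def thin_shell_implosion(m_line, r0, current, t_end, dt=1e-10):
    """Integrate the 0D thin-shell model m r'' = -mu0 I(t)^2 / (4 pi r),
    the magnetic force per unit length on a cylindrical current shell."""
    r, v, t = r0, 0.0, 0.0
    hist = [(t, r, v)]
    while t < t_end and r > 0.05 * r0:        # stop near stagnation
        a = -MU0 * current(t) ** 2 / (4.0 * np.pi * r * m_line)
        v += a * dt
        r += v * dt
        t += dt
        hist.append((t, r, v))
    return np.array(hist)

# Hypothetical drive: 20 MA peak with a 100 ns sin^2 rise, then flat.
I_peak, t_rise = 20e6, 100e-9
current = lambda t: I_peak * np.sin(0.5 * np.pi * min(t, t_rise) / t_rise) ** 2

# ~2 mg/cm shell starting at 35 mm radius (illustrative values only).
traj = thin_shell_implosion(m_line=2e-4, r0=0.035, current=current, t_end=150e-9)
print("peak implosion velocity: %.1f cm/us" % (abs(traj[:, 2].min()) / 1e4))
```

From the trajectory, the load inductance per unit length, (μ0/2π) ln(r0/r), and hence dL/dt can be evaluated directly, which is the quantity that stresses the convolute.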
Star wire arrays with two closely spaced wires ('gates') on the inner cylinder were studied. The gate wires were used to study plasma interpenetration and to reproduce transparent and non-transparent regimes of propagation of the imploding plasma through the gates. The non-transparent mode of collision is typical for regular star wire arrays, and it was also observed in Al stars with gate wires of regular length. Gated star arrays demonstrate similar x-ray yield but a slightly different delay of x-ray generation compared to regular stars. Double-length wires were applied as gate wires to increase their inductance and resistance and thereby increase transparency to the imploding plasma. The gate wires were made of Al or high-atomic-number elements, while the rest of the arrays were regular-length Al wires. An intermediate, semi-transparent mode of collision was observed in Al stars with long Al gate wires. Arrays with long heavy-element gate wires demonstrated transparency to the plasma passing through. Shadowgraphy at a wavelength of 266 nm showed that plasma moved through the gate wires. Double implosions, generating a double-peak keV x-ray pulse, were observed in star arrays when the gates were made of high-atomic-number elements. A new laser diagnostic beampath for vertical probing of the Z-pinch was built to test how wires could be used to redirect plasma flow. This setup was designed to test gated arrays and further configurations intended to create a rotating pinch. The results obtained on plasma flow control are discussed and compared with numerical calculations.
Nature Materials
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proposed for publication in the Journal of the Optical Society of America B.
This paper is focused on the optical properties of nanocomposite plasmonic emitters with core/shell configurations, where a fluorescence emitter is located inside a metal nanoshell. Systematic theoretical investigations are presented for the influence of material type, core radius, shell thickness, and excitation wavelength on the internal optical intensity, radiative quantum yield, and fluorescence enhancement of the nanocomposite emitter. It is our conclusion that: (i) an optimal ratio between the core radius and shell thickness is required to maximize the absorption rate of fluorescence emitters, and (ii) a large core radius is desired to minimize the non-radiative damping and avoid significant quantum yield degradation of light emitters. Several experimental approaches to synthesize these nanocomposite emitters are also discussed. Furthermore, our theoretical results are successfully used to explain several reported experimental observations and should prove useful for designing ultra-bright core/shell nanocomposite emitters.
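The two conclusions above are commonly quantified by factoring the fluorescence enhancement into an excitation term and a quantum-yield term. As a schematic statement (standard in the plasmon-enhanced fluorescence literature, not the authors' specific expressions):

\[
F \;=\; \frac{|E(\mathbf{r}_e)|^2}{|E_0|^2}\cdot\frac{\eta}{\eta_0},
\qquad
\eta \;=\; \frac{\Gamma_r}{\Gamma_r + \Gamma_{nr}},
\]

where the first factor is the internal intensity enhancement at the emitter position and the radiative quantum yield η is degraded as metal-induced nonradiative damping Γ_nr grows; conclusion (i) maximizes the first factor, while conclusion (ii) limits the loss in the second.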
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present a shape-first approach to finding automobiles and trucks in overhead images and include results from our analysis of an image from the Overhead Imaging Research Dataset [1]. For the OIRDS, our shape-first approach traces candidate vehicle outlines by exploiting knowledge about an overhead image of a vehicle: a vehicle's outline fits into a rectangle, this rectangle is sized to allow vehicles to use local roads, and rectangles from two different vehicles are disjoint. Our shape-first approach can efficiently process high-resolution overhead imaging over wide areas to provide tips and cues for human analysts, or for subsequent automatic processing using machine learning or other analysis based on color, tone, pattern, texture, size, and/or location (shape first). In fact, computationally-intensive complex structural, syntactic, and statistical analysis may be possible when a shape-first work flow sends a list of specific tips and cues down a processing pipeline rather than sending the whole of wide area imaging information. This data flow may fit well when bandwidth is limited between computers delivering ad hoc image exploitation and an imaging sensor. As expected, our early computational experiments find that the shape-first processing stage appears to reliably detect rectangular shapes from vehicles. More intriguing is that our computational experiments with six-inch GSD OIRDS benchmark images show that the shape-first stage can be efficient, and that candidate vehicle locations corresponding to features that do not include vehicles are unlikely to trigger tips and cues. We found that stopping with just the shape-first list of candidate vehicle locations, and then solving a weighted, maximal independent vertex set problem to resolve conflicts among candidate vehicle locations, often correctly traces the vehicles in an OIRDS scene.
New Journal of Physics
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Chalcogenide compounds based on the rocksalt and tetradymite structures possess good thermoelectric properties and are widely used in a variety of thermoelectric devices. Examples include PbTe and AgSbTe2, which have the rocksalt structure, and Bi2Te3, Bi2Se3, and Sb2Te3, which fall within the broad tetradymite class of structures. These materials are also of interest for thermoelectric nanocomposites, where the aim is to improve thermoelectric energy conversion efficiency by harnessing interfacial scattering processes (e.g., reducing the thermal conductivity by phonon scattering or enhancing the Seebeck coefficient by energy filtering). Understanding the phase stability and microstructural evolution within such materials is key to designing processing approaches for optimal thermoelectric performance and to predicting the long-term nanostructural stability of the materials. In this presentation, we discuss our work investigating relationships between interfacial structure and formation mechanisms in several telluride-based thermoelectric materials. We begin with a discussion of interfacial coherency and its special aspects at interfaces in telluride compounds based on the rocksalt and tetradymite structures. We compare perfectly coherent interfaces, such as the Bi2Te3 (0001) twin, with semi-coherent, misfitting interfaces. We next discuss the formal crystallographic analysis of interfacial defects in these systems and then apply this methodology to high resolution transmission electron microscopy (HRTEM) observations of interfaces in the AgSbTe2/Sb2Te3 and PbTe/Sb2Te3 systems, focusing on interfaces vicinal to {111}/{0001}. Through this analysis, we identify a defect that can accomplish the rocksalt-to-tetradymite phase transformation through diffusive-glide motion along the interface.
Physica Status Solidi
Abstract not provided.
Abstract not provided.
IEEE Power and Energy
Abstract not provided.
Abstract not provided.
A deuterium gas puff z-pinch has been shown to be a significant source of neutrons, with yield scaling with current as Y_n ≈ I^3.5. Recent implicit, electromagnetic, and kinetic particle-in-cell simulations with the LSP code have shown that the yield has significant thermonuclear and beam-target components. Beam-target neutron yield is produced from deuterium ion high-energy tails driven by the Rayleigh-Taylor instability. In this paper, we present further results from 1-3D simulations of deuterium z-pinches over a wider current range of 1.4-20 MA. Preliminary results show that, unlike the high current regime above 7 MA, the yield at lower currents is dominated by beam-target fusion reactions from high energy ions, consistent with experiment. We will also examine in 3D the impact of the Rayleigh-Taylor instability on the ion energy distribution. We discuss the implications of these simulations for neutron yield at still higher currents.
Abstract not provided.
Abstract not provided.
AIAA Journal
Abstract not provided.
Abstract not provided.
Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
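The map-reduce structure described above can be made concrete with a small sketch: each processor tallies a contingency table over its shard, the tables merge by addition (this merge is the communication whose size grows with the data), and derived statistics such as pointwise mutual information follow from the merged table. This is an illustrative Python sketch of the pattern, not the authors' open-source implementation.

```python
from collections import Counter
from math import log2

def local_table(pairs):
    """Map step: each processor tallies a contingency table over its shard."""
    return Counter(pairs)

def merge(tables):
    """Reduce step: contingency tables merge by addition; the communicated
    data grows with the table size, unlike moment-based statistics."""
    total = Counter()
    for t in tables:
        total.update(t)
    return total

def pointwise_mutual_information(table):
    """Derive PMI(x, y) = log2( p(x,y) / (p(x) p(y)) ) from the merged table."""
    n = sum(table.values())
    rows, cols = Counter(), Counter()
    for (x, y), c in table.items():
        rows[x] += c
        cols[y] += c
    return {(x, y): log2(c * n / (rows[x] * cols[y]))
            for (x, y), c in table.items()}

shards = [[("a", 1), ("a", 2)], [("b", 1), ("a", 1)]]
pmi = pointwise_mutual_information(merge(local_table(s) for s in shards))
```

When the data are quasi-diffuse, the merged table approaches the data size itself, which is exactly the regime in which the abstract notes that speedup degrades.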
The objectives of this project are to: (1) move scientific programmers to higher-level, platform-agnostic yet scalable abstractions; (2) demonstrate general OOD patterns and distill new domain-specific patterns from multiphysics applications in Fortran; and (3) construct an open-source framework that encourages the use of the demonstrated patterns. Some conclusions are: (1) calculus illuminates a path toward highly asynchronous computing that blurs the task/data-parallel distinction; (2) Fortran 2003 appears to have the expressiveness to support the general GoF design patterns in multiphysics applications; and (3) several domain-specific and language-specific patterns emerge along the way.
Large, complex networks are ubiquitous in nature and society, and there is great interest in developing rigorous, scalable methods for identifying and characterizing their vulnerabilities. This paper presents an approach for analyzing the dynamics of complex networks in which the network of interest is first abstracted to a much simpler, but mathematically equivalent, representation, the required analysis is performed on the abstraction, and analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks which admit vulnerability-preserving, finite state abstractions, and develop efficient algorithms for computing these abstractions. We then propose a vulnerability analysis methodology which combines these finite state abstractions with formal analytics from theoretical computer science to yield a comprehensive vulnerability analysis process for networks of real-world scale and complexity. The potential of the proposed approach is illustrated with a case study involving a realistic electric power grid model and also with brief discussions of biological and social network examples.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Image segmentation is one of the most important and difficult tasks in digital image processing. It represents a key stage of automated image analysis and interpretation. Segmentation algorithms for gray-scale images utilize basic properties of intensity values such as discontinuity and similarity. However, it is possible to enhance edge-detection capability by using spectral information provided by multispectral (MS) or hyperspectral (HS) imagery. In this paper we consider image segmentation algorithms for multispectral images with particular emphasis on detection of multi-color or multispectral edges. More specifically, we report on an algorithm for joint spatio-spectral (JSS) edge detection. By joint we mean simultaneous utilization of spatial and spectral characteristics of a given MS or HS image. The JSS-based edge-detection approach, termed the Spectral Ratio Contrast (SRC) edge-detection algorithm, utilizes the novel concept of matching edge signatures. The edge signature represents a combination of spectral ratios calculated using bands that enhance the spectral contrast between the two materials. In conjunction with a spatial mask, the edge signature gives rise to a multispectral operator that can be viewed as a three-dimensional extension of the mask. In the extended mask, the third (spectral) dimension of each hyper-pixel can be chosen independently. The SRC is verified using MS and HS imagery from a quantum-dot-in-a-well infrared (IR) focal plane array and the Airborne Hyperspectral Imager.
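As a greatly simplified stand-in for the joint spatio-spectral idea (not the authors' SRC algorithm, which matches edge signatures built from several band ratios through a three-dimensional mask), one can form a single band-ratio image that contrasts two materials and then differentiate it spatially:

```python
import numpy as np

def band_ratio_edges(cube, b1, b2):
    """Form a spectral-ratio image from two bands of a hyperspectral cube
    (rows, cols, bands), then apply an ordinary spatial gradient to it.
    Edges between materials with contrasting b1/b2 ratios are enhanced,
    while intensity variations common to both bands largely cancel."""
    ratio = cube[..., b1] / np.maximum(cube[..., b2], 1e-9)  # avoid divide-by-zero
    gy, gx = np.gradient(ratio)
    return np.hypot(gx, gy)
```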
Abstract not provided.
In ductile metals, sliding contact is often accompanied by severe plastic deformation localized to a small volume of material adjacent to the wear surface. During the initial run-in period, the hardness, grain structure, and crystallographic texture of the surfaces that come into sliding contact undergo significant changes, culminating in the evolution of subsurface layers with their own characteristic features. Here, a brief overview is presented of our ongoing research on the fundamental phenomena governing friction-induced recrystallization in single crystal metals, and on how these recrystallized structures with nanometer-size grains would in turn influence metallic friction. We have employed a novel combination of experimental tools (FIB, EBSD, and TEM) and an analysis of the critical resolved shear stress (RSS) on the twelve slip systems of the FCC lattice to understand the evolution of these friction-induced structures in single crystal nickel. The later part of the talk deals with the mechanisms of friction in nanocrystalline Ni films. Analyses of friction-induced subsurfaces seem to confirm that the formation of stable ultrafine nanocrystalline layers with 2-10 nm grains changes the deformation mechanism from the traditional dislocation-mediated one to one that is predominantly controlled by grain boundaries, resulting in significant reductions in the coefficient of friction.
Industrial & Engineering Chemistry Research
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Wind and Water Power Program supports the development of marine and hydrokinetic devices, which capture energy from waves, tides, ocean currents, the natural flow of water in rivers, and marine thermal gradients, without building new dams or diversions. The program works closely with industry and the Department of Energy's national laboratories to advance the development and testing of marine and hydrokinetic devices. In 2008, the program funded projects to develop and test point absorber, oscillating wave column, and tidal turbine technologies. The program also funds component design, such as techniques for manufacturing and installing coldwater pipes critical for ocean thermal energy conversion (OTEC) systems. Rigorous device testing is necessary to validate and optimize prototypes before beginning full-scale demonstration and deployment. The program supports device testing by providing technology developers with information on testing facilities. Technology developers require access to facilities capable of simulating open-water conditions in order to refine and validate device operability. The program has identified more than 20 tank testing operators in the United States with capabilities suited to the marine and hydrokinetic technology industry. This information is available to the public in the program's Hydrodynamic Testing Facilities Database. The program also supports the development of open-water, grid-connected testing facilities, as well as resource assessments that will improve simulations done in dry-dock and closed-water testing facilities. The program has established two university-led National Marine Renewable Energy Centers to be used for device testing. These centers are located on coasts and will have open-water testing berths, allowing researchers to investigate marine and estuary conditions. Optimal array design, development, modeling and testing are needed to maximize efficiency and electricity generation at marine and hydrokinetic power plants while mitigating nearby and distant impacts. Activities may include laboratory and computational modeling of mooring design or research on device spacing. The geographies, resources, technologies, and even nomenclature of the U.S. marine and hydrokinetic technology industry have yet to be fully understood or defined. The program characterizes and assesses marine and hydrokinetic devices, and then organizes the collected information into a comprehensive and searchable Web-based database, the Marine and Hydrokinetic Technology Database. The database, which reflects intergovernmental and international collaboration, provides industry with one of the most comprehensive and up-to-date public resources on marine and hydrokinetic devices.
The aerodynamic performance and aeroacoustic noise sources of a rotor employing flatback airfoils have been studied in a field test campaign and companion modeling effort. The field test measurements of a sub-scale rotor employing nine-meter blades include both performance measurements and acoustic measurements. The acoustic measurements are obtained using a 45-microphone beamforming array, enabling identification of both noise source amplitude and position. Semi-empirical models of flatback airfoil blunt trailing edge noise are developed and calibrated using available aeroacoustic wind tunnel test data. The model results and measurements indicate that flatback airfoil noise is less than drive train noise for the current test turbine. It is also demonstrated that the commonly used Brooks, Pope, and Marcolini model for blunt trailing edge noise may be over-conservative in predicting flatback airfoil noise for wind turbine applications.
Prior work on active aerodynamic load control (AALC) of wind turbine blades has demonstrated that appropriate use of this technology has the potential to yield significant reductions in blade loads, leading to a decrease in wind cost of energy. While the general concept of AALC is usually discussed in the context of multiple sensors and active control devices (such as flaps) distributed over the length of the blade, most work to date has been limited to consideration of a single control device per blade with very basic Proportional Derivative controllers, due to limitations in the aeroservoelastic codes used to perform turbine simulations. This work utilizes a new aeroservoelastic code developed at Delft University of Technology to model the NREL/Upwind 5 MW wind turbine to investigate the relative advantage of utilizing multiple-device AALC. System identification techniques are used to identify the frequencies and shapes of turbine vibration modes, and these are used with modern control techniques to develop both Single-Input Single-Output (SISO) and Multiple-Input Multiple-Output (MIMO) LQR flap controllers. Comparison of simulation results with these controllers shows that the MIMO controller does yield some improvement over the SISO controller in fatigue load reduction, but additional improvement is possible with further refinement. In addition, a preliminary investigation shows that AALC has the potential to reduce off-axis gearbox loads, leading to reduced gearbox bearing fatigue damage and improved lifetimes.
Abstract not provided.
J. Phys. Chem. B
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
International Journal for Numerical Methods in Engineering
Abstract not provided.
Abstract not provided.
IEEE Transactions on Antennas and Propagation
Abstract not provided.
Abstract not provided.
Abstract not provided.
A NISAC study on the economic effects of a hypothetical H1N1 pandemic was conducted to assess the differential impacts at the state and industry levels given changes in absenteeism, mortality, and consumer spending rates. Part of the analysis was to determine whether there were any direct relationships between pandemic impacts and gross domestic product (GDP) losses. Multiple regression analysis was used because it shows very clearly which predictors are significant in their impact on GDP. GDP impact data taken from the REMI PI+ (Regional Economic Models, Inc., Policy Insight+) model served as the response variable. NISAC economists selected the average absenteeism rate, mortality rate, and consumer spending categories as the predictor variables. Two outliers were found in the data: Nevada and Washington, DC. The analysis was done twice, with the outliers removed for the second analysis. The second set of regressions yielded a cleaner model, but for the purposes of this study the analysts deemed it less useful because particular interest was placed on determining the differential impacts to states. Hospitals and accommodation were found to be the most important predictors of percentage change in GDP among the consumer spending variables.
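The regression setup described above is straightforward to reproduce in outline. The sketch below fits percentage GDP change against the three predictor groups named in the study using ordinary least squares; all numeric values are fabricated placeholders, not the NISAC data.

```python
import numpy as np

# Hypothetical state-level rows: predictors are the quantities named in the
# study (average absenteeism rate, mortality rate, consumer-spending change);
# the response is the modeled percent change in GDP. Values are made up.
absenteeism = np.array([0.02, 0.03, 0.025, 0.04, 0.035])
mortality   = np.array([0.001, 0.002, 0.0015, 0.003, 0.002])
spending    = np.array([-0.01, -0.03, -0.02, -0.05, -0.04])
gdp_change  = np.array([-0.4, -0.9, -0.6, -1.5, -1.2])

# Design matrix with an intercept column; solve by ordinary least squares.
X = np.column_stack([np.ones_like(absenteeism), absenteeism, mortality, spending])
beta, _, _, _ = np.linalg.lstsq(X, gdp_change, rcond=None)

# Goodness of fit (R^2) for the fitted model.
fitted = X @ beta
r2 = 1 - np.sum((gdp_change - fitted) ** 2) / np.sum((gdp_change - gdp_change.mean()) ** 2)
print(beta, r2)
```

Re-running the same fit with identified outlier rows removed, as the study did for Nevada and Washington, DC, shows directly how sensitive the coefficients are to those states.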
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Nanowires based on the III nitride materials system have attracted attention as potential nanoscale building blocks in optoelectronics, sensing, and electronics. However, before such applications can be realized, several challenges exist in the areas of controlled and ordered nanowire synthesis, fabrication of advanced nanowire heterostructures, and understanding and controlling the nanowire electrical and optical properties. Here, recent work is presented involving the aligned growth of GaN and III-nitride core-shell nanowires, along with extensive results providing insights into the nanowire properties obtained using advanced electrical, optical and structural characterization techniques.
Abstract not provided.
Linear Algebra and its Applications
Abstract not provided.
The Fiber Optic Intrusion Detection System (FOIDS) is a physical security sensor deployed on fence lines to detect climb or cut intrusions by adversaries. Calibration of detection sensitivity can be time consuming because, for example, the FiberSenSys FD-332 has 32 settings that can be adjusted independently to provide a balance between a high probability of detection and a low nuisance alarm rate. Therefore, an efficient method of calibrating the FOIDS in the field, other than by trial and error, was needed. This study was conducted to: (1) identify the most significant settings for controlling detection; (2) develop a way of predicting detection sensitivity for given settings; and (3) develop a set of optimal settings for validation. The Design of Experiments (DoE) methodology was used to generate small, planned test matrixes, which could be statistically analyzed to yield more information from the test data. Design of Experiments is a statistical methodology for quickly optimizing performance of systems with measurable input and output variables. DoE was used to design custom screening experiments based on the 11 FOIDS settings believed to have the most effect on detection. Two types of fence perimeter intrusions were evaluated: simulated cut intrusions and actual climb intrusions. Two slightly different two-level randomized fractional factorial experiment matrixes, consisting of 16 unique experiments, were performed in the field for each type of intrusion. Three repetitions were conducted for every cut test; two repetitions were conducted for every climb test. The total number of cut tests analyzed was 51; the total number of climb tests was 38. This paper discusses the results and benefits of using Design of Experiments (DoE) to calibrate and optimize the settings for a FOIDS sensor.
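For readers unfamiliar with two-level fractional factorial designs, the 16-run matrixes mentioned above can be constructed by taking a full factorial in a few base factors and generating the remaining columns as interaction products. The sketch below builds a 2^(11-7) design (16 runs, 11 factors); these particular generators are illustrative, not the ones used in the FOIDS study.

```python
import itertools
import numpy as np

# Full 2^4 factorial in four base factors gives the 16 runs; the remaining
# 7 of the 11 factor columns are formed as interaction products (generators).
base = np.array(list(itertools.product([-1, 1], repeat=4)))  # shape (16, 4)
A, B, C, D = base.T
generated = np.column_stack([A * B, A * C, A * D, B * C, B * D, C * D, A * B * C])
design = np.column_stack([base, generated])                  # shape (16, 11)

# Randomize the run order before fielding the tests, as in the study.
rng = np.random.default_rng(0)
run_order = rng.permutation(len(design))
print(design[run_order])
```

Each row is one field test: the -1/+1 entries give the low/high level of each FOIDS setting, and regression on the responses identifies the significant settings.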
IEEE Transactions on Nuclear Science
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present a newly developed microsystem-enabled, back-contacted, shade-free GaAs solar cell. Using microsystem tools, we created sturdy 3 µm thick devices with lateral dimensions of 250 µm, 500 µm, 1 mm, and 2 mm. The fabrication procedure and the results of characterization tests are discussed. The highest efficiency cell had a lateral size of 500 µm and a conversion efficiency of 10%, an open circuit voltage of 0.9 V, and a current density of 14.9 mA/cm² under one-sun illumination.
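As a consistency check on the reported numbers, assuming the standard one-sun normalization of 100 mW/cm² incident power, the implied fill factor is

\[
\mathrm{FF} \;=\; \frac{\eta\,P_{\mathrm{in}}}{V_{\mathrm{oc}}\,J_{\mathrm{sc}}}
\;=\; \frac{0.10 \times 100\ \mathrm{mW/cm^2}}{0.9\ \mathrm{V} \times 14.9\ \mathrm{mA/cm^2}}
\;\approx\; 0.75,
\]

a plausible value for a functioning cell.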
Abstract not provided.
This work simulated the response of idealized isotopic U-235, U-238, Th-232, and Pu-239 media to photonuclear activation with various photon energies. These simulations were conducted using MCNPX version 2.6.0. It was found that photon energies between 14 and 16 MeV produce the highest response with respect to neutron production rates from all photonuclear reactions. In all cases, Pu-239 exhibits the highest response, followed by U-238. Th-232 produces more overall neutrons at lower photon energies than U-235 when the material thickness is above 3.943 centimeters. The time it takes each isotopic material to reach stable neutron production rates is directly proportional to the material thickness and stopping power of the medium: thicker media take longer to reach stable neutron production rates, and thinner media display a neutron production plateau effect due to the lack of significant attenuation of the activating photons in the isotopic media. At this time, no neutron sensor system has time resolutions capable of verifying these simulations, but various indirect methods are possible and should be explored for verification of these results.
World Academy of Science, Engineering and Technology
There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices, focused only on performance, cost, and schedule attributes, can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and system engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted. This experiment simultaneously managed the innovation risks and the product delivery risks through the proposed risk-based framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.
Proceedings of SPIE - The International Society for Optical Engineering
We have fabricated mid-wave infrared photodetectors containing InAsSb absorber regions and AlAsSb barriers in n-barrier-n (nBn) and n-barrier-p (nBp) configurations, and characterized them by current-voltage, photocurrent, and capacitance-voltage measurements in the 100-200 K temperature range. Efficient collection of photocurrent in the nBn structure requires application of a small reverse bias resulting in a minimum dark current, while the nBp devices have high responsivity at zero bias. When biasing both types of devices for equal dark currents, the nBn structure exhibits a differential resistance significantly higher than the nBp, although the nBp device may be biased for arbitrarily low dark current at the expense of much lower dynamic resistance. Capacitance-voltage measurements allow determination of the electron concentration in the unintentionally-doped absorber material, and demonstrate the existence of an electron accumulation layer at the absorber/barrier interface in the nBn device. Numerical simulations of idealized nBn devices demonstrate that photocurrent collection is possible under conditions of minimal absorber region depletion, thereby strongly suppressing depletion region Shockley-Read-Hall generation.
Proceedings of SPIE - The International Society for Optical Engineering
We present a new treatment of optical forces, revealing that the forces in virtually all optomechanically variable systems can be computed exactly and simply from only the optical phase and amplitude response of the system. This treatment, termed the response theory of optical forces (or RTOF), provides conceptual clarity to the essential physics of optomechanical systems, which computationally intensive Maxwell stress-tensor analyses leave obscured, enabling the construction of simple models with which optical forces and trapping potentials can be synthesized based on the optical response of optomechanical systems. A theory of optical forces, based on the optical response of systems, is advantageous since the phase and amplitude response of virtually any optomechanical system (involving waveguides, ring resonators or photonic crystals) can be derived, with relative ease, through well-established analytical theories. In contrast, conventional Maxwell stress tensor methods require the computation of complex 3-dimensional electromagnetic field distributions, making a theory for the synthesis of optical forces exceedingly difficult. Through numerous examples, we illustrate that the optical forces generated in complex waveguide and microcavity systems can be computed exactly through use of analytical scattering-matrix methods. When compared with Maxwell stress-tensor methods of force computation, perfect agreement is found.
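In its simplest form, the result described above reduces to computing forces from port powers and phase derivatives. A schematic statement based on our reading of the abstract (q is the mechanical coordinate, ω the optical frequency, and P_j and φ_j the power and phase response at output port j):

\[
F_q \;=\; \sum_j \frac{P_j}{\omega}\,\frac{\partial \phi_j}{\partial q},
\]

so that once the scattering-matrix response φ_j(q) of a waveguide or microcavity system is known analytically, the optical force follows without any stress-tensor integration.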
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Elucidating the role of calcium fluctuations at the cellular level is essential to gain insight into more complex signaling and metabolic activity within tissues. Recent developments in optical monitoring of calcium transients suggest that cells integrate and transmit information through large networks. Thus, monitoring calcium transients in these populations is important for identifying normal and pathological states of a variety of systems. Though optical techniques can be used to image calcium fluxes using fluorescent probes, depth penetration limits the information that can be acquired from tissues in vivo. Alternatively, the calcium-sensitive dye arsenazo III is useful for optical techniques that rely on absorption of light rather than fluorescence for image contrast. We report on the use of arsenazo III for detection of calcium using photoacoustics, a deeply penetrating imaging technique in which an ultrasound signal is generated following localized absorption of light. The absorbance properties of the dye in the presence of calcium were measured directly using UV-Vis spectrophotometry. For photoacoustic studies, a phantom was constructed to monitor the change in absorbance of 25 μM arsenazo III at 680 nm in the presence of calcium. Subsequent results demonstrated a linear increase in photoacoustic signal as calcium in the range of 1-20 μM complexed with the dye, followed by saturation of the signal as increasing amounts of calcium were added. For delivery of the dye to tissue preparations, a liposomal carrier was fabricated and characterized. This work demonstrates the feasibility of using arsenazo III for photoacoustic monitoring of calcium transients in vivo.
Proceedings of SPIE - The International Society for Optical Engineering
In this work, we describe the most recent progress towards the device modeling, fabrication, testing and system integration of active resonant subwavelength grating (RSG) devices. Passive RSG devices have been a subject of interest in subwavelength-structured surfaces (SWS) in recent years due to their narrow spectral response and high quality filtering performance. Modulating the bias voltage of interdigitated metal electrodes over an electrooptic thin film material enables the RSG components to act as actively tunable high-speed optical filters. The filter characteristics of the device can be engineered using the geometry of the device grating and underlying materials. Using electron beam lithography and specialized etch techniques, we have fabricated interdigitated metal electrodes on an insulating layer and BaTiO3 thin film on sapphire substrate. With bias voltages of up to 100 V, spectral red shifts of several nanometers are measured, as well as significant changes in the reflected and transmitted signal intensities around the 1.55 µm wavelength. Due to their small size and lack of moving parts, these devices are attractive for high speed spectral sensing applications. We will discuss the most recent device testing results as well as comment on the system integration aspects of this project.
Journal of Water Resources Planning and Management
To protect drinking water systems, a contamination warning system can use in-line sensors to indicate possible accidental and deliberate contamination. Currently, reporting of an incident occurs when data from a single station detects an anomaly. This paper proposes an approach for combining data from multiple stations to reduce false background alarms. By considering the location and time of individual detections as points resulting from a random space-time point process, Kulldorff's scan test can find statistically significant clusters of detections. Using EPANET to simulate contaminant plumes of varying sizes moving through a water network with varying amounts of sensing nodes, it is shown that the scan test can detect significant clusters of events. Also, these significant clusters can reduce the false alarms resulting from background noise, and the clusters can help indicate the time and source location of the contaminant. Fusion of monitoring station results within a moderately sized network shows that false alarm errors are reduced by three orders of magnitude using the scan test.
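A minimal sketch of the clustering step is given below: detections are treated as points in space-time, cylindrical windows are scanned exhaustively, and each window is scored with Kulldorff's Poisson log-likelihood ratio. This is illustrative only; the expected count here assumes a homogeneous background rate, and in practice significance is assessed by Monte Carlo replication rather than by the raw score.

```python
import numpy as np

def kulldorff_poisson_llr(c, E, C):
    """Kulldorff's Poisson log-likelihood ratio for a window with c observed
    detections, E expected under the null, and C detections in total."""
    if c <= E:
        return 0.0
    if c >= C:                      # all detections fall inside the window
        return c * np.log(c / E)
    return c * np.log(c / E) + (C - c) * np.log((C - c) / (C - E))

def scan(events, centers, radii, t_windows, bg_rate):
    """Score cylindrical space-time windows over an (n, 3) array of
    (x, y, t) detection points; return the best score and window."""
    best_score, best_window = 0.0, None
    C = len(events)
    for cx, cy in centers:
        d2 = (events[:, 0] - cx) ** 2 + (events[:, 1] - cy) ** 2
        for r in radii:
            for t0, t1 in t_windows:
                inside = (d2 <= r * r) & (events[:, 2] >= t0) & (events[:, 2] < t1)
                c = int(inside.sum())
                E = bg_rate * np.pi * r * r * (t1 - t0)   # homogeneous null
                score = kulldorff_poisson_llr(c, E, C)
                if score > best_score:
                    best_score, best_window = score, (cx, cy, r, t0, t1)
    return best_score, best_window
```

The winning window's center and time interval are what indicate the likely source location and onset time of the contaminant.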
The oil of the Strategic Petroleum Reserve (SPR) represents a national response to any potential emergency or intentional restriction of crude oil supply to this country, and conforms to international agreements to maintain such a reserve. As assurance that this reserve oil will be available in a timely manner should a restriction in supply occur, the oil of the reserve must meet certain transportation criteria. The transportation criteria require that the oil not evolve dangerous gas, either explosive or toxic, while in the process of transport to, or storage at, the destination facility. This requirement can be a challenge because the stored oil can acquire dissolved gases while in the SPR. There has been a series of reports analyzing in exceptional detail the reasons for the increases, or regains, in gas content; however, there remains some uncertainty in these explanations and an inability to predict why the regains occur. Where the regains are prohibitive and exceed the criteria, the oil must undergo degasification, where excess portions of the volatile gas are removed. There are only two known sources of gas regain: the first is the salt dome formation itself, which may contain gas inclusions from which gas can be released during oil processing or storage; the second is increases in the gas released by the volatile components of the crude oil itself during storage, especially if the stored oil undergoes heating or is subject to biological generation processes. In this work, the earlier analyses are reexamined and significant alterations in conclusions are proposed. The alterations are based on how the exchanged brine and oil take up gas released from the domal salt during solutioning and, thereafter, during further exchanges of fluids. Transparency of the brine/oil interface and the transfer of gas across this interface remain an important unanswered question. The contribution from creep-induced damage releasing gas from the salt surrounding the cavern is considered through computations using the Multimechanism Deformation Coupled Fracture (MDCF) model, suggesting a relatively minor, but potentially significant, contribution to the regain process. Apparently, gains in gas content can be generated from the oil itself during storage because the salt dome has been heated by the geothermal gradient of the earth. The heated domal salt transfers heat to the oil stored in the caverns and thereby increases the gas released by the volatile components and raises the boiling point pressure of the oil. The process is essentially a variation on the fractionation of oil, where each of the discrete components of the oil has a discrete temperature range over which that component can be volatized and removed from the remaining components. The most volatile components are methane and ethane, the shortest-chain hydrocarbons. Since this fractionation is a fundamental aspect of oil behavior, the volatile component can be removed by degassing, potentially prohibiting the evolution of gas at or below the temperature of the degas process. While this process is well understood, the ability to describe the results of degassing and subsequent regain is not. Trends are not well defined for original gas content, regain, and prescribed effects of degassing. As a result, prediction of cavern response is difficult.
As a consequence of this current analysis, it is suggested that solutioning brine of the final fluid exchange of a just completed cavern, immediately prior to the first oil filling, should be analyzed for gas content using existing analysis techniques. This would add important information and clarification to the regain process. It is also proposed that the quantity of volatile components, such as methane, be determined before and after any degasification operation.
Numerical Heat Transfer, Part B: Fundamentals
Model validation efforts often use a suite of experiments to provide data to test models for predictive use for a targeted application. A question that naturally arises is, "Does the experimental suite provide data to adequately test the target application model?" The goal of this article is to develop methodology to partially address this question. The methodology utilizes computational models for the individual test suite experiments and for the target application to assess coverage. The impact of uncertainties in model parameters on the assessment is addressed. Simple linear and nonlinear heat conduction examples of the methodology are provided.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers' manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Abstract not provided.
The ceramic nanocomposite capacitor goals are: (1) more than double the energy density of ceramic capacitors (cutting size and weight by more than half); (2) potential cost reduction (factor of >4) due to decreased sintering temperature (allowing the use of lower cost electrode materials such as 70/30 Ag/Pd); and (3) lower sintering temperature will allow co-firing with other electrical components.
Abstract not provided.
This report considers the calculation of the quasi-static nonlinear response of rectangular flat plates and tubes of rectangular cross-section subjected to compressive loads using quadrilateral shell finite element models. The principal objective is to assess the effect that the shell drilling stiffness parameter has on the calculated results. The calculated collapse load of elastic-plastic tubes of rectangular cross-section is of particular interest here. The drilling stiffness factor specifies the amount of artificial stiffness that is given to the shell element drilling degree of freedom (rotation normal to the plane of the element). The element formulation has no stiffness for this degree of freedom, and this can lead to numerical difficulties. The results indicate that in the problems considered it is necessary to add a small amount of drilling stiffness to obtain converged results when using either implicit quasi-static or explicit dynamic methods. The report concludes with a parametric study of the imperfection sensitivity of the calculated responses of the elastic-plastic tubes with rectangular cross-section.
Abstract not provided.
Abstract not provided.
To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. The reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one, we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model against standard 1D models. We perform location tests on a global, geographically-distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to the standard 1D AK135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that AK135 represents a global average and cannot therefore capture local and regional variations.
This report evaluates the feasibility of high-level radioactive waste disposal in shale within the United States. The U.S. has many possible clay/shale/argillite basins with positive attributes for permanent disposal. Similar geologic formations have been extensively studied by international programs with largely positive results, over significant ranges of the most important material characteristics including permeability, rheology, and sorptive potential. This report is enabled by the advanced work of the international community to establish functional and operational requirements for disposal of a range of waste forms in shale media. We develop scoping performance analyses, based on the applicable features, events, and processes identified by international investigators, to support a generic conclusion regarding post-closure safety. Requisite assumptions for these analyses include waste characteristics, disposal concepts, and important properties of the geologic formation. We then apply lessons learned from Sandia experience on the Waste Isolation Pilot Plant and the Yucca Mountain Project to develop a disposal strategy should a shale repository be considered as an alternative disposal pathway in the U.S. Disposal of high-level radioactive waste in suitable shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. Thermal-hydrologic-mechanical calculations indicate that temperatures near emplaced waste packages can be maintained below boiling and will decay to within a few degrees of the ambient temperature within a few decades (or longer depending on the waste form). Construction effects, ventilation, and the thermal pulse will lead to clay dehydration and deformation, confined to an excavation disturbed zone within a few meters of the repository, that can be reasonably characterized. Within a few centuries after waste emplacement, overburden pressures will seal fractures, resaturate the dehydrated zones, and provide a repository setting that strongly limits radionuclide movement to diffusive transport. Coupled hydrogeochemical transport calculations indicate maximum extents of radionuclide transport on the order of tens to hundreds of meters, or less, in a million years. Under the conditions modeled, a shale repository could achieve total containment, with no releases to the environment in undisturbed scenarios. The performance analyses described here are based on the assumption that long-term standards for disposal in clay/shale would be identical, in key aspects, to those prescribed for existing repository programs such as Yucca Mountain. This generic repository evaluation for shale is the first developed in the United States. Previous repository considerations have emphasized salt formations and volcanic rock formations. Much of the experience gained from U.S. repository development, such as seal system design, coupled process simulation, and application of performance assessment methodology, is applied here to scoping analyses for a shale repository. A contemporary understanding of clay mineralogy and attendant chemical environments has allowed identification of the appropriate features, events, and processes to be incorporated into the analysis. 
Advanced multi-physics modeling provides key support for understanding the effects from coupled processes. The results of the assessment show that shale formations provide a technically advanced, scientifically sound disposal option for the U.S.
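To give a sense of scale for the diffusion-limited transport distances quoted above, a back-of-envelope estimate uses the one-dimensional diffusion length L = sqrt(2 D t). The effective diffusion coefficients below are assumed illustrative values typical of compacted clays, not parameters taken from the report's analyses.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def diffusion_length(D, t_years):
    """Characteristic 1D diffusion length L = sqrt(2 D t)."""
    return math.sqrt(2.0 * D * t_years * SECONDS_PER_YEAR)

# Assumed effective diffusivities (m^2/s) spanning typical compacted-clay
# values, evaluated over a one-million-year containment horizon.
for D in (1e-12, 1e-11, 1e-10):
    print(f"D = {D:.0e} m^2/s  ->  L ~ {diffusion_length(D, 1e6):.0f} m")
```

For these assumed values the diffusion length ranges from roughly 8 m to 80 m, consistent with the "tens to hundreds of meters, or less" figure cited above.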
Abstract not provided.
Abstract not provided.
Abstract not provided.
Effective product development requires a systematic and rigorous approach to innovation. Standard systems engineering models provide that approach.
Abstract not provided.
Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that must be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present ideas on possible approaches to address some of these challenges.
The rheology at gas-liquid interfaces strongly influences the stability and dynamics of foams and emulsions. Several experimental techniques are employed to characterize the rheology at gas-liquid interfaces, with an emphasis on the non-Newtonian behavior of surfactant-laden interfaces. The focus is to relate the interfacial rheology to the foamability and foam stability of various aqueous systems. An interfacial stress rheometer (ISR) is used to measure the steady and dynamic rheology by applying an external magnetic field to actuate a magnetic needle suspended at the interface. Results are compared with those from a double wall ring attachment to a rotational rheometer (TA Instruments AR-G2). Micro-interfacial rheology (MIR) is also performed, using optical tweezers to manipulate suspended microparticle probes at the interface to investigate the steady and dynamic rheology. Additionally, a surface dilatational rheometer (SDR) is used to periodically oscillate the volume of a pendant drop or buoyant bubble; applying the Young-Laplace equation to the drop shape yields a time-dependent surface tension that can be used to determine the effective dilatational viscosity of the interface. Together, the ISR, double wall ring, SDR, and MIR span a wide range of sensitivity in surface forces (fN to nN), since each experimental method has a different sensitivity. These rheological measurements will be compared with the observed foamability and foam stability.
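As an illustration of the SDR data reduction, the complex dilatational modulus can be extracted by demodulating the measured surface tension and drop area at the drive frequency. This is a minimal sketch under the convention E* = E' + i*omega*kappa; the function and variable names are our own, not any instrument's API.

```python
import numpy as np

def dilatational_modulus(t, gamma, area, omega):
    """Estimate the complex dilatational modulus E* = d(gamma)/d(ln A)
    from oscillating-drop data sampled over an integer number of periods
    of the drive frequency omega (rad/s).

    t, gamma, area : equal-length arrays of time, surface tension, area
    """
    carrier = np.exp(-1j * omega * t)
    # Complex amplitudes of the zero-mean signals at the drive frequency.
    g_amp = 2.0 * np.mean((gamma - gamma.mean()) * carrier)
    a_amp = 2.0 * np.mean((np.log(area) - np.log(area).mean()) * carrier)
    E = g_amp / a_amp
    E_storage = E.real        # elastic part, in the units of gamma
    kappa = E.imag / omega    # effective dilatational viscosity
    return E_storage, kappa
```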
Abstract not provided.
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present a new model for closing a system of Lagrangian hydrodynamics equations for a two-material cell with a single velocity model. We describe a new approach motivated by earlier work of Delov and Sadchikov and of Goncharov and Yanilkin. Using a linearized Riemann problem to initialize volume-fraction changes, we require that each material satisfy its own p dV equation, which breaks the overall energy balance in the mixed cell. To enforce this balance, we redistribute the energy discrepancy by assuming that the corresponding pressure change in each material is equal. This multiple-material model is packaged as part of a two-step time integration scheme. We compare results of our approach with other models and with corresponding pure-material calculations, on two-material test problems with ideal-gas or stiffened-gas equations of state.
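The energy-redistribution step lends itself to a compact illustration. The sketch below implements only that step, for the special case of ideal-gas equations of state and with notation of our own choosing; it omits the Riemann-problem initialization of volume fractions and the two-step time integration described in the abstract.

```python
def redistribute_energy(E, V, gamma, dE_total):
    """Distribute the mixed-cell energy discrepancy dE_total among
    materials so each sees the same pressure change (ideal-gas EOS).

    For p_i = (gamma_i - 1) E_i / V_i, adding dE_i at fixed V_i gives
    dp_i = (gamma_i - 1) dE_i / V_i.  Setting every dp_i equal to dp
    and requiring sum(dE_i) = dE_total yields
        dp   = dE_total / sum(V_i / (gamma_i - 1))
        dE_i = dp * V_i / (gamma_i - 1)

    E, V, gamma : per-material internal energies, volumes, and ratios
                  of specific heats (sequences of equal length)
    """
    weights = [v / (g - 1.0) for v, g in zip(V, gamma)]
    dp = dE_total / sum(weights)
    return [e + dp * w for e, w in zip(E, weights)]

# Two-material example: deposit a 0.05 discrepancy at equal pressure change.
E_new = redistribute_energy(E=[1.0, 2.0], V=[0.4, 0.6],
                            gamma=[1.4, 5.0 / 3.0], dE_total=0.05)
print(E_new)
```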
Abstract not provided.
Full Wavefield Seismic Inversion (FWI) estimates a subsurface elastic model by iteratively minimizing the difference between observed and simulated data. This process is extremely compute-intensive, with a cost on the order of at least hundreds of prestack reverse time migrations. For time-domain and Krylov-based frequency-domain FWI, the cost is proportional to the number of seismic sources inverted. We have found that the cost of FWI can be significantly reduced by applying it to data processed by encoding and summing individual source gathers, and by changing the encoding functions between iterations. The encoding step forms a single gather from many input source gathers; this gather represents data that would have been acquired from a spatially distributed set of sources operating simultaneously with different source signatures. We demonstrate, using synthetic data, a significant cost reduction from applying FWI to encoded simultaneous-source data.
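A minimal sketch of the encoding step, assuming the common choice of random +/-1 polarity codes (other encoding functions, such as random time shifts, fit the same pattern); the array names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def encode_and_sum(shot_gathers, seed=None):
    """Form a simultaneous-source 'supergather' by encoding each shot
    gather with a random +/-1 polarity and summing over shots.

    shot_gathers : array of shape (n_shots, n_receivers, n_time)
    Returns the supergather and the code used; the same code must be
    applied to the simulated sources in that FWI iteration.  Drawing a
    fresh code each iteration makes the cross-talk introduced by the
    summation average out over the course of the inversion.
    """
    rng = np.random.default_rng(seed)
    code = rng.choice([-1.0, 1.0], size=shot_gathers.shape[0])
    supergather = np.tensordot(code, shot_gathers, axes=(0, 0))
    return supergather, code
```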
Abstract not provided.
Abstract not provided.
This workshop is about methodologies, tools, techniques, models, training, codes and standards, etc., that can improve the reliability of systems while reducing costs. We have intentionally scaled back presentation time to allow more time for interaction. Sandia's PV Program vision: recognition as a world-class facility to develop and integrate new photovoltaic components, systems, and architectures for the future of our electric/energy delivery systems.
Abstract not provided.
Abstract not provided.
Abstract not provided.
European Physical Journal Web of Conferences
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We demonstrate a new semantic method for automatic analysis of wide-area, high-resolution overhead imagery to tip and cue human intelligence analysts to human activity. In the open demonstration, we find and trace cars and rooftops. Our methodology, extended to the analysis of voxels, may be applicable to understanding morphology and to automatic tracing of neurons in large-scale, serial-section TEM datasets. We have defined an algorithm and software implementation that efficiently finds all combinations of image blobs satisfying given shape semantics, where image blobs are formed in a general-purpose first step that 'oversegments' image pixels into blobs of similar pixels. We demonstrate the power (measured by ROC performance) of this combinatorial workflow for automatically tracing automobiles in a scene by applying semantics that require a subset of image blobs to fill out a rectangular shape, with width and height in given intervals. In most applications we find that the combinatorial workflow produces alternative (overlapping) tracings of possible objects (e.g., cars) in a scene. To force an estimate (tracing) of a consistent collection of objects (cars), a quick and simple greedy algorithm is often sufficient. We also demonstrate a more powerful resolution method: we produce a weighted graph from the conflicts among all of our enumerated hypotheses, and then solve a maximal independent vertex set problem on this graph to resolve conflicting hypotheses. This graph computation is almost certain to be necessary to adequately resolve multiple, conflicting neuron topologies into a set that is most consistent with a TEM dataset.
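The conflict-resolution step can be sketched compactly. The version below is the quick greedy variant mentioned above (processing hypotheses in score order yields a maximal independent set of the conflict graph); the weighted-graph formulation solves the same problem globally. All names and data structures are illustrative.

```python
def resolve_hypotheses(hypotheses, conflicts):
    """Greedy maximal-independent-set resolution of overlapping object
    hypotheses.

    hypotheses : dict mapping hypothesis id -> score (e.g., shape fit)
    conflicts  : set of frozenset({id_a, id_b}) pairs that overlap

    Greedily accepts the best-scoring hypothesis, then discards every
    hypothesis conflicting with an accepted one; the accepted set is a
    maximal independent set of the conflict graph.
    """
    neighbors = {h: set() for h in hypotheses}
    for pair in conflicts:
        a, b = tuple(pair)
        neighbors[a].add(b)
        neighbors[b].add(a)

    accepted, excluded = [], set()
    for h in sorted(hypotheses, key=hypotheses.get, reverse=True):
        if h not in excluded:
            accepted.append(h)
            excluded |= neighbors[h]
    return accepted

# Toy example: three car tracings where 0 overlaps 1 and 1 overlaps 2.
print(resolve_hypotheses({0: 0.9, 1: 0.8, 2: 0.7},
                         {frozenset({0, 1}), frozenset({1, 2})}))  # -> [0, 2]
```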
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report comprises an annual summary of activities under the U.S. Strategic Petroleum Reserve (SPR) Vapor Pressure Committee in FY2009. The committee provides guidance to senior project management on the issues of crude oil vapor pressure monitoring and mitigation. The principal objectives of the vapor pressure program are, in the event of an SPR drawdown, to minimize the impact on the environment and to assure worker safety and public health from crude oil vapor emissions. The annual report reviews key program areas including monitoring program status, mitigation program status, new developments in measurements and modeling, and the path forward, including specific recommendations on cavern sampling for the next year. The contents of this report were first presented to SPR senior management in December 2009, in a deliverable from the vapor pressure committee. The current SAND report is an adaptation for the Sandia technical audience.
Abstract not provided.
Abstract not provided.
The integration of block copolymers (BCPs) and nanoimprint lithography (NIL) presents a novel and cost-effective approach to achieving nanoscale patterning capabilities. The authors demonstrate the fabrication of a surface-enhanced Raman scattering device using templates created by the integrated BCP-NIL method. The method utilizes a cylinder-forming poly(styrene-block-methyl methacrylate) diblock copolymer as a masking material to create a Si template, which is then used to perform a thermal imprint of a poly(methyl methacrylate) (PMMA) layer on a Si substrate. Au with a Cr adhesion layer was evaporated onto the patterned PMMA, and the subsequent lift-off produced an array of nanodots. Raman spectra collected for samples of R6G on Si substrates with and without patterned nanodots showed enhancement of peak intensities due to the presence of the nanodot array. The demonstrated BCP-NIL fabrication method shows promise for cost-effective nanoscale fabrication of plasmonic and nanoelectronic devices.
International Journal of High-Performance Computing Applications
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The purpose of the DOE Metal Hydride Center of Excellence (MHCoE) is to develop hydrogen storage materials with engineering properties that allow their use in a way that satisfies the DOE/FreedomCAR Program system requirements for automotive hydrogen storage. The Center is a multidisciplinary and collaborative effort with technical interactions divided into two broad areas: (1) mechanisms and modeling, which provide a theoretically driven basis for pursuing new materials, and (2) materials development, in which new materials are synthesized and characterized. Driving all of this work are the hydrogen storage system specifications outlined by the FreedomCAR Program for 2010 and 2015. The organization of the MHCoE during the past year is shown in Figure 1. During the past year, the technical work was divided into four project areas, which organize the MHCoE technical work along appropriate and flexible technical lines: (1) Project A, Destabilized Hydrides, whose objective is to controllably modify the thermodynamics of hydrogen sorption reactions in light metal hydrides using hydride destabilization strategies; (2) Project B, Complex Anionic Materials, whose objective is to predict and synthesize highly promising new anionic hydride materials; (3) Project C, Amides/Imides Storage Materials, whose objective is to assess the viability of amides and imides (inorganic materials containing NH{sub 2} and NH moieties, respectively) for onboard hydrogen storage; and (4) Project D, Alane (AlH{sub 3}), whose objective is to understand the sorption and regeneration properties of AlH{sub 3} for hydrogen storage.
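The thermodynamic target of Project A can be illustrated with the van't Hoff relation, ln(p/p0) = -dH/(RT) + dS/R, which ties a hydride's desorption enthalpy and entropy to the temperature at which it delivers hydrogen at a given pressure. The numbers below are illustrative assumptions, not MHCoE results.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def vant_hoff_T(dH_kJ, dS_J, p_bar=1.0):
    """Equilibrium temperature of a hydride at pressure p_bar (relative
    to a 1-bar reference) from ln(p/p0) = -dH/(R T) + dS/R.

    dH_kJ : desorption enthalpy, kJ/mol H2 (positive)
    dS_J  : desorption entropy, J/(mol H2 K)
    """
    return (dH_kJ * 1e3) / (dS_J - R * math.log(p_bar))

# Illustrative values: destabilization lowers the reaction enthalpy,
# pulling the 1-bar delivery temperature down toward ambient.
print(vant_hoff_T(75, 130))  # ~577 K for an un-destabilized hydride
print(vant_hoff_T(40, 130))  # ~308 K after destabilization
```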
Abstract not provided.
Decontamination of anthrax spores in critical infrastructure (e.g., subway systems, major airports) and critical assets (e.g., the interior of aircraft) can be challenging because effective decontaminants can damage materials. Bacterial spores such as those of Bacillus anthracis, the infectious agent of anthrax, are among the most resistant forms of life and are several orders of magnitude more difficult to kill than their associated vegetative cells. As a result, current remediation of contaminated facilities and spaces requires highly toxic and corrosive chemicals such as chlorine dioxide gas, vapor-phase hydrogen peroxide, or high-strength bleach, typically deployed by complex methods. We have developed a non-toxic, non-corrosive decontamination method to kill highly resistant bacterial spores in critical infrastructure and critical assets. A chemical solution triggers the germination process in bacterial spores, causing them to rapidly and completely convert to much less resistant vegetative cells that can be easily killed. The vegetative cells are then exposed to mild chemicals (e.g., low concentrations of hydrogen peroxide, quaternary ammonium compounds, alcohols, or aldehydes) or natural stresses (e.g., heat, humidity, or ultraviolet light) for complete and rapid kill. Our process employs a novel germination solution consisting of low-cost, non-toxic, non-corrosive chemicals. We are testing both direct surface application and aerosol delivery of the solutions. A key Homeland Security need is the capability to rapidly recover from an attack employing biological warfare agents. This project provides the capability to rapidly and safely decontaminate critical facilities and assets and return them to normal operations as quickly as possible, sparing significant economic damage. Both the germination solution and the kill solution are formulated from off-the-shelf, inexpensive chemicals. The method can be applied by spraying the solutions directly onto exposed surfaces or by delivering them as aerosols (i.e., small droplets), which can also reach hidden surfaces.
Photovoltaic (PV) system performance models are relied upon to provide accurate predictions of energy production for proposed and existing PV systems under a wide variety of environmental conditions. Ground-based meteorological measurements are available from only a relatively small number of locations. In contrast, satellite-based radiation and weather data (e.g., the SUNY database) are becoming increasingly available for most locations in North America, Europe, and Asia on a 10 x 10 km grid or better. This paper presents a study of how PV performance model results are affected when satellite-based weather data are used in place of ground-based measurements.
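For context, the kind of performance model being driven by these two weather sources can be as simple as the standard PVWatts DC expression. The sketch below uses assumed parameter values and hypothetical irradiance/temperature inputs to show how a difference between ground and satellite inputs propagates to predicted power; it is not the model used in the study.

```python
def pvwatts_dc(poa_irradiance, cell_temp, pdc0=5000.0, gamma_pdc=-0.004):
    """PVWatts-style DC power estimate.

    poa_irradiance : plane-of-array irradiance, W/m^2
    cell_temp      : cell temperature, deg C
    pdc0           : nameplate DC rating at 1000 W/m^2 and 25 C, W
    gamma_pdc      : temperature coefficient of power, 1/deg C
    """
    return pdc0 * (poa_irradiance / 1000.0) * (1.0 + gamma_pdc * (cell_temp - 25.0))

# Same hypothetical hour driven by two weather sources:
print(pvwatts_dc(850.0, 45.0))  # ground-measured irradiance
print(pvwatts_dc(900.0, 45.0))  # satellite-derived irradiance
```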
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Probabilistic Engineering Mechanics
Abstract not provided.
Abstract not provided.
Los Alamos and Sandia National Laboratories have formed a new high performance computing center, the Alliance for Computing at the Extreme Scale (ACES). The two labs will jointly architect, develop, procure and operate capability systems for DOE's Advanced Simulation and Computing Program. This presentation will discuss a petascale production capability system, Cielo, that will be deployed in late 2010, and a new partnership with Cray on advanced interconnect technologies.
Abstract not provided.
Abstract not provided.
Improving the thermal performance of a trough plant will lower the LCOE: (1) Improve mirror alignment using the TOPCAT system - currently, this increases the optical intercept of existing trough solar power plants; in the future, it allows larger apertures with the same receiver size in new trough solar power plants, yielding increased concentration ratios/collection efficiencies and economies of scale. (2) Improve tracking using a closed-loop tracking system - our own experience and that of industry with the open-loop tracking currently used show the need for an improved method. Performance testing of a trough module and/or receiver on the rotating platform addresses two needs: (1) Installed costs of a trough plant are high, and a significant portion is the material and assembly cost of the trough module; these costs need to be reduced without sacrificing performance. (2) New receiver coatings with lower heat loss and higher absorptivity are needed. The TOPCAT system is an optical evaluation tool for parabolic trough solar collectors. Aspects of the TOPCAT system are: (1) practical, rapid, and cost-effective; (2) inherently aligns mirrors to the receiver of an entire solar collector array (SCA); (3) can be used for existing installations - no equivalent tool exists; (4) can be used during production; (5) currently usable on LS-2 or LS-3 configurations, but easily modified for any configuration; and (6) generally, one-time use.
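As a sketch of what closed-loop tracking adds over the open-loop, ephemeris-only approach, consider a proportional-integral trim driven by a receiver shadow or flux sensor. The controller structure, gains, and sensor here are our illustrative assumptions, not a description of the fielded system.

```python
def tracking_command(ephemeris_angle, sensor_error, integral_state,
                     kp=0.5, ki=0.05, dt=1.0):
    """One update of a closed-loop trough-tracking command.

    Open-loop tracking would return ephemeris_angle directly; the
    closed-loop version trims it with a PI correction driven by the
    measured pointing error, removing survey, encoder, and structural
    errors that the open-loop calculation cannot see.

    ephemeris_angle : computed sun angle, degrees
    sensor_error    : measured pointing error, degrees
    integral_state  : accumulated error (carried between calls)
    """
    integral_state += sensor_error * dt
    correction = kp * sensor_error + ki * integral_state
    return ephemeris_angle + correction, integral_state
```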
Abstract not provided.
Abstract not provided.
Abstract not provided.