Publications

Biofuel impacts on water

Tidwell, Vincent C.

Sandia National Laboratories and the General Motors Global Energy Systems team conducted a joint biofuels systems analysis project from March to November 2008. The purpose of this study was to assess the feasibility, implications, limitations, and enablers of large-scale production of biofuels. A target of 90 billion gallons of ethanol (the energy equivalent of approximately 60 billion gallons of gasoline) per year by 2030 was chosen as the book-end case for understanding an aggressive deployment. Since previous studies have addressed the potential of biomass but not the supply chain rollout needed to achieve large production targets, the focus of this study was on a comprehensive systems understanding of the evolution of the full supply chain and of key interdependencies over time. The supply chain components examined in this study included agricultural land use changes, production of biomass feedstocks, storage and transportation of these feedstocks, construction of conversion plants, conversion of feedstocks to ethanol at these plants, transportation of ethanol and blending with gasoline, and distribution to retail outlets. To support this analysis, we developed a 'Seed to Station' system dynamics model (Biofuels Deployment Model - BDM) to explore the feasibility of meeting specified ethanol production targets. The focus of this report is water and its linkage to broad-scale biofuel deployment.

Uncertainty quantification of US Southwest climate from IPCC projections

Boslough, Mark

The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) made extensive use of coordinated simulations by 18 international modeling groups using a variety of coupled general circulation models (GCMs) with different numerics, algorithms, resolutions, physics models, and parameterizations. These simulations span the 20th century and provide forecasts for various carbon emissions scenarios in the 21st century. All the output from this panoply of models is made available to researchers on an archive maintained by the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at LLNL. I have downloaded this data and completed the first steps toward a statistical analysis of these ensembles for the US Southwest. This constitutes the final report for a late start LDRD project. Complete analysis will be the subject of a forthcoming report.

Advanced dexterous manipulation for IED defeat : report on the feasibility of using the ShadowHand for remote operations

Anderson, Robert J.

Improvised Explosive Device (IED) defeat (IEDD) operations can involve intricate manipulation tasks that exceed the capabilities of the grippers on board current bomb-squad robots. The Shadow Dexterous Hand from the Shadow Robot Company, or 'ShadowHand' for short (www.shadowrobot.com), is the first commercially available robot hand that realistically replicates the motion, degrees of freedom, and dimensions of a human hand (Figure 1). In this study we evaluate the potential for the ShadowHand to perform IED defeat tasks on a mobile platform.

Quantitative laboratory measurements of biogeochemical processes controlling biogenic calcite carbon sequestration

Lane, Pamela; Lane, Todd; Zendejas, Frank Z.

The purpose of this LDRD was to generate data that could be used to populate and thereby reduce the uncertainty in global carbon cycle models. These efforts were focused on developing a system for determining the dissolution rate of biogenic calcite under oceanic pressure and temperature conditions and on carrying out a digital transcriptomic analysis of gene expression in response to changes in pCO2, and the consequent acidification of the growth medium.

Trusted Computing Technologies, Intel Trusted Execution Technology

Wendt, Jeremy; Guise, Max J.

We describe the current state of the art in Trusted Computing Technologies, focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high-importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release and unauthorized alteration: unauthorized users should not be able to access or alter the sensitive input and output data; the computation contains intermediate data with the same requirements, and executes algorithms that unauthorized users should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network components increase the risk that sensitive input data, computation, and output data may be compromised.

Assessing the operational life of flexible printed boards intended for continuous flexing applications : a case study

Beck, David F.

Through the vehicle of a case study, this paper describes in detail how the guidance found in the suite of IPC (Association Connecting Electronics Industries) publications can be applied to develop a high level of design assurance that flexible printed boards intended for continuous flexing applications will satisfy specified lifetime requirements.

Whither Commercial Nanobiosensors?

Journal of Biosensors and Bioelectronics

Achyuthan, Komandoor

The excitement surrounding the marriage of biosensors and nanotechnology is palpable even from a cursory examination of the scientific literature. Indeed, the word “nano” might be in danger of being overused and reduced to a cliché, although it is probably essential for publishing papers or securing research funding. The biosensor literature is littered with clever or catchy acronyms, birds being apparently favored (“CANARY”, “SPARROW”), quite apart from “electronic tongue,” “electronic nose,” and so on. Although biosensors have been around since glucose monitors were commercialized in the 1970s, the transition of laboratory research and innumerable research papers on biosensors into the world of commerce has lagged. There are several reasons for this phenomenon, including the infamous “valley of death” afflicting entrepreneurs emerging from the academic environment into the industrial world, where the rules for success can be radically different. In this context, musings on biosensors, and especially nanobiosensors, in an open access journal such as the Journal of Biosensors and Bioelectronics are topical and appropriate, especially since market surveys of biosensors are prohibitively expensive, sometimes running into thousands of dollars for a single copy. The contents and predictions of market share for biosensors in these reports also change every time a report is published. Moreover, the market share projections for biosensors differ considerably among the various reports. An editorial provides the opportunity to offer personal opinions and perhaps stimulate debate on a particular topic. In this sense, editorials are a departure from the rigor of a research paper. This editorial is no exception. With this preamble, it is worthwhile to stop and ponder the status of commercial biosensors and nanobiosensors.

Quantifying the value of hydropower in the electric grid : role of hydropower in existing markets

Loose, Verne W.

The electrical power industry is facing the prospect of integrating a significant addition of variable generation technologies over the next several decades, primarily from wind and solar facilities. Overall, transmission and generation reserve levels are decreasing, and power system infrastructure in general is aging. To maintain grid reliability, modernization and expansion of the power system, as well as more optimized use of existing resources, will be required. Conventional and pumped storage hydroelectric facilities can provide an increasingly significant contribution to power system reliability by providing energy, capacity, and other ancillary services. However, the potential role of hydroelectric power will be affected by another transition that the industry is currently experiencing: the evolution and expansion of electricity markets. This evolution toward market-based acquisition of generation resources and grid management is taking place in a heterogeneous manner. Some North American regions are moving toward full-featured markets while other regions operate without formal markets; still other U.S. regions are partially evolved. This report examines the current structure of electric industry acquisition of energy and ancillary services in different regions organized along different structures, reports on the current role of hydroelectric facilities in various regions, and attempts to identify features of market and scheduling areas that either promote or thwart the increased role that hydroelectric power can play in the future. This report is part of a larger effort led by the Electric Power Research Institute with the purpose of examining the potential for hydroelectric facilities to play a greater role in balancing the grid in an era of greater penetration of variable renewable energy technologies. Other topics that will be addressed in this larger effort include industry case studies of specific conventional and pumped storage hydroelectric facilities, systemic operating constraints on hydroelectric resources, and production cost simulations aimed at quantifying the increased role of hydropower.

Passive load control for large wind turbines

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Ashwill, Thomas D.

Wind energy research activities at Sandia National Laboratories focus on developing large rotors that are lighter and more cost-effective than those designed with current technologies. Because gravity loads scale as the cube of the blade length, they become a constraining design factor for very large blades. Efforts to passively reduce turbulent loading have shown significant potential to reduce blade weight and capture more energy. Research in passive load reduction for wind turbines began at Sandia in the late 1990s and has moved from analytical studies to blade applications. This paper discusses the test results of two Sandia prototype research blades that incorporate load reduction techniques. The TX-100 is a 9-m long blade that induces bend-twist coupling with the use of off-axis carbon in the skin. The STAR blade is a 27-m long blade that induces bend-twist coupling by sweeping the blade in a geometric fashion.

Graphene islands on Cu foils: The interplay between shape, orientation, and defects

Nano Letters

Wofford, Joseph M.; Nie, Shu N.; McCarty, Kevin F.; Bartelt, Norman C.; Dubon, Oscar D.

We have observed the growth of monolayer graphene on Cu foils using low-energy electron microscopy. On the (100)-textured surface of the foils, four-lobed, 4-fold-symmetric islands nucleate and grow. The graphene in each of the four lobes has a different crystallographic alignment with respect to the underlying Cu substrate. These "polycrystalline" islands arise from complex heterogeneous nucleation events at surface imperfections. The shape evolution of the lobes is well explained by an angularly dependent growth velocity. Well-ordered graphene forms only above ∼790 °C. Sublimation-induced motion of Cu steps during growth at this temperature creates a rough surface, where large Cu mounds form under the graphene islands. Strategies for improving the quality of monolayer graphene grown on Cu foils must address these fundamental defect-generating processes. © 2010 American Chemical Society.

A generalized view on Galilean invariance in stabilized compressible flow computations

International Journal for Numerical Methods in Fluids

Scovazzi, G.; Love, Edward

This article presents a generalized analysis on the significance of Galilean invariance in compressible flow computations with stabilized and variational multi-scale methods. The understanding of the key issues and the development of general approaches to Galilean-invariant stabilization are facilitated by the use of a matrix-operator description of Galilean transformations. The analysis of invariance for discontinuity capturing operators is also included. Published in 2010 by John Wiley & Sons, Ltd. This article is a U.S. Government work and is in the public domain in the U.S.A.

Aerodynamic and acoustic corrections for a Kevlar-walled anechoic wind tunnel

16th AIAA/CEAS Aeroacoustics Conference (31st AIAA Aeroacoustics Conference)

Devenport, William J.; Burdisso, Ricardo A.; Borgoltz, Aurelien; Ravetta, Patricio; Barone, Matthew F.

The aerodynamic and acoustic performance of a Kevlar-walled anechoic wind tunnel test section has been analyzed. Aerodynamic measurements and panel method calculations were performed on a series of airfoils to reveal the influence of the test section walls, including their porosity and flexibility. A lift interference correction method was developed from first principles which shows consistently high accuracy when measurements are compared to viscous free-flight calculations. Interference corrections are an order of magnitude smaller than those associated with an open jet test section. Blockage corrections are found to be a fraction of those which would be associated with a hard-wall test section of the same size, and are negligible in most cases. New measurements showing the acoustic transparency of the Kevlar and the quality of the anechoic environment in the chambers are presented, along with benchmark trailing edge noise measurements. © 2010 by William J. Devenport, Ricardo A. Burdisso, Aurelien Borgoltz, Patricio Ravetta and Matthew F. Barone.

Computing contingency statistics in parallel: Design trade-offs and limiting cases

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Pébay, Philippe; Thompson, David; Bennett, Janine C.

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, from which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse. © 2010 IEEE.
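
The property the abstract relies on, that contingency tables from different processes merge by simple addition, can be sketched in a few lines of Python. This is an illustrative toy under our own assumptions (the function names, data shards, and derived statistics shown are ours, not taken from the paper's open-source implementation):

```python
from collections import Counter
from math import log2

def local_table(pairs):
    """Each process tallies (x, y) category pairs from its own data shard."""
    return Counter(pairs)

def merge_tables(tables):
    """Tables merge by addition, so the reduction is associative and parallel."""
    total = Counter()
    for t in tables:
        total.update(t)
    return total

def derived_stats(table):
    """Joint and marginal probabilities plus pointwise mutual information."""
    n = sum(table.values())
    px, py = Counter(), Counter()
    for (x, y), c in table.items():
        px[x] += c
        py[y] += c
    joint = {k: c / n for k, c in table.items()}
    pmi = {(x, y): log2(joint[(x, y)] / ((px[x] / n) * (py[y] / n)))
           for (x, y) in table}
    return joint, pmi

# Two "processes", each holding a shard of categorical data
shard_a = [("a", 0), ("a", 0), ("b", 1)]
shard_b = [("a", 0), ("b", 1), ("b", 0)]
merged = merge_tables([local_table(shard_a), local_table(shard_b)])
joint, pmi = derived_stats(merged)
```

Because the merge is associative and commutative it maps directly onto a map-reduce style reduction, but the amount of data communicated is the size of the merged table itself, which is exactly the data-dependent communication cost the abstract contrasts with moment-based statistics.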

Advantages of clustering in the phase classification of hyperspectral materials images

Microscopy and Microanalysis

Stork, Christopher L.; Keenan, Michael R.

Despite the many demonstrated applications of factor analysis (FA) in analyzing hyperspectral materials images, FA does have inherent mathematical limitations, preventing it from solving certain materials characterization problems. A notable limitation of FA is its parsimony restriction, referring to the fact that in FA the number of components cannot exceed the chemical rank of a dataset. Clustering is a promising alternative to FA for the phase classification of hyperspectral materials images. In contrast with FA, the phases extracted by clustering do not have to be parsimonious. Clustering has an added advantage in its insensitivity to spectral collinearity that can result in phase mixing using FA. For representative energy dispersive X-ray spectroscopy materials images, namely a solder bump dataset and a braze interface dataset, clustering generates phase classification results that are superior to those obtained using representative FA-based methods. For the solder bump dataset, clustering identifies a Cu-Sn intermetallic phase that cannot be isolated using FA alone due to the parsimony restriction. For the braze interface sample that has collinearity among the phase spectra, the clustering results do not exhibit the physically unrealistic phase mixing obtained by multivariate curve resolution, a commonly utilized FA algorithm. © Microscopy Society of America 2010.

A framework for the solution of inverse radiation transport problems

IEEE Transactions on Nuclear Science

Mattingly, John K.; Mitchell, Dean J.

Radiation sensing applications for SNM detection, identification, and characterization all face the same fundamental problem: each to varying degrees must infer the presence, identity, and configuration of a radiation source given a set of radiation signatures. This is a problem of inverse radiation transport: given the outcome of a measurement, what source terms and transport medium caused that observation? This paper presents a framework for solving inverse radiation transport problems, describes its essential components, and illustrates its features and performance. The framework implements an implicit solution to the inverse transport problem using deterministic neutron, electron, and photon transport calculations embedded in a Levenberg-Marquardt nonlinear optimization solver. The solver finds the layer thicknesses of a one-dimensional transport model by minimizing the difference between the gamma spectrum calculated by deterministic transport and the measured gamma spectrum. The fit to the measured spectrum is a full-spectrum analysis: all spectral features are modeled, including photopeaks and continua from spontaneous and induced photon emissions. An example problem is solved by analyzing a high-resolution gamma spectrometry measurement of plutonium metal. © 2010 IEEE.
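
The shape of such a solver, a damped Gauss-Newton (Levenberg-Marquardt) update of a layer thickness so that the calculated spectrum matches the measured one, can be illustrated with a deliberately simplified toy. Everything here is our illustrative assumption (a one-layer exponential-attenuation forward model with three channels), not the paper's deterministic transport code:

```python
from math import exp

MU = [0.5, 1.0, 2.0]  # illustrative per-channel attenuation coefficients

def forward(t):
    """Toy one-layer transport model: transmitted counts in three channels."""
    return [100.0 * exp(-mu * t) for mu in MU]

def fit_thickness(measured, t=0.1, lam=1e-3, iters=100):
    """Damped Gauss-Newton (Levenberg-Marquardt) update on a single thickness."""
    for _ in range(iters):
        model = forward(t)
        r = [m - c for m, c in zip(measured, model)]   # measured minus calculated
        J = [-mu * c for mu, c in zip(MU, model)]      # d(model)/d(thickness)
        # normal equation with LM damping: (J^T J + lam) * dt = J^T r
        t += sum(j * ri for j, ri in zip(J, r)) / (sum(j * j for j in J) + lam)
    return t

measured = forward(0.8)            # synthetic "measured" spectrum
t_fit = fit_thickness(measured)    # recovers the thickness that generated it
```

The real framework replaces `forward` with full deterministic neutron, electron, and photon transport and fits several layer thicknesses simultaneously, but the damped normal-equation update has the same structure.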

Comparison of thermal conductivity and thermal boundary conductance sensitivities in continuous-wave and ultrashort-pulsed thermoreflectance analyses

International Journal of Thermophysics

Hopkins, Patrick E.; Serrano, Justin R.; Phinney, Leslie

Thermoreflectance techniques are powerful tools for measuring thermophysical properties of thin film systems, such as thermal conductivity, Λ, of individual layers, or thermal boundary conductance across thin film interfaces (G). Thermoreflectance pump-probe experiments monitor the thermoreflectance change on the surface of a sample, which is related to the thermal properties in the sample of interest. Thermoreflectance setups have been designed with both continuous wave (cw) and pulsed laser systems. In cw systems, the phase of the heating event is monitored, and its response to the heating modulation frequency is related to the thermophysical properties; this technique is commonly termed a phase sensitive thermoreflectance (PSTR) technique. In pulsed laser systems, pump and probe pulses are temporally delayed relative to each other, and the decay in the thermoreflectance signal in response to the heating event is related to the thermophysical properties; this technique is commonly termed a transient thermoreflectance (TTR) technique. In this work, mathematical models are presented to be used with PSTR and TTR techniques to determine the Λ and G of thin films on substrate structures. The sensitivities of the models to various thermal and sample parameters are discussed, and the advantages and disadvantages of each technique are elucidated from the results of the model analyses. © 2010 Springer Science+Business Media, LLC.

True triaxial testing of castlegate sandstone

44th US Rock Mechanics Symposium - 5th US/Canada Rock Mechanics Symposium

Ingraham, M.D.; Issen, K.A.; Holcomb, David J.

Deformation bands in high porosity sandstone are an important geological feature for geologists and petroleum engineers; however, formation of these bands is not fully understood. The theoretical framework for deformation band formation in high porosity geomaterials is well established. It suggests that the intermediate principal stress influences the predicted deformation band type; however, these predictions have yet to be fully validated through experiments. Therefore, this study investigates the influence of the intermediate principal stress on failure and the formation of deformation bands in Castlegate sandstone. Mean stresses for these tests range from 30 to 150 MPa, covering brittle to ductile behavior. Deformation band orientations are measured with external observation as well as through acoustic emission locations. Results of experiments conducted at Lode angles of 30 and 14.5 degrees show trends that qualitatively agree with localization theory. The band angle (between the band normal and maximum compression) decreases with increasing mean stress. For tests at the same mean stress, band angle decreases with increasing Lode angle. Copyright 2010 ARMA, American Rock Mechanics Association.

A system of parallel and selective microchannels for biosensor sample delivery and containment

Proceedings of IEEE Sensors

Edwards, Thayne L.

This paper presents an integrated microfluidic system for selectively interrogating parallel biosensors at programmed time intervals. Specifically, the microfluidic system is used for delivering a volume of sample from a single source to a surface-based arrayed biosensor. In this case the biosensors were an array of electrochemical electrodes modified with sample specific capture probes. In addition, the sample was required to be captured, stored and removed for additional laboratory analysis. This was accomplished by a plastic laminate stack in which each thin laminate was patterned by CO2 laser ablation to form microchannels and two novel valves. The first valve was a normally closed type opened by heat via an electrically resistive wire. The second valve was a check type integrated into a removable storage chamber. This setup allows for remote and leave-behind sensing applications and also containment of sensed sample for further laboratory analysis. ©2010 IEEE.

Cooling of an isothermal plate using a triangular array of swirling air jets

2010 14th International Heat Transfer Conference, IHTC 14

Rodriguez, Sal B.; El-Genk, Mohamed S.

Cooling with swirling jets is an effective means for enhancing heat transfer and improving spatial uniformity of the cooling rate in many applications. This paper investigates cooling a flat, isothermal plate at 1,000 K using a single and a triangular array of swirling air jets, and characterizes the resulting flow field and the air temperature above the plate. This problem was modeled using the Fuego computational fluid dynamics (CFD) code that is being developed at Sandia National Laboratories. The separation distance to jet diameter, L/D, varied from 3 to 12, the Reynolds number, Re, varied from 5 × 10³ to 5 × 10⁴, and the swirl number, S, varied from 0 to 2.49. The formation of the central recirculation zone (CRZ) and its impact on heat transfer were also investigated. For a hubless swirling jet, a CRZ was generated whenever S ≥ 0.67, in agreement with experimental data and our mathematical derivation for swirl (helicoid) azimuthal and axial velocities. On the other hand, for S < 0.058, the velocity field closely approximated that of a conventional jet. With the azimuthal velocity of a swirling jet decaying as 1/z², most mixing occurred only a few jet diameters from the jet nozzle. Highest cooling occurred when L/D = 3 and S = 0.12 to 0.79. Heat transfer enhancement increased as S or Re increased, or L/D decreased. © 2010 by ASME.

Charge enhancement effects in 6H-SiC MOSFETs induced by heavy ion strike

IEEE Transactions on Nuclear Science

Onoda, Shinobu; Makino, Takahiro; Iwamoto, Naoya; Vizkelethy, Gyorgy; Kojima, Kazutoshi; Nozaki, Shinji; Ohshima, Takeshi

The transient response of Silicon Carbide (SiC) Metal-Oxide-Semiconductor Field Effect Transistors (MOSFETs) with three different gates due to a single ion strike is studied. Comparing the experiment and numerical simulation, it is suggested that the charge enhancement is due to the bipolar effect. We find the bipolar gain depends on the quality of gate oxide. The impact of fixed charge in SiO2 and interface traps at SiC/SiO2 on the charge collection is discussed. © 2010 IEEE.

Ultra-compact optical true time delay device for wideband phased array radars

Proceedings of SPIE - The International Society for Optical Engineering

Anderson, Betty L.; Ho, James G.; Cowan, William D.; Spahn, Olga B.; Yi, Allen Y.; Flannery, Martin R.; Rowe, Delton J.; McCray, David L.; Rabb, David J.; Chen, Peter

An ultra-compact optical true time delay device is demonstrated that can support 112 antenna elements with better than six bits of delay in a volume 16″ × 5″ × 4″ including the box and electronics. Free-space beams circulate in a White cell, overlapping in space to minimize volume. The 18 mirrors are slow-tool diamond turned on two substrates, one at each end, to streamline alignment. Pointing accuracy of better than 10 µrad is achieved, with surface roughness ∼45 nm rms. A MEMS tip-style mirror array selects among the paths for each beam independently, requiring ∼100 μs to switch the whole array. The micromirrors have 1.4° tip angle and three stable states (east, west, and flat). The input is a fiber-and-microlens array, whose output spots are re-imaged multiple times in the White cell, striking a different area of the single MEMS chip in each of 10 bounces. The output is converted to RF by an integrated InP wideband optical combiner detector array. Delays were accurate to within 4% (shortest delay) to 0.03% (longest mirror train). The fiber-to-detector insertion loss is 7.82 dB for the shortest delay path. © 2010 SPIE.

Readout IC requirement trends based on a simplified parametric seeker model

Proceedings of SPIE the International Society for Optical Engineering

Osborn, Thor D.

Achromatic circular polarization generation for ultra-intense lasers

Optics InfoBase Conference Papers

Rambo, Patrick K.; Kimmel, Mark; Bennett, Guy R.; Schwarz, Jens; Schollmeier, Marius; Atherton, B.

Generating circular polarization for ultra-intense lasers requires solutions beyond traditional transmissive waveplates, which have insufficient bandwidth and pose nonlinear phase (B-integral) problems. We demonstrate a reflective design employing three metallic mirrors to generate circular polarization. © 2010 Optical Society of America.

Life assessment of full-scale EDS vessel under impulsive loadings

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Yip, Mien; Haroldsen, Brent L.

The Explosive Destruction System (EDS) was developed by Sandia National Laboratories for the US Army Product Manager for Non-Stockpile Chemical Materiel (PMNSCM) to destroy recovered, explosively configured, chemical munitions. PMNSCM currently has five EDS units that have processed over 1,400 items. The system uses linear and conical shaped charges to open munitions and attack the burster, followed by chemical treatment of the agent. The main component of the EDS is a stainless steel, cylindrical vessel, which contains the explosion and the subsequent chemical treatment. Extensive modeling and testing have been used to design and qualify the vessel for different applications and conditions. The high explosive (HE) pressure histories and subsequent vessel response (strain histories) are modeled using the analysis codes CTH and LS-DYNA, respectively. Using the model results, a load rating for the EDS is determined based on design guidance provided in the ASME Code, Sect. VIII, Div. 3, Code Case No. 2564. One of the goals is to assess and understand the vessel's capacity for containing a wide variety of detonation sequences at various load levels. Of particular interest are the total number of detonation events at the rated load that each vessel can process, and the maximum load (such as that arising from an upset condition) that can be contained without causing catastrophic failure of the vessel. This paper will discuss application of Code Case 2564 to the stainless steel EDS vessels, including a fatigue analysis using a J-R curve, vessel response to extreme upset loads, and the effects of strain hardening from successive events. Copyright © 2010 by ASME.

Optical logic gates using interconnected photodiodes and electro-absorption modulators

Optics InfoBase Conference Papers

Skogen, Erik J.; Vawter, Gregory A.; Tauke-Pedretti, Anna; Overberg, Mark E.; Peake, Gregory M.; Alford, Charles; Torres, David; Cajas, Florante; Sullivan, Charles T.

We demonstrate an optical gate architecture with optical isolation between input and output using interconnected PD-EAMs to perform AND and NOT functions. Waveforms for 10 Gbps AND and 40 Gbps NOT gates are shown. © 2010 Optical Society of America.

A beamforming algorithm for bistatic SAR image formation

Proceedings of SPIE - The International Society for Optical Engineering

Jakowatz, Charles V.; Wahl, Daniel E.; Yocky, David A.

Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented. © 2010 Copyright SPIE - The International Society for Optical Engineering.
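
The core of such a bistatic backprojection is short enough to sketch. In this toy (the geometry, carrier frequency, and idealized single-frequency echoes are our assumptions, not the authors' algorithm or code), each pixel coherently sums every pulse's return after correcting for that pixel's transmitter-plus-receiver path length:

```python
import cmath
from math import hypot, pi

C = 3.0e8    # propagation speed (m/s)
FC = 1.0e9   # carrier frequency (Hz), illustrative

def bistatic_range(tx, rx, p):
    """Transmitter-to-pixel plus pixel-to-receiver path length."""
    return hypot(p[0] - tx[0], p[1] - tx[1]) + hypot(p[0] - rx[0], p[1] - rx[1])

def backproject(pulses, grid):
    """For each pixel, coherently sum every pulse's phase-corrected return."""
    image = {}
    for p in grid:
        acc = 0j
        for tx, rx, echo in pulses:
            r = bistatic_range(tx, rx, p)
            acc += echo * cmath.exp(2j * pi * FC * r / C)  # undo path-length phase
        image[p] = abs(acc)
    return image

# Synthetic data: a single point target at the origin. Each pulse's echo is an
# idealized complex return whose phase encodes the target's bistatic range.
target = (0.0, 0.0)
pulses = []
for x in (-100.0, 0.0, 100.0):
    tx, rx = (x, 1000.0), (x + 50.0, 1000.0)   # separated tx/rx: bistatic geometry
    r_t = bistatic_range(tx, rx, target)
    pulses.append((tx, rx, cmath.exp(-2j * pi * FC * r_t / C)))

image = backproject(pulses, [(-5.0, 0.0), (0.0, 0.0), (5.0, 0.0)])
```

A point target at the pixel under test adds in phase across all pulses while other pixels partially cancel. A real implementation interpolates range-compressed pulse data rather than using ideal echoes, and its cost, one range computation per pixel per pulse, is the execution-time disadvantage relative to PFA noted above.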

Controlling the microstructure of vapor-deposited pentaerythritol tetranitrate (PETN) films

Proceedings - 14th International Detonation Symposium, IDS 2010

Knepper, Robert; Tappan, Alexander S.; Wixom, Ryan R.

We have demonstrated the ability to control the microstructure of PETN films deposited using physical vapor deposition by altering the interface between the film and substrate. Evolution of surface morphology, average density, and surface roughness with film thickness were characterized using surface profilometry and scanning electron microscopy. While films on all of the substrates investigated showed a trend toward a lower average density with increasing film thickness, there were significant variations in density, pore size, and surface morphology in films deposited on different substrates.

More Details

Calculating hugoniots for molecular crystals from first principles

Proceedings - 14th International Detonation Symposium, IDS 2010

Wills, Ann E.; Wixom, Ryan R.; Mattsson, Thomas

Density Functional Theory (DFT) has over the last few years emerged as an indispensable tool for understanding the behavior of matter under extreme conditions. DFT-based molecular dynamics (MD) simulations have, for example, confirmed experimental findings for shocked deuterium [1], enabled the first experimental evidence for a triple point in carbon above 850 GPa [2], and amended experimental data for constructing a global equation of state (EOS) for water, carrying implications for planetary physics [3]. The ability to perform high-fidelity calculations is even more important for cases where experiments are impossible to perform, dangerous, and/or prohibitively expensive. For solid explosives, and other molecular crystals, similar success has been severely hampered by an inability to describe the materials at equilibrium. The binding mechanism of molecular crystals (van der Waals forces) is not well described within traditional DFT [4]. Among widely used exchange-correlation functionals, neither LDA nor PBE balances the strong intra-molecular chemical bonding and the weak inter-molecular attraction, resulting in incorrect equilibrium density and negatively affecting the construction of EOS for undetonated high explosives. We are exploring a way of bypassing this problem by using the new Armiento-Mattsson 2005 (AM05) exchange-correlation functional [5,6]. The AM05 functional is highly accurate for a wide range of solids [4,7], in particular in compression [8]. In addition, AM05 does not include any van der Waals attraction [4], which can be advantageous compared to other functionals: correcting for a fictitious van der Waals-like attraction of unknown origin can be harder than correcting for a complete absence of all types of van der Waals attraction. We will show examples from other materials systems where van der Waals attraction plays a key role and where this scheme has worked well [9], and discuss preliminary results for molecular crystals and explosives.
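For reference, the Hugoniot states such DFT-MD calculations aim to reproduce are defined by the standard Rankine-Hugoniot jump conditions across a steady shock (textbook relations, not specific to this paper's method):

```latex
% Rankine-Hugoniot jump conditions relating the shocked state
% (p, e, specific volume v = 1/rho) to the initial state
% (p_0, e_0, v_0), with shock speed u_s and particle speed u_p:
\begin{align}
  \rho_0\, u_s &= \rho\,(u_s - u_p)                        && \text{(mass)}\\
  p - p_0      &= \rho_0\, u_s\, u_p                       && \text{(momentum)}\\
  e - e_0      &= \tfrac{1}{2}\,(p + p_0)\,(v_0 - v)       && \text{(energy)}
\end{align}
```

The energy equation is the Hugoniot condition proper: a DFT-MD Hugoniot calculation searches for compressed states whose computed pressure and internal energy satisfy it, which is why an accurate equilibrium (zero-pressure) reference state matters so much.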

More Details

Risk-based cost-benefit analysis for security assessment problems

Proceedings - International Carnahan Conference on Security Technology

Wyss, Gregory D.; Clem, John; Darby, John L.; Guzman, Katherine D.; Hinton, John P.; Mitchiner, K.W.

Decision-makers want to perform risk-based cost-benefit prioritization of security investments. However, strong nonlinearities in the most common physical security performance metric make it difficult to use for cost-benefit analysis. This paper extends the definition of risk for security applications and embodies this definition in a new but related security risk metric based on the degree of difficulty an adversary will encounter to successfully execute the most advantageous attack scenario. This metric is compatible with traditional cost-benefit optimization algorithms, and can lead to an objective risk-based cost-benefit method for security investment option prioritization. It also enables decision-makers to more effectively communicate the justification for their investment decisions to stakeholders and funding authorities. ©2010 IEEE.
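The claimed compatibility with traditional cost-benefit optimization can be sketched in miniature: if risk is driven by the least difficult (most advantageous) attack scenario, investments can be ranked by risk reduction per dollar. All names and numbers below are illustrative, not from the paper:

```python
# Hypothetical sketch of difficulty-based risk prioritization.
# "Difficulty" stands in for the paper's proposed metric.

def scenario_risk(difficulties):
    """Risk driven by the most advantageous (least difficult) attack."""
    return 1.0 / min(difficulties)

def prioritize(baseline, investments):
    """Rank investments by risk reduction per unit cost."""
    base_risk = scenario_risk(baseline)
    ranked = []
    for name, cost, new_difficulties in investments:
        benefit = base_risk - scenario_risk(new_difficulties)
        ranked.append((benefit / cost, name))
    return [name for _, name in sorted(ranked, reverse=True)]

baseline = [2.0, 5.0, 8.0]          # difficulty of each attack scenario
investments = [
    # (name, cost, scenario difficulties after the upgrade)
    ("harden-portal", 100.0, [6.0, 5.0, 8.0]),
    ("new-sensors",   250.0, [4.0, 9.0, 8.0]),
]
print(prioritize(baseline, investments))
```

Note how the min() over scenarios captures the metric's key property: an upgrade only reduces risk if it raises the difficulty of the *easiest* attack, not just some attack.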

More Details

Applying human reliability analysis models as a probabilistic basis for an integrated evaluation of safeguards and security systems

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Duran, Felicia A.; Wyss, Gregory D.

Material control and accounting (MC&A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC&A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC&A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC&A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.
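The extended path-analysis calculation can be illustrated schematically: if each protection element along an adversary path, whether a physical sensor or an HRA-characterized MC&A activity, is treated as an independent detection opportunity, the cumulative detection probability compounds. A minimal sketch with illustrative numbers (not from the paper):

```python
def path_detection_probability(elements):
    """Cumulative detection probability along an adversary path,
    treating each protection element (sensor or MC&A activity)
    as an independent detection opportunity."""
    p_miss = 1.0
    for p_detect in elements:
        p_miss *= (1.0 - p_detect)
    return 1.0 - p_miss

# Illustrative numbers only: two sensors plus one HRA-derived
# detection probability for an MC&A accounting check.
sensors = [0.9, 0.7]
mca_check = [0.5]
print(path_detection_probability(sensors + mca_check))
```

The point of the methodology is precisely that the MC&A term can enter this product on the same footing as sensor data once HRA methods supply a defensible detection probability for it.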

More Details

Unreacted equation of state development and multiphase modeling of dynamic compaction of low density hexanitrostilbene (HNS) pressings

Proceedings - 14th International Detonation Symposium, IDS 2010

Brundage, Aaron

Compaction waves in porous energetic materials have been shown to induce reaction under impact loading. In the past, simple two-state burn models such as the Arrhenius Burn model have been developed to predict slapper initiation in Hexanitrostilbene (HNS) pellets; however, a more sophisticated, fundamental approach is needed to predict the shock response during impact loading, especially in pellets that have been shown to have strong density gradients. The intergranular stress measures the resistance to bed compaction, i.e., the removal of void space through particle packing and rearrangement. A constitutive model for the intergranular stress is needed for closure in the Baer-Nunziato (BN) multiphase mixture theory for reactive energetic materials. The intergranular stress was obtained from both quasi-static and dynamic compaction experiments. Additionally, historical data and more recently acquired data for porous pellets compacted to high densities under shock loading were used for model assessment. Predicted particle velocity profiles under dynamic compaction were generally in good agreement with the experimental data. Hence, a multiphase model of HNS has been developed to extend current predictive capability.

More Details

Lessons learned on Human Reliability Analysis (HRA) methods from the International HRA Empirical Study

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Forester, J.A.; Lois, E.; Dang, V.N.; Bye, A.; Parry, G.; Julius, J.

In the International HRA Empirical Study, human reliability analysis (HRA) method predictions for human failure events (HFEs) in steam generator tube rupture and loss of feedwater scenarios were compared against the performance of real crews in a nuclear power plant control room simulator. The comparisons examined both the qualitative and quantitative HRA method predictions. This paper discusses some of the lessons learned about HRA methods that have been identified to date. General strengths and weaknesses of HRA methods are addressed, along with the reasons for any limitations in the predictive results produced by the methods. However, the discussions of the lessons learned in this paper must be considered a "snapshot." While most of the data has been analyzed, more detailed analyses of the results from specific HRA methods are ongoing and additional information may emerge.

More Details

Application of a field-based method to spatially varying thermal transport problems in molecular dynamics

Modelling and Simulation in Materials Science and Engineering

Templeton, Jeremy A.; Jones, Reese E.; Wagner, Gregory J.

This paper derives a methodology to enable spatial and temporal control of thermally inhomogeneous molecular dynamics (MD) simulations. The primary goal is to perform non-equilibrium MD of thermal transport analogous to continuum solutions of heat flow which have complex initial and boundary conditions, moving MD beyond quasi-equilibrium simulations using periodic boundary conditions. In our paradigm, the entire spatial domain is filled with atoms and overlaid with a finite element (FE) mesh. The representation of continuous variables on this mesh allows fixed temperature and fixed heat flux boundary conditions to be applied, non-equilibrium initial conditions to be imposed and source terms to be added to the atomistic system. In effect, the FE mesh defines a large length scale over which atomic quantities can be locally averaged to derive continuous fields. Unlike coupling methods which require a surrogate model of thermal transport like Fourier's law, in this work the FE grid is only employed for its projection, averaging and interpolation properties. Inherent in this approach is the assumption that MD observables of interest, e.g. temperature, can be mapped to a continuous representation in a non-equilibrium setting. This assumption is taken advantage of to derive a single, unified set of control forces based on Gaussian isokinetic thermostats to regulate the temperature and heat flux locally in the MD. Example problems are used to illustrate potential applications. In addition to the physical results, data relevant to understanding the numerical effects of the method on these systems are also presented. © 2010 IOP Publishing Ltd.
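The local-averaging step described above, atomic quantities projected onto an FE mesh to yield continuous fields, can be sketched in one dimension. This is a schematic of the projection idea only (linear hat shape functions, reduced units where kB = 1), not the authors' implementation:

```python
import numpy as np

def nodal_temperature(x_atoms, v_atoms, mass, nodes, kB=1.0):
    """Project per-atom kinetic energy onto a 1-D FE mesh using
    linear (hat) shape functions, giving a nodal temperature field
    of the kind the method's control forces regulate.
    Simplified 1-D sketch with illustrative units."""
    h = nodes[1] - nodes[0]                      # uniform spacing assumed
    T = np.zeros(len(nodes))
    w = np.zeros(len(nodes))
    for x, v in zip(x_atoms, v_atoms):
        for I, xn in enumerate(nodes):
            N = max(0.0, 1.0 - abs(x - xn) / h)  # hat shape function
            T[I] += N * mass * v * v             # twice the kinetic energy
            w[I] += N                            # shape-function weight
    # Equipartition in 1-D: <m v^2> = kB * T
    return T / (kB * np.maximum(w, 1e-300))
```

Because the mesh is used only for projection, averaging, and interpolation, no surrogate transport law (e.g. Fourier's law) appears anywhere in this step, which is the distinction the abstract draws against coupling methods.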

More Details

Architecture of PFC supports analogy, but PFC is not an analogy machine

Cognitive Neuroscience

Speed, Ann E.

In the preceding discussion paper, I proposed a theory of prefrontal cortical organization that was fundamentally intended to address the question: How does prefrontal cortex (PFC) support the various functions for which it seems to be selectively recruited? In so doing, I chose to focus on a particular function, analogy, that seems to have been largely ignored in the theoretical treatments of PFC, but that does underlie many other cognitive functions (Hofstadter, 2001; Holyoak & Thagard, 1997). At its core, this paper was intended to use analogy as a foundation for exploring one possibility for prefrontal function in general, although it is easy to see how the analogy-specific interpretation arises (as in the comment by Ibáñez). In an attempt to address this more foundational question, this response will step away from analogy as a focus, and will address first the various comments from the perspective of the initial motivation for developing this theory, and then specific issues raised by the commentators. © 2010 Psychology Press.

More Details

Fire-induced failure mode testing for dc-powered control circuits

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Nowlen, Steven P.; Taylor, Gabriel; Brown, Jason

The U.S. Nuclear Regulatory Commission, in concert with industry, continues to explore the effects of fire on electrical cable and control circuit performance. The latest efforts, which are currently underway, are exploring issues related to fire-induced cable failure modes and effects for direct current (dc) powered electrical control circuits. An extensive series of small and intermediate scale fire tests has been performed. Each test induced electrical failure in copper conductor cables of various types typical of those used by the U.S. commercial nuclear power industry. The cables in each test were connected to one of several surrogate dc control circuits designed to monitor and detect cable electrical failure modes and effects. The tested dc control circuits included two sets of reversing dc motor starters typical of those used in motor-operated valve (MOV) circuits, two small solenoid-operated valves (SOV), one intermediate size (1-inch (25.4mm) diameter) SOV, a very large direct-acting valve coil, and a switchgear/breaker unit. Also included was a specialized test circuit designed specifically to monitor for electrical shorts between two cables (inter-cable shorting). Each of these circuits was powered from a nominal 125V battery bank comprised of 60 individual battery cells (nominal 2V lead-acid type cells with plates made from a lead-cadmium alloy). The total available short circuit current at the terminals of the battery bank was estimated at 13,000A. All of the planned tests have been completed, and data analysis and reporting are underway. This paper will briefly describe the test program, some of the preliminary test insights, and planned follow-on activities.

More Details

Investigation of microcantilever array with ordered nanoporous coatings for selective chemical detection

Proceedings of SPIE - The International Society for Optical Engineering

Lee, J.H.; Houk, R.T.J.; Robinson, Alex; Greathouse, Jeffery A.; Thornberg, Steve M.; Allendorf, M.D.; Hesketh, P.J.

In this paper we demonstrate the potential for novel nanoporous framework materials (NFM) such as metal-organic frameworks (MOFs) to provide selectivity and sensitivity to a broad range of analytes including explosives, nerve agents, and volatile organic compounds (VOCs). NFM are highly ordered, crystalline materials with considerable synthetic flexibility resulting from the presence of both organic and inorganic components within their structure. Detection of chemical weapons of mass destruction (CWMD), explosives, toxic industrial chemicals (TICs), and VOCs using micro-electro-mechanical-systems (MEMS) devices, such as microcantilevers and surface acoustic wave sensors, requires the use of recognition layers to impart selectivity. Traditional organic polymers are dense, impeding analyte uptake and slowing sensor response. The nanoporosity and ultrahigh surface areas of NFM enhance transport into and out of the NFM layer, improving response times, and their ordered structure enables structural tuning to impart selectivity. Here we describe experiments and modeling aimed at creating NFM layers tailored to the detection of water vapor, explosives, CWMD, and VOCs, and their integration with the surfaces of MEMS devices. Force field models show that a high degree of chemical selectivity is feasible. For example, using a suite of MOFs it should be possible to select for explosives vs. CWMD, VM vs. GA (nerve agents), and anthracene vs. naphthalene (VOCs). We will also demonstrate the integration of various NFM with the surfaces of MEMS devices and describe new synthetic methods developed to improve the quality of NFM coatings. Finally, results from MOF-coated MEMS devices show how temperature can be tuned to improve response times, selectivity, and sensitivity. © 2010 Copyright SPIE - The International Society for Optical Engineering.

More Details

Pixelated spectral filter for integrated focal plane array in the long-wave IR

Proceedings of SPIE - The International Society for Optical Engineering

Kemme, Shanalyn A.; Boye, Robert; Cruz-Cabrera, Alvaro A.; Briggs, Ronald D.; Carter, T.R.; Samora, S.

We present the design, fabrication, and characterization of a pixelated, hyperspectral arrayed component for Focal Plane Array (FPA) integration in the Long-Wave IR. This device contains tens of pixels within a single super-pixel which is tiled across the extent of the FPA. Each spectral pixel maps to a single FPA pixel with a spectral FWHM of 200 nm. With this arrayed approach, remote sensing data may be accumulated with a non-scanning, "snapshot" imaging system. This technology is flexible with respect to individual pixel center wavelength and to pixel position within the array. Moreover, the entire pixel area has a single wavelength response, not the integrated linear response of a graded cavity thickness design. These requirements bar tilted, linear array technologies where the cavity length monotonically increases across the device. © 2010 Copyright SPIE - The International Society for Optical Engineering.

More Details

Readout IC requirement trends based on a simplified parametric seeker model

Proceedings of SPIE - The International Society for Optical Engineering

Osborn, Thor D.

More Details

A physics-based device model of transient neutron damage in bipolar junction transistors

IEEE Transactions on Nuclear Science

Keiter, Eric R.; Russo, Thomas V.; Hembree, Charles; Kambour, Kenneth E.

For the purpose of simulating the effects of neutron radiation damage on bipolar circuit performance, a bipolar junction transistor (BJT) compact model incorporating displacement damage effects and rapid annealing has been developed. A physics-based approach is used to model displacement damage effects, and this modeling approach is implemented as an augmentation to the Gummel-Poon BJT model. The model, implemented in the Xyce circuit simulator, is shown to agree well with experiments and TCAD simulation, and to be superior to a previous compact modeling approach. © 2010 IEEE.
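The textbook starting point for displacement-damage gain degradation, which a physics-based augmentation like this builds on, is the Messenger-Spratt relation: damage adds base recombination, so reciprocal gains add. A sketch of that baseline relation only (illustrative values, not the paper's full model, which also treats rapid annealing):

```python
def degraded_gain(beta0, fluence, K):
    """Messenger-Spratt estimate of post-irradiation current gain:
    displacement damage adds a recombination term, so reciprocal
    gains add:  1/beta = 1/beta0 + fluence/K.
    K is a device- and bias-dependent damage constant
    (units chosen so fluence/K is dimensionless)."""
    return 1.0 / (1.0 / beta0 + fluence / K)

# Illustrative values only (not from the paper)
print(degraded_gain(beta0=100.0, fluence=1e13, K=1e15))
```

Time-dependent annealing models generalize this by letting the effective damage term recover toward a permanent value after the burst, which is what makes a circuit-level transient model useful.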

More Details

Optimal utilization of heterogeneous resources for biomolecular simulations

2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2010

Hampton, Scott S.; Alam, Sadaf R.; Crozier, Paul; Agarwal, Pratul K.

Biomolecular simulations have traditionally benefited from increases in processor clock speed and coarse-grain inter-node parallelism on large-scale clusters. With stagnating clock frequencies, the evolutionary path for microprocessor performance is maintained by virtue of core multiplication. Graphical processing units (GPUs) offer revolutionary performance potential at the cost of increased programming complexity. Furthermore, it has been extremely challenging to effectively utilize heterogeneous resources (host processor and GPU cores) for scientific simulations, as the underlying systems, programming models, and tools are continually evolving. In this paper, we present a parametric study demonstrating approaches to exploit the resources of heterogeneous systems to reduce time-to-solution of a production-level application for biological simulations. By overlapping and pipelining computation and communication, we observe up to 10-fold application acceleration in multi-core and multi-GPU environments, illustrating significant performance improvements over code-acceleration approaches in which the host-to-accelerator ratio is static and constrained by a given algorithmic implementation. © 2010 IEEE.
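The overlap-and-pipeline strategy can be illustrated independently of GPUs: while one chunk is being computed, the next chunk's transfer proceeds concurrently. A minimal double-buffered sketch in plain Python (real codes would use CUDA streams and pinned buffers; all names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined(chunks, transfer, compute):
    """Double-buffered pipeline: while chunk i is being computed,
    chunk i+1 is transferred in the background, hiding transfer
    latency behind computation. Schematic of the overlap idea only."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(transfer, chunks[0])   # prefetch first chunk
        for i in range(len(chunks)):
            data = pending.result()                # wait for transfer i
            if i + 1 < len(chunks):
                pending = io.submit(transfer, chunks[i + 1])  # overlap
            results.append(compute(data))          # compute while i+1 moves
    return results

# Toy stand-ins for host-to-device transfer and device compute
out = pipelined([1, 2, 3], transfer=lambda c: c * 10, compute=lambda d: d + 1)
print(out)
```

The contrast drawn in the abstract is that a static host-to-accelerator split fixes this schedule at compile time, whereas a parametric approach can rebalance the pipeline per system and per problem size.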

More Details

A parametric study of the impact of various error contributions on the flux distribution of a solar dish concentrator

ASME 2010 4th International Conference on Energy Sustainability, ES 2010

Andraka, Charles E.; Yellowhair, Julius; Iverson, Brian D.

Dish concentrators can produce highly concentrated flux for the operation of an engine, a chemical process, or other energy converter. The high concentration allows a small aperture to control thermal losses, and permits high temperature processes at the focal point. A variety of optical errors can influence the flux pattern both at the aperture and at the absorber surface. Impacts of these errors can be lost energy (intercept losses), aperture compromise (increased size to accommodate flux), high peak fluxes (leading to part failure or life reduction), and improperly positioned flux, which also leads to component failure. Optical errors can include small scale facet errors ("waviness"), facet shape errors, alignment (facet pointing) errors, structural deflections, and tracking errors. The errors may be random in nature, or may be systematic. The various sources of errors are often combined in a "root-mean-squared" process to present a single number as an "error budget". However, this approach ignores the fact that various errors can influence the performance in different ways, and can mislead the designer, leading to component damage or poor performance in a fielded system. In this paper, we model a hypothetical radial gore dish system using Sandia's CIRCE2 optical code. We evaluate the peak flux and incident power through the aperture and onto various parts of the receiver cavity. We explore the impact of different error sources on the character of the flux pattern, and demonstrate the limitations of lumping all of the errors into a single error budget. © 2010 by ASME.
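The "root-mean-squared" lumping the paper critiques looks like this: independent error sources are combined into a single number, discarding their character. A toy illustration (hypothetical numbers) of two budgets with identical RSS but very different physical consequences:

```python
import math

def rss(errors):
    """Root-sum-square 'error budget' (e.g. in mrad), treating all
    error sources as independent and interchangeable."""
    return math.sqrt(sum(e * e for e in errors.values()))

# Two hypothetical budgets with identical RSS but different character:
# fine-scale waviness mainly spreads the flux spot, while a tracking
# bias shifts the whole flux pattern relative to the aperture.
budget_a = {"waviness": 2.0, "alignment": 1.0, "tracking": 0.5}
budget_b = {"waviness": 0.5, "alignment": 1.0, "tracking": 2.0}
print(rss(budget_a), rss(budget_b))   # equal RSS, unequal consequences
```

Since both budgets collapse to the same scalar, a designer working only from the RSS number cannot distinguish a spread-flux system from an off-pointed one, which is exactly the failure mode the parametric study exposes.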

More Details