Priorities for Technology Development and Policy To Reduce the Risk from Radioactive Materials
Abstract not provided.
Abstract not provided.
To test the hypothesis that high-quality 3D Earth models produce seismic event locations that are both more accurate and more precise, we are developing a global 3D P-wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel-time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray-path redundancy and to reduce the computational burden, we cluster rays to produce representative rays; this reduces the total number of ray paths by approximately 50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which supports variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform ak135 mantle. Sufficient damping is used to limit velocity adjustments so that ray-path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last, we limit the number of iterations to prevent full convergence, thereby preserving broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both the ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method. We compare the travel-time prediction and location capabilities of SALSA3D to those of standard 1D models via location tests on a global event set with GT accuracy of 5 km or better. These events generally possess hundreds of Pn and P picks, from which we generate different realizations of station distributions, yielding a range of azimuthal coverage and of ratios of teleseismic to regional arrivals with which to test the robustness and quality of relocation. The SALSA3D model reduces mislocation relative to the standard 1D ak135 model regardless of the Pn-to-P ratio, with the improvement most pronounced at higher azimuthal gaps.
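To make the damped-inversion step concrete, here is a minimal sketch of a single damped least-squares tomographic update. The matrix shapes, damping value, and random stand-in data are illustrative assumptions, not SALSA3D's actual implementation.

```python
import numpy as np

def damped_lsq_update(G, residuals, damping):
    """One damped least-squares update: solve (G^T G + damping^2 I) dm = G^T r.

    G         : (n_rays, n_nodes) matrix of ray-path lengths through model nodes
    residuals : (n_rays,) observed-minus-predicted travel times (s)
    damping   : scalar; larger values yield smaller velocity adjustments,
                keeping ray-path changes between iterations small
    """
    n = G.shape[1]
    A = G.T @ G + damping**2 * np.eye(n)
    dm = np.linalg.solve(A, G.T @ residuals)   # slowness perturbation per node
    return dm

# Illustrative use with random data (a stand-in for real ray geometry):
rng = np.random.default_rng(0)
G = rng.random((200, 50))          # 200 rays, 50 model nodes
r = rng.normal(0.0, 0.5, 200)      # travel-time residuals (s)
dm = damped_lsq_update(G, r, damping=10.0)
```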
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Scripta Materialia
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The ion photon emission microscope (IPEM), a new radiation effects microscope for imaging single event effects from penetrating radiation, is being developed at Sandia National Laboratories and implemented on the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory. The microscope is designed to permit direct correlation between the locations of high-energy heavy-ion strikes and single event effects in microelectronic devices. Its development has required the production of a robust optical system compatible with the ion beam lines, the design and assembly of a fast, single-photon-sensitive measurement system to provide the necessary coincidence, and the development and testing of many scintillating films. A wide range of scintillating materials for the ion photon emission microscope has been tested, with few meeting the stringent radiation hardness, intensity, and photon lifetime requirements. The initial results of these luminescence studies and the current operation of the ion photon emission microscope will be presented. Finally, planned development of future microscopes and ion luminescence testing chambers will be discussed.
Abstract not provided.
Plasma spray coating techniques allow unique control of electrolyte microstructures and properties and facilitate deposition on complex surfaces. This can enable significantly improved solid oxide fuel cells (SOFCs), including non-planar designs. SOFCs are promising because they convert the oxidation of fuel directly into electrical energy. However, electrolytes deposited using conventional plasma spray are porous and often greater than 50 microns thick. One way to form dense, thin electrolytes of ideal composition for SOFCs is to combine suspension plasma spray (SPS) with very low pressure plasma spray (VLPPS). Increased compositional control is achieved because dopant compounds dissolved in the suspension are incorporated into the coating during plasma spraying; it is thus possible to change the chemistry of the feedstock during deposition. In the work reported here, suspensions of sub-micron-diameter 8 mol% Y2O3-ZrO2 (YSZ) powders were sprayed onto NiO-YSZ anodes at the Sandia National Laboratories (SNL) Thermal Spray Research Laboratory (TSRL). These coatings were compared to the same suspensions doped with scandium nitrate at 3 to 8 mol%. The chamber pressure was 2.4 Torr, and the plasma was formed from a combination of argon and hydrogen gases. The resulting electrolytes were well adhered to the anode substrates and were approximately 10 microns thick. The microstructure of these electrolytes will be reported, as well as their performance as part of an SOFC system via potentiodynamic testing and impedance spectroscopy.
Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete a triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis of the expected running time on a class of random graphs, including power-law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM, each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and self-loops and multi-edges are erased. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact, we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power-law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus, for this class of power-law graphs with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α at which the clustering coefficient tends to zero asymptotically, and it lies in the range relevant to the degree distribution of the World-Wide Web.
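A minimal sketch of the bucketing scheme described above follows; the data structures and tie-breaking rule (degree, then node label) are an illustrative rendering, not Cohen's original code.

```python
from itertools import combinations

def cohen_triangles(adj):
    """List each triangle once using Cohen's bucket method.

    adj: dict mapping node -> set of neighbors (simple undirected graph).
    Each edge is placed in the bucket of its lower-degree endpoint,
    with ties broken consistently by node label; each pair of edges in
    a bucket is then tested for the adjacency that closes a triangle.
    """
    rank = {v: (len(adj[v]), v) for v in adj}   # (degree, label): consistent tie-break
    bucket = {v: [] for v in adj}
    for u in adj:
        for w in adj[u]:
            if rank[u] < rank[w]:               # store each edge once, at its lower endpoint
                bucket[u].append(w)
    triangles = []
    for v, nbrs in bucket.items():
        for a, b in combinations(nbrs, 2):      # pairs of edges sharing bucket node v
            if b in adj[a]:                     # adjacency test completes the triangle
                triangles.append((v, a, b))
    return triangles

# Example: a 4-clique contains four triangles, each listed exactly once.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(cohen_triangles(adj))
```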
Abstract not provided.
This presentation will discuss two tensor decompositions that are not as well known as PARAFAC (parallel factors) and Tucker, but have proven useful in informatics applications. Three-way DEDICOM (decomposition into directional components) is an algebraic model for the analysis of 3-way arrays with nonsymmetric slices. PARAFAC2 is a related model that is less constrained than PARAFAC and allows for different objects in one mode. Applications of both models to informatics problems will be shown.
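For reference, the three-way DEDICOM model for a three-way array with slices X_k, and the PARAFAC2 model it is contrasted with, are commonly written as

```latex
\text{DEDICOM:}\quad X_k \approx A\,D_k\,R\,D_k\,A^{\mathsf T},
\qquad
\text{PARAFAC2:}\quad X_k \approx U_k\,S_k\,V^{\mathsf T}
\ \ \text{with}\ \ U_k^{\mathsf T}U_k = \Phi \ \text{for all } k,
```

where A (n x r) holds loadings for the n objects, R (r x r) is an asymmetric matrix capturing directional interactions among the r latent components, and each D_k is diagonal, weighting the components in slice k. In PARAFAC2, each S_k is diagonal and the constant cross-product constraint on U_k is what permits different objects in one mode.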
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
For practical quantum computing, it will be necessary to detect the fluorescence from many trapped ions. We describe a design and integration approach using micro-optics to couple this fluorescence into an array of optical fibers.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Nuclear Science
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Graphene has emerged as a promising material for high-speed nano-electronics due to the relatively high carrier mobility that can be achieved. To further investigate electronic transport in graphene and reveal its potential for microwave applications, we employed a near-field scanning microwave microscope whose probe is formed by the electrically open end of a 4 GHz half-wavelength parallel-strip transmission-line resonator. Because of the balanced probe geometry, our microscope allows truly localized quantitative characterization of various bulk and low-dimensional materials, with the response region defined by the one-micron spacing between the two metallic strips at the probe tip. The single- and few-layer graphene flakes were fabricated by mechanical cleavage on 300-nm-thick silicon dioxide grown on a low-resistivity Si wafer. The flake thickness was determined using both AFM and Raman microscopy. We observe a clear correlation between the near-field microwave and far-field optical images of graphene, produced by the probe resonant frequency shift and thickness-defined color gradation, respectively. We show that the microwave response of graphene flakes is determined by the local sheet impedance, which is found to be predominantly active. Furthermore, we apply a quantitative electrodynamic model relating the probe resonant frequency shift to the 2D conductivity of single- and few-layer graphene. By fitting the model to the experimental data, we evaluate graphene sheet resistance as a function of thickness. Near-field scanning microwave microscopy can simultaneously image the location, geometry, thickness, and distribution of electrical properties of graphene without the need for device fabrication. The approach may be useful for the design of graphene-based microwave transistors, quality control of large-area graphene sheets, and investigation of chemical and electrical doping effects on graphene transport properties. We acknowledge support from the DOE Center for Integrated Nanotechnologies user support program (grant No. U2008A061), the NASA NM Space Grant Consortium program, and the LANL-NMT MOU program supported by UCDRD.
IEEE Security and Privacy
Abstract not provided.
Abstract not provided.
Abstract not provided.
International Journal of Engineering Science
Abstract not provided.
This report documents the progression of crude oil phase behavior modeling within the U.S. Strategic Petroleum Reserve (SPR) vapor pressure program during the period 2004-2009. Improvements in quality control on phase behavior measurements in 2006, coupled with a growing body of degasification plant operations data, have created a solid measurement baseline that has served to inform and significantly improve the project's understanding of the phase behavior of SPR oils. Systematic tuning of the model based on proven practices from the technical literature has been shown to reduce model bias and to match observed data well, though this tuning effort is still in progress at SPR and is based on preliminary data. The current report addresses many of the steps that have helped build a strong baseline of data, coupled with sufficient understanding of model features to make calibration possible.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Applied Physics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Applied Physics B
Abstract not provided.
Abstract not provided.
Optics Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Thin, small-form-factor solar cells have recently been investigated by several research groups around the world because of their potential for lower assembly costs and reduced material consumption along with higher efficiencies. Given the popularity of these devices, it is important to have detailed information about their behavior. Simulation of fabrication processes and device performance reveals some of the advantages and behavior of solar cells that are thin and small. Three main effects were studied: the effect of surface recombination on the optimum thickness, efficiency, and current density; the effect of contact distance on the efficiency of thin cells; and the effect of surface recombination on grams per watt-peak. Results show that high efficiency can be obtained in thin devices if they are well passivated and the distance between contacts is short. Furthermore, the grams-per-watt-peak ratio is greatly reduced as the device is thinned.
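To make the grams-per-watt-peak metric concrete, here is a minimal sketch of the underlying arithmetic; the density, thickness, and efficiency values are illustrative assumptions for a silicon-like absorber, not the parameters of the simulated devices.

```python
# Grams per watt-peak for a cell of thickness t:
#   mass per m^2     = density * t
#   peak W per m^2   = efficiency * 1000 W/m^2 (one-sun irradiance)
DENSITY = 2.33e6      # g/m^3, crystalline silicon (illustrative)
ONE_SUN = 1000.0      # W/m^2

def grams_per_wp(thickness_m, efficiency):
    return DENSITY * thickness_m / (efficiency * ONE_SUN)

# Thinning the device sharply reduces g/Wp:
for t_um, eff in [(200, 0.18), (50, 0.18), (20, 0.20)]:
    print(f"{t_um:4d} um, eta={eff:.2f}: {grams_per_wp(t_um * 1e-6, eff):.3f} g/Wp")
```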
We present a newly developed microsystems-enabled, back-contacted, shade-free GaAs solar cell. Using microsystem tools, we created sturdy 3 µm thick devices with lateral dimensions of 250 µm, 500 µm, 1 mm, and 2 mm. The fabrication procedure and the results of characterization tests are discussed. The highest-efficiency cell had a lateral size of 500 µm and, under one-sun illumination, a conversion efficiency of 10%, an open-circuit voltage of 0.9 V, and a current density of 14.9 mA/cm².
Abstract not provided.
Abstract not provided.
Analysis of 50 mm diameter wire arrays at the Z Accelerator has shown experimentally the accretion of mass in a stagnating z-pinch and provided insight into details of the radiating plasma species and plasma conditions. This analysis focused on nested wire arrays with a 2:1 (outer:inner) mass, radius, and wire number ratio, where Al wires were fielded on the outer array and Ni-clad Ti wires were fielded on the inner array. In this presentation, we will present analysis of data from other mixed Al/Ni-clad Ti configurations to further evaluate nested wire array dynamics and mass accretion. These additional configurations include the opposite configuration to that described above (Ni-clad Ti wires on the outer array, with Al wires on the inner array) as well as higher-wire-number Al configurations fielded to vary the interaction of the two arrays. These same variations were also assessed for a smaller-diameter (40 mm) nested array configuration. Variations in the emitted radiation and plasma conditions will be presented, along with a discussion of what the results indicate about the nested array dynamics. Additional evidence for mass accretion will also be presented.
The planar wire array research on Zebra at UNR that began in 2005 continues with experiments on new types of planar loads, yielding results for consideration and comprehensive analysis [see, for example, Kantsyrev et al., HEDP 5, 115 (2009)]. Detailed studies of the radiative properties of such loads are important, and spectroscopy and imaging constitute a valuable and informative diagnostic tool. A set of theoretical codes has been implemented that provides non-LTE kinetics, wire-ablation dynamics, and MHD modeling. This talk is based on the results of recent experiments with planar wire arrays on Zebra at UNR. We start with results on the radiative properties of a uniform single planar wire array (SPWA) made from alloyed Al wires and move to combined triple planar wire arrays (TPWAs) made from two materials, Cu and Al. Such a combined TPWA includes three planar wire rows that are parallel to each other, each made of either Cu or alloyed Al wires. Three different configurations (Al/Cu/Al, Cu/Al/Cu, and Cu/Cu/Al) are considered and compared with each other and with results from SPWAs of the same materials. X-ray time-gated and time-integrated pinhole images and spectra are analyzed together with bolometer, PCD, and XRD measurements and optical images. Emphasis is placed on the radiative properties and the temporal and spatial evolution of plasma parameters of such two-component plasmas. Opacity effects are considered, and the important question of what causes K-shell Al lines to be optically thin in combined TPWAs is addressed. In conclusion, the new findings from studying multi-planar wire array implosions are summarized and their contribution to Z-pinch radiation physics is discussed.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Supercomputers are composed of many diverse components, operated at a variety of scales, and function as a coherent whole. The resulting logs are thus diverse in format, interrelated at multiple scales, and provide evidence of faults across subsystems. When combined with system configuration information, insights into both the downstream effects and the upstream causes of events can be determined. However, difficulties in joining the data and expressing complex queries slow the speed at which actionable insights can be obtained. Effectively connecting data experts and data miners faces similar hurdles. This paper describes our experience applying the Splunk log analysis tool as a vehicle for bringing together both data and people. Splunk's search language, lookups, macros, and subsearches reduce hours of tedium to seconds of simplicity, and its tags, saved searches, and dashboards offer both operational insights and vehicles for collaboration.
Abstract not provided.
Abstract not provided.
Circuit simulation codes, such as SPICE, are invaluable in the development and design of electronic circuits in radiation environments. These codes are often employed to study the effect of many thousands of devices under transient current conditions. Device-scale simulation codes are commonly used in the design of individual semiconductor components, but computational requirements limit their use to small-scale circuits. Analytic solutions to the ambipolar diffusion equation, an approximation to the carrier transport equations, may be used to characterize the transient currents at nodes within a circuit simulator. We present new analytic transient excess-carrier-density and photocurrent solutions to the ambipolar diffusion equation for 1-D abrupt-junction pn diodes. These solutions incorporate low-level radiation pulses and take into account a finite device geometry, ohmic fields outside the depleted region, and an arbitrary change in the carrier lifetime due to neutron irradiation or other effects. The solutions are specifically evaluated for the case of an abrupt change in the carrier lifetime during or after a step, square, or piecewise-linear radiation pulse. Noting slow convergence of the Fourier series solutions for some parameter sets, we evaluate portions of the solutions using closed-form formulas, which results in a two-order-of-magnitude increase in computational efficiency.
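For context, the 1-D ambipolar diffusion equation for the excess carrier density δp(x, t) is commonly written (with a drift term for the ohmic field outside the depletion region) as

```latex
\frac{\partial\,\delta p}{\partial t}
  = D_a\,\frac{\partial^2\,\delta p}{\partial x^2}
  - \mu_a E\,\frac{\partial\,\delta p}{\partial x}
  - \frac{\delta p}{\tau}
  + g(x,t),
```

where D_a and μ_a are the ambipolar diffusivity and mobility, E is the ohmic field, τ is the carrier lifetime (allowed here to change abruptly under neutron irradiation), and g(x, t) is the radiation-induced generation rate; the photocurrent then follows from the minority-carrier flux at the depletion-region edge.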
Abstract not provided.
Abstract not provided.
International Journal for Numerical Methods in Fluids
Abstract not provided.
Abstract not provided.
A2BLnX6 elpasolites (A, B: alkali; Ln: lanthanide; X: halogen), LaBr3 (lanthanum bromide), and AX alkali halides are three classes of ionic compound crystals being explored for γ-ray detection applications. Elpasolites are attractive because they can be optimized through combinations of four different elements. One design goal is to create cubic crystals, which have isotropic optical properties and can be grown into large crystals at lower cost. Unfortunately, many elpasolites are not cubic, and the experimental trial-and-error approach to finding cubic elpasolites has been prolonged and inefficient. LaBr3 is attractive due to its established good scintillation properties. The problem is that this brittle material is not only prone to fracture during service but also difficult to grow into large crystals, resulting in high production costs. Unfortunately, it is not always clear how to strengthen LaBr3 because its fracture mechanisms are poorly understood. The problem with alkali halides is that their properties degrade rapidly over time, especially in harsh environments. Here we describe our recent progress on the development of atomistic models that may begin to enable the prediction of crystal structures and the study of fracture mechanisms of multi-element compounds.
Journal of the American Ceramic Society
Abstract not provided.
Abstract not provided.
Journal of Energy Security
Abstract not provided.
Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, and scholarly research. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.
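As an illustration of the kind of text-modeling pipeline such an engine parallelizes, here is a minimal serial sketch of latent semantic analysis (term-document matrix plus truncated SVD); this is a generic example, not ParaText's API.

```python
from collections import Counter
import numpy as np

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "reports model risk assessment",
        "risk models inform decision making"]

# Build a term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w, c in Counter(d.split()).items():
        X[index[w], j] = c

# Latent semantic analysis: truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T   # documents in k-dim latent space
print(doc_coords)                          # semantically similar docs cluster
```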
Goal: design methods to characterize and identify low-dimensional representations of graphs. Impact: enabling predictive simulation, monitoring dynamics on graphs, and sampling and recovering network structure from limited observations. Areas to explore: (1) enabling technologies - develop novel algorithms and tailor existing ones for complex networks; (2) modeling and generation - identify the right parameters for graph representation, and develop algorithms to compute these parameters and to generate graphs from them; and (3) comparison - given two graphs, how do we tell whether they are similar? Some conclusions: (1) a bad metric can make anything look good; (2) a metric based on edge-by-edge prediction will suffer from the skewed distribution of present and absent edges; (3) sparsity is the dominant signal, and edges only add noise on top of it, so the real signal - the structure of the graph - is often lost behind the dominant one; and (4) the proposed alternative is comparison based on a carefully chosen set of features: it is more efficient but sensitive to the selection of features, finding an independent set of features is an important open area, and further results are forthcoming.
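A minimal sketch of feature-based graph comparison in the spirit of conclusion (4) follows; the particular feature set (density, degree moments, clustering) is an illustrative choice, not the project's final selection.

```python
import numpy as np

def graph_features(A):
    """Feature vector for an undirected simple graph (0/1 adjacency matrix)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    triangles = np.trace(A @ A @ A) / 6.0          # each triangle counted 6 times
    wedges = (deg * (deg - 1)).sum() / 2.0         # paths of length 2
    return np.array([
        A.sum() / (n * (n - 1)),                   # edge density
        deg.mean(),                                # average degree
        deg.std(),                                 # degree spread
        3.0 * triangles / wedges if wedges else 0.0,  # global clustering coeff.
    ])

def graph_distance(A1, A2):
    """Compare graphs by feature vectors rather than edge-by-edge overlap."""
    return np.linalg.norm(graph_features(A1) - graph_features(A2))

# Illustrative use on two random graphs of the same size:
rng = np.random.default_rng(1)
A = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1); A = A + A.T
B = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1); B = B + B.T
print(graph_distance(A, B))
```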
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Physical Review B
Abstract not provided.
Abstract not provided.
Journal of Applied Physics
Abstract not provided.
Physical Review B
Abstract not provided.
Physical Review Letters
Abstract not provided.
Abstract not provided.
Threshold stress intensity factors were measured in high-pressure hydrogen gas for a variety of low-alloy ferritic steels using both constant crack opening displacement and rising crack opening displacement procedures. The sustained-load cracking procedures are generally consistent with those in ASME Article KD-10 of Section VIII Division 3 of the Boiler and Pressure Vessel Code, which was recently published to guide design of high-pressure hydrogen vessels. Three definitions of threshold were established for the two test methods: K_THi* is the maximum applied stress intensity factor for which no crack extension was observed under constant displacement; K_THa is the stress intensity factor at the arrest position for a crack that extended under constant displacement; and K_JH is the stress intensity factor at the onset of crack extension under rising displacement. The apparent crack initiation threshold under constant displacement, K_THi*, and the crack arrest threshold, K_THa, were both found to be non-conservative due to the hydrogen exposure and crack-tip deformation histories associated with typical procedures for sustained-load cracking tests under constant displacement. In contrast, K_JH, which is measured under concurrent rising displacement and hydrogen gas exposure, provides a more conservative hydrogen-assisted fracture threshold that is relevant to structural components in which sub-critical crack extension is driven by internal hydrogen gas pressure.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Insider threats often target authentication and access control systems, which are frequently based on directory services. Detecting these threats is challenging, because malicious users with the technical ability to modify these structures often have sufficient knowledge and expertise to conceal unauthorized activity. The use of directory virtualization to monitor various systems across an enterprise can be a valuable tool for detecting insider activity. The addition of a policy engine to directory virtualization services enhances monitoring capabilities by allowing greater flexibility in analyzing changes for malicious intent. The resulting architecture is a system-based approach, where the relationships and dependencies between data sources and directory services are used to detect an insider threat, rather than simply relying on point solutions. This paper presents such an architecture in detail, including a description of implementation results.
Journal of Fluid Mechanics
Abstract not provided.
In addressing the hazard categorization of the Z Accelerator for Special Nuclear Material (SNM) experiments, the question arose as to whether the machine could be fired with its central vacuum chamber open, thus providing a path for airborne release of SNM. In this report we summarize calculations showing that we could expect a maximum current of only 460 kA into such a load in the long-pulse mode, which will be used for the SNM experiments, and 750 kA in a short-pulse mode, which is not useful for these experiments. We also investigated the effect of the current on the loads for both cases and found that in neither case is the current high enough to melt or vaporize them, the melt threshold being 1.6 MA. Therefore, a necessary condition to melt, vaporize, or otherwise disperse SNM material is that a vacuum must exist in the Z vacuum chamber. Thus the vacuum chamber serves as a passive feature that prevents any airborne release during the shot, regardless of whatever containment may be in place.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The aim of this project is to develop low-dimensional parametric (deterministic) models of complex networks, using compressive sensing (CS) and multiscale analysis and exploiting the structure of complex networks (some are self-similar under coarsening). CS provides a new way of sampling and reconstructing networks. The approach is based on a multiresolution decomposition of the adjacency matrix and its efficient sampling. It requires preprocessing the adjacency matrix to make it 'blocky', which is the biggest (combinatorial) algorithmic challenge. The current CS reconstruction algorithm makes no use of the structure of a graph; it is very general and therefore not especially efficient or customized. Other model-based CS techniques exist but have not yet been adapted to networks. The obvious starting point for future work is to increase the efficiency of reconstruction.
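As a generic illustration of CS reconstruction, here is a textbook sparse-recovery sketch (iterative soft thresholding for the l1-regularized least-squares problem); it is deliberately structure-agnostic, like the general-purpose algorithm criticized above, and is not the project's graph-aware method.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Recover a sparse vector from a small number of random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                         # signal size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))        # small reconstruction error
```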
Abstract not provided.
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating them using those data sets. Our models are intended to describe the high-speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving, and flow branches. They fall into three categories: (1) network flow models, in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes; (2) CFD (computational fluid dynamics) models, in which flow in and between vessels is modeled in three dimensions; and (3) coupled network/CFD models, in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we used NETFLOW as the network flow code and FUEGO as the CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to capture important physics the model would otherwise miss. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and, for the first time, demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.
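As a minimal illustration of the control-volume approach used in network flow models, the sketch below integrates a single adiabatic vessel venting through a choked orifice; the gas properties and geometry are illustrative assumptions, not a NETFLOW model.

```python
import numpy as np

# Illustrative single-vessel choked blowdown: the simplest control-volume
# analogue of a network-flow vessel node (air, 1-L vessel, 1-mm orifice).
GAMMA, R = 1.4, 287.0                       # cp/cv, gas constant J/(kg K)
V, CD = 1.0e-3, 0.9                         # vessel volume m^3, discharge coeff.
A = np.pi * (0.5e-3) ** 2                   # orifice area, 1-mm diameter
CHOKED = (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
CRIT = ((GAMMA + 1.0) / 2.0) ** (GAMMA / (GAMMA - 1.0))  # choking pressure ratio

def blowdown(p0=10e5, T0=300.0, p_back=1e5, dt=1e-4):
    m0 = p0 * V / (R * T0)                  # initial ideal-gas mass in vessel
    m, p, T, t = m0, p0, T0, 0.0
    while p > CRIT * p_back:                # orifice flow remains choked
        mdot = CD * A * p * np.sqrt(GAMMA / (R * T)) * CHOKED  # choked mass flow
        m -= mdot * dt
        T = T0 * (m / m0) ** (GAMMA - 1.0)  # isentropic expansion in the vessel
        p = m * R * T / V
        t += dt
    return t, p, T

print(blowdown())   # time until flow unchokes, final pressure and temperature
```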
A persistent challenge in simulating damage of natural geological materials, as well as rock-like engineered materials, is the development of efficient and accurate constitutive models. The common feature of these brittle and quasi-brittle materials is the presence of flaws such as porosity and networks of microcracks. The desired models need to be able to predict the material response over a wide range of porosities and strain rates. Kayenta (formerly called the Sandia GeoModel) is a unified general-purpose constitutive model that strikes a balance between first-principles micromechanics and phenomenological or semi-empirical modeling strategies. However, despite its sophistication and ability to reduce to several classical plasticity theories, Kayenta is incapable of modeling deformation of ductile materials, in which deformation is dominated by dislocation generation and movement that can lead to significant heating. This stems from Kayenta's roots as a geological model, where heating due to inelastic deformation is often neglected or presumed to be incorporated implicitly through the elastic moduli. The sophistication of Kayenta and its large set of features, however, make it an attractive candidate model to which thermal effects can be added. This report outlines the initial work in doing just that: extending the capabilities of Kayenta to the deformation of ductile materials, for which thermal effects cannot be neglected. Thermal effects are included under an assumption of adiabatic loading by computing the bulk and thermal responses of the material with the Kerley Mie-Grueneisen equation of state and adjusting the yield surface according to the updated thermal state, as sketched below. This new version of Kayenta, referred to as Thermo-Kayenta throughout this report, is capable of reducing to classical Johnson-Cook plasticity in special-case single-element simulations and has been used to obtain reasonable results in more complicated Taylor impact simulations in LS-DYNA. Despite these successes, however, Thermo-Kayenta requires additional refinement before it is consistent in the thermodynamic sense and can be considered superior to other, more mature thermoplastic models. The initial thermal development, results, and required refinements are all detailed in the following report.
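For reference, under the adiabatic assumption the temperature update takes the standard Taylor-Quinney form (the symbols here are generic, not Thermo-Kayenta's internal variable names):

```latex
\Delta T \;=\; \frac{\beta\,\boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{\,p}\,\Delta t}{\rho\, c_v},
```

where β is the fraction of plastic work converted to heat (the Taylor-Quinney coefficient, often taken near 0.9), σ : ε̇ᵖ is the plastic work rate, ρ is the mass density, and c_v is the specific heat; the updated temperature then enters the Mie-Grueneisen equation of state and the thermally adjusted yield surface.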
Physical Review B
Abstract not provided.
Abstract not provided.
U.S. energy needs - minimizing climate change, mining and extraction technologies, safe waste disposal - require the ability to simulate, model, and predict the behavior of subsurface systems. We propose the development of a coupled thermal, hydrological, mechanical, and chemical (THMC) modeling capability for massively parallel applications that can address these critical needs. The goal and expected outcome of this research is a state-of-the-art, extensible simulation capability, based upon SIERRA Mechanics, to address multiphase, multicomponent reactive transport coupled to nonlinear geomechanics in heterogeneous (geologic) porous materials. The THMC code provides a platform for integrating research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture.
Line-of-sight jitter in staring-sensor data, combined with scene content, can obscure critical information for change analysis or target detection. Consequently, the jitter effects must be significantly reduced before the data are analyzed. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation but none of the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene and then computes derivatives of that image in the horizontal and vertical directions. The basis set for estimating the background and the jitter consists of the image and its derivative factors. This approach has several advantages: (1) only a small number of images are required to develop the model; (2) the model can estimate backgrounds with jitter different from that in the input training images; (3) the method is particularly effective for sub-pixel jitter; and (4) the model can be developed from images acquired before the change detection process. In addition, the scores from projecting the frames onto the derivative factors provide estimates of the jitter magnitude and direction for registration of the images. In this paper we present the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
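A minimal sketch of the derivative-factor idea follows: to first order, a frame shifted by a sub-pixel amount equals the reference image plus a weighted sum of its spatial derivatives, so the fitted weights estimate the jitter. The synthetic scene and half-pixel shift are illustrative, not the paper's data.

```python
import numpy as np

def estimate_jitter(ref, frame):
    """Estimate sub-pixel (row, col) shift of `frame` relative to `ref`.

    Uses the first-order model  frame ~ a*ref + b*dref/drow + c*dref/dcol,
    so the reference image and its two derivatives form the basis set and
    the fitted scores estimate jitter magnitude and direction.
    """
    gr, gc = np.gradient(ref)                      # row and column derivatives
    B = np.column_stack([ref.ravel(), gr.ravel(), gc.ravel()])
    coef, *_ = np.linalg.lstsq(B, frame.ravel(), rcond=None)
    a, b, c = coef
    return -b / a, -c / a       # sign: frame(x) = ref(x - shift) to first order

# Synthetic test: a smooth scene shifted by (0.3, -0.2) pixels via Fourier shift.
n = 64
y, x = np.mgrid[0:n, 0:n]
scene = np.sin(2 * np.pi * x / 17.0) * np.cos(2 * np.pi * y / 23.0)
shift = (0.3, -0.2)
fr = np.fft.fftfreq(n)
phase = np.exp(-2j * np.pi * (shift[0] * fr[:, None] + shift[1] * fr[None, :]))
shifted = np.fft.ifft2(np.fft.fft2(scene) * phase).real
print(estimate_jitter(scene, shifted))             # approx (0.3, -0.2)
```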
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia National Laboratories (SNL) Technical Area V (TA-V) has provided unique nuclear experimental environments for decades. The technologies tested in TA-V facilities have furthered the United States nuclear weapons program and have contributed to the national energy and homeland security missions. The importance of TA-V working efficiently to provide an attractive and effective platform for experiments should not be underestimated. Throughout its history, TA-V has evolved to address multiple and diverse sets of requirements. These requirements evolved over many years; however, they had been neither managed nor communicated comprehensively or effectively. A series of programmatic findings over several years of external audits was evidence of this shortfall. Today, these same requirements flow down through a new TA-V management system that produces consistently applied and reproducible approaches to work practices. In 2008, the TA-V department managers assessed the state of TA-V services and work activities to understand how to improve customer interfaces, stakeholders' perceptions, and workforce efficiencies. The TA-V management team initiated the TA-V Transformation Project after deeming the pre-June 2008 operational model ineffective at managing work and at providing integrated, continuous improvement to TA-V processes. This report summarizes the TA-V Transformation Project's goals, activities, and accomplishments.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Polynomial chaos expansion provides a means of representing any L2 random variable as a sum of polynomials that are orthogonal with respect to a chosen measure. Examples include the Hermite polynomials with Gaussian measure on the real line and the Legendre polynomials with uniform measure on an interval. Polynomial chaos can be used to reformulate an uncertain ODE system, via Galerkin projection, as a new, higher-dimensional, deterministic ODE system that describes the evolution of each mode of the polynomial chaos expansion. It is of interest to explore the eigenstructure of the original and reformulated ODE systems by studying the eigenvalues and eigenvectors of their Jacobians. In this talk, we study the distribution of the eigenvalues of the two Jacobians. We outline in general the location of the eigenvalues of the new system with respect to those of the original system and examine the effect of expansion order on this distribution.
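A minimal sketch of this comparison for the scalar ODE dx/dt = -k x with a Gaussian-uncertain rate k and probabilists' Hermite chaos follows; the expansion order and the coefficients of k are illustrative choices, not the talk's settings.

```python
import numpy as np
from math import factorial

P = 4                                       # expansion order (modes 0..P)

def triple(i, j, k):
    """<He_i He_j He_k> for probabilists' Hermite polynomials, unit Gaussian."""
    s = i + j + k
    if s % 2 or max(i, j, k) > s // 2:      # parity and triangle conditions
        return 0.0
    s //= 2
    return (factorial(i) * factorial(j) * factorial(k)
            / (factorial(s - i) * factorial(s - j) * factorial(s - k)))

# Uncertain rate k = 1.0 + 0.3*xi (mean 1, Gaussian spread 0.3).
k_modes = np.zeros(P + 1)
k_modes[0], k_modes[1] = 1.0, 0.3

# Galerkin projection of dx/dt = -k x onto He_j (note <He_j^2> = j!):
#   dx_j/dt = -(1/j!) * sum_{i,l} k_i x_l <He_i He_l He_j>
J = np.zeros((P + 1, P + 1))
for j in range(P + 1):
    for l in range(P + 1):
        J[j, l] = -sum(k_modes[i] * triple(i, l, j)
                       for i in range(P + 1)) / factorial(j)

# The original (mean-k) Jacobian eigenvalue is -1.0; the reformulated
# deterministic system's eigenvalues spread around it, and the spread
# changes with the expansion order P.
print(np.linalg.eigvals(J))
```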
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The 2010 NPR and President Obama's 2009 Prague speech highlighted two key objectives with an inherent underlying tension: (1) moving toward a world free of nuclear weapons; and (2) sustaining a safe, secure, and effective nuclear arsenal. The first objective depends, inter alia, upon reductions in stockpiles at home and abroad and upon maintaining stability. The second depends upon needed investments in modernization and life extension. These objectives are being pursued predominantly in parallel, by largely separate communities.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Directory services are used by almost every enterprise computing environment to provide data concerning users, computers, contacts, and other objects. Virtual directories are components that provide directory services in a highly customized manner. Unfortunately, though the use of virtual directory services is widespread, an analysis of the risks posed by their unique position and architecture has not been completed. We present a detailed analysis of six attacks on virtual directory services, including steps for detection and prevention. We also describe various categories of attack risks and discuss what is necessary to launch an attack on virtual directories. Finally, we present a framework for analyzing risks to individual enterprise computing virtual directory instances. We show how to apply this framework to an example implementation and discuss the benefits of doing so.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Nuclear Science
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The present paper is the second in a series published at I/ITSEC that seeks to explain the efficacy of multi-role experiential learning employed to create engaging game-based training methods transitioned to the U.S. Army, U.S. Army Special Forces, Civil Affairs, and Psychological Operations teams. The first publication (I/ITSEC 2009) summarized findings from a quantitative study that investigated experiential learning in the multi-player, PC-based game module transitioned to PEO-STRI, DARWARS Ambush! NK (non-kinetic). The 2009 publication reported that participants in multi-role (Player and Reflective Observer/Evaluator) game-based training reported statistically significant learning and engagement. Additionally, when the means of the two groups (Player and Reflective Observer/Evaluator) were compared, they were not statistically significantly different from each other. That is, both playing and observing/evaluating were engaging learning modalities. The Observer/Evaluator role was designed to provide an opportunity for real-time reflection and meta-cognitive learning during game play. Results indicated that this role was an engaging way to learn about communication, that participants learned something about cultural awareness, and that the skills they learned were helpful in problem solving and decision making.
The present paper seeks to deepen understanding of what and how users of non-kinetic game-based missions learn by revisiting the 2009 quantitative study with further investigation, including stochastic player-performance analysis using latent semantic analysis and graph visualizations. The results are applicable to first-person game-based learning systems designed to enhance trainee intercultural communication, interpersonal skills, and adaptive thinking. In the full paper, we discuss results obtained from data collected from 78 research participants of diverse backgrounds who trained by engaging in tasks directly, as well as by observing and evaluating peer performance in real time. The goal is two-fold: first, to quantify and visualize detailed player-performance data drawn from game-play transcription in order to shed further light on the results of the 2009 I/ITSEC paper; and second, to develop this quantification and visualization approach into a generalized application tool to aid future games' development of player/learner models and game adaptation algorithms.
Specifically, this paper addresses questions such as: “Are there significant differences in one's experience when an experiential learning task is observed first, and then performed by the same individual?” “Are there significant differences among groups participating in different roles in non-kinetic engagement training, especially when one role requires more active participation than the other?” “What is the impact of behavior modeling on learning in games?” In answering these questions, the present paper reinforces the 2009 empirical study's conclusion that, contrary to current trends in military game development, experiential learning is enhanced by innovative training approaches designed to facilitate trainee mastery of reflective observation and abstract conceptualization as much as performance-based skills.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physics of Plasmas
Abstract not provided.
Abstract not provided.
Abstract not provided.
For oxy-combustion with flue gas recirculation, as is commonly employed, it is recognized that elevated CO2 levels affect radiant transport, the heat capacity of the gas, and other gas transport properties. A topic of widespread speculation has been the effect of the CO2 gasification reaction with coal char on the char burning rate. To clarify the likely impact of this reaction on the oxy-fuel combustion of pulverized coal char, the Surface Kinetics in Porous Particles (SKIPPY) code was employed for a range of potential CO2 reaction rates for a high-volatile bituminous coal char particle (130 µm diameter) reacting in several O2 concentration environments. The effects of boundary layer chemistry are also examined in this analysis. Under oxygen-enriched conditions, boundary layer reactions (converting CO to CO2, with concomitant heat release) are shown to increase the char particle temperature and burning rate while decreasing the O2 concentration at the particle surface. The CO2 gasification reaction acts to reduce the char particle temperature (because of the reaction's endothermicity) and thereby reduces the rate of char oxidation. Interestingly, the presence of the CO2 gasification reaction increases the char conversion rate for combustion at low O2 concentrations but decreases char conversion for combustion at high O2 concentrations. These calculations give new insight into the complexity of the effects of the CO2 gasification reaction and should help improve the understanding of experimentally measured oxy-fuel char combustion and burnout trends in the literature.
Abstract not provided.
Abstract not provided.
In an effort to develop photo-responsive composites, the UV photo-reduction of aqueous titanium oxide nanoparticle-graphene oxide (TiO2-GO) dispersions (Lambert et al., J. Phys. Chem., 2010, 113(46), 19812-19823) was undertaken. Photo-reduction led to the formation of a black precipitate as well as a soluble portion comprised of titanium oxide nanoparticle-reduced graphene oxide (TiO2-RGO). When allowed to slowly evaporate, self-assembled titanium oxide nanoparticle-reduced graphene oxide (SA-TiO2-RGO) films formed at the air-liquid interface of the solution. The SA-TiO2-RGO films range from approximately 30-100 nm thick when deposited on substrates and appear to be comprised of a mosaic assembly of graphene nanosheets and TiO2, as observed by scanning electron microscopy. Raman spectroscopy and X-ray photoelectron spectroscopy indicate that the graphene oxide is only partially reduced in the SA-TiO2-RGO material. These films were also deposited onto interdigitated electrodes and their photo-responsive behavior was examined. UV exposure led to an approximately 200 kΩ decrease in resistance across the device, resulting in a cathodically biased film. The cathodic bias of the films was utilized for the subsequent reduction of AgNO3 into silver (Ag) nanoparticles, forming a ternary Ag-(SA-TiO2-RGO) composite. Various aspects of the self-assembled films, their photoconductive properties, and potential applications will be presented.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The performance of the Neosonic polymer Li-ion battery was measured using a number of tests, including capacity, capacity as a function of temperature, ohmic resistance, spectral impedance, a hybrid pulsed power test, a utility partial-state-of-charge (PSOC) pulsed cycle test, and an over-charge/voltage abuse test. The goal of this work was to evaluate the performance of polymer Li-ion battery technology for utility applications requiring frequent charges and discharges, such as voltage support, frequency regulation, wind farm energy smoothing, and solar photovoltaic energy smoothing. Test results indicated that the Neosonic polymer Li-ion battery technology can provide power levels up to the 10C1 discharge rate with minimal energy loss compared to the 1 h (1C) discharge rate. Two of the three cells used in the utility PSOC pulsed cycle test completed about 12,000 cycles with only gradual capacity losses of 10% and 13%. The third cell experienced a 40% loss in capacity at about 11,000 cycles. The DC ohmic resistance and AC spectral impedance measurements also indicate that impedance increased after cycling, especially for the third cell, whose series resistance Rs increased significantly along with extensive ballooning of the foil pouch. Finally, at a 1C (10 A) charge rate, the over-charge/voltage abuse test, with cell confinement similar to that in a multi-cell string, resulted in the cell venting hot gases at about 45 °C, 45 minutes into the test. At 104 minutes into the test the cell voltage spiked to the 12-volt limit and remained there to the end of the test at 151 minutes. In summary, the Neosonic cells performed as expected, with good cycle life and safety.
Abstract not provided.
The Nuclear Posture Review (NPR) is designed to make the world safer by reducing the role of U.S. nuclear weapons and reducing the salience of nuclear weapons generally. The U.S. also seeks to maintain a credible nuclear deterrent and to reinforce regional security architectures with missile defenses and other conventional military capabilities. But recent studies suggest that nuclear proliferation is a direct response to the perceived threat of U.S. conventional capabilities, not the U.S. nuclear stockpile. If this is true, then the intent of the NPR to reduce the role and numbers of nuclear weapons and strengthen conventional military capabilities may actually make the world less safe. The first stated objective of the NPR is to reduce the role and numbers of U.S. nuclear weapons, reduce the salience of nuclear weapons, and move step by step toward eliminating them. The second stated objective is a reaffirmation of the U.S. commitment to maintaining a strong deterrent, which forms the basis of U.S. assurances to allies and partners. The pathway - made explicit throughout the NPR - for reducing the role and numbers of nuclear weapons while maintaining a credible nuclear deterrent and reinforcing regional security architectures is to give conventional forces and capabilities and missile defenses (i.e., non-nuclear elements) a greater share of the deterrence burden.
Abstract not provided.
ACS Nano
Abstract not provided.
Abstract not provided.
The authors have detected magnetic fields from the human brain with a compact, fiber-coupled rubidium spin-exchange-relaxation-free magnetometer. Optical pumping is performed on the D1 transition and Faraday rotation is measured on the D2 transition. The beams share an optical axis, with dichroic optics preparing beam polarizations appropriately. A sensitivity of <5 fT/√Hz is achieved. Evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer. Recordings were validated by comparison with those taken by a commercial magnetoencephalography system. The design is amenable to arraying sensors around the head, providing a framework for noncryogenic, whole-head magnetoencephalography.
Abstract not provided.
IEEE Transactions on Nanotechnology
Abstract not provided.