Fixture for characterizing electrochemical devices in-operando in traditional vacuum systems
Review of Scientific Instruments
Abstract not provided.
Review of Scientific Instruments
Abstract not provided.
Advanced Functional Materials
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
SPIE
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In a world plagued with improvised explosive devices, drugs and dangerous people, the desire to field technology to protect our police and military is providing a fertile market for the proliferation of protection technologies that range from the unproven to the disproven. The marketplace is currently being flooded with detection equipment making inflated and inaccurate marketing claims of high reliability, high detection probabilities, and ease of operation - all while offering detection capabilities at safe distances. The manufacturers of these devices have found a willing global marketplace, which includes some of the most dangerous places in the world. Despite a wealth of contradictory performance and testing data available on the Internet, sales of these devices remain brisk and profitable. Rather than enhancing the security of police and military personnel, the reliance on these unproven and disproven devices is creating a sense of false security that is actually lowering the safety of front-line forces in places like Iraq and Afghanistan. This paper addresses the development and distribution history of some of these devices and describes the testing, performed by Sandia National Laboratories in Albuquerque and other reputable testing agencies, that illustrates the real danger in using this kind of unproven technology.
Abstract not provided.
Journal of Chemical Physics
Abstract not provided.
Physical Chemistry Chemical Physics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Probabilistic Engineering Mechanics
Abstract not provided.
Abstract not provided.
Theoretical Computer Science
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review A
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Engineering Applications of Artificial Intelligence
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Materials Science
Abstract not provided.
The role of crystal coherence length on the infrared optical response of MgO thin films was investigated with regard to Reststrahlen band photon-phonon coupling. Preferentially (001)-oriented sputtered and evaporated ion-beam assisted deposited thin films were prepared on silicon and annealed to vary film microstructure. Film crystalline coherence was characterized by x-ray diffraction line broadening and transmission electron microscopy. The infrared dielectric response revealed a strong dependence of dielectric resonance magnitude on crystalline coherence. Shifts to lower transverse optical phonon frequencies were observed with increased crystalline coherence. Increased optical phonon damping is attributed to increasing granularity and intergrain misorientation.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This paper compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods although one can select the model that yields the smallest sum-of-squared error function. Information theoretic approaches balance overfitting with model accuracy by incorporating terms that penalize more parameters with a log-likelihood term to reflect goodness of fit. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model and this is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared and the influence of the log-likelihood term in all approaches is discussed.
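To make the comparison concrete, the following is a minimal Python sketch (not from the paper) of the least squares and information theoretic routes: each candidate model is calibrated by least squares, and AIC/BIC then balance a Gaussian log-likelihood against parameter count. The polynomial candidate models, data, and parameter count k are illustrative assumptions.

import numpy as np

def aic_bic(y, y_hat, k):
    """AIC/BIC from a Gaussian log-likelihood; k = number of model parameters."""
    n = len(y)
    sse = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)  # MLE noise variance
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)   # truth is linear

for degree in (1, 2, 5):                          # candidate polynomial models
    coeffs = np.polyfit(x, y, degree)             # least-squares calibration
    aic, bic = aic_bic(y, np.polyval(coeffs, x), degree + 1)
    print(f"degree {degree}: AIC = {aic:.1f}  BIC = {bic:.1f}")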
Physical Review B
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
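The weighted formulation is easy to state in code. Below is a minimal sketch of the idea, assuming a small dense tensor and plain fixed-step gradient descent; CP-WOPT itself uses a more capable first-order optimizer and scales to large sparse data. All dimensions, the mask density, and the step size are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 8, 8, 8, 2
A0, B0, C0 = (rng.standard_normal((n, R)) for n in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)        # low-rank ground truth
W = (rng.random((I, J, K)) > 0.5).astype(float)   # 1 = known entry, 0 = missing

A, B, C = (0.1 * rng.standard_normal((n, R)) for n in (I, J, K))
for _ in range(5000):                              # fixed-step gradient descent
    T = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))   # masked residual
    A = A + 0.005 * np.einsum('ijk,jr,kr->ir', T, B, C)
    B = B + 0.005 * np.einsum('ijk,ir,kr->jr', T, A, C)
    C = C + 0.005 * np.einsum('ijk,ir,jr->kr', T, A, B)

rel = (np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C))
       / np.linalg.norm(X))
print(f"relative reconstruction error over all entries: {rel:.3e}")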
Recent work on eigenvalues and eigenvectors for tensors of order m >= 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^{m-1} = lambda*x subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
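A bare-bones version of the shifted iteration for m = 3 can be written in a few lines. This sketch assumes a random symmetric tensor and a hand-picked shift alpha; the paper supplies the convergence theory and a principled way to choose the shift.

import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n, n))
A = (A + A.transpose(0, 2, 1) + A.transpose(1, 0, 2) +
     A.transpose(1, 2, 0) + A.transpose(2, 0, 1) + A.transpose(2, 1, 0)) / 6.0
# A is now symmetric under every permutation of its three indices.

alpha = 5.0                                   # shift (assumed large enough)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(500):
    Ax2 = np.einsum('ijk,j,k->i', A, x, x)    # A x^{m-1} for m = 3
    x = Ax2 + alpha * x                       # shifted power step
    x /= np.linalg.norm(x)

Ax2 = np.einsum('ijk,j,k->i', A, x, x)
lam = x @ Ax2                                 # eigenvalue lambda = A x^m
print(f"lambda = {lam:.6f}, residual = {np.linalg.norm(Ax2 - lam * x):.2e}")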
Physical Chemistry Chemical Physics
Abstract not provided.
The precipitation of Ag₂Te in a PbTe matrix is investigated using electron microscopy and atom probe tomography. We observe the formation of oriented nanoscale Ag₂Te precipitates in PbTe. These precipitates initially form as coherent spherical nanoparticles and evolve into flattened semi-coherent disks during coarsening. This change in morphology is consistent with equilibrium shape theory for coherently strained precipitates. Upon annealing at elevated temperatures these precipitates eventually revert to an equiaxed morphology. We suggest this shape change occurs once the precipitates grow beyond a critical size, making it favorable to relieve the elastic coherency strains by forming interfacial misfit dislocations. These investigations of the shape and coherency of Ag₂Te precipitates in PbTe should prove useful in the design of nanostructured thermoelectric materials.
Abstract not provided.
Physical Review Letters
Abstract not provided.
Acta Materialia
Abstract not provided.
Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.
Abstract not provided.
Cyber security analysis tools are necessary to evaluate the security, reliability, and resilience of networked information systems against cyber attack. It is common practice in modern cyber security analysis to separately utilize real systems of computers, routers, switches, firewalls, computer emulations (e.g., virtual machines) and simulation models to analyze the interplay between cyber threats and safeguards. In contrast, Sandia National Laboratories has developed novel methods to combine these evaluation platforms into a hybrid testbed that combines real, emulated, and simulated components. The combination of real, emulated, and simulated components enables the analysis of security features and components of a networked information system. When performing cyber security analysis on a system of interest, it is critical to realistically represent the subject security components in high fidelity. In some experiments, the security component may be the actual hardware and software with all the surrounding components represented in simulation or with surrogate devices. Sandia National Laboratories has developed a cyber testbed that combines modeling and simulation capabilities with virtual machines and real devices to represent, in varying fidelity, secure networked information system architectures and devices. Using this capability, secure networked information system architectures can be represented in our testbed on a single, unified computing platform. This provides an 'experiment-in-a-box' capability. The result is rapidly-produced, large-scale, relatively low-cost, multi-fidelity representations of networked information systems. These representations enable analysts to quickly investigate cyber threats and test protection approaches and configurations.
This 1/2 day workshop will survey various applications of XRD analysis, including in-situ analyses and neutron diffraction. The analyses will include phase ID, crystallite size and microstrain, preferred orientation and texture, lattice parameters and solid solutions, and residual stress. Brief overviews of high-temperature in-situ analysis, neutron diffraction and synchrotron studies will be included.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review A
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The analysis of networked activities is dramatically more challenging than many traditional kinds of analysis. A network is defined by a set of entities (people, organizations, banks, computers, etc.) linked by various types of relationships. These entities and relationships are often uninteresting alone, and only become significant in aggregate. The analysis and visualization of these networks is one of the driving factors behind the creation of the Titan Toolkit. Given the broad set of problem domains and the wide ranging databases in use by the information analysis community, the Titan Toolkit's flexible, component based pipeline provides an excellent platform for constructing specific combinations of network algorithms and visualizations.
Physica D
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review B
Abstract not provided.
This research explores the thermodynamics, economics, and environmental impacts of innovative, stationary, polygenerative fuel cell systems (FCSs). Each main report section is split into four subsections. The first subsection, 'Potential Greenhouse Gas (GHG) Impact of Stationary FCSs,' quantifies the degree to which GHG emissions can be reduced at a U.S. regional level with the implementation of different FCS designs. The second subsection, 'Optimizing the Design of Combined Heat and Power (CHP) FCSs,' discusses energy network optimization models that evaluate novel strategies for operating CHP FCSs so as to minimize (1) electricity and heating costs for building owners and (2) emissions of the primary GHG - carbon dioxide (CO₂). The third subsection, 'Optimizing the Design of Combined Cooling, Heating, and Electric Power (CCHP) FCSs,' is similar to the second subsection but is expanded to include capturing FCS heat with absorptive cooling cycles to produce cooling energy. The fourth subsection, 'Thermodynamic and Chemical Engineering Models of CCHP FCSs,' discusses the physics and thermodynamic limits of CCHP FCSs.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Many possible applications requiring or benefiting from a wireless network are available for bolstering physical security and awareness at high security installations or facilities. These enhancements are not always straightforward and may require careful analysis, selection, tuning, and implementation of wireless technologies. In this paper, an introduction to wireless networks and the task of enhancing physical security is first given. Next, numerous applications of a wireless network are brought forth. The technical issues that arise when using a wireless network to support these applications are then discussed. Finally, a summary is presented.
Abstract not provided.
Abstract not provided.
International Journal for Numerical Methods in Fluids
Abstract not provided.
Attractive for numerous technological applications, ferroelectric oxides constitute an important class of multifunctional compounds. Intense experimental efforts have been made recently in synthesizing, processing and understanding ferroelectric nanostructures. This work will present the systematic characterization and optimization of barium titanate and lead lanthanum zirconate titanate nanoparticle based ceramics. The nanoparticles have been synthesized using several solution and pH-based synthesis processing routes and employed to fabricate polycrystalline ceramic and nanocomposite based components. The dielectric and ferroelectric properties of these various components have been gauged by impedance analysis and electromechanical response and will be discussed.
Journal of Electroceramics
Abstract not provided.
Abstract not provided.
Abstract not provided.
Technologies that have been developed for microelectromechanical systems (MEMS) have been applied to the fabrication of field desorption arrays. These techniques include the use of thick films for enhanced dielectric stand-off, as well as an integrated gate electrode. The increased complexity of MEMS fabrication provides enhanced design flexibility over traditional methods.
Abstract not provided.
Nano-materials have shown unique crystallite-dependent properties that present distinct advantages for dielectric applications. PLZT is an excellent dielectric material used in several applications and may benefit from crystallite engineering; however, complex systems such as PLZT require well-controlled synthesis techniques. An aqueous-based synthesis route has been developed, using standard precursor chemicals and scalable techniques to produce large batch sizes. The synthesis will be briefly covered, followed by a more in-depth discussion of incorporating nanocrystalline PLZT into a working device. Initial electrical properties will be presented illustrating the potential benefits and associated difficulties of working with PLZT nano-materials.
Abstract not provided.
Abstract not provided.
This report summarizes the current statistical analysis capability of OVIS and how it works in conjunction with the OVIS data readers and interpolators. It also documents how to extend these capabilities. OVIS is a tool for parallel statistical analysis of sensor data to improve system reliability. Parallelism is achieved using a distributed data model: many sensors on similar components (metaphorically sheep) insert measurements into a series of databases on computers reserved for analyzing the measurements (metaphorically shepherds). Each shepherd node then processes the sheep data stored locally and the results are aggregated across all shepherds. OVIS uses the Visualization Tool Kit (VTK) statistics algorithm class hierarchy to perform analysis of each process's data but avoids VTK's model aggregation stage, which uses the Message Passing Interface (MPI); this is because if a single process in an MPI job fails, the entire job will fail. Instead, OVIS uses asynchronous database replication to aggregate statistical models. OVIS has several additional features beyond those present in VTK that, first, accommodate its particular data format and, second, improve the memory and speed of the statistical analyses. First, because many statistical algorithms are multivariate in nature and sensor data is typically univariate, interpolation of data is required to provide simultaneous observations of metrics. Note that in this report, we will refer to a single value obtained from a sensor as a measurement, while a collection of multiple sensor values simultaneously present in the system is an observation. A base class for interpolation is provided that abstracts the operation of converting multiple sensor measurements into simultaneous observations. A concrete implementation is provided that performs piecewise constant temporal interpolation of multiple metrics across a single component. Second, because calculations may summarize data too large to fit in memory, OVIS analyzes batches of observations at a time and aggregates these intermediate intra-process models as it goes before storing the final model for inter-process aggregation via database replication. This reduces the memory footprint of the analysis, interpolation, and the database client and server query processing. This also interleaves processing with the disk I/O required to fetch data from the database - also improving speed. This report documents how OVIS performs analyses and how to create additional analysis components that fetch measurements from the database, perform interpolation, or perform operations on streamed observations (such as model updates or assessments). The rest of this section outlines the OVIS analysis algorithm and is followed by sections specific to each subtask. Note that we are limiting our discussion for now to the creation of a model from a set of measurements, and not including the assessment of observations using a model. The same framework can be used for assessment, but that use case is not detailed in this report.
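The batch-then-aggregate pattern described above can be illustrated with a simple moment model. The sketch below is my notation, not OVIS code: it shows an intra-process incremental update (Welford's method) and an order-independent merge of partial models, which is the property that allows aggregation via database replication rather than MPI.

from dataclasses import dataclass

@dataclass
class MomentModel:
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0          # sum of squared deviations from the mean

    def update(self, batch):                 # intra-process model update
        for x in batch:
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)   # Welford's online update

    def merge(self, other):                  # inter-process aggregation
        n = self.n + other.n
        d = other.mean - self.mean
        self.mean += d * other.n / n
        self.m2 += other.m2 + d * d * self.n * other.n / n
        self.n = n

shepherd_a, shepherd_b = MomentModel(), MomentModel()
shepherd_a.update([1.0, 2.0, 3.0])
shepherd_b.update([4.0, 5.0])
shepherd_a.merge(shepherd_b)                 # same result as one big batch
print(shepherd_a.n, shepherd_a.mean, shepherd_a.m2 / (shepherd_a.n - 1))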
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The observation and characterization of a single atom system in silicon is a significant landmark in half a century of device miniaturization, and presents an important new laboratory for fundamental quantum and atomic physics. We compare measurements of the spectrum of a single two-electron (2e) atom system in silicon - a negatively charged (D-) gated Arsenic donor in a FinFET - with multi-million atom tight binding (TB) calculations. The TB method captures accurate single electron eigenstates of the device taking into account device geometry, donor potentials, applied fields, interfaces, and the full host bandstructure. In a previous work, the depths and fields of As donors in six device samples were established through excited state spectroscopy of the D0 electron and comparison with TB calculations. Using self-consistent field (SCF) TB, we computed the charging energies of the D- electron for the same six device samples, and found good agreement with the measurements. Although a bulk donor has only a bound singlet ground state and a charging energy of about 40 meV, calculations show that a gated donor near an interface can have a reduced charging energy and bound excited states in the D- spectrum. Measurements indeed reveal reduced charging energies and bound 2e excited states, at least one of which is a triplet. The calculations also show the influence of the host valley physics in the two-electron spectrum of the donor.
This paper proposes a definition of 'IA and IA-enabled products' based on threat, as opposed to 'security services' (i.e., 'confidentiality, authentication, integrity, access control or non-repudiation of data'), as provided by Department of Defense (DoD) Instruction 8500.2, 'Information Assurance (IA) Implementation.' The DoDI 8500.2 definition is too broad, making it difficult to distinguish products that need higher protection from those that do not. As a consequence the products that need higher protection do not receive it, increasing risk. The threat-based definition proposed in this paper solves those problems by focusing attention on threats, thereby moving beyond compliance to risk management. (DoDI 8500.2 provides the definitions and controls that form the basis for IA across the DoD.) Familiarity with 8500.2 is assumed.
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review Letters
Abstract not provided.
Abstract not provided.
Journal of Chemical Physics
We present numerical estimates of the leading two- and three-body dispersion energy terms in van der Waals interactions for a broad variety of molecules and solids. The calculations are based on London and Axilrod-Teller-Muto expressions where the required interatomic dispersion energy coefficients, C6 and C9, are computed "on the fly" from the electron density. Inter- and intramolecular energy contributions are obtained using the Tang-Toennies (TT) damping function for short interatomic distances. The TT range parameters are equally extracted on the fly from the electron density using their linear relationship to van der Waals radii. This relationship is empirically determined for all the combinations of He-Xe rare gas dimers, as well as for the He and Ar trimers. The investigated systems include the S22 database of noncovalent interactions, Ar, benzene and ice crystals, bilayer graphene, C60 dimer, a peptide (Ala10), an intercalated drug-DNA model [ellipticine-d(CG)₂], 42 DNA base pairs, a protein (DHFR, 2616 atoms), double stranded DNA (1905 atoms), and 12 molecular crystal polymorphs from crystal structure prediction blind test studies. The two- and three-body interatomic dispersion energies are found to contribute significantly to binding and cohesive energies; for bilayer graphene the latter reaches 50% of the experimentally derived binding energy. These results suggest that interatomic three-body dispersion potentials should be accounted for in atomistic simulations when modeling bulky molecules or condensed phase systems. © 2010 American Institute of Physics.
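For reference, the two expressions being combined can be written out directly. The sketch below uses illustrative values of C6, C9, and the TT range parameter b; in the paper these quantities are obtained on the fly from the electron density, and the three-body term is damped as well.

import math

def tang_toennies(n, b, r):
    """TT damping: f_n(b*r) = 1 - exp(-b*r) * sum_{k=0}^{n} (b*r)^k / k!."""
    br = b * r
    return 1.0 - math.exp(-br) * sum(br**k / math.factorial(k) for k in range(n + 1))

def e2_london(c6, b, r):
    """Damped two-body (London) dispersion energy: -f6(b*r) * C6 / r^6."""
    return -tang_toennies(6, b, r) * c6 / r**6

def e3_atm(c9, r12, r13, r23, cos1, cos2, cos3):
    """Axilrod-Teller-Muto triple-dipole term (undamped here for brevity)."""
    return c9 * (1.0 + 3.0 * cos1 * cos2 * cos3) / (r12 * r13 * r23) ** 3

# e.g. an equilateral triangle of atoms (all interior angles 60 degrees):
print(e2_london(c6=6.5, b=2.0, r=3.0))
print(e3_atm(c9=10.0, r12=3.0, r13=3.0, r23=3.0, cos1=0.5, cos2=0.5, cos3=0.5))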
SAE International Journal of Engines
This paper presents experimental results for two fuel-related topics in a diesel engine: (1) how fuel volatility affects the premixed burn and heat release rate, and (2) how ignition quality influences the soot formation. Fast evaporation of fuel may lead to more intense heat release if a higher percentage of the fuel is mixed with air to form a combustible mixture. However, if the evaporation of fuel is driven by mixing with high-temperature gases from the ambient, a high-volatility fuel will require less oxygen entrainment and mixing for complete vaporization and, consequently, may not have potential for significant heat release simply because it has vaporized. Fuel cetane number changes also cause uncertainty regarding soot formation because variable ignition delay will change levels of fuel-air mixing prior to combustion. To address these questions, experiments are performed using a constant-volume combustion chamber simulating typical low-temperature-combustion (LTC) diesel conditions. We use fuels that have the same ignition delay (and therefore similar time for premixing with air), but different fuel volatility, to assess the heat-release rate and spatial location of combustion. Under this condition, where fuel volatility is decoupled from the ignition delay, results show almost the same heat release rate and spatial location of the premixed burn. The effect of ignition quality on soot formation has also been studied while maintaining similar levels of fuel-ambient mixing prior to combustion. To achieve the same ignition delay, the high-cetane-number fuel is injected into an ambient gas at a lower temperature and vice versa. The total soot mass within the spray is measured and compared for fuels with different cetane numbers but with the same premixing level (e.g. the same ignition delay and lift-off length). Experimental results show that the combination of high cetane number and low ambient gas temperature produces lower soot than the other combination, because the ambient temperature predominantly affects soot formation.
IEEE Aerospace Conference Proceedings
As the U.S. and the International Community come to grips with anthropogenic climate change, it will be necessary to develop accurate techniques with global span for remote measurement of emissions and uptake of greenhouse gases (GHGs), with special emphasis on carbon dioxide. Presently, techniques exist for in situ and local remote measurements. The first steps towards expansion of these techniques to span the world are only now being taken with the launch of satellites with the capability to accurately measure column abundances of selected GHGs, including carbon dioxide. These satellite sensors do not directly measure emissions and uptake. The satellite data, appropriately filtered and processed, provide only one necessary, but not sufficient, input for the determination of emission and uptake rates. Optimal filtering and processing is a challenge in itself. But these data must be further combined with output from data-assimilation models of atmospheric structure and flows in order to infer emission and uptake rates for relevant points and regions. In addition, it is likely that substantially more accurate determinations would be possible given the addition of data from a sparse network of in situ and/or upward-looking remote GHG sensors. We will present the most promising approaches we've found for combining satellite, in situ, fixed remote sensing, and other potentially available data with atmospheric data-assimilation and backward-dispersion models for the purpose of determination of point and regional GHG emission and uptake rates. We anticipate that the first application of these techniques will be to GHG management for the U.S. Subsequent application may be to confirmation of compliance of other nations with future international GHG agreements. ©2010 IEEE.
The research described in this report developed the theoretical and conceptual framework for understanding, recognizing, and anticipating the origins, dynamic mechanisms, perceptions, and social structures of Islamic social reform movements in the Muslim homeland and in diaspora communities. This research has revealed valuable insights into the dynamic mechanisms associated with reform movements and, as such, offers the potential to provide indications and warnings of impending violence. This study produced the following significant findings: (1) A framework for understanding Islamic radicalization in the context of Social Movement Theory was developed and implemented. This framework provides a causal structure for the interrelationships among the myriad features of a social movement. (2) The degree to which movement-related activity shows early diffusion across multiple social contexts is a powerful distinguisher of successful and unsuccessful social movements. Indeed, this measurable appears to have significantly more predictive power than volume of such activity and also more power than various system intrinsics. (3) Significant social movements can occur only if both the intra-context 'infectivity' of the movement exceeds a certain threshold and the inter-context interactions associated with the movement occur with a frequency that is larger than another threshold. Note that this is reminiscent of, and significantly extends, well-known results for epidemic thresholds in disease propagation models. (4) More in-depth content analysis of blogs through the lens of Argumentation Theory has the potential to reveal new insights into radicalization in the context of Social Movement Theory. This connection has the potential to be of value from two important perspectives - first, this connection has the potential to provide more in depth insights into the forces underlying the emergence of radical behavior and second, this connection may provide insights into how to use the blogosphere to influence the emergent dialog to effectively impact the resulting actions taken by the potential radicals. The authors of this report recognize that Islamic communities are not the only source of radicalism; indeed many other groups, religious and otherwise, have used and continue to use, radicalism to achieve their ends. Further, the authors also recognize that not all Muslims use, or condone the use of, radical behavior. Indeed, only a very small segment of the Muslim communities throughout the world use and/or support such behavior. Nevertheless, the focus of this research is, indeed, on understanding, recognizing, and anticipating the origins, dynamic mechanisms, perceptions, and social structures of Islamic radicalism.
Propellants, Explosives, Pyrotechnics
There are numerous applications for small-scale actuation utilizing pyrotechnics and explosives. In certain applications, especially when multiple actuation strokes are needed, or actuator reuse is required, it is desirable to have all gaseous combustion products with no condensed residue in the actuator cylinder. Toward this goal, we have performed experiments on utilizing milligram quantities of high explosives to drive a millimeter-diameter actuator with a stroke of 30 mm. Calculations were performed to select proper material quantities to provide 0.5 J of actuation energy. This was performed utilizing the thermochemical code Cheetah to calculate the impetus for numerous propellants and to select quantities based on estimated efficiencies of these propellants at small scales. Milligram quantities of propellants were loaded into a small-scale actuator and ignited with an ignition increment and hot wire ignition. Actuator combustion chamber pressure was monitored with a pressure transducer and actuator stroke was monitored using a laser displacement meter. Total actuation energy was determined by calculating the kinetic energy of reaction mass motion against gravity. Of the materials utilized, the best performance was obtained with a mixture of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20) and bis-triaminoguanidinium (3,3'-dinitroazotriazolate) (TAGDNAT). © 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Proceedings - Annual Reliability and Maintainability Symposium
Many people, when thinking about different stages of a particular device's life vis-à-vis defectiveness, use the notion of the "bathtub curve" as a model. However this model is not fully applicable for the class of systems referred to as one-shot or single-shot systems. Key attributes of these systems are outlined in [1]: they typically stay in dormant storage until called upon for one-time use. Common examples of one-shot devices are air-bags in vehicles, fire suppression systems, certain types of safety features in nuclear power plants, missiles, thermal batteries, and some stand-by systems. This paper will focus on a particular example of one-shot systems, nuclear weapons, but the concepts presented are relevant for one-shot devices in general. A new model will be proposed as an alternative to the bathtub curve for one-shot systems. The new model includes two regimes: birth defect dominated and time-dependent dominated. A short discussion of why a bathtub curve might mistakenly be inferred is included. Finally, the relationship between inherent and estimated reliability will be described in the context of this model. ©2010 IEEE.
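A minimal quantitative form of the proposed two-regime idea might look like the sketch below, where a birth-defect fraction p0 is present from time zero and a time-dependent (Weibull wear-out) term dominates later in dormant storage. The functional form and parameter values are my illustrative assumptions, not values from the paper.

import math

def one_shot_reliability(t, p0=0.01, scale=40.0, shape=3.0):
    """P(success | fired at age t) = (1 - p0) * Weibull survival term."""
    return (1.0 - p0) * math.exp(-((t / scale) ** shape))

for t in (0, 10, 20, 40):
    print(f"age {t:>2} yr: R = {one_shot_reliability(t):.4f}")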
Journal of Nuclear Materials Management
Education and training are the foundation for a state's development and maintenance of an indigenous capability to conduct a nuclear energy and research program, from both the regulatory perspective and the licensee or operator perspective. The International Training Course on the Physical Protection of Nuclear Facilities and Materials (ITC) is the original international training program in the area of physical protection of nuclear material, which the United States has been conducting since 1978. This course focuses on a systems engineering performance-based approach to requirements definition, design, and evaluation for physical protection systems. During the first twenty-one presentations of ITC, more than 600 national experts from more than sixty International Atomic Energy Agency member states were trained. This paper describes the content, structure, and process of ITC.
Abstract not provided.
Carbon-manganese steels are candidates for the structural materials in hydrogen gas pipelines; however, it is well known that these steels are susceptible to hydrogen embrittlement. Decades of research and industrial experience have established that hydrogen embrittlement compromises the structural integrity of steel components. This experience has also helped identify the failure modes that can operate in hydrogen containment structures. As a result, there are tangible ideas for managing hydrogen embrittlement in steels and quantifying safety margins for steel hydrogen containment structures. For example, fatigue crack growth aided by hydrogen embrittlement is a key failure mode for steel hydrogen containment structures subjected to pressure cycling. Applying appropriate structural integrity models coupled with measurement of relevant material properties allows quantification of safety margins against fatigue crack growth in hydrogen containment structures. Furthermore, application of these structural integrity models is aided by the development of micromechanics models, which provide important insights such as the hydrogen distribution near defects in steel structures. The principal objective of this project is to enable application of structural integrity models to steel hydrogen pipelines. The new American Society of Mechanical Engineers (ASME) B31.12 design code for hydrogen pipelines includes a fracture mechanics-based design option, which requires material property inputs such as the threshold for rapid cracking and fatigue crack growth rate under cyclic loading. Thus, one focus of this project is to measure the rapid-cracking thresholds and fatigue crack growth rates of line pipe steels in high-pressure hydrogen gas. These properties must be measured for the base materials but more importantly for the welds, which are likely to be most vulnerable to hydrogen embrittlement. The measured properties can be evaluated by predicting the performance of the pipeline using a relevant structural integrity model, such as that in ASME B31.12. A second objective of this project is to enable development of micromechanics models of hydrogen embrittlement in pipeline steels. The focus of this effort is to establish physical models of hydrogen embrittlement in line pipe steels using evidence from analytical techniques such as electron microscopy. These physical models then serve as the framework for developing sophisticated finite-element models, which can provide quantitative insight into the micromechanical state near defects. Understanding the micromechanics of defects can ensure that structural integrity models are applied accurately and conservatively.
Abstract not provided.
Sandia National Laboratories, California (SNL/CA) is a government-owned/contractor-operated laboratory. Sandia Corporation, a Lockheed Martin Company, operates the laboratory for the Department of Energy's National Nuclear Security Administration (NNSA). The NNSA Sandia Site Office oversees operations at the site, using Sandia Corporation as a management and operating contractor. This Site Environmental Report for 2009 was prepared in accordance with DOE Order 231.1A (DOE 2004a). The report provides a summary of environmental monitoring information and compliance activities that occurred at SNL/CA during calendar year 2009. General site and environmental program information is also included. The Site Environmental Report is divided into ten chapters. Chapter 1, the Executive Summary, highlights compliance and monitoring results obtained in 2009. Chapter 2 provides a brief introduction to SNL/CA and the existing environment found on site. Chapter 3 summarizes SNL/CA's compliance activities with the major environmental requirements applicable to site operations. Chapter 4 presents information on environmental management, performance measures, and environmental programs. Chapter 5 presents the results of monitoring and surveillance activities in 2009. Chapter 6 discusses quality assurance. Chapters 7 through 9 provide supporting information for the report and Chapter 10 is the report distribution list.
Abstract not provided.
The Advanced Engineering Environment (AEE) project identifies emerging engineering environment tools and assesses their value to Sandia National Laboratories and our partners in the Nuclear Security Enterprise (NSE) by testing them in our design environment. This project accomplished several pilot activities, including: the preliminary definition of an engineering bill of materials (BOM) based product structure in the Windchill PDMLink 9.0 application; an evaluation of Mentor Graphics Data Management System (DMS) application for electrical computer-aided design (ECAD) library administration; and implementation and documentation of a Windchill 9.1 application upgrade. The project also supported the migration of legacy data from existing corporate product lifecycle management systems into new classified and unclassified Windchill PDMLink 9.0 systems. The project included two infrastructure modernization efforts: the replacement of two aging AEE development servers with reliable platforms for ongoing AEE project work; and the replacement of four critical application and license servers that support design and engineering work at the Sandia National Laboratories/California site.
Are your employees unhappy with internal corporate search? Frequent complaints include: too many results to sift through; results are unrelated/outdated; employees aren't sure which terms to search for. One way to improve intranet search is to implement a controlled vocabulary ontology. Employing this takes the guesswork out of searching, makes search efficient and precise, educates employees about the lingo used within the corporation, and allows employees to contribute to the corpus of terms. It elevates internal corporate search to rival its superior sibling, Internet search. We will cover our experiences, lessons learned, and conclusions from implementing a controlled vocabulary ontology at Sandia National Laboratories. The work focuses on construction of this ontology from the content perspective and the technical perspective. We'll discuss the following: (1) The tool we used to build a polyhierarchical taxonomy; (2) Examples of two methods of indexing the content: traditional 'back of the book' and folksonomy word-mapping; (3) Tips on how to build future search capabilities while building the basic controlled vocabulary; (4) How to implement the controlled vocabulary as an ontology that mimics Google's search suggestions; (5) Making the user experience more interactive and intuitive; and (6) Sorting suggestions based on preferred, alternate and related terms using SPARQL queries. In summary, future improvements will be presented, including permitting end-users to add, edit and remove terms, and filtering on different subject domains.
Abstract not provided.
Abstract not provided.
Not submitted at this time
Abstract not provided.
Environmental Science and Technology
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Review of Scientific Instruments
Abstract not provided.
A sensitivity study was performed utilizing a three dimensional finite element model to assess allowable cavern field sizes in strategic petroleum reserve salt domes. A potential exists for tensile fracturing and dilatancy damage to salt that can compromise the integrity of a cavern field in situations where high extraction ratios exist. The effects of salt creep rate, depth of salt dome top, dome size, caprock thickness, elastic moduli of caprock and surrounding rock, lateral stress ratio of surrounding rock, cavern size, depth of cavern, and number of caverns are examined numerically. As a result, a correlation table between the parameters and the impact on the performance of a storage field was established. In general, slower salt creep rates, deeper depth of salt dome top, larger elastic moduli of caprock and surrounding rock, and a smaller radius of cavern are better for structural performance of the salt dome.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Geosphere
Abstract not provided.
Abstract not provided.
Abstract not provided.
Molecular Biology of the Cell
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This paper introduces an effective-media toolset that can be used for the design of metamaterial structures based on metallic components such as split-ring resonators and dipoles, as well as dielectric spherical resonators. For demonstration purposes the toolset will be used to generate infrared metamaterial designs, and the predicted performances will be verified with full-wave numerical simulations.
Chemically reacting flow models generally involve inputs and parameters that are determined from empirical measurements, and therefore exhibit a certain degree of uncertainty. Estimating the propagation of this uncertainty into computational model output predictions is crucial for purposes of reacting flow model validation, model exploration, as well as design optimization. Recent years have seen great developments in probabilistic methods and tools for efficient uncertainty quantification (UQ) in computational models. These tools are grounded in the use of Polynomial Chaos (PC) expansions for representation of random variables. The utility and effectiveness of PC methods have been demonstrated in a range of physical models, including structural mechanics, transport in porous media, fluid dynamics, aeronautics, heat transfer, and chemically reacting flow. While high-dimensionality remains nominally an ongoing challenge, great strides have been made in dealing with moderate dimensionality along with non-linearity and oscillatory dynamics. In this talk, I will give an overview of UQ in chemical systems. I will cover both: (1) the estimation of uncertain input parameters from empirical data, and (2) the forward propagation of parametric uncertainty to model outputs. I will cover the basics of forward PC UQ methods with examples of their use. I will also highlight the need for accurate estimation of the joint probability density over the uncertain parameters, in order to arrive at meaningful estimates of model output uncertainties. Finally, I will discuss recent developments on the inference of this density given partial information from legacy experiments, in the absence of raw data.
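As a concrete example of the forward-propagation step, the sketch below projects a simple model onto a one-dimensional Hermite PC expansion via Gauss-Hermite quadrature. The model f(x) = exp(x) and the Gaussian input are illustrative stand-ins; the talk addresses far more general chemically reacting flow settings.

import math
import numpy as np
from numpy.polynomial import hermite_e as He

mu, sigma, order = 0.0, 0.3, 6
model = np.exp                               # the "simulation": y = f(x)

xi, w = He.hermegauss(16)                    # nodes/weights for weight e^{-x^2/2}
w = w / math.sqrt(2.0 * math.pi)             # normalize to a N(0,1) expectation
y = model(mu + sigma * xi)                   # model evaluated at quadrature nodes

# PC coefficients c_k = E[f * He_k] / E[He_k^2], with E[He_k^2] = k!
c = [np.sum(w * y * He.hermeval(xi, [0] * k + [1])) / math.factorial(k)
     for k in range(order + 1)]

mean_pc = c[0]
var_pc = sum(math.factorial(k) * c[k] ** 2 for k in range(1, order + 1))
print(f"PC mean {mean_pc:.6f} vs exact {math.exp(mu + sigma**2 / 2):.6f}")
print(f"PC variance {var_pc:.6f}")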
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
SIRHEN (Sandia InfraRed HEtrodyne aNalysis; pronounced 'siren') is a program for reducing data from photonic Doppler velocimetry (PDV) measurements. SIRHEN uses the short-time Fourier transform method to extract velocity information. The program can be run in MATLAB (2008b or later) or as a Windows executable. This report describes the program, which has been developed for efficient and robust analysis of PDV data and designed for easy use within Sandia's dynamic compression community.
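The core of the method is easy to illustrate: slide a window along the beat signal, locate the spectral peak, and convert frequency to velocity via v = lambda * f_beat / 2. The sketch below uses a synthetic signal, an illustrative sample rate, and the common 1550 nm PDV wavelength; it is not SIRHEN code.

import numpy as np

fs, wavelength = 20e9, 1550e-9                 # 20 GS/s digitizer, 1550 nm laser
t = np.arange(0, 2e-6, 1 / fs)
v_true = 100.0 + 400.0 * t / t[-1]             # ramping velocity, m/s
f_beat = 2 * v_true / wavelength               # PDV beat frequency
signal = np.cos(2 * np.pi * np.cumsum(f_beat) / fs)   # phase = integral of freq

win = 4096                                     # short-time window length
for start in range(0, signal.size - win, signal.size // 8):
    seg = signal[start:start + win] * np.hanning(win)
    spec = np.abs(np.fft.rfft(seg))
    f_peak = np.fft.rfftfreq(win, 1 / fs)[np.argmax(spec)]
    v = wavelength * f_peak / 2                # frequency-to-velocity conversion
    print(f"t = {t[start + win // 2] * 1e6:5.2f} us  v = {v:6.1f} m/s")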
Abstract not provided.
Applied Physics Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Carbon
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
To maintain effective containment surveillance (CS) system capabilities, the requirements for such systems must continue to evolve to outpace diversion capabilities, reduce costs, and meet the needs of the looming nuclear renaissance. What are the future sensor-based capabilities that must be available to support growing CS requirements, and what are the technologies needed to provide the underlying capabilities? This presentation is intended to discuss the present gaps in sensor-based containment and surveillance relevant technologies, and future development trends which may address these gaps. Consumer-driven technology development will represent a component-rich source of technologies and devices that can find application in containment and surveillance tools, helping to minimize the technology gaps. Recognizing and utilizing these sources is paramount to cost-effective solutions. Where these gaps cannot be addressed by consumer-based development, custom, CS-specific approaches are the only solution.
Discontinuity detection is an important component in many fields, including image recognition, digital signal processing, and climate change research. Current methods share several shortcomings: they are restricted to one- or two-dimensional settings, require uniformly spaced and/or dense input data, and give deterministic answers without quantifying the uncertainty. Spectral methods for uncertainty quantification with global, smooth bases are challenged by discontinuities in model simulation results. Domain decomposition reduces the impact of nonlinearities and discontinuities. However, while gaining more smoothness in each subdomain, current domain refinement methods require prohibitively many simulations. Detecting discontinuities up front and refining accordingly therefore offers a large improvement over current methodologies.
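As a one-dimensional illustration of the detect-then-refine idea (not the actual method, which targets unstructured, higher-dimensional data and quantifies uncertainty), one can flag the interval with the steepest divided difference and split the domain there. The test function and sample count are illustrative.

import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, 40))           # non-uniformly spaced samples
y = np.where(x < 0.3, np.sin(3 * x), np.sin(3 * x) + 2.0)  # jump at x = 0.3

slopes = np.abs(np.diff(y) / np.diff(x))      # first divided differences
i = np.argmax(slopes)                         # steepest interval = likely jump
split = 0.5 * (x[i] + x[i + 1])
print(f"estimated discontinuity near x = {split:.3f}")
# downstream: build separate smooth surrogates on [-1, split] and [split, 1]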
It is known that, in general, the correlation structure in the joint distribution of model parameters is critical to the uncertainty analysis of that model. Very often, however, studies in the literature only report nominal values for parameters inferred from data, along with confidence intervals for these parameters, but no details on the correlation or full joint distribution of these parameters. When neither posterior nor data are available, but only summary statistics such as nominal values and confidence intervals, a joint PDF must be chosen. Given the summary statistics, it may be neither reasonable nor necessary to assume the parameters are independent random variables. We demonstrate, using a Bayesian inference procedure, how to construct a posterior density for the parameters exhibiting self-consistent correlations, in the absence of data, given (1) the fit-model, (2) nominal parameter values, (3) bounds on the parameters, and (4) a postulated statistical model, around the fit-model, for the missing data. Our approach ensures external Bayesian updating while marginalizing over possible data realizations. We then address the matching of given parameter bounds through the choice of hyperparameters, which are introduced in postulating the statistical model but are not given nominal values. We discuss some possible approaches, including (1) inferring them in a separate Bayesian inference loop and (2) optimization. We also perform an empirical evaluation of the algorithm, showing that the posterior obtained with this data-free inference compares well with the true posterior obtained from inference against the full data set.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA) established the Global Threat Reduction Initiative's (GTRI) mission to reduce and protect nuclear and radiological materials located at civilian sites worldwide. Internationally, over 80 countries are cooperating with GTRI to enhance security of facilities with these materials. In 2004, a GTRI delegation began working with the Tanzania Atomic Energy Commission, (TAEC). The team conducted site assessments for the physical protection of radiological materials in Tanzania. Today, GTRI and the Government of Tanzania continue cooperative efforts to enhance physical security at several radiological sites, including a central sealed-source storage facility, and sites in the cities of Arusha, Dar Es Salaam, and Tanga. This paper describes the scope of physical protection work, lessons learned, and plans for future cooperation between the GTRI program and the TAEC. Additionally the paper will review the cooperative efforts between TAEC and the International Atomic Energy Agency (IAEA) with regards to a remote monitoring system at a storage facility and to the repackaging of radioactive sources.
Abstract not provided.
Abstract not provided.
Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
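The LSA step that ParaText distributes can be shown in miniature on a single process: build a term-document matrix and reduce it with a truncated SVD. The toy documents and the choice of two latent dimensions below are illustrative; the toolkit itself distributes ingestion, feature extraction, and the SVD across MPI processes.

import numpy as np

docs = ["graph analysis of networks",
        "network graph visualization",
        "text analysis of documents"]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                         # latent dimensions to keep
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T      # documents in concept space

# cosine similarity between documents in the reduced space
sim = doc_coords @ doc_coords.T
norms = np.linalg.norm(doc_coords, axis=1)
print(np.round(sim / np.outer(norms, norms), 2))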
The Integrated Modeling, Mapping, and Simulation (IMMS) program is designing and prototyping a simulation and collaboration environment for linking together existing and future modeling and simulation tools to enable analysts, emergency planners, and incident managers to more effectively, economically, and rapidly prepare, analyze, train, and respond to real or potential incidents. When complete, the IMMS program will demonstrate an integrated modeling and simulation capability that supports emergency managers and responders with (1) conducting 'what-if' analyses and exercises to address preparedness, analysis, training, operations, and lessons learned, and (2) effectively, economically, and rapidly verifying response tactics, plans and procedures.
Abstract not provided.
Abstract not provided.
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
Abstract not provided.
While RAID is the prevailing method of creating reliable secondary storage infrastructure, many users desire more flexibility than current implementations offer. To attain the needed performance, customers have often sought hardware-based RAID solutions. This talk describes a RAID system that offloads erasure correction coding calculations to GPUs, allowing increased reliability by supporting new RAID levels while maintaining high performance.
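As a toy illustration of erasure-coded recovery (single XOR parity, RAID-5 style, in plain Python), the sketch below reconstructs a lost block from the survivors plus parity. The system described in the talk offloads much heavier Reed-Solomon-style coding to GPUs; this example only shows the underlying recovery principle.

def parity(blocks):
    """XOR all equal-length blocks together byte by byte."""
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

data = [b"alpha---", b"bravo---", b"charlie-"]
p = parity(data)
# lose block 1; recover it by XORing the surviving blocks with the parity
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
print(recovered)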
Abstract not provided.
Materials with switchable states are desirable in many areas of science and technology. The ability to thermally transform a dielectric material to a conductive state should allow for the creation of electronics with built-in safety features. Specifically, the non-desirable build-up and discharge of electricity in the event of a fire or over-heating would be averted by utilizing thermo-switchable dielectrics in the capacitors of electrical devices (preventing the capacitors from charging at elevated temperatures). We have designed a series of polymers that effectively switch from a non-conductive to a conductive state. The thermal transition is governed by the stability of the leaving group after it leaves as a free entity. Here, we present the synthesis and characterization of a series of precursor polymers that eliminate to form poly(p-phenylene vinylene)s (PPVs).
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.