Modeling techniques for localization and failure
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In this thesis, we address a new security problem in the realm of collaborating sensor networks. By collaborating sensor networks, we refer to networks of sensor networks collaborating on a mission, with each sensor network independently owned and operated by a separate entity. Such networks are practical wherever a number of independent entities deploy their own sensor networks in multi-national, commercial, and environmental scenarios, and some of these networks integrate complementary functionalities for a mission. In this scenario, we address an authentication problem wherein the goal is for the Operator Oi of Sensor Network Si to correctly determine the number of active sensors in Network Si. This problem is challenging in collaborating sensor networks, where other sensor networks, despite showing an intent to collaborate, may not be completely trustworthy and could compromise the authentication process. We propose two authentication protocols to address this problem. Our protocols rely on Physically Unclonable Functions, a hardware-based authentication primitive that exploits inherent randomness in circuit fabrication. Our protocols are lightweight, energy efficient, and highly secure against a number of attacks. To the best of our knowledge, ours is the first work to address a practical security problem in collaborating sensor networks.
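A minimal sketch of the challenge-response pattern such PUF-based counting protocols build on. Everything here is an illustrative stand-in, not the thesis protocols: a keyed hash models the device fingerprint (a real PUF derives responses from fabrication variation rather than a stored secret), and the enrollment/verification flow is the generic one.

```python
import os
import hmac
import hashlib

# Illustrative stand-in for a hardware PUF: the device "fingerprint" is
# modeled as a random secret key; responses are a keyed hash of challenges.
class SimulatedPUF:
    def __init__(self):
        self._fingerprint = os.urandom(32)

    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._fingerprint, challenge, hashlib.sha256).digest()

# Enrollment: the operator records challenge-response pairs (CRPs) per sensor.
def enroll(puf: SimulatedPUF, n_pairs: int):
    return [(c, puf.response(c)) for c in (os.urandom(16) for _ in range(n_pairs))]

# Verification: the operator counts sensors that answer a fresh challenge
# correctly; each stored CRP is used once to prevent replay.
def count_active(sensors, crp_tables):
    active = 0
    for sensor, crps in zip(sensors, crp_tables):
        challenge, expected = crps.pop()
        if sensor is not None and hmac.compare_digest(sensor.response(challenge), expected):
            active += 1
    return active

pufs = [SimulatedPUF() for _ in range(5)]
tables = [enroll(p, 10) for p in pufs]
fielded = pufs[:4] + [None]           # one sensor is inactive
print(count_active(fielded, tables))  # -> 4
```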
Abstract not provided.
Abstract not provided.
Nano Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
We apply the Boltzmann-electron model in the electrostatic, particle-in-cell, finite-element code Aleph to a plasma sheath. By assuming a Boltzmann energy distribution for the electrons, the model eliminates the need to resolve the electron plasma frequency, and avoids the numerical "grid instability" that can cause unphysical heating of electrons. This allows much larger timesteps to be used than with kinetic electrons. Ions are treated with the standard PIC algorithm. The Boltzmann-electron model requires solution of a nonlinear Poisson equation, for which we use an iterative Newton solver (NOX) from the Trilinos Project. Results for the spatial variation of density and voltage in the plasma sheath agree well with an analytic model.
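A minimal 1D sketch of the nonlinear Poisson solve this model requires, in normalized units (potential in Te/e, lengths in Debye lengths, densities in units of a reference density n0), so the equation reads phi'' = exp(phi) - n_i. A frozen, uniform ion density stands in for the PIC ion population, and a dense finite-difference Newton iteration stands in for Aleph's finite elements and Trilinos/NOX; this is a toy illustration of the technique, not the production algorithm.

```python
import numpy as np

N, L = 200, 20.0
x = np.linspace(0.0, L, N)
h = x[1] - x[0]
n_ion = np.ones(N)            # illustrative (frozen) ion density profile

phi = np.zeros(N)
phi[0] = -5.0                 # biased wall; far plasma edge held at 0
for _ in range(50):           # Newton iteration on the interior nodes
    # Residual F = d2(phi)/dx2 - (exp(phi) - n_i)
    F = (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h**2 \
        - (np.exp(phi[1:-1]) - n_ion[1:-1])
    main = -2.0 / h**2 - np.exp(phi[1:-1])   # Jacobian diagonal
    off = np.full(N - 3, 1.0 / h**2)         # Jacobian off-diagonals
    J = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    delta = np.linalg.solve(J, -F)
    phi[1:-1] += delta
    if np.abs(delta).max() < 1e-10:
        break
print(phi[:5])                # sheath potential relaxing away from the wall
```

Because the electron density responds exponentially to the potential, the Poisson equation is nonlinear and each Newton step requires the exp(phi) term in the Jacobian, which is the role NOX plays in the full code.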
Abstract not provided.
The purpose of this document is to provide guidance to the Radiological Characterization Reviewer for completing the radiological characterization of waste items. This information is used for Department of Transportation (DOT) shipping and disposal, typically at the Nevada National Security Site (NNSS). Complete characterization ensures compliance with DOT shipping laws and the NNSS Waste Acceptance Criteria (WAC). The fines for noncompliance can be extreme, and noncompliance also risks bad press and endangerment of the public, employees, and the environment. The Radiological Characterization Reviewer therefore has an important role in the organization. The scope is to outline the characterization process, not to cover every possible situation. The Radiological Characterization Reviewer position requires a strong background in Health Physics; therefore, these concepts are only minimally addressed here. The characterization process relies on many Excel spreadsheets developed by Michael Enghauser, known collectively as the WCT software suite. New Excel spreadsheets developed as part of this project include the Ra-226 Decider and the Density Calculator by Jesse Bland, and the MicroShield Density Calculator and Molecular Weight Calculator by Pat Lambert.
This study attempts to answer the following questions: would a successful JPOA result in nuclear nonproliferation and regional security in Southwest Asia; and could the Middle East and South Asia work together to contain the threat of Salafi jihadism?
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Replicating compounds are used to cast reproductions of surface features on a variety of materials. Replicas allow for quantitative measurements and recordkeeping on parts that may otherwise be difficult to measure or maintain. In this study, the chemistry and replicating capability of several replicating compounds were investigated. Additionally, the residue remaining on material surfaces upon removal of replicas was quantified. Cleaning practices were tested for several different replicating compounds. For all replicating compounds investigated, a thin silicone residue was left by the replica. For some compounds, additional inorganic species could be identified in the residue. Simple solvent cleaning could remove some of the residue.
Abstract not provided.
Abstract not provided.
Dengue virus is a devastating human pathogen responsible for millions of infections each year. No antiviral therapies for Dengue currently exist, making effective treatment of the virus challenging. Dengue is taken into the cell through endocytosis. Low-pH-mediated structural rearrangements of the envelope protein E lead to the formation of fusogenic E trimers that facilitate membrane fusion with late endosomes. The fusion mechanism is not fully understood, but is a key target for inhibiting the viral infection pathway. An important aspect of fusion is the dependence on endosomal membrane composition, and in particular, the requirement of anionic lipids. This study aims to characterize the biophysical reasons for this dependence. The work includes experimental studies and molecular simulations of the interactions of E with lipid membranes. These approaches revealed the structure of E bound to lipid membranes, including the depth of its insertion into the membrane and the average angle with respect to the membrane, the fundamental interactions involved, the dependence of adsorption and anchoring energy on membrane composition, the membrane curvature induced upon insertion, and the correlation of the above with the fusion efficiency of virus-like particles (VLPs) with liposomes. As part of this work we developed a new biophysical technique to measure the energy for pulling E out of a membrane, and distinguished anchoring (pull-out) and binding energies for this nonequilibrium system. We also developed a modeling approach combining molecular and continuum methods to provide the first theoretical estimate of the binding energy. Taken together, this work lays the foundation for developing a systematic fundamental understanding of fusion in enveloped viruses that has been elusive to date.
The Z-Beamlet laser has been operating at Sandia National Laboratories since 2001 to provide a source of laser-generated x-rays for radiography of events on the Z-Accelerator. Changes in desired operational scope have necessitated the increase in pulse duration and energy available from the laser system. This is enabled via the addition of a phase modulated seed laser as an alternative front-end. The practical aspects of deployment are discussed here.
Abstract not provided.
Abstract not provided.
Journal of Materials Science
Abstract not provided.
Journal of Water Resources Planning and Management
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physics of Plasmas
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Consolidated Nuclear Security (CNS) Pantex took the initiative to organize a review panel of subject matter experts to independently assess the adequacy of the Pantex Tripping Man Analysis methodology. The purpose of this report is to capture the details of the assessment, including the scope, approach, results, and supporting appendices. Along with assessing the analysis methodology, the panel evaluated the adequacy with which the methodology was applied, as well as its congruence with Department of Energy (DOE) Standards 3009 and 3016. The approach included the review of relevant documentation, interactive discussion with Pantex staff, and an iterative process of evaluating critical lines of inquiry.
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia-executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create it. ROM Cost Estimate Disclaimer: Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
Abstract not provided.
Abstract not provided.
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The linear ground distance per unit time and ground area covered per unit time of producing synthetic aperture radar (SAR) imagery, termed rate of advance (ROA) and area coverage rate (ACR), are important metrics for platform and radar performance in surveillance applications. These metrics depend on many parameters of a SAR system such as wavelength, aircraft velocity, resolution, antenna beamwidth, imaging mode, and geometry. Often the effects of these parameters on rate of advance and area coverage rate are non-linear. This report addresses the impact of different parameter spaces as they relate to rate of advance and area coverage rate performance.
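For the simple stripmap case, the relationship between these two metrics can be put in a few lines: the rate of advance is set by the platform ground speed, and the area coverage rate is the rate of advance times the imaged swath width. The sketch below uses illustrative numbers, not values from this report, and ignores the mode- and geometry-dependent effects the report analyzes.

```python
# Back-of-the-envelope ROA/ACR for a stripmap SAR collection, assuming
# platform ground speed sets the rate of advance and a fixed swath width.
velocity_mps = 100.0      # illustrative platform ground speed
swath_km = 2.0            # illustrative imaged swath width

roa_kmph = velocity_mps * 3.6         # rate of advance
acr_km2ph = roa_kmph * swath_km       # area coverage rate
print(f"ROA = {roa_kmph:.0f} km/h, ACR = {acr_km2ph:.0f} km^2/h")
```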
A reciprocity theorem is an explicit mathematical relationship between two different wavefields that can exist within the same space-time configuration. Reciprocity theorems provide the theoretical underpinning for modern full waveform inversion solutions, and also suggest practical strategies for speeding up large-scale numerical modeling of geophysical datasets. In the present work, several previously developed electromagnetic reciprocity theorems are generalized to accommodate a broader range of medium, source, and receiver types. Reciprocity relations enabling the interchange of various types of point sources and point receivers within a three-dimensional electromagnetic model are derived. Two numerical modeling algorithms in current use are successfully tested for adherence to reciprocity. Finally, the reciprocity theorem forms the point of departure for a lengthy derivation of electromagnetic Frechet derivatives. These mathematical objects quantify the sensitivity of geophysical electromagnetic data to variations in medium parameters, and thus constitute indispensable tools for solution of the full waveform inverse problem. Acknowledgements: Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Significant portions of the work reported herein were conducted under a Cooperative Research and Development Agreement (CRADA) between Sandia National Laboratories (SNL) and CARBO Ceramics Incorporated. The author acknowledges Mr. Chad Cannan and Mr. Terry Palisch of CARBO Ceramics, and Ms. Amy Halloran, manager of SNL's Geophysics and Atmospheric Sciences Department, for their interest in and encouragement of this work. Special thanks are due to Dr. Lewis C. Bartel (recently retired from Sandia National Laboratories and now a geophysical consultant) and Dr. Chester J. Weiss (recently rejoined with Sandia National Laboratories) for many stimulating (and reciprocal!) discussions regarding the topic at hand.
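For reference, the classical Lorentz form of the electromagnetic reciprocity relation, quoted here in its standard frequency-domain statement for two source-field states A and B occupying the same linear medium with suitable boundary conditions (the report's generalized theorems extend beyond this form):

```latex
\int_V \left( \mathbf{E}_A \cdot \mathbf{J}_B \;-\; \mathbf{E}_B \cdot \mathbf{J}_A \right) dV \;=\; 0
```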
Advances in Engineering Software
Abstract not provided.
As a corporate workforce grows, managers need more information close at hand to make decisions for the company. As every scientist knows, we can’t judge, let alone improve, something that can’t be measured. Thus, firms that are committed to constant improvement generate reports on various business metrics in order to make informed decisions. Sandia National Labs is no exception. Enterprise business intelligence (BI) is very important for keeping such a large organization on the right track, as it signals any adjustments that may be required to stay the course. Like teams in other fields, Sandia’s HR Reporting team wants to provide the most valuable workforce BI possible, and it is considering a change in reporting paradigm to achieve that.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia National Laboratories has tested and evaluated a new infrasound sensor, the MB3a, manufactured by Seismo Wave. These infrasound sensors measure pressure using a methodology developed by researchers at the French Alternative Energies and Atomic Energy Commission (CEA); the technology was recently licensed to Seismo Wave for production and sales. The purpose of the infrasound sensor evaluation was to determine measured sensitivity, transfer function, power, self-noise, dynamic range, seismic sensitivity, and self-calibration ability. The MB3a infrasound sensors are being evaluated for potential use in the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).
Abstract not provided.
Abstract not provided.
The purpose of this report is to describe the findings from the analysis of 100 Small Generator Interconnection Procedure (SGIP) studies and to describe the methodology used to develop the database. The database was used to identify the most likely impacts and mitigation costs associated with PV system interconnections. A total of 100 SGIP reports performed by three utilities and one regional transmission operator (RTO) were analyzed. Each record within the database represents an itemized SGIP report and includes information about the generation facility, interconnection topology, electrical power system characteristics, identified adverse system impacts, mitigation options, and the costs associated with interconnecting the generation facility.
Abstract not provided.
This write-up is intended to address some questions about the compact model for neutron effects in HBTs that came up during the QASPR independent review.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Rapidly gaining understanding of an executable file is an extremely hard problem, yet one that is critical to support realistic network defense. Without a strong understanding of what programs do, there is no way that defenders can determine whether the presence of a given program is appropriate or not. This research effort was focused on developing ways to allow a human analyst to rapidly build understanding of the content of executable files.
When a requirements engineering effort fails to meet expectations, the requirements management tool is often blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually the problem is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). This generalized information architecture can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool.
Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms
An in situ ion irradiation transmission electron microscope has been developed and is operational at Sandia National Laboratories. This facility permits high spatial resolution, real time observation of electron transparent samples under ion irradiation, implantation, mechanical loading, corrosive environments, and combinations thereof. This includes the simultaneous implantation of low-energy gas ions (0.8-30 keV) during high-energy heavy ion irradiation (0.8-48 MeV). Initial results in polycrystalline gold foils are provided to demonstrate the range of capabilities.
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameters being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to structural errors in RANS.
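A toy sketch of the calibration loop described above: random-walk Metropolis sampling of a parameter posterior, with a cheap surrogate in place of the RANS model and a hard indicator function standing in for the treed-linear-model classifier prior. The surrogate, the region bounds, and the synthetic datum are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D surrogate standing in for the RANS observable as a
# function of one normalized turbulence parameter; a real study fits this
# to training runs of the 3D jet-in-crossflow simulation.
def surrogate(theta):
    return 1.0 + 2.0 * theta + 0.5 * theta**2

def in_well_behaved_region(theta):
    # Stand-in for the classifier prior: zero prior probability outside
    # the region where the surrogate is trusted.
    return 0.0 <= theta <= 1.0

y_obs, sigma = 2.2, 0.1          # synthetic "experimental" datum and noise

def log_post(theta):
    if not in_well_behaved_region(theta):
        return -np.inf
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

# Random-walk Metropolis sampling of the parameter PDF
theta, lp, samples = 0.5, log_post(0.5), []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])   # discard burn-in
print(f"posterior mean {post.mean():.3f}, std {post.std():.3f}")
```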
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model’s parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models’ ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model’s ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
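To make the IPM idea concrete, here is a minimal sketch of a minimal-spread interval predictor: a linear center line with constant half-width w, with y constrained to lie in [a*x + b - w, a*x + b + w] for every observation. Minimizing w subject to containment is a small linear program. The data are synthetic and the model class is deliberately simple; the paper's IPMs are more general.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = 1.5 * x + 0.3 + 0.1 * rng.uniform(-1, 1, x.size)

# Decision variables z = [a, b, w]; minimize the half-width w subject to
# |y_i - (a*x_i + b)| <= w for all observations (containment).
c = [0.0, 0.0, 1.0]
ones = np.ones_like(x)
A_ub = np.vstack([np.column_stack([-x, -ones, -ones]),   #  y_i - a x_i - b <= w
                  np.column_stack([x, ones, -ones])])    #  a x_i + b - y_i <= w
b_ub = np.concatenate([-y, y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, w = res.x
print(f"center line y = {a:.2f} x + {b:.2f}, half-width w = {w:.3f}")
```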
Proposed Journal Article, unpublished
Techno-economic performances of Norwegian biojet fuel production via the Alcohol-to-Jet (ATJ) and Fischer-Tropsch synthetic paraffinic kerosene (FT-SPK) routes were estimated based on adaptations of available literature data to Norwegian conditions. This paper reviews the deployment of feasible routes to sustainable jet fuel production for the short-to-medium-term timeframe (2020-2025), with an emphasis on the Norwegian landscape. Given the serious concerns regarding the availability and sustainability of large-scale biofuels production from both oil seed plants and carbohydrates (sugars and starches), as well as the unsuitability of the Norwegian climate for oil seed or sugar/starch plant cultivation, only biojet fuels produced from lignocellulosic resources are considered. The short-to-medium term implies certified or near-certified fuels. The most promising and feasible alternatives for Norwegian biojet fuel production are hence limited to FT-SPK and ATJ. The results suggest that, from a techno-economic point of view, production of jet fuel via the gasification-FT route is more favorable than the alcohol-to-jet route; this is attributed to the inclusion of the alcohol production step. Feedstock price is the main operating cost for both routes. The current cost of producing jet fuel under Norwegian conditions is estimated at between 43 USD/GJ and 47.4 USD/GJ for the gasification-FT route, and between 54 USD/GJ and 60 USD/GJ for the ATJ route.
The overall spent fuel nondestructive assay project seeks to develop improved measurement capability for the verification of spent nuclear fuel, especially before its disposal or movement to hard-to-access storage. Various systems are being considered, employing neutron and/or gamma radiation detection with either passive or active methods; their use scenarios are not yet well defined. In a practical deployment, the measurement system would likely need to operate in unattended mode. The output results may also need to be shared between multiple recipients with various interests. The data authentication task considers what issues are important in being able to trust the measurement results. By defining and analyzing a generic "baseline" system scenario, we have identified five key factors needing specific attention: application use-case details, equipment tamper indication, supporting (ancillary) instruments, systems implementation, and instrument state-of-health reporting.
In the United States, individual states enact Renewable Portfolio Standards (RPSs) for renewable electricity production with little coordination. Each state imposes restrictions on the amounts and locations of qualifying renewable generation. Using a co-optimization (transmission and generation) planning model, we quantify the economic benefits of allowing flexibility in the trading of Renewable Energy Credits (RECs) among the U.S. states belonging to the Western Electricity Coordinating Council. The flexibility was analyzed in terms of the amount and geographic eligibility of out-of-state RECs that can be used in meeting state RPS goals. Although more trade would be expected to have economic benefits, the magnitude of these benefits relative to the cost of additional transmission infrastructure is less certain. The effects of such trading on CO2 emissions and energy prices are also unclear. We find that most of the economic benefits are captured with approximately 25% of interstate exchange of RECs. Furthermore, increasing REC trading flexibility does not necessarily result in either higher transmission investment costs or a substantial impact on CO2 emissions. Finally, increasing REC trading flexibility decreases energy prices in some states and increases them in others, while the WECC-wide average energy price slightly decreases.
There exist several different reliability- and approximation-based methods to determine the capacity contribution of solar resources towards resource adequacy. However, most of these approaches require knowing in advance the installed capacities of both conventional and solar generators. This is a complication since generator capacities are actually decision variables in capacity planning studies. In this article we study the effect of time resolution and solar PV penetration using a capacity planning model that accounts for the full distribution of generator outages and solar resource variability. We also describe a modification of a standard deterministic planning model that enforces a resource adequacy target through a reserve margin constraint. Our numerical experiments show that at least 50 days' worth of data are necessary to approximate the results of the full-resolution model with a maximum error of 2.5% on costs and capacity. We also show that the amount of displaced capacity of conventional generation decreases rapidly as the penetration of solar PV increases. We find that using an exogenously defined and constant capacity factor based on time-series data can yield relatively accurate results for small penetration levels (less than 5%). For higher penetration levels (up to 20%), the modified deterministic planning model better captures avoided costs and the decreasing value of solar PV. Although our results are not general, they highlight the importance of accounting for the variation in both energy and capacity value of solar resources endogenously in capacity planning models. All numerical experiments are performed using the IEEE Reliability Test System and 7 years' worth of demand and solar data from a utility in Arizona.
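As a reference point, a generic reserve margin constraint of the kind such deterministic planning models enforce can be written as follows, where C_g are conventional capacities, CC_pv is the capacity credit assigned to solar, and m is the planning reserve margin (notation assumed here for illustration, not taken from the article):

```latex
\sum_{g \in \mathcal{G}} C_g \;+\; \mathrm{CC}_{\mathrm{pv}}\, C_{\mathrm{pv}} \;\ge\; (1+m)\, D_{\mathrm{peak}}
```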
Several use cases are included in this report.
Temperature measurements are very important in shock- and ramp-type dynamic materials experiments. In particular, accurate temperature measurements can provide stringent additional constraints on determining the equation of state for materials at high pressure. The key to providing these constraints is to develop diagnostic techniques that can determine the temperature with sufficient accuracy. To enable such measurements, we are working to improve our diagnostic capability with three separate techniques, each of which has specific applicability in a particular temperature range. To improve our capability at low temperatures (< 1 eV) we are working on a technique that takes advantage of the change in reflectivity of Au as the temperature is increased. This is most applicable to ramp-type experiments. In the intermediate range (~1 eV < T < 5-10 eV) we are improving our optical pyrometry diagnostic by adding the capability of performing an absolute calibration as part of the diagnostic procedure for the shock or shock-ramp dynamic materials experiment. This will enable more accurate temperature measurements for shock and shock-ramp experiments. For higher temperatures that occur in very high-pressure shock experiments, above 10 eV, we are developing the capability of doing x-ray Thomson scattering measurements. Such measurements will enable us to characterize strongly shocked or warm dense matter materials. Work on these diagnostic approaches is summarized in this report.
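For the pyrometry channel, the basic reduction step an absolutely calibrated system performs is inverting Planck's law for a brightness temperature. A short sketch, assuming unit emissivity and illustrative values of wavelength and temperature (not parameters from these experiments):

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def brightness_temperature(radiance, wavelength):
    # Invert Planck's law: radiance is spectral radiance in W m^-2 sr^-1
    # per meter of wavelength; returns the brightness temperature in K.
    return (h * c / (wavelength * kB)) / np.log(
        1.0 + 2.0 * h * c**2 / (wavelength**5 * radiance))

# Round-trip check at an illustrative ~2 eV-scale temperature
T, lam = 2.0e4, 500e-9
L = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
print(brightness_temperature(L, lam))   # recovers ~2.0e4 K
```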
This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
Physical Review B - Condensed Matter and Materials Physics
We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations agree with data below 20 GPa and show that the high-pressure melt curve is well described by Lindemann behavior up to at least 80 GPa, in contrast to the DAC data.
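The thermodynamic integration step underlying such a DFT-to-QMC free energy correction can be written in its standard form (notation assumed here for illustration), with the free energy difference obtained by integrating the ensemble average of the energy difference along a coupling parameter:

```latex
F_{\mathrm{QMC}} - F_{\mathrm{DFT}}
  = \int_0^1 \bigl\langle U_{\mathrm{QMC}} - U_{\mathrm{DFT}} \bigr\rangle_{\lambda}\, d\lambda,
\qquad
U_{\lambda} = (1-\lambda)\, U_{\mathrm{DFT}} + \lambda\, U_{\mathrm{QMC}}
```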
Physical Review B - Condensed Matter and Materials Physics
A combined experimental-theoretical study of optically pumped nuclear magnetic resonance (OPNMR) has been performed in a GaAs/Al0.1Ga0.9As quantum well film epoxy bonded to a Si substrate with thermally induced biaxial strain. The photon energy dependence of the Ga OPNMR signal was recorded at magnetic fields of 4.9 and 9.4 T at a temperature of 4.8-5.4 K. The data were compared to the nuclear spin polarization calculated from the electronic structure and differential absorption to spin-up and spin-down states of the electron conduction band using a modified k·p model based on the Pidgeon-Brown model. Comparison of theory with experiment facilitated the assignment of features in the OPNMR energy dependence to specific interband Landau level transitions. The results provide insight into how effects of strain and quantum confinement are manifested in optical nuclear polarization in semiconductors.
Journal of Spacecraft and Rockets
Normal tolerance limits are frequently used in dynamic environments specifications of aerospace systems as a method to account for aleatory variability in the environments. Upper tolerance limits, when used in this way, are computed from records of the environment and used to enforce conservatism in the specification by describing upper extreme values the environment may take in the future. Components and systems are designed to withstand these extreme loads to ensure they do not fail under normal use conditions. The degree of conservatism in the upper tolerance limits is controlled by specifying the coverage and confidence level (usually written in “coverage/confidence” form). In high-consequence systems it is common to specify tolerance limits at 95% or 99% coverage with confidence at the 50% or 90% level. Despite the ubiquity of upper tolerance limits in the aerospace community, analysts and decision-makers frequently misinterpret their meaning, and the misinterpretation extends into the standards that govern much of the acceptance and qualification of commercial and government aerospace systems. As a result, the risk of a future observation of the environment exceeding the upper tolerance limit is sometimes significantly underestimated by decision-makers. This note explains the meaning of upper tolerance limits and a related measure, the upper prediction limit. The objective of this work is to clarify the probability of exceeding these limits in flight, so that decision-makers can better understand the risk associated with exceeding design and test levels during flight and balance the cost of design and development against that of mission failure.
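To make the distinction concrete, the sketch below computes the standard one-sided normal tolerance factor k (so the upper limit is xbar + k*s) from the noncentral t distribution, alongside the corresponding upper prediction-limit factor for a single future observation. It assumes normally distributed data; the sample size and coverage/confidence levels are illustrative, not taken from this note.

```python
import numpy as np
from scipy.stats import norm, nct, t

def tolerance_factor(n, coverage, confidence):
    # One-sided normal tolerance factor: with the stated confidence, at
    # least `coverage` of the population lies below xbar + k*s.
    delta = norm.ppf(coverage) * np.sqrt(n)
    return nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

def prediction_factor(n, confidence):
    # Factor for an upper prediction limit on one future observation.
    return t.ppf(confidence, df=n - 1) * np.sqrt(1.0 + 1.0 / n)

n = 10                                    # illustrative number of records
k_tol = tolerance_factor(n, 0.99, 0.90)   # 99/90 upper tolerance limit
k_pred = prediction_factor(n, 0.90)       # 90% upper prediction limit
print(f"k_tol = {k_tol:.2f}, k_pred = {k_pred:.2f}")
```

The tolerance factor exceeds the prediction factor because it bounds a population fraction with confidence, not just one future draw; conflating the two is one source of the underestimated exceedance risk discussed above.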
IEEE Power and Energy Society General Meeting
Electromechanical oscillations often limit transmission capacity in the western North American Power System (termed the wNAPS). Recent research and development has focused on employing large-scale damping controls via wide-area feedback. Such an approach is made possible by the recent installation of a wide-area real-time measurement system based upon Phasor Measurement Unit (PMU) technology. One potential large-scale damping approach is based on energy storage devices. Such an approach has considerable promise for damping oscillations. This paper considers the placement of such devices within the wNAPS system. We explore combining energy storage devices with HVDC modulation of the Pacific DC Intertie (PDCI). We include eigenanalysis of a reduced-order wNAPS system, detailed analysis of a basic two-area dynamic system, and full-order transient simulations. We conclude that the optimal energy storage location is in the area with the lower inertia.
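A minimal sketch of the mode-damping computation at the heart of such eigenanalysis and placement studies: take the eigenvalues of a linearized state matrix and report each oscillatory mode's frequency and damping ratio. The 2x2 matrix below is an arbitrary illustrative example, not a reduced-order wNAPS model.

```python
import numpy as np

# Small linearized system matrix with one lightly damped oscillatory mode
A = np.array([[0.0, 1.0],
              [-1.2, -0.1]])
eig = np.linalg.eigvals(A)
for lam in eig[np.imag(eig) > 0]:        # one entry per conjugate pair
    zeta = -lam.real / abs(lam)          # damping ratio
    freq = lam.imag / (2 * np.pi)        # modal frequency in Hz
    print(f"mode at {freq:.2f} Hz, damping ratio {zeta:.3f}")
```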
Nucleic Acids Research
The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.
Journal of Materials Research
The reliability of nanomaterials depends on maintaining their specific sizes and structures. However, the stability of many nanomaterials in radiation environments remains uncertain due to the lack of a fully developed fundamental understanding of the radiation response on the nanoscale. To provide an insight into the dynamic aspects of single ion effects in nanomaterials, gold nanoparticles (NPs) with nominal diameters of 5, 20, and 60 nm were subjected to self-ion irradiation at energies of 46 keV, 2.8 MeV, and 10 MeV in situ inside of a transmission electron microscope. Ion interactions created a variety of far-from-equilibrium structures including small (∼1 nm) sputtered nanoclusters from the parent NPs of all sizes. Single ions created surface bumps and elongated nanofilaments in the 60 nm NPs. Similar shape changes were observed in the 20 nm NPs, while the 5 nm NPs were transiently melted or explosively broken apart.
Physical Review B - Condensed Matter and Materials Physics
Under shock compression, most porous materials exhibit lower densities for a given pressure than a full-dense sample of the same material. However, some porous materials exhibit an anomalous, or enhanced, densification under shock compression. We demonstrate a molecular mechanism that drives this behavior. We also present evidence from atomistic simulation that silicon belongs to this anomalous class of materials. Atomistic simulations indicate that local shear strain in the neighborhood of collapsing pores nucleates a local solid-solid phase transformation even when bulk pressures are below the thermodynamic phase transformation pressure. This metastable, local, and partial solid-solid phase transformation, which accounts for the enhanced densification in silicon, is driven by the local stress state near the void, not by equilibrium thermodynamics. This mechanism may also explain the phenomenon in other covalently bonded materials.