This report describes the next-stage goals and resource needs for the joint Sandia and University of Rochester ARPA-E project, as requested by Milestone 5.1.1. A key portion of this project is Technology Transfer and Outreach, whose goal is to help ensure that the project develops a credible method or tool that the magneto-inertial fusion (MIF) research community can use to broaden the advocacy base, to pursue a viable path to commercial fusion energy, and to develop other commercial opportunities for the associated technology.
This document describes the software development practice areas and processes that contribute to the ability of SWiFT software developers to provide quality software. These processes are designed to satisfy the requirements set forth by the Sandia Software Quality Assurance Program (SSQAP).
Ductile failure of structural metals is a pervasive issue for applications such as automotive manufacturing, transportation infrastructure, munitions and armor, and energy generation. Experimental investigation of all relevant failure scenarios is intractable, requiring reliance on computational models. Our confidence in model predictions rests on unbiased assessments of the entire predictive capability, including the mathematical formulation, numerical implementation, calibration, and execution.
Sabotage of spent nuclear fuel casks remains a concern nearly forty years after attacks against shipment casks were first analyzed and has renewed relevance in the post-9/11 environment. A limited number of full-scale tests and supporting efforts using surrogate materials, typically depleted uranium dioxide (DUO₂), have been conducted in the interim to more definitively determine the source term from these postulated events. However, the validity of these large-scale results remains in question due to the lack of a defensible spent fuel ratio (SFR), defined as the amount of respirable aerosol generated by an attack on a mass of spent fuel compared to that of an otherwise identical surrogate. Previous attempts to define the SFR in the 1980s resulted in estimates ranging from 0.42 to 12 and suffered from suboptimal experimental techniques and data comparisons. Because of the large uncertainty surrounding the SFR, estimates of releases from security-related events may be unnecessarily conservative. Credible arguments exist that the SFR does not exceed a value of unity. A defensible determination of the SFR in this lower range would greatly reduce the calculated risk associated with the transport and storage of spent nuclear fuel in dry cask systems. In the present work, the shock physics codes CTH and ALE3D were used to simulate spent nuclear fuel (SNF) and DUO₂ targets impacted by a high-velocity jet at an ambient temperature condition. These preliminary results are used to illustrate an approach to estimate the respirable release fraction for each type of material and, ultimately, an estimate of the SFR.
This report is a guide to the use of the Sandia-developed Ringdown Parasitic Extractor (RPE) software tool. It explains the theory behind extracting parasitics from current ringdown waveforms and describes how to use the tool to achieve good results.
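To make the extraction idea concrete, here is a minimal, hypothetical sketch of the fitting approach the theory implies (not the actual RPE implementation): a series-RLC ringdown is fit to a damped sinusoid, and parasitic inductance and resistance are recovered from the damped frequency and decay rate, assuming the circuit capacitance C is known.

```python
# Hypothetical sketch of ringdown-based parasitic extraction for a
# series-RLC model; illustrative only, not the RPE tool's actual code.
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, alpha, f_d, phi):
    """Damped sinusoid: A * exp(-alpha*t) * sin(2*pi*f_d*t + phi)."""
    return A * np.exp(-alpha * t) * np.sin(2 * np.pi * f_d * t + phi)

def extract_parasitics(t, i_meas, C):
    """Fit a measured current ringdown and back out series L and R."""
    spec = np.abs(np.fft.rfft(i_meas))
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    f_guess = freqs[1 + np.argmax(spec[1:])]   # dominant frequency, skipping DC
    p0 = [np.max(np.abs(i_meas)), 1.0 / (t[-1] - t[0]), f_guess, 0.0]
    (A, alpha, f_d, phi), _ = curve_fit(ringdown, t, i_meas, p0=p0)
    w_d = 2 * np.pi * f_d
    w0_sq = w_d**2 + alpha**2   # undamped resonance: w0^2 = 1/(L*C)
    L = 1.0 / (w0_sq * C)       # parasitic inductance
    R = 2.0 * alpha * L         # parasitic resistance (alpha = R/(2L))
    return L, R
```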
We report on the realization of a GaN high-voltage vertical p-n diode operating at >3.9 kV breakdown with a specific on-resistance <0.9 mΩ·cm². Diodes achieved a forward current of 1 A in on-wafer DC measurements, corresponding to a current density >1.4 kA/cm². An effective critical electric field of 3.9 MV/cm was estimated for the devices from analysis of the forward and reverse current-voltage characteristics. Furthermore, this suggests that the fundamental limit to the GaN critical electric field is significantly greater than previously believed.
As grid energy storage systems become more complex, it grows more difficult to design them for safe operation. This paper first reviews the properties of lithium-ion batteries that can produce hazards in grid scale systems. Then the conventional safety engineering technique Probabilistic Risk Assessment (PRA) is reviewed to identify its limitations in complex systems. To address this gap, new research is presented on the application of Systems-Theoretic Process Analysis (STPA) to a lithium-ion battery based grid energy storage system. STPA is anticipated to fill the gaps recognized in PRA for designing complex systems and hence be more effective or less costly to use during safety engineering. It was observed that STPA is able to capture causal scenarios for accidents not identified using PRA. Additionally, STPA enabled a more rational assessment of uncertainty (all that is not known) thereby promoting a healthy skepticism of design assumptions. We conclude that STPA may indeed be more cost effective than PRA for safety engineering in lithium-ion battery systems. However, further research is needed to determine if this approach actually reduces safety engineering costs in development, or improves industry safety standards.
Here, electron sheaths are commonly found near Langmuir probes collecting the electron saturation current. The common assumption is that the probe collects the random flux of electrons incident on the sheath, which tacitly implies that there is no electron presheath and that the collected flux is due to a velocity-space truncation of the electron velocity distribution function (EVDF). This work provides a dedicated theory of electron sheaths, which suggests that they are not so simple. Motivated by EVDFs observed in particle-in-cell (PIC) simulations, a 1D model for the electron sheath and presheath is developed. In the model, under low-temperature plasma conditions (Te >> Ti), an electron pressure gradient accelerates electrons in the presheath to a flow velocity that exceeds the electron thermal speed at the sheath edge. This pressure gradient generates large flow velocities compared with what would be generated by ballistic motion in response to the electric field. It is found that in many situations under common plasma conditions, the electron presheath extends much further into the plasma than an analogous ion presheath. PIC simulations reveal that the ion density in the electron presheath is determined by a flow around the electron sheath and that this flow is due to 2D aspects of the sheath geometry. Simulations also indicate the presence of ion acoustic instabilities excited by the differential flow between electrons and ions in the presheath, which result in sheath edge fluctuations. The 1D model and time-averaged PIC simulations are compared, and it is shown that the model provides a good description of the electron sheath and presheath.
A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
The Auger lifetime is a critical intrinsic parameter for infrared photodetectors as it determines the longest potential minority carrier lifetime and consequently the fundamental limitations to their performance. Here, Auger recombination is characterized in a long-wave infrared InAs/InAsSb type-II superlattice. Auger coefficients as small as 7.1 × 10⁻²⁶ cm⁶/s are experimentally measured using carrier lifetime data at temperatures in the range of 20-80 K. The data are compared to Auger-1 coefficients predicted using a 14-band k·p electronic structure model and to coefficients calculated for HgCdTe of the same bandgap. The experimental superlattice Auger coefficients are found to be an order of magnitude smaller than those of HgCdTe.
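For context, Auger coefficients of this kind are typically extracted by fitting measured carrier lifetimes to the standard ABC recombination model (shown here as general background; the paper's specific fitting procedure may differ):

```latex
\frac{1}{\tau(\Delta n)} = A + B\,\Delta n + C\,\Delta n^{2}
```

where A, B, and C are the Shockley-Read-Hall, radiative, and Auger coefficients, respectively, and Δn is the excess carrier density.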
We examined amorphous titania thin films for use as the active material in a polarimetry-based HF sensor. The amorphous titania films were found to be sensitive to vapor-phase HF, and the reaction product was identified as a hydronium oxofluorotitanate phase, which had previously only been synthesized in aqueous solution. The extent of reaction varied with vapor-phase HF concentration, relative humidity, and exposure time. HF concentrations as low as 1 ppm could be detected for exposure times of 120 h.
Experimental-analytical substructuring is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. The method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig-Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than free modes that were used in the original approach. These modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple finite element model based example with a redundant interface.
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
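For orientation, the dependencies named above can be seen in a widely quoted first-order estimate from the subset-based DIC literature (shown here as context; the study's generalized solution additionally accounts for the subpixel part of the displacement and the interpolation scheme):

```latex
\sigma_u \;\approx\; \sqrt{\frac{2\,\sigma^2}{\sum_{\mathrm{subset}} \left(\partial f / \partial x\right)^{2}}}
```

where σ is the standard deviation of the image noise, f is the subset intensity pattern, and the sum of squared intensity gradients is taken over the subset.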
Walker, Johnnie A.; Takasuka, Taichi E.; Deng, Kai; Bianchetti, Christopher M.; Udell, Hannah S.; Prom, Ben M.; Kim, Hyunkee; Adams, Paul D.; Northen, Trent R.; Fox
Background: Carbohydrate binding modules (CBMs) bind polysaccharides and help target glycoside hydrolase catalytic domains to their appropriate carbohydrate substrates. To better understand how CBMs can improve cellulolytic enzyme reactivity, representatives from each of the 18 families of CBM found in Ruminoclostridium thermocellum were fused to the multifunctional GH5 catalytic domain of CelE (Cthe-0797, CelEcc), which can hydrolyze numerous types of polysaccharides including cellulose, mannan, and xylan. Since CelE is a cellulosomal enzyme, none of these fusions to a CBM previously existed. Results: CelEcc-CBM fusions were assayed for their ability to hydrolyze cellulose, lichenan, xylan, and mannan. Several CelEcc-CBM fusions showed enhanced hydrolytic activity with different substrates relative to the fusion to CBM3a from the cellulosome scaffoldin, which has high affinity for binding to crystalline cellulose. Additional binding studies and quantitative catalysis studies using nanostructure-initiator mass spectrometry (NIMS) were carried out with the CBM3a, CBM6, CBM30, and CBM44 fusion enzymes. In general, and consistent with the observations of others, enhanced enzyme reactivity was correlated with moderate binding affinity of the CBM. Numerical analysis of reaction time courses showed that CelEcc-CBM44, a combination of a multifunctional enzyme domain with a CBM having broad binding specificity, gave the fastest rates for hydrolysis of both the hexose and pentose fractions of ionic-liquid-pretreated switchgrass. Conclusion: We have shown that fusions of different CBMs to a single multifunctional GH5 catalytic domain can increase its rate of reaction with different pure polysaccharides and with pretreated biomass. This fusion approach, incorporating domains with broad specificity for binding and catalysis, provides a new avenue to improve the reactivity of simple combinations of enzymes within the complexity of plant biomass.
Safe and efficient operation of lithium-ion batteries requires precisely directed flow of lithium ions and electrons to control the directional volume changes in anode and cathode materials. Understanding and controlling lithium ion transport in battery electrodes is therefore crucial to the design of high-performance, durable batteries. Recent work revealed that the chemical potential barriers encountered at the surfaces of heteromaterials play an important role in directing lithium ion transport at the nanoscale. Here, we utilize in situ transmission electron microscopy to demonstrate that we can switch lithiation pathways from radial to axial to grain-by-grain lithiation through the systematic creation of heteromaterial combinations in the Si-Ge nanowire system. Our systematic studies show that engineered materials at the nanoscale can overcome intrinsic orientation-dependent lithiation and open new pathways to aid the development of compact, safe, and efficient batteries.
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
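The surrogate-based workflow described above can be illustrated with a minimal sketch (a toy stand-in for the expensive simulation; a Gaussian process is used here for concreteness, although the analysis also employs polynomial chaos expansions):

```python
# Minimal sketch of surrogate-based uncertainty propagation: train a
# Gaussian-process emulator on a handful of expensive simulation runs,
# then push many uncertain-input samples through the cheap surrogate.
# The "simulation" below is a toy placeholder, not the production model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    return np.sin(3.0 * x) + 0.5 * x   # placeholder for a long-running code

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 2.0, size=(12, 1))    # a dozen simulation runs
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
surrogate.fit(X_train, y_train)

X_mc = rng.normal(1.0, 0.2, size=(10_000, 1))    # uncertain input samples
y_mc = surrogate.predict(X_mc)                   # cheap surrogate evaluations
print(f"output mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
```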
Weak scaling studies were performed for the explicit solid dynamics component of the ALEGRA code on two Cray supercomputer platforms during the period 2012-2015, involving a production-oriented hypervelocity impact problem. Results from these studies are presented, with analysis of the performance, scaling, and throughput of the code on these machines. The analysis demonstrates logarithmic scaling of the average CPU time per cycle up to core counts on the order of 10,000. At higher core counts, variable performance is observed, with significant upward excursions in compute time from the logarithmic trend. However, for core counts less than 10,000, the results show a 3× improvement in simulation throughput and a 2× improvement in logarithmic scaling. This improvement is linked to improved memory performance on the Cray platforms and to significant improvements made over this period to the data layout used by ALEGRA.
The Center for Computing Research (CCR) at Sandia National Laboratories organizes a student program each summer, in coordination with the Computer Science Research Institute (CSRI) and the Cyber Engineering Research Institute (CERI).
Nanoscale structuring of optical materials leads to modification of their properties and can be used for improving efficiencies of photonic devices and for enabling new functionalities. In ultrafast optoelectronic switches for generation and detection of terahertz (THz) radiation, incorporation of nanostructures allows us to overcome inherent limitations of photoconductive materials. We propose and demonstrate a nanostructured photoconductive THz detector for sampling highly localized THz fields, down to the level of λ/150. The nanostructure that consists of an array of optical nanoantennas and a distributed Bragg reflector forms a hybrid cavity, which traps optical gate pulses within the photoconductive layer. The effect of photon trapping is observed as enhanced absorption at a designed wavelength. This optically thin photoconductive THz detector allows us to detect highly confined evanescent THz fields coupled through a deeply subwavelength aperture as small as 2 μm (λ/150 at 1 THz). By monolithically integrating the THz detector with apertures ranging from 2 to 5 μm we realize higher spatial resolution and higher sensitivity in aperture-type THz near-field microscopy and THz time-domain spectroscopy.
Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors generate enormous volumes of data, storing and processing continuous medical data is an emerging big-data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference from the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data and improve time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
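The brute-force discord search described above can be sketched in a few lines (illustrative Python, not the parallel-DBMS implementation; the heuristic version additionally orders candidate subsequences with the array/trie structure so that most of these pairwise comparisons can be pruned):

```python
# Brute-force time-series discord search: for each subsequence, find the
# distance to its nearest non-self (non-overlapping) match; the discord
# is the subsequence whose nearest match is farthest away. O(n^2) cost.
import numpy as np

def find_discord(series, m):
    """Return (start index, distance) of the top discord of length m."""
    n = len(series) - m + 1
    subs = np.array([series[i:i + m] for i in range(n)])
    best_idx, best_dist = -1, -np.inf
    for i in range(n):
        nearest = np.inf
        for j in range(n):
            if abs(i - j) >= m:   # exclude trivial self-matches
                nearest = min(nearest, np.linalg.norm(subs[i] - subs[j]))
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist
```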
The three-dimensional Monte Carlo code ERO has been used to simulate dedicated DIII-D experiments in which Mo and W samples with different sizes were exposed to controlled and well-diagnosed divertor plasma conditions to measure the gross and net erosion rates. Experimentally, the net erosion rate is significantly reduced due to the high local redeposition probability of eroded high-Z materials, which according to the modelling is mainly controlled by the electric field and plasma density within the Chodura sheath. Similar redeposition ratios were obtained from ERO modelling with three different sheath models for small angles between the magnetic field and the material surface, mainly because of their similar mean ionization lengths. The modelled redeposition ratios are close to the measured value. Decreasing the potential drop across the sheath can suppress both gross and net erosion because sputtering yield is decreased due to lower incident energy while the redeposition ratio is not reduced owing to the higher electron density in the Chodura sheath. Taking into account material mixing in the ERO surface model, the net erosion rate of high-Z materials is shown to be strongly dependent on the carbon impurity concentration in the background plasma; higher carbon concentration can suppress net erosion. As a result, the principal experimental results such as net erosion rate and profile and redeposition ratio are well reproduced by the ERO simulations.
Three balance of systems (BOS) connector designs common to industry were investigated as a means of assessing reliability from the perspective of arc fault risk. These connectors were aged in field and laboratory environments and performance data captured for future development of a reliability model. Comparison of connector resistance measured during damp heat, mixed flowing gas and field exposure in a light industrial environment indicated disparities in performance across the three designs. Performance was, in part, linked to materials of construction. A procedure was developed to evaluate new and aged connectors for arc fault risk and tested for one of the designs. Those connectors exposed to mixed flowing gas corrosion exhibited considerable Joule heating that may enhance arcing behavior, suggesting temperature monitoring as a potential method for arc fault prognostics. These findings, together with further characterization of connector aging, can provide operators of photovoltaic installations the information necessary to develop a data-driven approach to BOS connector maintenance as well as opportunities for arc fault prognostics.
Moving Target Defense (MTD) is the concept of controlling change across multiple information system dimensions with the objective of increasing uncertainty and complexity for attackers. Increased uncertainty and complexity raise the costs of malicious probing and attack efforts and thus prevent or limit network intrusion. However, as MTD increases complexity for the attacker, it also increases complexity in the desired operation of the system. This introduced complexity makes network troubleshooting more difficult and can cause network degradation or longer network outages. In this research paper, the authors describe the defensive work factor concept, which considers in detail the specific impact that an MTD approach has on computing and network resources. Measurements of impacts on system performance are presented, along with an identification of how network services (e.g., DHCP, DNS, in-place security mechanisms) are affected by the MTD approach. Also included is a case study of an MTD deployment and its defensive work factor costs. An actual experiment is constructed, and metrics are described for the use case.
This work quantifies the polarization persistence and memory of circularly polarized light in forward-scattering and isotropic (Rayleigh regime) environments and, for the first time, details the evolution of both circularly and linearly polarized states through scattering environments. Circularly polarized light persists through a larger number of scattering events than linearly polarized light for all forward-scattering environments, but not for scattering in the Rayleigh regime. Circular polarization's increased persistence occurs for both forward- and backscattered light. The simulated environments model polystyrene microspheres in water with particle diameters of 0.1 μm, 2.0 μm, and 3.0 μm. The evolution of the polarization states as they scatter through the various environments is illustrated on the Poincaré sphere after one, two, and ten scattering events.
This study is focused on characterizing the gases desorbed on heating of the AgI-Mordenite (AgI-MOR) produced at ORNL for iodine (I2) gas capture from aqueous nuclear fuel reprocessing. Of particular interest is the incorporation of the AgI-MOR into a waste form, which might be the Sandia-developed, low-temperature-sintering, Bi-Si-oxide-based glass composite material (GCM). The GCM has been developed as a waste form for the incorporation of any oxide-based getter material. In case iodine is released during the sintering process of the GCM, additional Ag flake is added as further insurance of total iodine capture and retention. This has been the case for the incorporated ORNL-developed AgI-MOR. Thermal analysis studies were carried out to determine the off-gassing processes of ORNL AgI-MOR. Independent of sample size, ~7 wt% of total water is desorbed by 225°C. This includes both bulk surface and occluded water, monitored as H2O and OH. Of that total, ~5.5 wt% is surface water, which is removed by 125°C, and ~1.5 wt% is occluded (in-zeolite-pore) water. Less than ~1 wt% total water continues to desorb above that point, but it is completely removed by 500°C. Above 300°C, the only remaining desorbing species detected are iodine-containing compounds, including I and I2.
The addition of a compressible degree of freedom (CDOF) has been shown to significantly increase power absorption compared to a traditional rigid wave energy converter (WEC) of the same shape and mass for a variety of architectures. The present study demonstrates that a compressible point absorber, with a passive power take-off (PTO) and optimized damping, can achieve the same performance levels as, or better than, an optimally controlled rigid point absorber using reactive power from the PTO. Eliminating the need for a reactive PTO would substantially reduce costs by reducing PTO design complexity. In addition, it would negate the documented problems of reactive PTO efficiencies on absorbed power. Improvements to performance were quantified in the present study by comparing a compressible point absorber to a conventional rigid one with the same shape and mass. Wave energy is converted to mechanical energy in both cases using a linear damper PTO, with the PTO coefficient optimized for each resonance frequency and compressible volume. The large compressible volumes required to tune the compressible point absorber to the desired frequency are a practical limitation that needs to be addressed with further research, especially for low frequencies. In fact, all compressible volumes exceed the submerged volume of the point absorber by significant amounts, requiring auxiliary compressible-volume storage units connected to the air chamber in the submerged portion of the point absorber. While realistic, these auxiliary units would increase the CapEx and OpEx costs, potentially reducing the aforementioned benefits gained by CDOF. However, alternative approaches can be developed to implement CDOF without the large compressible volume requirements, including the development of flexible surface panels tuned with mechanical springs.
Sandia National Laboratories has an existing capability for hybrid control systems testing called SCEPTRE. This article proposes an architecture to add dynamic simulation capability for the underlying physical process (e.g. the power grid). Dynamic simulation for SCEPTRE will enable very accurate simulation, and allow the full integration of analog control systems hardware.
Utilities issuing new PV interconnection permits must be aware of any risks caused by PV on their distribution networks. One potential risk is degradation of the effectiveness of the network's protection devices (PDs), which can limit the amount of PV allowed in the network, i.e., the network's PV hosting capacity. This research studies how the size and location of a PV installation can prevent network PDs from operating as intended. Simulations are carried out in OpenDSS using data from multiple actual distribution feeders. The PD time-current characteristic curves (TCCs) are modeled to find the timing of PD tripping and accurately identify when PV will cause unnecessary customer outages. The findings show that more aggressive protection settings limit the amount of PV that can be placed on a network without causing additional customer outages or damaging network equipment.
Most utilities use a standard small generator interconnection procedure (SGIP) that includes a screen for placing on a fast track those potential PV interconnection requests that do not require more detailed study. One common screening threshold is the 15%-of-peak-load screen, which fast-tracks PV below a certain size. This paper performs a technical evaluation of the screen against a large number of simulation results for PV on 40 different feeders. Three error metrics are developed to quantify the accuracy of the screen in identifying interconnections that would cause problems and in avoiding incorrectly sending a large number of allowable systems for more detailed study.
PV modules and their inverting electronics are becoming increasingly integrated. When a PV module and microinverter are completely integrated, a new class of PV system, the AC module, is created. Unfortunately, existing characterization and modeling techniques require separate characterization of the PV and inverting components; thus, existing methods are incapable of modeling AC modules. We have developed an empirical performance model capable of characterizing and modeling an AC module. The model is capable of predicting the active power from an AC module in a typical application with an RMSE of approximately 1% of the reference power of the AC module. This paper describes the model form and presents the validation results in terms of model residuals.
With increasingly high penetrations of PV on distribution systems, there can be many benefits and impacts to the standard operation of the grid. This paper focuses on voltages below the allowable range caused by the installation of PV on distribution systems whose voltage regulation controls have line-drop compensation enabled. This paper demonstrates how this type of under-voltage issue has the potential to limit the PV hosting capacity of a feeder and to have consequences for other feeders served from a common regulated bus. Some examples of mitigation strategies are presented, along with the shortcomings of each. An example of advanced inverter functionality to mitigate overvoltage is shown, while also illustrating the ineffectiveness of inverter voltage control as a mitigation of under-voltage.
Reflection losses from a PV module become increasingly pronounced at solar incident angles >60°. However, accurate measurement in this region can be problematic due to tracker articulation limits and irradiance reference device calibration. We present the results of a measurement method enabling modules to be tested over the full range of 0-90° by articulating the tracker in elevation only. This facilitates the use of a shaded pyranometer to make a direct measurement of the diffuse component, reducing measurement uncertainty. We further present the results of a real-time intercomparison performed by two independent test facilities ∼10 km apart.
The texture or patterning of soil on PV surfaces may influence light capture at various angles of incidence. Accumulated soil can be considered a micro-shading element, which changes with respect to AOI. While scattering losses at this scale would be significant only to the most sensitive devices, micro-shading could lead to hot spot formation and other reliability issues. Indoor soil deposition was used to prepare test coupons for simultaneous AOI and soiling loss experiments. A mixed solvent deposition technique was used to consistently deposit patterned test soils onto glass slides. Transmission decreased as soil loading and AOI increased. Highly dispersed particles are less prone to secondary scattering, improving overall light collection.
To address the lack of knowledge of local solar variability, we have developed, deployed, and demonstrated the value of data collected from a low-cost solar variability sensor. While most currently used solar irradiance sensors are expensive pyranometers with high accuracy (relevant for annual energy estimates), low-cost sensors display similar precision (relevant for solar variability) as high-cost pyranometers, even if they are not as accurate. In this work, we list variability sensor requirements, describe testing of various low-cost sensor components, present a validation of an alpha prototype, and show how the variability sensor collected data can be used for grid integration studies. The variability sensor will enable a greater understanding of local solar variability, which will reduce developer and utility uncertainty about the impact of solar photovoltaic installations and thus will encourage greater penetrations of solar energy.
The current state of PV module monitoring needs improvements to better detect, diagnose, and locate abnormal module conditions. Detection of common abnormalities is difficult with current methods. The value of optimal system operation is a quantifiable benefit, and cost-effective monitoring systems will continue to evolve for this reason. Sandia National Laboratories performed a practicality and monitoring investigation on a testbed of 15 in-situ module-level I-V curve tracers. Shading and series resistance tests were performed, and examples of using I-V curve interpretation and the Loss Factors Model parameters to detect each are presented.
PV project investments need comprehensive plant monitoring data in order to validate performance and fulfill expectations. Algorithms from PV-LIB and the Loss Factors Model are being combined to quantify their prediction improvements at Gantner Instruments' outdoor test facility in Tempe, AZ on multiple Tier 1 technologies. The validation of measured vs. predicted long-term performance will be demonstrated to quantify the potential of IV-scan monitoring. This will yield recommendations on what parameters and methods should be used by investors, test labs, and module producers.
Holmgren, William F.; Andrews, Robert W.; Lorenzo, Antonio T.; Stein, Joshua
We describe improvements to the open source PVLIB-Python modeling package. PVLIB-Python provides most of the functionality of its parent PVLIB-MATLAB package and now follows standard Python design patterns and conventions, has improved unit test coverage, and is installable. PVLIB-Python is hosted on GitHub.com and co-developed by GitHub contributors. We also describe a roadmap for the future of the PVLIB-Python package.
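As a brief usage illustration (the site coordinates and date below are illustrative, not drawn from the paper), PVLIB-Python can compute clear-sky irradiance for a day at a given location:

```python
# Minimal PVLIB-Python usage sketch: clear-sky irradiance at an
# illustrative site. Requires pvlib and pandas to be installed.
import pandas as pd
from pvlib.location import Location

site = Location(latitude=35.05, longitude=-106.54, tz='US/Mountain',
                altitude=1600, name='Albuquerque')
times = pd.date_range('2016-06-21 05:00', '2016-06-21 20:00',
                      freq='15min', tz=site.tz)
clearsky = site.get_clearsky(times)   # DataFrame with ghi, dni, dhi columns
print(clearsky[['ghi', 'dni', 'dhi']].max())
```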
The remarkable impact encapsulation matrix chemistry can have on the bioactivity and viability of integrated living cells is reported. Two silica chemistries (aqueous silicate and alkoxysilane) and a functional component additive (glycerol) are employed to generate three distinct silica matrices. These matrices are used to encapsulate living E. coli cells engineered with a synthetic riboswitch for cell-based biosensing. Following encapsulation, membrane integrity, reproductive capability, and riboswitch-based protein expression levels and rates are measured over a 5-week period. Striking differences in E. coli bioactivity, viability, and biosensing performance are observed for cells encapsulated within the different matrices. E. coli cells encapsulated for 35 days in aqueous silicate-based (AqS) matrices showed relatively low membrane integrity but high reproductive capability in comparison to cells encapsulated in glycerol-containing sodium silicate-based (AqS+g) and alkoxysilane-based (PGS) gels. Further, cells in sodium silicate-based matrices showed increasing fluorescence output over time, resulting in a 1.8-fold higher fluorescence level and a faster expression rate than cells free in solution. This unusual and unique combination of biological properties demonstrates that careful design of the encapsulation matrix chemistry can improve the functionality of the biocomposite material and result in new and unexpected physiological states.
Here, thin and continuous films of porous metal-organic frameworks can now be conformally deposited on various substrates using a vapor-phase synthesis approach that departs from conventional solution-based routes.
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
In situ X-Ray Absorption Near Edge Spectroscopy (XANES) and Extended X-Ray Absorption Fine Structure (EXAFS) techniques are applied to a metal-center ionic liquid undergoing oxidation and reduction in a three-electrode spectroscopic cell. The extent of reduction under negative bias on the working electrode and the extent of oxidation are determined after pulse voltammetry to quiescence. While the ionic liquid undergoes full oxidation, it undergoes only partial reduction, likely due to transport issues on the timescale of the experiment. Nearest-neighbor Fe-O distances in the fully oxidized state match well to expected values for similarly coordinated solids, but reduction does not result in an extension of the Fe-O bond length, as would be expected from comparisons to the solid phase. Instead, little change in bond length is observed. We suggest that this may be due to a more complex interaction between the monodentate ligands of the metal-center anion and the surrounding charge cloud, rather than straightforward electrostatics between the metal center and the nearest-neighbor grouping.
The purpose of this project is to experimentally validate the thermal fatigue life of solder interconnects for a variety of surface-mount electronic packages. Over the years, there has been a significant amount of research and analysis on the fracture of solder joints on printed circuit boards. Solder is important to the mechanical and electronic functionality of the component, and it is important that the solder remain crack- and fracture-free throughout the life of the product. The specific solder used in this experiment is a 63Sn37Pb eutectic alloy. Each package was surrounded by conformal coating or underfill material.
Sandia National Laboratories has funded the research and development of a new capability to interactively explore the effects of cyber exploits on the performance of physical protection systems. This informal, interim report of progress summarizes the project’s basis and year one (of two) accomplishments. It includes descriptions of confirmed cyber exploits against a representative testbed protection system and details the development of an emulytics capability to support live, virtual, and constructive experiments. This work will support stakeholders to better engineer, operate, and maintain reliable protection systems.
Reactive multilayers consisting of alternating layers of Al and Pt were irradiated by single laser pulses ranging from 100 μs to 100 ms in duration, resulting in the initiation of rapid, self-propagating reactions. The threshold intensities for ignition vary with the focused laser beam diameter, bilayer thickness, and pulse length and are affected by solid state reactions and conduction of heat away from the irradiated regions. High-speed photography was used to observe ignition dynamics during irradiation and elucidate the effects of heat transfer into a multilayer foil. For an increasing laser pulse length, the ignition process transitioned from a more uniform to a less uniform temperature profile within the laser-heated zone. A more uniform temperature profile is attributed to rapid heating rates and heat localization for shorter laser pulses, and a less uniform temperature profile is due to slower heating of reactants and conduction during irradiation by longer laser pulses. Finite element simulations of laser heating using measured threshold intensities indicate that micron-scale ignition of Al/Pt occurs at low temperatures, below the melting point of both reactants.
In the realm of cyber security, recent events have demonstrated the need for a significant change in the philosophies guiding the identification and mitigation of attacks. The unprecedented increase in the quantity and sophistication of cyber attacks in the past year alone has proven the inadequacy of current defensive philosophies that do not assume continuous compromise. This has given rise to new perspectives on cyber defense where, instead of total prevention, threat intelligence is the crucial tool allowing the mitigation of cyber threats. This paper formalizes a new framework for obtaining threat intelligence from an active cyber attack and demonstrates the realization of this framework in the software tool LinkShop. Specifically, using the behavioral analysis technique known as linkography, our framework allows cyber defenders to, in an automated fashion, quantitatively capture both general and nuanced patterns in attackers' behavior - pushing capabilities for generating threat intelligence far beyond what is currently possible with rudimentary indicators of compromise and into the realm of capability needed to combat future cyber attackers. Furthermore, this paper shows in detail how such knowledge can be achieved by using LinkShop on actual cyber event data and lays a foundation for further scientific investigation into cyber attacker behavior.
The further development of all-solid-state batteries is still limited by the understanding and engineering of the interfaces formed upon cycling. Here, we correlate the morphological, chemical, and electrical changes of the surface of thin-film devices with Al negative electrodes. The stable Al-Li-O alloy formed at the stress-free surface of the electrode causes rapid capacity fade, from 48.0 to 41.5 μAh/cm² in two cycles. Surprisingly, the addition of a Cu capping layer is insufficient to prevent device degradation. Nevertheless, Si electrodes exhibit extremely stable cycling, maintaining >92% of their capacity after 100 cycles with an average Coulombic efficiency of 98%.
This code implements the GloVe algorithm for learning word vectors from a text corpus, using a modern C++ approach. The algorithm is described in the open literature in the referenced paper by Jeffrey Pennington, Richard Socher, and Christopher D. Manning.
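For reference, the weighted least-squares objective the GloVe algorithm minimizes, as given in the referenced Pennington, Socher, and Manning paper (X_ij is the word-word co-occurrence count, w_i and w̃_j are word and context vectors, b_i and b̃_j their biases, and V the vocabulary size):

```latex
J = \sum_{i,j=1}^{V} f\!\left(X_{ij}\right)\left(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\right)^{2},
\qquad
f(x) = \begin{cases}\left(x/x_{\max}\right)^{\alpha}, & x < x_{\max}\\ 1, & \text{otherwise}\end{cases}
```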
We introduce a near-field scanning probe terahertz (THz) microscopy technique for probing surface plasmon waves on graphene. Based on the THz time-domain spectroscopy method, this near-field imaging approach is well suited for studying the excitation and evolution of THz plasmon waves on graphene as well as for mapping graphene properties at THz frequencies on the sub-wavelength scale.
This paper, the second of two parts, reports the measurement and characterization of a fully integrated oven controlled microelectromechanical oscillator (OCMO). The OCMO takes advantage of high thermal isolation and monolithic integration of both aluminum nitride (AlN) micromechanical resonators and electronic circuitry to thermally stabilize, or ovenize, all the components that comprise an oscillator. Operation at microscale sizes allows implementation of high-thermal-resistance platform supports that enable thermal stabilization at very low power levels when compared with state-of-the-art oven controlled crystal oscillators. A prototype OCMO has been demonstrated with a measured temperature stability of -1.2 ppb/°C over the commercial temperature range while using tens of milliwatts of supply power and occupying a volume of 2.3 mm³ (not including the printed-circuit-board-based thermal control loop). In addition, due to its small thermal time constant, the thermal compensation loop can maintain stability during fast thermal transients (>10 °C/min). This technology represents a new paradigm in terms of power, size, and warm-up time for high-thermal-stability oscillators. [2015-0036].
This paper, the first of two parts, reports the design and fabrication of a fully integrated oven controlled microelectromechanical oscillator (OCMO). This paper begins by describing the limits on oscillator frequency stability imposed by the thermal drift and electronic properties (Q, resistance) of both the resonant tank circuit and feedback electronics required to form an electronic oscillator. An OCMO is presented that takes advantage of high thermal isolation and monolithic integration of both micromechanical resonators and electronic circuitry to thermally stabilize or ovenize all the components that comprise an oscillator. This was achieved by developing a processing technique where both silicon-on-insulator complementary metal-oxide-semiconductor (CMOS) circuitry and piezoelectric aluminum nitride, AlN, micromechanical resonators are placed on a suspended platform within a standard CMOS integrated circuit. Operation at microscale sizes achieves high thermal resistances (∼10 °C/mW), and hence thermal stabilization of the oscillators at very low-power levels when compared with the state-of-the-art ovenized crystal oscillators, OCXO. A constant resistance feedback circuit is presented that incorporates on platform resistive heaters and temperature sensors to both measure and stabilize the platform temperature. The limits on temperature stability of the OCMO platform and oscillator frequency imposed by the gain of the constant resistance feedback loop, placement of the heater and temperature sensing resistors, as well as platform radiative and convective heat losses are investigated. [2015-0035].
The diffusive-thermal (D-T) instability of opposed nonpremixed tubular flames near extinction is investigated using two-dimensional (2-D) direct numerical simulations together with the linear stability analysis. Two different initial conditions (IC), i.e. the perturbed IC and the C-shaped IC are adopted to elucidate the effects of small and large amplitude disturbances on the formation of flame cells, similar to conditions found in linear stability analysis and experiments, respectively. The characteristics of the D-T instability of tubular flames are identified by a critical Damköhler number, DaC, at which the D-T instability first occurs and the corresponding number of flame cells for three different tubular flames with different flame radii. It is found that DaC predicted through linear stability analysis shows good agreement with that obtained from the 2-D simulations performed with two different ICs. The flame cell number, Ncell, from the 2-D simulations with the perturbed IC is also found to be equal to an integer close to the maximum wavenumber, kmax, obtained from the linear stability analysis. However, Ncell from the 2-D simulations with the C-shaped IC is smaller than kmax and Ncell found from the simulations with the perturbed IC. This is primarily because the strong reaction at the edges of the horseshoe-shaped cellular flame developed from the C-shaped IC is more likely to produce larger flame cells and reduce Ncell. It is also found that for cases with the C-shaped IC, once the cellular instability occurs, the number of flame cells remains constant until global extinction occurs by incomplete reaction manifested by small Da. It is also verified through the displacement speed, Sd, analysis that the two edges of the horseshoe-shaped cellular flame are stationary and therefore do not merge due to the diffusion-reaction balance at the edges. Moreover, large negative Sd is observed at the local extinction points while small positive or negative Sd features in the movement of flame cells as they adjust their location and size towards steady state.
Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and the tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields, including measurement of 3D particle positions, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolution. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration, and due to its use of diffuse, white-light illumination, it is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, and to develop data processing methodologies optimized for the plenoptic measurement.
We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources resulting from the transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. The DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at the nanometer length scales.
Compositional homogeneity and crystalline orientation are necessary attributes to achieve high thermoelectric performance in Bi1-xSbx thin films. Following deposition in vacuum and upon air exposure, we find that 50%-95% of the Sb in 100-nm-thick films segregates to form a nanocrystalline Sb2O3 surface layer, leaving the film bulk as Bi metal. However, we demonstrate that a thin SiN capping layer deposited prior to air exposure prevents Sb segregation, preserving a uniform film composition. Furthermore, the capping layer enables annealing in forming gas to improve crystalline orientation along the preferred trigonal axis, beneficially reducing electrical resistivity.
We present experimental evidence of single electron-induced upsets in commercial 28 nm and 45 nm CMOS SRAMs from a monoenergetic electron beam. Upsets were observed in both technology nodes when the SRAM was operated in a low power state. The experimental cross section depends strongly on both bias and technology node feature size, consistent with previous work in which SRAMs were irradiated with low energy muons and protons. Accompanying simulations demonstrate that delta-rays produced by the primary electrons are responsible for the observed upsets. Additional simulations predict the on-orbit event rates for various Earth and Jovian environments for a set of sensitive volumes representative of current technology nodes. The electron contribution to the total upset rate for Earth environments is significant for critical charges as high as 0.2 fC. This value is comparable to that of sub-22 nm bulk SRAMs. Similarly, for the Jovian environment, the electron-induced upset rate is larger than the proton-induced upset rate for critical charges as high as 0.3 fC.
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; Draper, Jeffrey
This study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. This paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. These techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
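For context, the classical effective-LET construction whose breakdown the paper highlights assumes a thin, planar sensitive volume, so that an ion striking at angle θ from normal traverses a path length scaled by 1/cos θ:

```latex
\mathrm{LET}_{\mathrm{eff}} = \frac{\mathrm{LET}}{\cos\theta}
```

In modern-scale devices, the sensitive volume is no longer thin relative to its lateral extent and an angled strike can deposit charge across multiple adjacent cells, which is why this simple scaling can skew estimated error rates.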
The interaction of a Mach 1.67 shock wave with a dense particle curtain is quantified using flash radiography. These new data provide a view of particle transport inside a compressible, dense gas–solid flow of high optical opacity. The curtain, composed of 115-µm glass spheres, initially spans 87% of the test section width and has a streamwise thickness of about 2 mm. Radiograph intensities are converted to particle volume fraction distributions using the Beer–Lambert law. The mass in the particle curtain, as determined from the X-ray data, is in reasonable agreement with that given by a simpler method using a load cell and particle imaging. Following shock impingement, the curtain propagates downstream and the peak volume fraction decreases from about 23% to about 4% over a time of 340 µs. The propagation occurs asymmetrically, with the downstream side of the particle curtain experiencing a greater volume fraction gradient than the upstream side, attributable to the dependence of particle drag on volume fraction. Bulk particle transport is quantified from the time-dependent center of mass of the curtain. The bulk acceleration of the curtain is shown to be greater than that predicted for a single 115-µm particle in a Mach 1.67 shock-induced flow.
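Schematically, the intensity-to-volume-fraction conversion follows from applying the Beer–Lambert law along each X-ray path (a sketch of the relation with illustrative symbols: μ_m is the mass attenuation coefficient of the glass, ρ_s the glass density, and α_p the local particle volume fraction along the path x):

```latex
\frac{I}{I_0} = \exp\!\left(-\mu_m \rho_s \int \alpha_p(x)\,\mathrm{d}x\right)
\quad\Longrightarrow\quad
\int \alpha_p(x)\,\mathrm{d}x = -\frac{1}{\mu_m \rho_s}\,\ln\frac{I}{I_0}
```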
Electrical performance and defect characterization of vertical GaN P-i-N diodes before and after irradiation with 2.5 MeV protons and neutrons is investigated. Devices exhibit an increase in specific on-resistance following irradiation with protons and neutrons, indicating that displacement damage introduces defects into the p-GaN and n-drift regions of the device that impact on-state device performance. The breakdown voltage of these devices, initially above 1700 V, is observed to decrease only slightly for particle fluences < 10^13 cm^-2. The unipolar figure of merit for power devices indicates that while the on-resistance and breakdown voltage degrade with irradiation, vertical GaN P-i-Ns remain superior to the best available, unirradiated silicon devices and on par with unirradiated modern SiC-based power devices.
We present a neutron detector system based on time-encoded imaging, and demonstrate its applicability toward the spatial mapping of special nuclear material. We demonstrate that two-dimensional fast-neutron imaging with 2° resolution at 2 m stand-off is feasible with only two instrumented detectors.
Objective: Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Background: Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied. Method: Eighty-two inspectors in the U.S. Nuclear Security Enterprise inspected 140 parts for eight different defects. Results: Inspectors correctly rejected 85% of defective items and incorrectly rejected 35% of acceptable parts. Use of a phased inspection approach based on inspector confidence ratings was not an effective or efficient technique to improve the overall accuracy of the process. Results did verify that inspection is a workload-intensive task, dominated by mental demand and effort. Conclusion: Hits for Nuclear Security Enterprise inspection were not vastly superior to the industry average of 80%, and they were achieved at the expense of a high scrap rate not typically observed during visual inspection tasks. Application: This study provides the first empirical data to address the reliability of visual inspection for precision manufactured parts used in nuclear weapons. Results enhance current understanding of the process of visual inspection and can be applied to improve reliability for precision manufactured parts.
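For context, a standard signal-detection-theory summary of the reported hit and false-alarm rates is sketched below; the d' sensitivity metric is our illustrative addition, not an analysis performed in the study.

    from scipy.stats import norm

    hit_rate = 0.85          # correctly rejected defective parts
    false_alarm_rate = 0.35  # incorrectly rejected acceptable parts

    # d' = z(hit rate) - z(false-alarm rate); larger values indicate better
    # discrimination between defective and acceptable parts.
    d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
    print(f"d' = {d_prime:.2f}")  # ~1.42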
We present a platform on the OMEGA EP Laser Facility that creates and diagnoses the conditions present during the preheat stage of the MAGnetized Liner Inertial Fusion (MagLIF) concept. Experiments were conducted using 9 kJ of 3ω (355 nm) light to heat an underdense deuterium gas (electron density 2.5×10^20 cm^-3, i.e., 0.025 of the critical density) magnetized with a 10 T axial field. Results show that the deuterium plasma reached a peak electron temperature of 670 ± 140 eV, diagnosed using streaked spectroscopy of an argon dopant. The results demonstrate that plasmas relevant to the preheat stage of MagLIF can be produced at multiple laser facilities, thereby enabling more rapid progress in understanding magnetized preheat. Results are compared with magneto-radiation-hydrodynamics simulations, and plans for future experiments are described.
Low- and high-energy proton experimental data and error rate predictions are presented for many bulk Si and SOI circuits from the 20-90 nm technology nodes to quantify how much low-energy protons (LEPs) can contribute to the total on-orbit single-event upset (SEU) rate. Every effort was made to predict LEP error rates that are conservatively high; even secondary protons generated in the spacecraft shielding have been included in the analysis. Across all the environments and circuits investigated, and when operating within 10% of the nominal operating voltage, LEPs were found to increase the total SEU rate to up to 4.3 times as high as it would have been in the absence of LEPs. Therefore, the best approach to account for LEP effects may be to calculate the total error rate from high-energy protons and heavy ions, and then multiply it by a safety margin of 5. If that error rate can be tolerated, then our findings suggest that it is justified to waive LEP tests in certain situations. Trends were observed in the LEP angular responses of the circuits tested. Grazing angles were the worst case for the SOI circuits, whereas the worst-case angle was at or near normal incidence for the bulk circuits.
Khachatrian, Ani; Roche, Nicolas J.H.; Dodds, Nathaniel A.; Mcmorrow, Dale; Warner, Jeffrey H.; Buchner, Stephen P.; Reed, Robert A.
Charge-collection experiments and simulations designed to quantify the effects of reflections from metallization during through-wafer TPA testing are presented. The results reveal a strong dependence on metal line width and metal line position inside the SiO2 overlayer. The charge-collection enhancement is largest for the widest metal lines and the metal lines closest to the Si/SiO2 interface. The charge-collection enhancement is also dependent on incident laser pulse energy, an effect that is a consequence of higher-order optical nonlinearities induced by the ultrashort optical pulses. However, for the lines further away from the Si/SiO2 interface, variations in laser pulse energies affect the charge-collection enhancement to a lesser degree. Z-scan measurements reveal that the peak charge collection occurs when the axial position of the laser focal point is inside the Si substrate. There is a downward trend in peak collected-charge enhancement with increasing laser pulse energy for the metal lines further away from the Si/SiO2 interface. Metallization enhances the collected charge by the same amount regardless of the applied bias voltage. For thinner metal lines and laser pulse energies lower than 1 nJ, the collected-charge enhancement due to metallization is negligible.
Port security is an increasing concern given the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality – ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systems and control theory. The associated analysis process – System Theoretic Process Analysis (STPA) – identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. This article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.
Electron sheaths are commonly found near Langmuir probes collecting the electron saturation current. The common assumption is that the probe collects the random flux of electrons incident on the sheath, which tacitly implies that there is no electron presheath and that the flux collected is due to a velocity space truncation of the electron velocity distribution function (EVDF). This work provides a dedicated theory of electron sheaths, which suggests that they are not so simple. Motivated by EVDFs observed in particle-in-cell (PIC) simulations, a 1D model for the electron sheath and presheath is developed. In the model, under low temperature plasma conditions (Te ≫ Ti), an electron pressure gradient accelerates electrons in the presheath to a flow velocity that exceeds the electron thermal speed at the sheath edge. This pressure gradient generates large flow velocities compared to what would be generated by ballistic motion in response to the electric field. It is found that in many situations, under common plasma conditions, the electron presheath extends much further into the plasma than an analogous ion presheath. PIC simulations reveal that the ion density in the electron presheath is determined by a flow around the electron sheath and that this flow is due to 2D aspects of the sheath geometry. Simulations also indicate the presence of ion acoustic instabilities excited by the differential flow between electrons and ions in the presheath, which result in sheath edge fluctuations. The 1D model and time averaged PIC simulations are compared and it is shown that the model provides a good description of the electron sheath and presheath.
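The sheath-edge flow condition mentioned above can be written compactly. The form below is the electron analogue of the Bohm criterion as it appears in the electron-sheath literature; it is included as an assumed illustrative statement rather than a quotation of this paper's derivation:

    u_e \gtrsim v_{eB} = \sqrt{ k_B (T_e + T_i) / m_e } \approx \sqrt{ k_B T_e / m_e } \quad (T_e \gg T_i)

i.e., the presheath must accelerate the electron fluid to roughly the electron thermal speed by the sheath edge, consistent with the pressure-gradient-driven flow described above.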
Displacement damage reduces ion beam induced charge (IBIC) through Shockley-Read-Hall recombination. Closely spaced pulses of 200 keV ions focused into a 40 nm beam spot are used to create damage cascades within selected areas. Damaged areas are detected through contrast in IBIC signals generated with focused ion beams of 200 keV and 60 keV ions. IBIC signal reduction can be resolved over sub-micron regions of a silicon detector damaged by as few as 1000 heavy ions.
Bell-Plesset (BP) effects account for the influence of global convergence or divergence of the fluid flow on the evolution of the interfacial perturbations embedded in the flow. The development of the Rayleigh-Taylor instability in radiation-driven spherical capsules and magnetically-driven cylindrical liners necessarily includes a significant contribution from BP effects due to the time dependence of the radius, velocity, and acceleration of the unstable surfaces or interfaces. An analytical model is presented that, for an ideal incompressible fluid and small perturbation amplitudes, exactly evaluates the BP effects in finite-thickness shells through acceleration and deceleration phases. The time-dependent dispersion equations determining the "instantaneous growth rate" are derived. It is demonstrated that by integrating this approximate growth rate over time, one can accurately evaluate the number of perturbation e-foldings during the inward acceleration phase of the implosion. In the limit of small shell thickness, exact thin-shell perturbation equations and approximate thin-shell dispersion equations are obtained, generalizing the earlier results [E. G. Harris, Phys. Fluids 5, 1057 (1962); E. Ott, Phys. Rev. Lett. 29, 1429 (1972); A. B. Bud'ko et al., Phys. Fluids B 2, 1159 (1990)].
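In the notation used above, the e-folding estimate amounts to integrating the instantaneous growth rate over the acceleration phase; a minimal statement, with the classical planar Rayleigh-Taylor limit shown only for orientation, is

    N(t) = \int_0^{t} \gamma_{inst}(t') \, dt', \qquad \gamma_{planar}(t) = \sqrt{ A \, k \, g(t) }

where A is the Atwood number, k the perturbation wavenumber, and g(t) the interface acceleration. The planar form is illustrative only; the dispersion equations derived in this work additionally carry the convergence (BP) and finite-shell-thickness corrections.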
The suitability of crude and purified struvite (MgNH4PO4), a major precipitate in wastewater streams, was investigated as a renewable replacement for conventional nitrogen and phosphate resources in the cultivation of microalgae. Bovine effluent wastewater stone, the source of crude struvite, was characterized for soluble N/P, trace metals, and biochemical components and compared to the purified mineral. Cultivation trials using struvite as a major nutrient source were conducted using two microalgae production strains, Nannochloropsis salina and Phaeodactylum tricornutum, in both lab and outdoor pilot-scale raceways in a variety of seasonal conditions. Both crude and purified struvite-based media were found to result in biomass productivities at least as high as established media formulations (maximum outdoor co-culture yield ~20 ± 4 g AFDW/m^2/day). Analyses of nutrient uptake by the algae suggest that struvite provides increased nutrient utilization efficiency, and that crude struvite satisfies the trace metals requirement and results in increased pigment productivity for both microalgae strains.
A technique in which the evolution of a perturbation in a shock wave front is monitored as it travels through a sample is applied to granular materials. Although the approach was originally conceived as a way to measure the viscosity of the sample, here it is utilized as a means to probe the deviatoric strength of the material. Initial results for a tungsten carbide powder are presented that demonstrate the approach is viable. Simulations of the experiments using continuum and mesoscale modeling approaches are used to better understand the experiments. The best agreement with the limited experimental data is obtained for the mesoscale model, which has previously been shown to give good agreement with planar impact results. The continuum simulations indicate that the decay of the perturbation is controlled by material strength but is insensitive to the compaction response. Other sensitivities are assessed using the two modeling approaches. The simulations indicate that the configuration used in the preliminary experiments suffers from certain artifacts and should be modified to remove them. The limitations of the current instrumentation are discussed, and possible approaches to improve it are suggested.
An n-dodecane spray flame (Spray A from the Engine Combustion Network) was simulated using a δ function combustion model along with a dynamic structure large eddy simulation (LES) model to evaluate its performance at engine-relevant conditions and to understand the transient behavior of this turbulent flame. The liquid spray was treated with a traditional Lagrangian method, and the gas-phase reaction was modeled using a δ function combustion model. A 103-species skeletal mechanism was used for the n-dodecane chemical kinetic model. Significantly different flame structures and ignition processes are observed for the LES compared to those of Reynolds-averaged Navier-Stokes (RANS) predictions. The LES data suggest that the first ignition initiates in a lean mixture and propagates to a rich mixture, and that the main ignition happens in the rich mixture, preferentially at mixture fractions below 0.14. The LES exhibits multiple simultaneous ignition spots in the mixing layer, while the main ignition initiates in a clearly asymmetric fashion. The temporal flame development also indicates that the flame stabilization mechanism is auto-ignition controlled. Soot predictions by LES show much better agreement with experiments compared to RANS, both qualitatively and quantitatively. Multiple realizations for LES were performed to understand the realization-to-realization variation and to establish best practices for ensemble-averaging diesel spray flames. The relevance index analysis suggests that averages of 5 and 6 realizations reach 99% similarity to the target average of 16 realizations on the mixture fraction and temperature fields, respectively. However, more realizations are necessary for the hydroxyl (OH) and soot mass fractions due to their high fluctuations.
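The relevance index used in the ensemble-similarity argument above is commonly defined as a normalized inner product between two fields; the sketch below assumes that definition and uses made-up arrays in place of actual LES realizations.

    import numpy as np

    def relevance_index(a, b):
        """Normalized inner product between two fields; 1.0 means identical
        spatial structure (a common metric for comparing ensemble averages)."""
        a, b = np.ravel(a), np.ravel(b)
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Example: compare a 5-realization average to a 16-realization "target".
    rng = np.random.default_rng(0)
    fields = rng.random((16, 64, 64))   # stand-in for 16 LES realizations
    target = fields.mean(axis=0)
    subset = fields[:5].mean(axis=0)
    print(relevance_index(subset, target))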
Magnetic nanoparticles are an emerging tool for medical diagnosis and treatment in many different biomedical applications, including magnetic hyperthermia as an alternative treatment for cancer and bacterial infections, as well as the disruption of biofilms. The colloidal stability of magnetic nanoparticles in a biological environment is crucial for efficient delivery. An easily modifiable surface can also improve the delivery and imaging properties of the magnetic nanoparticle by allowing the addition of targeting and imaging moieties, providing a platform for further modification. The strategy presented in this work ties multiple nitroDOPA anchors, for robust binding to the particle surface, to the same polymer backbone as multiple poly(ethylene oxide) chains, for steric stability. This approach provides biocompatibility and enhanced stability in fetal bovine serum (FBS) and phosphate-buffered saline (PBS). As a proof of concept, these polymer-particle complexes were then modified with a near-infrared dye and used to characterize the integration of magnetic nanoparticles into biofilms. The work presented in this manuscript describes the synthesis and characterization of a nontoxic platform for labeling with near-IR dyes for bioimaging.
Solid electrolytes with sufficiently high conductivities and stabilities are the elusive answer to the inherent shortcomings of organic liquid electrolytes prevalent in today's rechargeable batteries. We recently revealed a novel fast-ion-conducting sodium salt, Na2B12H12, which contains large, icosahedral, divalent B12H12^2- anions that enable impressive superionic conductivity, albeit only above its 529 K phase transition. Its lithium congener, Li2B12H12, possesses an even more technologically prohibitive transition temperature above 600 K. Here we show that the chemically related LiCB11H12 and NaCB11H12 salts, which contain icosahedral, monovalent CB11H12^- anions, both exhibit much lower transition temperatures near 400 K and 380 K, respectively, and truly stellar ionic conductivities (>0.1 S cm^-1) unmatched by any other known polycrystalline materials at these temperatures. With proper modifications, we are confident that room-temperature-stabilized superionic salts incorporating such large polyhedral anion building blocks are attainable, thus enhancing their future prospects as practical electrolyte materials in next-generation, all-solid-state batteries.
The modulated scatterer technique (MST) has shown promise for applications in microwave imaging, electric field mapping, and materials characterization. Traditionally, MST scatterers are dipoles centrally loaded with an element capable of modulation (e.g., a p-i-n diode). By modulating the load element, signals scattered from the MST scatterer are also modulated. However, due to the small size of such scatterers, it can be difficult to reliably detect the modulated signal. Increasing the modulation depth (MD; a parameter related to how well the scatterer modulates the scattered signal) may improve the detectability of the scattered signal. In an effort to improve the MD, the concept of electrically invisible antennas is applied to the design of MST scatterers. This paper presents simulations and measurements of MST scatterers that have been designed to be electrically invisible during the reverse bias state of the modulated element (a p-i-n diode in this case), while producing detectable scattering during the forward bias state (i.e., operate in an electrically visible state). The results using the new design show significant improvement to the MD of the scattered signal as compared with a traditional MST scatterer (i.e., dipole centrally loaded with a p-i-n diode).
The understanding of soot formation in combustion processes and the optimization of practical combustion systems require in situ measurement techniques that can provide important characteristics, such as particle concentrations and sizes, under a variety of conditions. Of equal importance are techniques suitable for characterizing soot particles produced from incomplete combustion and emitted into the environment. Additionally, the production of engineered nanoparticles, such as carbon blacks, may benefit from techniques that allow for online monitoring of these processes. In this paper, we review the fundamentals and applications of laser-induced incandescence (LII) for particulate diagnostics in a variety of fields. The review takes into account two variants of LII, one that is based on pulsed-laser excitation and has been mainly used in combustion diagnostics and emissions measurements, and an alternate approach that relies on continuous-wave lasers and has become increasingly popular for measuring black carbon in environmental applications. We also review the state of the art in the determination of physical parameters central to the processes that contribute to the non-equilibrium nanoscale heat and mass balances of laser-heated particles; these parameters are important for LII-signal analysis and simulation. Awareness of the significance of particle aggregation and coatings has increased recently, and the effects of these characteristics on the LII technique are discussed. Because of the range of experimental constraints in the variety of applications for which laser-induced incandescence is suited, many implementation approaches have been developed. This review discusses considerations for selection of laser and detection characteristics to address application-specific needs. The benefits of using LII for measurements of a range of nanoparticles in the fields mentioned above are demonstrated with some typical examples, covering simple flames, internal-combustion engines, exhaust emissions, the ambient atmosphere, and nanoparticle production. We also remark on less well-known studies employing LII for particles suspended in liquids. An important aspect of the paper is to critically assess the improvement in the understanding of the fundamental physical mechanisms at the nanoscale and the determination of underlying parameters; we also identify further research needs in these contexts. Building on this enhanced capability in describing the underlying complex processes, LII has become a workhorse of particulate measurement in a variety of fields, and its utility continues to expand. When coupled with complementary methods, such as light scattering, probe-sampling, molecular-beam techniques, and other nanoparticle instrumentation, new directions for research and applications with LII continue to materialize.
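The nanoscale heat balance referred to above is conventionally written as an energy balance on a laser-heated particle; the schematic form below is a standard textbook statement of that balance, not a formula quoted from this review:

    \rho_p c_p V_p \, \frac{dT_p}{dt} = \dot{Q}_{abs} - \dot{Q}_{cond} - \dot{Q}_{sub} - \dot{Q}_{rad}

where the terms on the right are laser absorption, conduction to the bath gas, sublimation (mass loss), and thermal radiation; the physical parameters entering each term are exactly those whose determination the review assesses.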
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; Draper, Jeffrey
This study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. This paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. These techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
We have developed conceptual designs of two petawatt-class pulsed-power accelerators: Z 300 and Z 800. The designs are based on an accelerator architecture that is founded on two concepts: single-stage electrical-pulse compression and impedance matching [Phys. Rev. ST Accel. Beams 10, 030401 (2007)]. The prime power source of each machine consists of 90 linear-transformer-driver (LTD) modules. Each module comprises LTD cavities connected electrically in series, each of which is powered by 5-GW LTD bricks connected electrically in parallel. (A brick comprises a single switch and two capacitors in series.) Six water-insulated radial-transmission-line impedance transformers transport the power generated by the modules to a six-level vacuum-insulator stack. The stack serves as the accelerator's water-vacuum interface. The stack is connected to six conical outer magnetically insulated vacuum transmission lines (MITLs), which are joined in parallel at a 10-cm radius by a triple-post-hole vacuum convolute. The convolute sums the electrical currents at the outputs of the six outer MITLs, and delivers the combined current to a single short inner MITL. The inner MITL transmits the combined current to the accelerator's physics-package load. Z 300 is 35 m in diameter and stores 48 MJ of electrical energy in its LTD capacitors. The accelerator generates 320 TW of electrical power at the output of the LTD system, and delivers 48 MA in 154 ns to a magnetized-liner inertial-fusion (MagLIF) target [Phys. Plasmas 17, 056303 (2010)]. The peak electrical power at the MagLIF target is 870 TW, which is the highest power throughout the accelerator. Power amplification is accomplished by the centrally located vacuum section, which serves as an intermediate inductive-energy-storage device. The principal goal of Z 300 is to achieve thermonuclear ignition; i.e., a fusion yield that exceeds the energy transmitted by the accelerator to the liner. 2D magnetohydrodynamic (MHD) simulations suggest Z 300 will deliver 4.3 MJ to the liner, and achieve a yield on the order of 18 MJ. Z 800 is 52 m in diameter and stores 130 MJ. This accelerator generates 890 TW at the output of its LTD system, and delivers 65 MA in 113 ns to a MagLIF target. The peak electrical power at the MagLIF liner is 2500 TW. The principal goal of Z 800 is to achieve high-yield thermonuclear fusion; i.e., a yield that exceeds the energy initially stored by the accelerator's capacitors. 2D MHD simulations suggest Z 800 will deliver 8.0 MJ to the liner, and achieve a yield on the order of 440 MJ. Z 300 and Z 800, or variations of these accelerators, will allow the international high-energy-density-physics community to conduct advanced inertial-confinement-fusion, radiation-physics, material-physics, and laboratory-astrophysics experiments over heretofore-inaccessible parameter regimes.
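As a consistency illustration of the stated architecture numbers for Z 300 (90 modules, 5-GW bricks, 320 TW at the LTD-system output), the arithmetic below back-computes the implied brick count; this is our own rough estimate, not a design figure from the paper.

    # Z 300 figures quoted above (approximate design values)
    ltd_output_tw = 320    # total power at LTD-system output [TW]
    brick_power_gw = 5     # per-brick power [GW]
    n_modules = 90

    implied_bricks = ltd_output_tw * 1e12 / (brick_power_gw * 1e9)
    print(f"{implied_bricks:.0f} bricks total, "
          f"~{implied_bricks / n_modules:.0f} per module")
    # -> 64000 bricks total, ~711 per module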
Directed energy deposition (DED) is a type of additive manufacturing (AM) process; Laser Engineered Net Shaping (LENS) is a commercial DED process. We are developing LENS technology for printing 316L stainless steel components for structural applications. It is widely known that the material properties of AM components are process dependent, attributed to different molten metal incorporation and thermal transport mechanisms. This investigation focuses on process-structure-property relationships for LENS deposits to enable process development and optimization for control of material properties. We observed interactions among powder melting, directional molten metal flow, and molten metal solidification. The resultant LENS-induced microstructure was found to be dictated by process-related characteristics, i.e., interpass boundaries from multi-layer deposition, molten metal flow lines, and solidification dendrite cells. Each characteristic bears the signature of the unique localized thermal history during deposition. The correlation observed between localized thermal transport, the resultant microstructure, and its subsequent impact on the mechanical behavior of the current 316L is discussed. We also discuss how the structures of interpass boundaries are susceptible to localized recrystallization, grain growth, and/or defect formation, and therefore to heterogeneous mechanical properties, compounded by the adverse presence of unmelted powder inclusions.
On October 27, I was invited to speak before the Subcommittee on Coast Guard and Maritime Transportation of the Transportation and Infrastructure Committee of the US House of Representatives at a hearing entitled Prevention of and Response to the Arrival of a Dirty Bomb at a US Port. Sandia National Laboratories in New Mexico is recognized as having expertise in the general threat from radiological dispersal devices, having led or participated in many studies on the topic, including a landmark study on the dangers presented by the use of cesium chloride salts due to their solubility and associated dispersibility. I have been working primarily in this area since 2010.
A new approach was created for studying energetic material degradation. This approach involved detecting and tentatively identifying, by liquid chromatography-mass spectrometry (LC-MS) with multivariate statistical data analysis, the non-volatile chemical species that form as the CL-20 energetic material thermally degrades. Multivariate data analysis showed clear separation and clustering of samples based on sample group: either pristine or aged material. Further analysis showed counter-clockwise trends in the scores plots from principal component analysis (PCA), a type of multivariate data analysis. These trends may indicate that there was a discrete shift in the chemical markers as the material went from pristine to aged, and then again when the aged CL-20, mixed with a potentially incompatible material, was thermally aged for 4, 6, or 9 months. This new approach to studying energetic material degradation should provide greater knowledge of potential degradation markers in these materials.
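A minimal sketch of the kind of PCA scores analysis described above, assuming a peak-intensity table with samples as rows; the data and group labels are placeholders, not the study's LC-MS measurements.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Placeholder LC-MS peak table: rows = samples, columns = peak intensities.
    rng = np.random.default_rng(1)
    X = rng.random((20, 200))
    groups = ["pristine"] * 10 + ["aged"] * 10  # hypothetical labels

    # Autoscale, then project onto the first two principal components;
    # clustering of the groups in this scores space mirrors the reported analysis.
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        print(g, scores[idx].mean(axis=0))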
Here, photolithography systems are on pace to reach atomic scale by the mid-2020s, necessitating alternatives to continue realizing faster, more predictable, and cheaper computing performance. If the end of Moore's law is real, a research agenda is needed to assess the viability of novel semiconductor technologies and navigate the ensuing challenges.
In this work the integration of a memristor with a MEMS parallel plate capacitor coupled by an amplification stage is simulated. It is shown that the MEMS upper plate position can be controlled up to 95% of the total gap. Because the devices operate in tandem, the change in the MEMS plate position can be read out as the change in the memristor resistance, or memristance. A memristance modulation of ~1 kΩ was observed. A polynomial expression representing the MEMS upper plate displacement as a function of the memristance is presented. A simple closed-loop voltage control design is then presented, showing that the MEMS upper plate can be stabilized up to 95% of the total gap using the memristor as a feedback sensing element. As a result, the memristor can play important dual roles in overcoming the limited operation range of MEMS parallel plate capacitors and in simplifying the read-out circuits of those devices by representing the motion of the upper plate as a resistance change instead of a capacitance change.
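A polynomial mapping like the one mentioned above can be obtained by ordinary least-squares fitting; the sketch below assumes simulated (memristance, displacement) pairs and a cubic fit, both of which are illustrative choices rather than this work's actual data or polynomial order.

    import numpy as np

    # Hypothetical simulated samples: memristance [ohm] vs. plate displacement
    # expressed as a fraction of the total gap.
    memristance = np.linspace(10e3, 11e3, 50)
    displacement = 0.95 * (memristance - 10e3) / 1e3  # stand-in response

    # Fit displacement as a cubic polynomial in memristance.
    coeffs = np.polyfit(memristance, displacement, deg=3)
    predict = np.poly1d(coeffs)
    print(predict(10.5e3))  # displacement estimate at mid-range memristance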
In this study, we present low-energy proton single-event upset (SEU) data on a 65 nm SOI SRAM whose substrate has been completely removed. Since the protons only had to penetrate a very thin buried oxide layer, these measurements were affected by far less energy loss, energy straggle, flux attrition, and angular scattering than previous datasets. The minimization of these common sources of experimental interference allows more direct interpretation of the data and deeper insight into SEU mechanisms. The results show a strong angular dependence, demonstrate that energy straggle, flux attrition, and angular scattering affect the measured SEU cross sections, and prove that proton direct ionization is the dominant mechanism for low-energy proton-induced SEUs in these circuits.
Graphene possesses excellent mechanical properties, with a tensile strength that may exceed 130 GPa, excellent electrical conductivity, and good thermal properties. Future nano-composites can leverage many of these material properties in an attempt to build designer materials for a broad range of applications. 3-D printing has also seen vast improvements in recent years that have allowed many companies and individuals to realize rapid prototyping for relatively low capital investment. This research sought to create a graphene-reinforced, polymer matrix nano-composite that is viable in commercial 3D printer technology, study the effects of ultra-high loading percentages of graphene in polymer matrices, and determine the functional upper limit for loading. Loadings varied from 5 wt.% to 50 wt.% graphene nanopowder in Acrylonitrile Butadiene Styrene (ABS) matrices. Loaded samples were characterized for their mechanical properties using three-point bending and tensile tests, as well as dynamic mechanical analysis.
Magnetostrictive CoFe films were investigated for use as magnetoelastic tags or sensors. The ability to electrodeposit these films enables batch fabrication processes to pattern a variety of geometries while controlling the film stoichiometry and crystallography. In recent research on CoFe, improved magnetostriction was achieved using a co-sputtering, annealing, and quenching method [1]. Other recent research has reported electrodeposited CoFe films using a sulfate-based chemistry, resulting in Fe-rich film compositions in the range Co0.3-0.4Fe0.7-0.6 that suffer from codeposition of undesirable species that can negatively impact magnetic properties. The research presented here focused on maximizing magnetostriction at the optimal stoichiometry range of Co0.7-0.75Fe0.3-0.25, targeting the (fcc+bcc)/bcc phase boundary, and using a novel chemistry and plating parameters to deposit films without being limited to "line of sight" deposition.
This document contains a description of the system architecture for the IDC Re-Engineering Phase 2 project. This is a draft version that primarily provides background information for understanding delivered Use Case Realizations.
The continued exponential growth of photovoltaic technologies paves a path to a solar-powered world, but requires continued progress toward low-cost, high-reliability, high-performance photovoltaic (PV) systems. High reliability is an essential element in achieving low-cost solar electricity by reducing operation and maintenance (O&M) costs and extending system lifetime and availability, but these attributes are difficult to verify at the time of installation. Utilities, financiers, homeowners, and planners are demanding this information in order to evaluate their financial risk as a prerequisite to large investments. Reliability research and development (R&D) is needed to build market confidence by improving product reliability and by improving predictions of system availability, O&M cost, and lifetime. This project is focused on understanding, predicting, and improving the reliability of PV systems. The two areas being pursued include PV arc-fault and ground fault issues, and inverter reliability.
A moment-of-fluid method is presented for computing solutions to incompressible multiphase flows in which the number of materials can be greater than two. In this work, the multimaterial moment-of-fluid interface representation technique is applied to simulating surface tension effects at points where three materials meet. The advection terms are solved using a directionally split cell integrated semi-Lagrangian algorithm, and the projection method is used to evaluate the pressure gradient force term. The underlying computational grid is a dynamic block-structured adaptive grid. The new method is applied to multiphase problems illustrating contact-line dynamics, triple junctions, and encapsulation in order to demonstrate its capabilities. Examples are given in two-dimensional, three-dimensional axisymmetric (R-Z), and three-dimensional (X-Y-Z) coordinate systems.
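As a toy illustration of the cell-integrated semi-Lagrangian idea named above (not the paper's multimaterial moment-of-fluid implementation), the 1D sketch below traces cell faces back along a constant velocity and integrates exact cell averages over the departure region; discrete conservation follows by construction.

    import numpy as np

    # Minimal 1D cell-integrated semi-Lagrangian advection of a scalar
    # cell-average field q with constant velocity u on a periodic domain.
    def advect(q, u, dx, dt):
        n = q.size
        Q = np.concatenate(([0.0], np.cumsum(q) * dx))  # antiderivative at faces
        x_faces = np.arange(n + 1) * dx

        def integral_to(x):
            # Integral of q from 0 to x, with q extended periodically.
            L = n * dx
            wraps, xm = divmod(x, L)
            i = min(int(xm / dx), n - 1)
            return wraps * Q[-1] + Q[i] + q[i] * (xm - i * dx)

        dep = x_faces - u * dt  # departure positions of the cell faces
        return np.array([integral_to(dep[i + 1]) - integral_to(dep[i])
                         for i in range(n)]) / dx

    q = np.where(np.abs(np.arange(64) - 32) < 6, 1.0, 0.0)  # square pulse
    q_new = advect(q, u=1.0, dx=1.0, dt=0.4)
    print(q.sum(), q_new.sum())  # total mass is unchanged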
We report on experiments demonstrating the transition from thermally-dominated K-shell line emission to non-thermal, hot-electron-driven inner-shell emission for z-pinch plasmas on the Z machine. While x-ray yields from thermal K-shell emission decrease rapidly with increasing atomic number Z, we find that non-thermal emission persists with favorable Z scaling, dominating over thermal emission for Z=42 and higher (hν ≥ 17 keV). Initial experiments with Mo (Z=42) and Ag (Z=47) have produced kJ-level emission in the 17-keV and 22-keV Kα lines, respectively. We discuss the electron beam properties that could excite these non-thermal lines. We also report on experiments that have attempted to control non-thermal K-shell line emission by modifying the wire array or load hardware setup.
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Additionally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.
One of the more severe environments for a store on an aircraft is the ejection of the store. During this event it is not possible to instrument all component responses, and some instruments are likely to fail during environment testing. This work provides a method for developing these responses from failed gages and uninstrumented locations. First, the forces observed by the store during the environment are reconstructed. A simple sampling method is used to reconstruct these forces given various parameters. Then, these forces are applied to a model to generate the component responses. Validation is performed on this methodology.
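A minimal sketch of a sampling-based force reconstruction in the spirit described above, assuming a parameterized force (here a hypothetical half-sine pulse) and a stand-in linear model mapping force to a measured response; the parameterization and model are our illustrative assumptions, not this work's structural model.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 0.1, 500)

    def force(amplitude, duration):
        """Hypothetical half-sine ejection pulse."""
        f = amplitude * np.sin(np.pi * t / duration)
        return np.where(t < duration, f, 0.0)

    def model_response(f):
        """Stand-in for the structural model: smoothed, scaled force."""
        kernel = np.exp(-t / 0.005)
        return np.convolve(f, kernel)[: t.size] * (t[1] - t[0])

    measured = model_response(force(1000.0, 0.02))  # pretend gage record

    # Sample candidate parameters; keep the best match to the measurement.
    candidates = zip(rng.uniform(500, 1500, 200), rng.uniform(0.01, 0.05, 200))
    best = min(candidates,
               key=lambda p: np.sum((model_response(force(*p)) - measured) ** 2))
    print(best)  # best-matching (amplitude, duration) sample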
Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine source terms. In calculating source terms, analysts tend to use the DOE Handbook's bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale table-top and bench/laboratory experiments and/or from engineering judgment. Thus the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate method to identify bounding values for the DOE Handbook using state-of-the-art multi-physics-based high-performance computer codes. This enables us to better understand the fundamental physics and phenomena associated with the types of accidents for the data described in it. This research has examined two of the DOE Handbook's liquid fire experiments to substantiate the airborne release fraction data. We found that additional physical phenomena (i.e., resuspension) need to be included to derive bounding values. For the specific cases of solid powder under pressurized conditions and mechanical insult conditions, the codes demonstrated that we can simulate the phenomena. This work thus provides a low-cost method to establish physics-justified safety bounds by taking into account specific geometries and conditions that may not have been previously measured and/or are too costly to measure.
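For context, DOE-HDBK-3010 source terms are conventionally built from the five-factor formula; the sketch below states that formula with placeholder values, not numbers from this work.

    # Five-factor source term formula from DOE-HDBK-3010:
    #   ST = MAR * DR * ARF * RF * LPF
    mar = 100.0   # material at risk [g] (placeholder)
    dr = 0.5      # damage ratio (placeholder)
    arf = 1e-3    # airborne release fraction (e.g., a bounding handbook value)
    rf = 0.5      # respirable fraction (placeholder)
    lpf = 1.0     # leak path factor (conservative: no mitigation credited)

    source_term = mar * dr * arf * rf * lpf
    print(f"respirable source term: {source_term:.3g} g")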
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; (3) device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
IEEE Transactions on Parallel and Distributed Systems
Shan, Tzu-Ray; Aktulga, Hasan M.; Knight, Chris; Coffman, Paul; Jiang, Wei
Hybrid parallelism allows high performance computing applications to better leverage the increasing on-node parallelism of modern supercomputers. In this paper, we present a hybrid parallel implementation of the widely used LAMMPS/ReaxC package, where the construction of bonded and nonbonded lists and the evaluation of complex ReaxFF interactions are implemented efficiently using OpenMP parallelism. Additionally, the performance of the QEq charge equilibration scheme is examined, and a dual-solver is implemented. We present the performance of the resulting ReaxC-OMP package on Mira, a state-of-the-art IBM Blue Gene/Q supercomputer. For system sizes ranging from 32 thousand to 16.6 million particles, speedups in the range of 1.5-4.5x are observed using the new ReaxC-OMP software. Sustained performance improvements have been observed for up to 262,144 cores (1,048,576 processes) of Mira, with a weak scaling efficiency of 91.5% in larger simulations containing 16.6 million particles.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1].
Subwavelength-thin metasurfaces have shown great promise for the control of optical wavefronts, thus opening new pathways for the development of efficient flat optics. In particular, Huygens' metasurfaces based on all-dielectric resonant meta-atoms have already shown huge potential for practical applications, with their polarization insensitivity and high transmittance efficiency. Here, we experimentally demonstrate a holographic Huygens' metasurface based on dielectric resonant meta-atoms capable of complex wavefront control at telecom wavelengths. Our metasurface produces a hologram image in the far-field with 82% transmittance efficiency and 40% imaging efficiency. Such efficient complex wavefront control shows that Huygens' metasurfaces based on resonant dielectric meta-atoms are a major step toward practical applications of metasurfaces in wavefront-design technologies, including computer-generated holograms, ultra-thin optics, and security and data storage devices.
Efforts are being pursued to develop and qualify a system-level model of a reactor core isolation cooling (RCIC) steam-turbine-driven pump. The model is being developed with the intent of employing it to inform the design of experimental configurations for full-scale RCIC testing. The model is expected to be especially valuable in sizing equipment needed in the testing. An additional intent is to use the model in understanding more fully how RCIC apparently managed to operate far removed from its design envelope in the Fukushima Daiichi Unit 2 accident. RCIC modeling is proceeding along two avenues that are expected to complement each other well. The first avenue is the continued development of the system-level RCIC model that will serve in simulating a full reactor system or full experimental configuration of which a RCIC system is a part. The model reasonably represents a RCIC system today, especially given design operating conditions, but lacks specifics that are likely important in representing the off-design conditions a RCIC system might experience in an emergency situation such as a loss of all electrical power. A known gap in the system model, for example, is the efficiency with which a flashing slug of water (as opposed to a concentrated jet of steam) could propel the rotating drive wheel of a RCIC turbine. To address this gap, the second avenue is being pursued, wherein computational fluid dynamics (CFD) analyses of such a jet are being carried out. The results of the CFD analyses will thus complement and inform the system modeling. The system modeling will, in turn, complement the CFD analysis by providing the system information needed to impose appropriate boundary conditions on the CFD simulations. The system model will be used to inform the selection of configurations and equipment best suited to supporting planned RCIC experimental testing. Preliminary investigations with the RCIC model indicate that liquid water ingestion by the turbine decreases the developed turbine torque; the RCIC speed then decreases, and thus the pump flow rate to the RPV decreases. Subsequently, RPV water level decreases due to continued boiling, and the liquid fraction flowing to the RCIC decreases, thereby accelerating the RCIC and refilling the RPV. The feedback cycle then repeats itself and/or reaches a quasi-steady equilibrium condition. In other words, the water carry-over is limited by cyclic RCIC performance degradation, and hence the system becomes self-regulating. The indications achieved to date with the system model are more qualitative than quantitative. The avenues being pursued to increase the fidelity of the model are expected to add quantitative realism. The end product will be generic in the sense that the RCIC model will be incorporable within the larger reactor coolant system model of any nuclear power plant or experimental configuration.
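A toy sketch of the self-regulating feedback loop described above, with made-up coefficients and deliberately crude physics (turbine torque degraded by ingested liquid fraction, RPV level driven by pump flow minus boil-off); it illustrates the cycle qualitatively and is in no way the system model discussed in this work.

    import numpy as np

    dt, steps = 0.1, 2000
    level, speed = 1.0, 1.0          # normalized RPV level and RCIC speed
    history = []
    for _ in range(steps):
        liquid_frac = np.clip(level - 0.8, 0.0, 1.0)  # high level -> carry-over
        torque = 1.0 - 2.0 * liquid_frac              # liquid degrades torque
        speed += dt * (torque - speed)                # crude rotor dynamics
        flow = max(speed, 0.0)
        level += dt * (0.5 * flow - 0.4)              # pump flow vs. boil-off
        history.append((level, speed))

    print(history[-1])  # oscillates, then settles toward quasi-steady equilibrium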