The goal of this project is to produce an intense neutron pulse on HERMES III using the beam-target method with an intense proton beam. The potential advantage of protons is that the generated neutron spectrum contains significantly more high-energy neutrons than the photoneutron spectrum produced by an electron beam on the same facility. Unlike (D,T) facilities such as NIF, the process requires no tritium (or deuterium). To achieve the mid-10¹⁰ neutrons/cm² at a test-object location listed as the goal in the Proposal, it was proposed that a radial ion diode previously developed and fielded at the 6 MeV level be extended in performance to the full-power level on HERMES, with proton energies in the neighborhood of 15 MeV. This Report details the successful development of the radial ion diode at full power, which required more durable hardware that could be fielded on a one-shot/day basis with minimal debris and activation (an important concern) and substituted quickly into the normal negative-polarity bremsstrahlung source experiments without compromising the main HERMES validation mission. Because direct measurement of proton beam characteristics proved challenging, the Project relied on an extensive series of simulations: LSP for beam dynamics and MCNP to characterize neutron output. Simulation results are discussed, including the conclusion that the neutron measurements are consistent with an MCNP-predicted proton beam of 16 MeV peak energy and 200 kA peak current. This Project also contributes to the physics understanding of using inductive voltage adder (IVA) platforms to drive diode loads. Since such diodes operate independently of the physics of IVAs, IVA-diode coupling requires matching the MITL flow to the requirements of ion diode operation.
Unmanned aircraft systems (UASs) have grown significantly within the private sector, with ease of acquisition and platform capabilities far outstripping what previously existed. Where once the operation of these platforms was limited to skilled individuals, increased computational power, improved manufacturing techniques, and increased autonomy allow inexperienced individuals to skillfully maneuver these devices. With this rise in consumer use of UAS comes an increased security concern regarding their use for malicious intent. The focus area of counter-UAS (CUAS) remains a challenging space because a small-cross-section UAS can move in all three dimensions, attain very high speeds, carry payloads of notable weight, and avoid standard delay techniques. We examine frequency analysis of pixel fluctuation over time to exploit the temporal frequency signature present in UAS imagery. This signature allows for detection with fewer pixels on target [1]. The methodology also acts as a means of assessment because the frequency signatures of UAS are distinct from those of standard nuisance alarms such as birds. The temporal frequency analysis (TFA) method thus provides both UAS detection and assessment. In this paper we discuss signal processing and Fourier filter optimization methodologies that increase UAS contrast.
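As a minimal sketch of the TFA idea (not the optimized Fourier filters of the paper), the snippet below takes a per-pixel FFT of a video cube and maps the fraction of spectral energy in an assumed rotor-flicker band; the band edges, frame rate, and normalization are hypothetical choices for illustration.

```python
import numpy as np

def tfa_contrast(frames, fps, band=(60.0, 115.0)):
    """Temporal-frequency contrast map from a video cube.

    frames: (T, H, W) array of grayscale frames.
    band:   assumed propeller-flicker band in Hz (hypothetical values).
    Returns an (H, W) map of relative in-band spectral energy per pixel.
    """
    T = frames.shape[0]
    # Remove per-pixel mean so static background contributes no DC energy.
    cube = frames - frames.mean(axis=0, keepdims=True)
    spectrum = np.abs(np.fft.rfft(cube, axis=0)) ** 2
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Energy in the rotor band, normalized by total AC energy, suppresses
    # broadband movers such as birds.
    total = spectrum.sum(axis=0) + 1e-12
    return spectrum[mask].sum(axis=0) / total

# Example: 4 s of 240 fps video as a (960, 480, 640) array
# contrast = tfa_contrast(video, fps=240.0)
```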
Manufacturers often buy and/or license communication ICs from third-party suppliers. These communication ICs are then integrated into a complex computational system, resulting in a wide range of potential hardware-software security issues. This work proposes a compact supervisory circuit to classify the Bluetooth profile operation of a Bluetooth System-on-Chip (SoC) at low frequencies by monitoring the radio frequency (RF) output power of the Bluetooth SoC. The idea is to inexpensively manufacture an RF envelope detector to monitor the RF output power and to implement a profile classification algorithm on a custom low-frequency integrated circuit in a low-cost legacy technology. When the supervisory circuit observes unexpected behavior, it can shut off power to the Bluetooth SoC. In this preliminary work, we prototype the supervisory circuit using off-the-shelf components to collect a data set sufficient to train 11 different Machine Learning models. We extract smart descriptive time-domain features from the envelope of the RF output signal. Then, we train the machine learning models to classify three different Bluetooth operation profiles: sensor, hands-free, and headset. Our results demonstrate ∼100% classification accuracy with low computational complexity.
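A rough sketch of the classification stage, under stated assumptions: the feature set below (mean power, burstiness, a duty-cycle proxy, edge activity) is illustrative rather than the paper's actual descriptive features, and the random forest stands in for one of the 11 trained models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def envelope_features(env, frame=1024):
    """Simple time-domain descriptors of an RF envelope trace
    (hypothetical feature set, for illustration only)."""
    chunks = env[: len(env) // frame * frame].reshape(-1, frame)
    return np.column_stack([
        chunks.mean(axis=1),                          # average transmit power
        chunks.std(axis=1),                           # burstiness
        (chunks > 0.5 * chunks.max()).mean(axis=1),   # duty-cycle proxy
        np.abs(np.diff(chunks, axis=1)).mean(axis=1), # edge activity
    ])

# X: stacked features from sensor / hands-free / headset captures
# y: integer profile labels
# clf = RandomForestClassifier(n_estimators=100)
# print(cross_val_score(clf, X, y, cv=5).mean())
```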
Few of those who read the 1963 research note by Graeme A. Bird in Physics of Fluids could have imagined that 50 years later the proposed new numerical technique would have become the dominant numerical technique in molecular gas dynamics. The introduction of the Direct Simulation Monte Carlo (DSMC) method not only has altered the field of molecular gas dynamics but also has influenced fields such as physical chemistry, mathematics, computer science, and aerothermodynamics. Further, the DSMC method has been used to probe into previously uninvestigated theoretical aspects of the Boltzmann equation and has also served as a platform for the development of nonequilibrium chemistry models. The DSMC method’s most noteworthy achievement is that molecular gas dynamics became a practical tool in the hands of aerospace engineers in many situations of spacecraft design and mission analysis.
An important part of a navigation system for a moving platform is the estimation of the rate of travel. This document presents a method for estimating the platform velocity in three dimensions using multiple antenna subarrays, which could be used to augment navigation in a GPS-degraded environment. An advantage of this technique is that it does not require any knowledge of the positions of landmarks. Results from radar data collected by the Sandia National Laboratories demonstration radar system are presented to illustrate the promise of this technique.
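One way to picture the idea, as a hypothetical toy rather than the demonstration system's actual processing: each subarray's Doppler centroid constrains the platform velocity along that subarray's look direction, so three or more look directions yield a least-squares 3-D velocity estimate. All numbers below are invented for illustration.

```python
import numpy as np

# For stationary ground, monostatic Doppler is f_i = (2 / wavelength) * u_i . v,
# where u_i is the unit look direction of subarray i and v the platform velocity.
wavelength = 0.03                        # e.g., a 10 GHz radar
U = np.array([[0.90,  0.10, -0.42],      # unit look vectors (rows), hypothetical
              [0.90, -0.10, -0.42],
              [0.88,  0.00, -0.47]])
U /= np.linalg.norm(U, axis=1, keepdims=True)
f = np.array([6010.0, 5988.0, 5830.0])   # measured Doppler centroids (Hz), hypothetical

# Least-squares solve of (2/wavelength) U v = f for the 3-D velocity v.
v, *_ = np.linalg.lstsq(U * 2.0 / wavelength, f, rcond=None)
print(v)                                 # estimated velocity (m/s)
```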
Fundamental results and an efficient algorithm for constructing eigenvectors corresponding to non-zero eigenvalues of matrices with zero rows and/or columns are developed. The formulation is based on the relation between eigenvectors of such matrices and the eigenvectors of their submatrices after removing all zero rows and columns. While easily implemented, the algorithm decreases the computation time needed for numerical eigenanalysis and resolves potential numerical eigensolver instabilities.
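A numpy sketch of the underlying relation (illustrative only; the paper's algorithm addresses efficiency and conditioning more carefully): eigenpairs for nonzero eigenvalues come from the submatrix with zero rows/columns deleted, after which the deleted coordinates are reconstructed.

```python
import numpy as np

def eig_nonzero(A, tol=1e-12):
    """Eigenpairs of A for nonzero eigenvalues via the submatrix with
    zero rows/columns removed (sketch of the idea, not the paper's
    exact algorithm)."""
    n = A.shape[0]
    zero_rows = np.where(np.abs(A).sum(axis=1) < tol)[0]
    zero_cols = np.where(np.abs(A).sum(axis=0) < tol)[0]
    keep = np.setdiff1d(np.arange(n), np.union1d(zero_rows, zero_cols))
    lam, W = np.linalg.eig(A[np.ix_(keep, keep)])
    nz = np.abs(lam) > tol               # keep nonzero eigenvalues only
    lam, W = lam[nz], W[:, nz]
    V = np.zeros((n, lam.size), dtype=W.dtype)
    V[keep] = W
    # A zero row i forces v_i = 0 (already true above); an index with a
    # zero column but nonzero row instead satisfies v_i = (A[i,:] @ v) / lam.
    for i in np.setdiff1d(zero_cols, zero_rows):
        V[i] = (A[i] @ V) / lam
    return lam, V
```

For example, for A = [[0, 1], [0, 2]] the routine finds the eigenpair (2, [0.5, 1]) from the 1×1 submatrix [[2]], reconstructing the first coordinate from the nonzero row.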
Wilks’ non-parametric method for setting tolerance limits using order statistics has recently become popular in the nuclear industry. The method allows analysts to predict a desired tolerance limit with some confidence that the estimate is conservative. The method is popular because it is simple and fits well into established regulatory frameworks. A critical analysis of the underlying statistics is presented in this work, including a derivation, analytical and statistical verification, and a broad discussion. Possible impacts of the underlying assumptions for application to computational tools are discussed. An in-depth discussion of the order statistic rank used in Wilks’ formula is provided, including when it might be necessary to use a higher rank estimate.
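As a concrete instance of the statistics discussed (these are standard order-statistic results, not numbers specific to this work): the count of runs falling above the p-quantile is Binomial(n, 1 - p), so the rank-r largest observation is a conservative bound exactly when at least r runs land above the quantile.

```python
from scipy.stats import binom

def wilks_confidence(n, p=0.95, rank=1):
    """Confidence that the rank-th largest of n runs bounds the
    p-quantile (one-sided Wilks formula)."""
    return 1.0 - binom.cdf(rank - 1, n, 1.0 - p)

def wilks_sample_size(beta=0.95, p=0.95, rank=1):
    """Smallest n achieving confidence beta for the given rank."""
    n = rank
    while wilks_confidence(n, p, rank) < beta:
        n += 1
    return n

print(wilks_sample_size())        # 59  (classic 95/95, first-order estimate)
print(wilks_sample_size(rank=2))  # 93  (second-order estimate)
```

The jump from 59 to 93 runs illustrates the cost of the higher-rank estimates discussed in the paper: moving to the second-largest observation tightens the bound but demands substantially more code runs for the same confidence.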
Obtaining information about burning characteristics and flame structures by analyzing experimental data is an important issue for understanding combustion processes and pursuing combustion modeling approaches. It has been shown that Raman/Rayleigh measurements of major species and temperature can be used to approximate the local heat release rate and the chemical explosive mode, and that these results are sufficiently accurate for a qualitative assessment of the relative importance of different heat release zones within the same overall flame structure in laminar and mildly turbulent partially premixed flames [1,2]. The present study uses data from direct numerical simulation (DNS) to extend and quantify the understanding of the approximation method with respect to premixed and stratified-premixed flames with significant turbulence–chemistry interaction (high Karlovitz number). The accuracy of the approximation procedure is assessed as previously applied, using just major species and temperature, as well as with the OH radical included as an additional experimentally accessible species. The accuracy of the local chemical explosive mode and the local heat release rate results from the approximation are significantly improved with OH included, yielding quantitative agreement with the DNS results. Further, a global sensitivity analysis is applied to identify the sensitivity of the heat release rate and chemical explosive mode to experimental uncertainties imprinted upon the DNS data prior to the approximation procedure.
A hierarchical solver is proposed for solving sparse ill-conditioned linear systems in parallel. The solver is based on a modification of the LoRaSp method, but employs a deferred-compression technique, which provably reduces the approximation error and significantly improves efficiency. Moreover, the deferred-compression technique introduces minimal overhead and does not affect parallelism. As a result, the new solver achieves linear computational complexity under mild assumptions and excellent parallel scalability. To demonstrate the performance of the new solver, we focus on applying it to solve sparse linear systems arising from ice sheet modeling. The strongly anisotropic phenomena associated with the thin structure of ice sheets create serious challenges for existing solvers. To address the anisotropy, we additionally develop a customized partitioning scheme for the solver, which captures the strong-coupling direction accurately. In general, the partitioning can be computed algebraically with existing software packages, and thus the new solver is generalizable for solving other sparse linear systems. Our results show that ice sheet problems of about 300 million degrees of freedom have been solved in just a few minutes using 1024 processors.
Proceedings of PAW-ATM 2019: Parallel Applications Workshop, Alternatives to MPI+X, Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis
To minimize data movement, many parallel applications statically distribute computational tasks among the processes. However, modern simulations often encounter irregular computational tasks whose computational loads change dynamically at runtime or are data dependent. As a result, load imbalance among the processes at each step of simulation is a natural situation that must be dealt with at the programming level. The de facto parallel programming approach, flat MPI (one process per core), is hardly suitable to manage the load imbalance, imposing significant idle time on the simulation as processes have to wait for the slowest process at each step of simulation. One critical application for many domains is the LU factorization of a large dense matrix stored in the Block Low-Rank (BLR) format. Using the low-rank format can significantly reduce the cost of factorization in many scientific applications, including the boundary element analysis of electrostatic fields. However, partitioning the matrix based on the underlying geometry leads to matrix blocks of different sizes whose numerical ranks change at each step of factorization, leading to load imbalance among the processes. We use BLR LU factorization as a test case to study the programmability and performance of five different programming approaches: (1) flat MPI, (2) Adaptive MPI (Charm++), (3) MPI + OpenMP, (4) parameterized task graph (PTG), and (5) dynamic task discovery (DTD). The last two versions use a task-based paradigm to express the algorithm; we rely on the PaRSEC runtime system to execute the tasks. We first point out programming features needed to efficiently solve this category of problems, hinting at possible alternatives to the MPI+X programming paradigm. We then evaluate the programmability of the different approaches, detailing our experience implementing the algorithm using each of the models. Finally, we show performance results on the Intel Haswell-based Bridges system at the Pittsburgh Supercomputing Center (PSC) and analyze the effectiveness of the implementations at addressing the load imbalance.
The following discussion contains a detailed description of how to interface with and operate the Radiation Effects Sciences (RES) Data Acquisition v1.0.0 software application. It describes the input required to run the application and the actions to take for troubleshooting.
Personnel at Sandia National Laboratories (hereinafter referred to as Sandia) comply with United States Department of Energy (DOE) Policy 450.4A, Chg 1, Integrated Safety Management Policy, and implement an Integrated Safety Management System (ISMS) to ensure safe operations. Sandia personnel integrate safety into management and work practices at all levels so missions are accomplished while protecting Members of the Workforce, the public, and the environment. As a result, safety is effectively integrated into all facets of work planning and execution. Thus the management of safety functions becomes an integral part of mission accomplishment and meets the requirements outlined in the DOE Acquisition Regulation (DEAR) 970.5223-1, Integration of Environment, Safety, and Health into Work Planning and Execution, clause incorporated by reference into the Prime Contract. The DEAR 970.5223-1, Integration of Environment, Safety, and Health into Work Planning and Execution, clause requires DOE contractors to manage and perform work in accordance with a documented Safety Management System that fulfills conditions of the DEAR clause at a minimum. For purposes of this clause, safety encompasses environment, safety, and health (ES&H), including pollution prevention and waste minimization.
This is the last in a series of three papers documenting two large-scale human reliability analysis (HRA) empirical studies – the International HRA Empirical Study and the US HRA Empirical Study. The goal of the two studies was to develop an empirically based understanding of the performance, strengths, and weaknesses of HRA methods by comparing HRA method predictions against actual operator performance in simulated accident scenarios on nuclear power plant (NPP) simulators. This paper first addresses areas where there is convergence between the two studies and where differences lie. It then summarizes the combined insights and conclusions, including key findings on HRA in general, lessons learned about the HRA methods assessed in the studies, and specific recommendations for improving guidance, practice, and methods. It then discusses the relevance and usefulness of simulator data for HRA in general. Finally, it presents the key achievements and overall conclusions of the two studies taken together.
It has been widely believed that crystalline TlBr can surpass CdZnTe to become the leading semiconductor for γ- and X-radiation detection. The major hurdle to this transition is the rapid aging of TlBr under the operating electrical field. While ionic migration of vacancies has traditionally been considered the root cause of property degradation, quantum mechanical calculations indicate that the vacancy concentration needed to cause the observed aging must be orders of magnitude higher than the usual theoretical estimate. Recent molecular dynamics simulations and X-ray diffraction experiments have shown that electrical fields can drive the motion of edge dislocations in both slip and climb directions. Furthermore, these combined motions eject a large number of vacancies. Both dislocation motion and vacancy ejection can account for the rapid aging of TlBr detectors. Based on these new discoveries, the present work applies molecular dynamics simulations to "develop" aging-resistant TlBr crystals by inhibiting dislocation motion.
Islam, Zahabul; Paoletta, Angela L.; Monterrosa, Anthony M.; Schuler, Jennifer D.; Rupert, Timothy J.; Hattar, Khalid M.; Glavin, Nicholas; Haque, Aman
We investigate the effects of ion irradiation on AlGaN/GaN high electron mobility transistors using in-situ transmission electron microscopy. The experiments are performed inside the microscope to visualize the defects, microstructure, and interfaces of ion-irradiated transistors during operation and failure. Experimental results indicate that heavy ions such as Au4+ can create a significant number of defects, such as vacancies, interstitials, and dislocations, in the device layer. It is hypothesized that these defects act as charge traps in the device layer and that the resulting charge accumulation lowers the breakdown voltage. Sequential energy dispersive X-ray spectroscopy mapping allows us to track individual chemical elements during the experiment, and the results suggest that the electrical degradation in the device layer may originate from oxygen and nitrogen vacancies.
We statistically infer fluid flow and transport properties of porous materials based on their geometry and connectivity, without the need for detailed simulation. We summarize structure by persistent homology and then determine the similarity of structures using image analysis and statistics. Longer term, this may enable quick and automated categorization of rocks into known archetypes. We first compute the persistent homology of binarized 3D images of material subvolume samples. The persistence parameter is the signed Euclidean distance from inferred material interfaces, which captures the distribution of sizes of pores and grains. Each persistence diagram is converted into an image vector. We infer structural similarity by calculating image similarity. For each image vector, we compute principal components to extract features. We fit statistical models to the features to estimate material permeability, tortuosity, and anisotropy. We develop a Structural SIMilarity index to determine statistically representative elementary volumes.
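A condensed sketch of the downstream pipeline, under stated assumptions: the Gaussian rasterization below is a generic persistence-image construction, and the grid size, bandwidth, extent, and linear model are placeholders rather than the choices made in this work.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def diagram_to_image(diagram, grid=20, sigma=0.5, extent=(-5.0, 5.0)):
    """Rasterize a persistence diagram, given as (birth, death) pairs in
    signed-distance units, into a smoothed image vector."""
    xs = np.linspace(*extent, grid)
    img = np.zeros((grid, grid))
    for b, d in diagram:
        # Place a Gaussian bump at (birth, death) in the image plane.
        img += np.exp(-((xs[None, :] - b) ** 2 +
                        (xs[:, None] - d) ** 2) / (2.0 * sigma ** 2))
    return img.ravel()

# diagrams: list of (birth, death) arrays, one per subvolume sample
# perm:     measured permeability per sample (hypothetical data)
# X = np.array([diagram_to_image(d) for d in diagrams])
# feats = PCA(n_components=10).fit_transform(X)
# model = LinearRegression().fit(feats, np.log(perm))
```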
In practical applications of automated terrain classification from high-resolution polarimetric synthetic aperture radar (PolSAR) imagery, different terrain types may inherently contain a high level of internal variability, as when a broadly defined class (e.g., 'trees') contains elements arising from multiple subclasses (pine, oak, and willow). In addition, real-world factors such as the time of year of a collection, the moisture content of the scene, the imaging geometry, and the radar system parameters can all increase the variability observed within each class. Such variability challenges the ability of classifiers to maintain a high level of sensitivity in recognizing diverse elements that are within-class, without sacrificing their selectivity in rejecting out-of-class elements. In an effort to gauge the degree to which classifiers respond robustly in the presence of intraclass variability and generalize to untrained scenes and conditions, we compare the performance of a suite of classifiers across six broad terrain categories from a large set of PolSAR image sets. The main contributions of this article are as follows: 1) an analysis of the robustness of a variety of current state-of-the-art classification algorithms to intraclass variability found in PolSAR image sets, and 2) the associated PolSAR image and feature data that Sandia is releasing to the research community with this publication. The analysis of the classification algorithms we provide will serve as a benchmark of performance for the future PolSAR terrain classification algorithm research and development enabled by the image sets and data provided. By sharing our analysis and high-resolution fully polarimetric Sandia data with the research community, we enable others to develop and assess a new generation of robust terrain classification algorithms for PolSAR.
Pennington, Heather M.; Kraus, Terrence D.; Tenney, Craig M.; Faucett, Christopher; Brooks, Dusty
Nuclear fission produces more than 1,000 radioactive isotopes, each having different half-lives and producing unique characteristic emissions (e.g., alpha, beta, gamma), which makes it challenging to quickly and accurately assess radiological impacts resulting from sabotage (e.g., atmospheric transport, projected dose). Since many of the radionuclides produced by nuclear fission contribute an insignificant dose compared to other radionuclides, identifying the top dose-producing radionuclides will accelerate and simplify radiological assessments of research and test reactor releases. Here, we identify the fission radionuclides that contribute significant dose, enabling assessors to conduct consistent, simple, rapid and accurate radiological assessments.
A critical component of the Underground Nuclear Explosion Signatures Experiment (UNESE) program is a realistic understanding of the post-detonation processes and changes in the environment that produce observable physical and radiochemical signatures. Rock and fracture properties are essential parameters for modeling underground nuclear explosions. In response to the need for accurate simulations of physical and radiochemical signatures, an experimental program to determine porosity, hydrostatic and triaxial compression, and Brazilian disc tension properties of P-Tunnel core was developed and executed. This report presents the results from the experimental program. Dry porosity for P-Tunnel core ranged from 8.7% to 55%. Based on hydrostatic testing, bulk modulus was shown to increase with increasing confining pressure and ranged from 1.3 GPa to 42.3 GPa. Compressional failure envelopes, derived from wet samples, are presented for P-Tunnel lithologies. Brazilian disc tension tests were conducted on wet samples and, along with triaxial tests, are compared with dry tests from the first UNESE test bed, Barnwell. P-Tunnel core disc tension test strength varied by nearly two orders of magnitude between lithologies (0.03 MPa to 2.77 MPa). Material tested in both tension and compression is weaker wet than dry, with the exception of Strongly Welded Tuff in compression, which is nearly identical in compressive strength at confining pressures of 0 MPa and 100 MPa. In addition to the inherent material properties of the rocks, fractures within the samples were quantified and characterized in order to identify differences that might be caused by explosion-induced damage. Finally, material property determinations are linked to optical microscopy observations. The work presented here is part of a broader material characterization effort; related reports are referenced within.
Major breakthroughs in silicon photonics often come from the integration of new materials into the platform, from bonding III-Vs for on-chip lasers to growth of Ge for high-speed photodiodes. This report describes the integration of transparent conducting oxides (TCOs) onto silicon waveguides to enable ultra-compact (<10 μm) electro-optical modulators. These modulators exploit the "epsilon-near-zero" effect in TCOs to create a strong light-matter interaction and allow for a significant reduction in footprint. Waveguide-integrated devices fabricated in the Sandia Microfab demonstrated gigahertz-speed operation of epsilon-near-zero based modulators for the first time. Numerical modeling of these devices matched well with theory and showed a path for significant improvements in device performance with high-carrier-mobility TCOs such as cadmium oxide. A cadmium oxide sputtering capability has been brought online at Sandia; integration of these high mobility films is the subject of future work to develop and mature this exciting class of Si photonics devices.
A previously obscure area of cryptography known as Secure Multiparty Computation (MPC) is enjoying increased attention in the field of privacy-preserving machine learning (ML), because ML models implemented using MPC can be uniquely resistant to capture or reverse engineering by an adversary. In particular, an adversary who captures a share of a distributed MPC model provably cannot recover the model itself, nor data evaluated by the model, even by observing the model in operation. We report on our small project to survey current MPC software and judge its practicality for fielding mission-relevant distributed machine learning models.
Hiperco, manufactured by Carpenter Technology Corporation, is the trademark name for an alloy of equal parts iron and cobalt, with 2 percent vanadium added for enhanced mechanical properties (49Fe-49Co-2V). The alloy is interesting due to its very high magnetic saturation and flux density, yet it has undesirable mechanical properties such as brittleness and low strength. Several Hiperco specimens cut to a "D"-shaped geometry were placed under tension in a load frame, at a constant strain rate at room temperature, until failure occurred. Digital image correlation was used to obtain strain field data throughout the experiment. The data are to be compared with a finite element model to investigate whether Hiperco's unusual behavior can be modeled accurately with the chosen model parameters. Across the five specimens tested, high-level results were consistent: maximum strain and ultimate tensile strength all fell within acceptable bounds. However, several qualitative results differed from specimen to specimen, including the point of failure, the start point of Lüders band formation, and the presence or absence of Lüders bands on the curved section of the "D" specimens.
For strong cryptologic algorithms, it is often assumed that exhaustive search (AKA "brute force") will take 2^b trials, where b is the number of bits of the secret key. What happens, though, if an adversary gains partial knowledge of the secret key? Perhaps he has intercepted a garbled transmission of the key, where he knows the maximum number of garbles but not where they occur; or perhaps he knows the probability of each bit being correct. How much does this help him?
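To make the garbled-transmission case concrete (a standard counting argument, with key size and garble count chosen purely for illustration): if at most g of the b received key bits are wrong, the attacker can try candidates in order of Hamming distance from the intercepted key, so the worst-case search space shrinks from 2^b to the number of keys within distance g.

```python
from math import comb, log2

def garbled_key_trials(b, g):
    """Worst-case trials when at most g of b key bits are wrong:
    try the received key, then all 1-bit flips, all 2-bit flips, ..."""
    return sum(comb(b, i) for i in range(g + 1))

# A 128-bit key with at most 10 garbled bits (hypothetical scenario):
t = garbled_key_trials(128, 10)
print(log2(t))   # ~47.8 "effective key bits" instead of 128
```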
The influence of He ion irradiation on GaAs thermal conductivity was investigated using time-domain thermoreflectance (TDTR) and the phonon gas model (PGM). We found that damage in the shallow, defect-only regions of the radiation profile scatters phonons with a fourth-power frequency dependence due to randomly distributed Frenkel pairs. Damage near the end of range, however, scatters phonons with a second-order frequency dependence due to the cascading defects caused by the rapid radiation energy loss at the end of range, which results in defect clusters. Using the PGM and the experimental thermal conductivity trends, it was then possible to estimate the defect recombination rate and the size of the defect clusters. The methodology developed here is a powerful tool for interrogating radiation damage in semiconductors.
The nanometer-scale characterization technique of Frequency Modulated Kelvin Probe Force Microscopy (FM-KPFM) will be used to assess a diffusion study on thin metal films that undergo accelerated aging. The KPFM technique provides a relatively easy, non-destructive methodology that does not require high-vacuum facilities to obtain nanometer-scale spatial resolution of surface chemistry changes. The KPFM technique will be exercised in an effort to explore its capacity to map surface potential contrast caused by diffusion in a manner that allows for a qualitative assessment of the diffusion of Cu or Al through Au. Supporting data will be obtained from traditional techniques: AES, XPS, and UPS. An aging study was conducted on thin metal test specimens comprised of 500 nm Cu or Al and then 500 nm Au on Si. The accelerated aging process was performed under inert conditions at aging temperatures of 100°C for the Cu/Au film stack and 175°C for the Al/Au film stack, at aging times of 8 hours, 24 hours, 96 hours (4 days), and 216 hours (9 days). A calibration method was developed using Au, Al, and Cu standards to establish the precision and repeatability of the KPFM technique. The average Contact Potential Difference (CPD) values and standard deviations for each metal were found and summarized. Averages from the surface roughness of the AFM topography images and roughness analysis of the KPFM potential images, which yield an average CPD of each area of unaged versus aged coupon surfaces, were compared and show trends indicative of surface chemistry changes.
Brigner, Wesley H.; Hassan, Naimul; Jiang-Wei, Lucian; Hu, Xuan; Saha, Diptish; Bennett, Christopher H.; Marinella, Matthew J.; Incorvia, Jean A.C.; Garcia-Sanchez, Felipe; Friedman, Joseph S.
Spintronic devices based on domain wall (DW) motion through ferromagnetic nanowire tracks have received great interest as components of neuromorphic information processing systems. Previous proposals for spintronic artificial neurons required external stimuli to perform the leaking functionality, one of the three fundamental functions of a leaky integrate-and-fire (LIF) neuron. The use of this external magnetic field or electrical current stimulus results in either a decrease in energy efficiency or an increase in fabrication complexity. In this article, we modify the shape of previously demonstrated three-terminal magnetic tunnel junction neurons to perform the leaking operation without any external stimuli. The trapezoidal structure causes a shape-based DW drift, thus intrinsically providing the leaking functionality with no hardware cost. This LIF neuron, therefore, promises to advance the development of spintronic neural network crossbar arrays.
To safely and reliably operate without a human driver, connected and automated vehicles (CAVs) require more advanced computing hardware and software solutions than are implemented today in vehicles that provide driver-assistance features. A workshop was held to discuss advanced microelectronics and computing approaches that can help meet future energy and computational requirements for CAVs. Workshop questions were posed as follows: will highly automated vehicles be viable with conventional computing approaches or will they require a step-change in computing; what are the energy requirements to support on-board sensing and computing; and what advanced computing approaches could reduce the energy requirements while meeting their computational requirements? At present, there is no clear convergence in the computing architecture for highly automated vehicles. However, workshop participants generally agreed that there is a need to improve the computing performance per watt by at least 10x to advance the degree of automation. Participants suggested that DOE and the national laboratories could play a near-term role by developing benchmarks for determining and comparing CAV computing performance, developing public data sets to support algorithm and software development, and contributing precompetitive advancements in energy efficient computing.
The objective of this study is to assess the commercial viability to develop cost-competitive carbon fiber composites specifically suited for the unique loading experienced by wind turbine blades. The wind industry is a cost-driven market, while carbon fiber materials have been developed for the performance-driven aerospace industry. Carbon fiber has known benefits for reducing wind turbine blade mass due to the significantly improved stiffness, strength, and fatigue resistance per unit mass compared to fiberglass; however, the high relative cost has prohibited broad adoption within the wind industry. Novel carbon fiber materials derived from the textile industry are studied as a potentially more optimal material for the wind industry and are characterized using a validated material cost model and through mechanical testing. The novel heavy tow textile carbon fiber is compared with commercial carbon fiber and fiberglass materials in representative land-based and offshore reference wind turbine models. Some of the advantages of carbon fiber spar caps are observed in reduced blade mass and improved fatigue life. The heavy tow textile carbon fiber is found to have improved cost performance over the baseline carbon fiber and performed similarly to the commercial carbon fiber in wind turbine blade design, but at a significantly reduced cost. This novel carbon fiber was observed to even outperform fiberglass when comparing material cost estimates for spar caps optimized to satisfy the design constraints. This study reveals a route to enable broader carbon fiber usage by the wind industry to enable larger rotors that capture more energy at a lower cost.
Polymers such as PTFE (polytetrafluoroethylene or Teflon), PEEK (polyetheretherketone), EPDM (ethylene propylene diene monomer) rubber, Viton, EPR (ethylene propylene rubber), Nylon, Nitrile rubber, and perfluoroelastomers are commonly employed in supercritical CO2 (sCO2) energy conversion systems. O-rings and gaskets made from these polymers face stringent performance conditions such as elevated temperatures, high pressures, pollutants, and corrosive humid environments. Critical knowledge gaps about polymer degradation from sCO2 exposure need to be addressed. To understand these effects, we have studied nine commonly used polymers subjected to elevated temperatures under isobaric conditions of sCO2 pressure. The polymers (PEEK, Nylon, PTFE, EPDM, Nitrile rubber, EPR, Neoprene, perfluoroelastomer FF202, and Viton) were exposed for 1000 hours at 100°C to 25 MPa sCO2 pressure in an autoclave. In a second study, the elastomers perfluoroelastomer FF202 and EPDM were exposed to 25 MPa sCO2 for 1000 hours at 150°C. Samples were extracted for ex-situ characterization at t = 200 hours and then at the completion of the test at t = 1000 hours. The polymer samples were examined for physical and chemical changes by Dynamic Mechanical and Thermal Analysis (DMTA), Fourier Transform Infrared (FTIR) spectroscopy, and compression set. Density and mass changes were measured immediately after removal from test and 48 hours later, and optical microscopy techniques were also used. Micro-computed tomography (micro-CT) data were generated on select specimens. Supercritical CO2 effects have been identified as either physical or chemical effects, and for each polymer the dominance of one type of effect over the other was evaluated. Attempts were also made to qualitatively link sCO2 effects, such as the lowering or increase of glass transition temperatures, storage modulus changes, mass and compression set changes, chemical changes seen in FTIR analyses, and blister and void formation seen post-exposure, to polymer microstructure-related mechanisms such as plasticization of the polymer matrix, escape of volatiles from the polymer during depressurization, and filler and plasticizer effects on microstructure at rapid depressurization rates.
Sandia National Laboratories (SNL) is investigating photovoltaic (PV) cell configurations, integrating them with the battery-operated Remotely Monitored Sealing Array (RMSA), and testing and evaluating performance for enhanced battery life under various lighting conditions at a facility at the Savannah River Site (SRS) or Savannah River National Laboratory (SRNL). Unattended safeguards equipment (e.g. seals) incorporates many low-power electronic circuits, which are often powered by expensive and environmentally toxic lithium batteries. These batteries must periodically be replaced, adding a radiological hazard for both safeguards inspectors and operators. An extended field test of these prototype PV energy harvesting (EH) RMSAs at an operational nuclear facility will give additional data and allow for an analysis of this technology in a variety of realistic conditions, which will be documented in a final report. RMSAs are used for this testing, but SNL envisions energy harvesting technology may be applicable to additional safeguards equipment.
This letter report signals the end of the NMSBA project to help AEgis Technologies (AEgis) find a way to solve a coupled heat transfer equation for a laser-heated silicon wafer. Accompanying this letter report is a MATLAB live script that documents the work done to date and provides a first attempt at a solution. The scope of work in this phase called for Sandia National Laboratories (SNL) to document the problem in a general form, with the goal of applying for additional funding in Calendar Year 2020 to attempt a complete solution. SNL staff analytically solved the differential heat equation and found solutions that reproduce reasonable shapes for the heat flux. However, the information required for a complete solution is not available.
The 2019 Nonlinear Mechanics and Dynamics (NOMAD) Research Institute was successfully held from June 17 to August 1, 2019. NOMAD brings together participants with diverse technical backgrounds to work in small teams to cultivate new ideas and approaches in engineering mechanics and dynamics research. NOMAD provides an opportunity for researchers, especially early career researchers, to develop lasting collaborations that go beyond what can be established from the limited interactions at their institutions or at annual conferences. A total of 20 students came to Albuquerque, New Mexico, to participate in the seven-week program held at the Mechanical Engineering building on the University of New Mexico campus. The students collaborated on one of seven research projects developed by mentors from Sandia National Laboratories, the University of New Mexico, and other academic institutions. In addition to the research activities, the students attended weekly technical seminars, went on several tours, and socialized at off-hour events, including an Albuquerque Isotopes baseball game. At the end of the summer, the students gave final technical presentations on their research findings. Many of the research discoveries made at NOMAD are published as proceedings at technical conferences and have direct alignment with the critical mission work performed at Sandia.
This SAND report documents the findings of the LDRD project, "Modeling Complex Relationships in Large-Scale Data using Hypergraphs". The project ran from October 2017 through September 2019. The focus of the project was the development and application of hypergraph data analytics to Sandia relational data applications. In this project, we attempted to apply a hypergraph data analysis method—specifically, hypergraph eigenvector centrality—to Sandia mission problems to identify influential entities (people, locations, times, etc.) in the data. Unfortunately, the application data led to graph and hypergraph representations containing disconnected components. To date, there are no well-established techniques for applying eigenvector centrality to such graphs and hypergraphs. In this report, we present several heuristics for computing eigenvector centrality for disconnected graphs. We believe this is an important start toward understanding how to approach the similar problem for hypergraphs, but the project concluded before we made progress on that problem. The ideas, methods, and suggestions presented here can be used for further research into this challenging problem. We also present our ideas for generating graphs with known degree and centrality distributions. The goal in presenting this work is to identify a procedure for analyzing such graphs once the problem of disconnected components has been addressed. When working with a single data set, this generator can be used to create many instances of graphs that can be used to analyze the robustness of the centrality computations for the original data set. Although the results did not match perfectly in the case of the Facebook Ego dataset used in the experiments presented here, this again represents a good start toward a graph generator for such problems. We note that there are potential trade-offs in how the degree and centrality distributions are fit to the original data, and we suggest several possible avenues for follow-on research efforts.
The Message Passing Interface (MPI) libraries use message queues to guarantee correct message ordering between communicating processes. Message queues are in the critical path of MPI communications and thus, the performance of message queue operations can have significant impact on the performance of applications. Collective communications are widely used in MPI applications and they can have considerable impact on generating long message queues. In this paper, we propose a unified message matching mechanism that improves the message queue search time by distinguishing messages coming from point-to-point and collective communications and using a distinct message queue data structure for them. For collective operations, it dynamically profiles the impact of each collective call on message queues during the application runtime and uses this information to adapt the message queue data structure for each collective dynamically. Moreover, we use a partner/non-partner message queue data structure for the messages coming from point-to-point communications. The proposed approach can successfully reduce the queue search time while maintaining scalable memory consumption. The evaluation results show that we can obtain up to 5.5x runtime speedup for applications with long list traversals. Moreover, we can gain up to 15% and 94% queue search time improvement for all elements in applications with short and medium list traversals, respectively.
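A toy sketch of the split-queue idea (Python standing in for MPI library internals, which live in C inside the progress engine; the msg fields are assumed for illustration): collective traffic is matched against a per-communicator queue while point-to-point traffic uses per-partner queues, so each search traverses a shorter list.

```python
from collections import defaultdict, deque

class MatchQueues:
    """Toy model of split message matching (illustration of the idea only,
    not the paper's implementation)."""

    def __init__(self):
        self.collective = defaultdict(deque)   # one queue per communicator
        self.partner = defaultdict(deque)      # one queue per source rank

    def post_unexpected(self, msg):
        # msg is assumed to carry .is_collective, .comm, .source, .tag
        if msg.is_collective:
            self.collective[msg.comm].append(msg)
        else:
            self.partner[msg.source].append(msg)

    def match(self, comm, source, tag, collective=False):
        queue = self.collective[comm] if collective else self.partner[source]
        for i, msg in enumerate(queue):
            if msg.comm == comm and msg.tag == tag:
                del queue[i]               # found: remove and deliver
                return msg
        return None                        # no match: post a receive request
```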
In this work we employ data-driven homogenization approaches to predict the particular mechanical evolution of polycrystalline aggregates with tens of individual crystals. In these oligocrystals the differences in stress response due to microstructural variation is pronounced. Shell-like structures produced by metal-based additive manufacturing and the like make the prediction of the behavior of oligocrystals technologically relevant. The predictions of traditional homogenization theories based on grain volumes are not sensitive to variations in local grain neighborhoods. Direct simulation of the local response with crystal plasticity finite element methods is more detailed, but the computations are expensive. To represent the stress-strain response of a polycrystalline sample given its initial grain texture and morphology we have designed a novel neural network that incorporates a convolution component to observe and reduce the information in the crystal texture field and a recursive component to represent the causal nature of the history information. This model exhibits accuracy on par with crystal plasticity simulations at minimal computational cost per prediction.
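A minimal PyTorch sketch of the architecture class described (layer counts, channel sizes, and the orientation encoding are invented for illustration; the paper's network differs): a 3D convolutional encoder summarizes the voxelized grain texture field into a feature vector that seeds a recurrent pass over the loading history.

```python
import torch
import torch.nn as nn

class CrystalSurrogate(nn.Module):
    """Sketch of a conv + recurrent surrogate for oligocrystal response."""

    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            # 4 input channels: a quaternion orientation per voxel (assumed encoding)
            nn.Conv3d(4, 16, 3, stride=2), nn.ReLU(),
            nn.Conv3d(16, feat, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.rnn = nn.GRU(input_size=1, hidden_size=feat, batch_first=True)
        self.head = nn.Linear(feat, 1)

    def forward(self, texture, strain_path):
        # texture: (B, 4, D, H, W); strain_path: (B, T, 1)
        h0 = self.encoder(texture).unsqueeze(0)   # texture summary seeds the RNN
        out, _ = self.rnn(strain_path, h0)
        return self.head(out)                     # predicted stress per strain step
```

The design mirrors the causal structure named in the abstract: the convolution observes the initial microstructure once, while the recurrence carries the history dependence of plasticity along the strain path.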
Intermetallic alloys possess exceptional soft magnetic properties, including high permeability, low coercivity, and high saturation induction, but exhibit poor mechanical properties that make them impractical to bulk process and use at ideal compositions. We used laser-based Additive Manufacturing to process traditionally brittle Fe–Co and Fe–Si alloys in bulk form without macroscopic defects and at near-ideal compositions for electromagnetic applications. The binary Fe–50Co, as a model material, demonstrated simultaneous high strength (600–700 MPa) and high ductility (35%) in tension, corresponding to a ∼300% increase in strength and an order-of-magnitude improvement in ductility relative to conventionally processed material. Atomic-scale toughening and strengthening mechanisms, based on engineered multiscale microstructures, are proposed to explain the unusual combination of mechanical properties. This work presents an instance in which metal Additive Manufacturing processes are enabling, rather than limiting, the development of higher-performance alloys.
Burtch, Nicholas C.; Baxter, Samuel J.; Heinen, Jurn; Bird, Ashley; Schneemann, Andreas; Dubbeldam, David; Wilkinson, Angus P.
Negative thermal expansion materials are of interest for an array of composite material applications whereby they can compensate for the behavior of a positive thermal expansion matrix. In this work, various design strategies for systematically tuning the coefficient of thermal expansion in a diverse series of metal–organic frameworks (MOFs) are demonstrated. By independently varying the metal, ligand, topology, and guest environment of representative MOFs, a range of negative and positive thermal expansion behaviors are experimentally achieved. Insights into the origin of these behaviors are obtained through an analysis of synchrotron-radiation total scattering and diffraction experiments, as well as complementary molecular simulations. The implications of these findings on the prospects for MOFs as an emergent negative thermal expansion material class are also discussed.
This is the first in a series of three papers documenting two large-scale human reliability analysis (HRA) empirical studies – the International HRA Empirical Study and the US HRA Empirical Study. The two studies are the first major efforts in recent years to benchmark HRA methods by comparing HRA method predictions against actual operator performance in responding to accidents simulated on nuclear power plant (NPP) full-scale simulators. The studies aimed to gain knowledge and insights concerning the strengths and weaknesses of the studied HRA methods and the factors contributing to inter-analyst (or intra-method) variability. In addition, the studies also compared the results of the same HRA method applied by different analysis teams. This paper provides the background and motivation of the studies, the overall study design, the simulation scenarios and human failure events to be analyzed, and concluding remarks concerning lessons learned on benchmarking HRA methods with crew performance of scenarios on NPP simulators.
This Construction Waste Management (CWM) Plan specifies the procedure for the management, control and disposition of items designated as waste material for the project. Since no two construction projects are exactly alike, this manual emphasizes general concepts and approaches from which the contractor may select solutions best suited to the given project.
Stevens, Mark J.; Innes-Gold, Sarah N.; Pincus, Philip A.; Saleh, Omar A.
The configuration of charged polymers is heavily dependent on interactions with surrounding salt ions, typically manifesting as a sensitivity to the bulk ionic strength. Here, we use single-molecule mechanical measurements to show that a charged polysaccharide, hyaluronic acid, shows a surprising regime of insensitivity to ionic strength in the presence of trivalent ions. Using simulations and theory, we propose that this is caused by the formation of a "jacket" of ions, tightly associated with the polymer, whose charge (and thus effect on configuration) is robust against changes in solution composition.
Tetrahedral finite element workflows have the potential to drastically reduce time to solution for computational solid mechanics simulations when compared to traditional hexahedral finite element analogues. A recently developed, higher-order composite tetrahedral element has shown promise in the space of incompressible computational plasticity. Mesh adaptivity has the potential to increase solution accuracy and increase solution robustness. In this work, we demonstrate an initial strategy to perform conformal mesh adaptivity for this higher-order composite tetrahedral element using well-established mesh modification operations for linear tetrahedra. We propose potential extensions to improve this initial strategy in terms of robustness and accuracy.
We present an optimization-based coupling method for local and nonlocal continuum models. Our approach couches the coupling of the models into a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
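The control structure can be pictured with a 1D toy problem. In the sketch below, both overlapping subdomain states are, for brevity, local finite-difference solves of -u'' = 1 on [0,1] with u(0) = u(1) = 0; in the actual method one state is the nonlocal solve and its virtual control is a volume constraint rather than a point value. The objective is the mismatch of the two states on the overlap, minimized over the two controls.

```python
import numpy as np
from scipy.optimize import minimize

h = 1.0 / 100   # grid spacing

def solve(a, b, ua, ub):
    """FD solve of -u'' = 1 on [a, b] with u(a)=ua, u(b)=ub."""
    m = round((b - a) / h) - 1          # number of interior points
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = np.ones(m)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.linalg.solve(A, rhs)
    return np.concatenate([[ua], u, [ub]])

def mismatch(c):
    u1 = solve(0.0, 0.6, 0.0, c[0])     # stand-in for the nonlocal state
    u2 = solve(0.4, 1.0, c[1], 0.0)     # local state, control c[1]
    # squared mismatch on the overlap [0.4, 0.6]
    return np.sum((u1[40:] - u2[:21]) ** 2) * h

c_opt = minimize(mismatch, x0=[0.0, 0.0]).x
# c_opt approaches the exact interface values u(0.6) = u(0.4) = 0.12,
# recovering the global solution u(x) = x(1 - x)/2 on both subdomains.
```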
Ananthan, Shreyas; Yellapantula, Shashank; Hu, Jonathan J.; Lawson, Michael; Sprague, Michael A.; Thomas, Stephen J.
This paper presents a comparison of parallel strong scaling performance of classical and aggregation algebraic multigrid (AMG) preconditioners in the context of wind turbine simulations. Fluid motion is governed by the incompressible Navier–Stokes equations, discretized in space with control-volume finite elements and in time with an inexact projection scheme using an implicit integrator. A discontinuous-Galerkin sliding-mesh algorithm captures rotor motion. The momentum equations are solved with iterative Krylov methods, preconditioned by symmetric Gauss–Seidel (SGS) in Trilinos and ℓ1 SGS in hypre. The mass-continuity equation is solved with GMRES preconditioned by AMG and can account for the majority of simulation time. Reducing this continuity solve time is crucial. Wind turbine simulations present two unique challenges for AMG-preconditioned solvers: the computational meshes include strongly anisotropic elements, and mesh motion requires matrix reinitialization and computation of preconditioners at each time step. Detailed timing profiles are presented and analyzed, and best practices are discussed for both classical and aggregation-based AMG. Results are presented for simulations of two different wind turbines with up to 6 billion grid points on two different computer architectures. For moving-mesh problems that require linear-system reinitialization, the well-established strategy of amortizing preconditioner setup costs over a large number of time steps to reduce the solve time is no longer valid. Instead, results show that faster time to solution is achieved by reducing preconditioner setup costs at the expense of linear-system solve costs. Standard smoothed aggregation with Chebyshev relaxation was found to perform poorly when compared with classical AMG in terms of solve time and robustness. However, plain aggregation was comparable to classical AMG.
The embrittling or strengthening effect of solute atoms at grain boundaries (GBs), commonly known as the embrittling potency, is an essential thermodynamic property for characterizing the effects of solute segregation on GB fracture. One of the more technologically relevant material systems related to embrittlement is the Ni-S system where S has a deleterious effect on fracture behavior in polycrystalline Ni. In this work, we develop a Ni-S embedded-atom method (EAM) interatomic potential that accounts for the embrittling behavior of S at Ni GBs. Results using this new interatomic potential are then compared to previous density functional theory studies and a reactive force-field potential via a layer-by-layer segregation analysis. Our potential shows strong agreement with existing literature and performs well in predicting properties that are not included in the fitting database. Finally, we calculate embrittling potencies and segregation energies for six [100] symmetric-tilt GBs using the new EAM potential. We observe that embrittling potency is dependent on GB structure, indicating that specific GBs are more susceptible to sulfur-induced embrittlement.
The radiation response of TaOx-based RRAM devices fabricated in academic (Set A) and industrial (Set B) settings was compared. Ionization damage from a ⁶⁰Co gamma source did not cause any changes in device resistance for either device type, up to 45 Mrad(Si). Displacement damage from a heavy ion beam caused the Set B devices in the high resistance state to decrease in resistance at 1 × 10²¹ oxygen displacements per cm³; meanwhile, the Set A devices did not exhibit any decrease in resistance due to displacement damage. Both types of devices demonstrated an increase in resistance around 3 × 10²² oxygen displacements per cm³, possibly due to damage at the oxide/metal interfaces. These extremely high levels of damage represent near-total atomic disruption, and if this level of damage were ever reached, other circuit elements would likely fail before the RRAM devices in this study. Generally, both sets of devices were much more resistant to radiation effects than other devices reported in the literature. Displacement damage effects were only observed in the Set A devices once the displacement-induced oxygen vacancies surpassed the intrinsic vacancy concentration in the devices, suggesting that high oxygen vacancy concentration played a role in the devices' high tolerance to displacement damage.
The properties of silica (SiO₂) at extreme conditions have important applications for planetary processes and for high-pressure research. We report the results of 125 plate-impact shock compression experiments on fused silica spanning 200 to 1100 GPa using the Z machine at Sandia National Laboratories. Additionally, we present a complementary set of density functional theory based molecular dynamics calculations, based on an amorphous reference state, that extend the Hugoniot to 2500 GPa. We find good agreement between the Z data, extant laser-driven shock compression experimental data, and the computational results over most of the pressure range. With these results, fused silica can be used as a new impedance-matching standard for shock compression experiments.
Acid gases (e.g., NOx and SOx), commonly found in complex chemical and petrochemical streams, require material development for their selective adsorption and removal. Here, we report the NOx adsorption properties of a family of rare earth (RE) metal–organic framework (MOF) materials. A fundamental understanding of the structure–property relationship of NOx adsorption in the RE-DOBDC materials platform was sought via a combined experimental and molecular modeling study. No structural change was noted following humid NOx exposure. Density functional theory (DFT) simulations indicated that H2O has a stronger affinity than NO2 to bind with the metal center, while NO2 preferentially binds with the DOBDC ligands. Further modeling results indicate no change in binding energy across the RE elements investigated. Stabilization of the NO2 and H2O molecules following adsorption was also noted, predicted to be due to hydrogen bonding between the framework ligands and the molecules and to nanoconfinement within the MOF structure. This interaction also caused distinct changes in the emission spectra, identified experimentally; calculations indicated that this is due to the adsorption of NO2 molecules onto the DOBDC ligand altering the electronic transitions and the resulting photoluminescent properties, a feature that has potential applications in future sensing technologies.
Shale is characterized by the predominant presence of nanometer-scale (1-100 nm) pores. The behavior of fluids in those pores directly controls shale gas storage and release in shale matrix and ultimately the wellbore production in unconventional reservoirs. Recently, it has been recognized that a fluid confined in nanopores can behave dramatically differently from the corresponding bulk phase due to nanopore confinement (Wang, 2014). CO2 and H2O, either preexisting or introduced, are two major components that coexist with shale gas (predominately CH4) during hydrofracturing and gas extraction.
Modelling and Simulation in Materials Science and Engineering
Foiles, Stephen; Mcdowell, David L.; Strachan, Alejandro
This focus issue is motivated by the growing demand for rigorous uncertainty quantification (UQ) in materials modeling, which is driven by the need to use these tools, in conjunction with experiments, to support decision making in materials design, development, and deployment. Traditionally, predictive materials modeling has focused on gaining qualitative insight into the range of mechanisms that control material behavior and how those mechanisms interact to govern material properties and processes. In that context, quantitative evaluation of modeling uncertainty was not a priority. As materials modeling advances, there is increased impetus to employ it in the context of materials design and qualification. This trend is manifested in the establishment of integrated computational materials engineering (ICME) as a growing sub-discipline, as well as by initiatives such as the materials genome in the USA and similar efforts around the globe. Current practice and future needs are described in several recent reports including NASA's Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems. Invariably, these studies point out the need for the field to embrace the challenge of UQ.
The deeply depleted graphene-oxide-semiconductor (D2GOS) junction detector provides an effective architecture for photodetection, enabling direct readout of photogenerated charge. Because of an inherent gain mechanism proportional to graphene's high mobility (μ), this detector architecture exhibits large responsivities and signal-to-noise ratios (SNR). The ultimate sensitivity of the D2GOS junction detector may be limited, however, by the generation of dark charge originating from interface states at the semiconductor/dielectric junction. Here, we examine the performance limitations caused by dark charge and demonstrate its mitigation via the creation of low-interface-defect junctions enabled by surface passivation. The resulting devices exhibit responsivities exceeding 10,000 A/W, a value 10× greater than that of analogous devices without the passivating thermal oxide. With cooling of the detector, the responsivity further increases to over 25,000 A/W, underscoring the impact of surface generation on performance and thus the necessity of minimizing interfacial defects for this class of photodetector.
The International Energy Agency Technology Collaboration Programme for Ocean Energy Systems (OES) initiated the OES Wave Energy Conversion Modelling Task, which focused on the verification and validation of numerical models for simulating wave energy converters (WECs). The long-term goal is to assess the accuracy of and establish confidence in the use of numerical models used in design as well as power performance assessment of WECs. To establish this confidence, the authors used different existing computational modelling tools to simulate given tasks to identify uncertainties related to simulation methodologies: (i) linear potential flow methods; (ii) weakly nonlinear Froude–Krylov methods; and (iii) fully nonlinear methods (fully nonlinear potential flow and Navier–Stokes models). This article summarizes the code-to-code task and code-to-experiment task that have been performed so far in this project, with a focus on investigating the impact of different levels of nonlinearities in the numerical models. Two different WECs were studied and simulated. The first was a heaving semi-submerged sphere, where free-decay tests and both regular and irregular wave cases were investigated in a code-to-code comparison. The second case was a heaving float corresponding to a physical model tested in a wave tank. We considered radiation, diffraction, and regular wave cases and compared quantities, such as the WEC motion, power output and hydrodynamic loading.
Development of new materials and predictive capabilities of component performance hinges on the ability to accurately digitize "as-built" geometries. X-ray computed tomography (CT) offers a non-destructive method of capturing these details but current methodologies are unable to produce the required fidelity for critical component certification. This project focused on discovering the limitations of existing CT reconstruction algorithms and exploring machine learning (ML) methodologies to overcome these limitations. We found that existing CT reconstruction methods are insufficient for Sandia's critical component certification process and that ML algorithms are a viable path forward to improving the quality of CT images.
Causality in an engineered system pertains to how a system output changes due to a controlled change or intervention on the system or system environment. Engineered systems designs reflect a causal theory regarding how a system will work, and predicting the reliability of such systems typically requires knowledge of this underlying causal structure. The aim of this work is to introduce causal modeling tools that inform reliability predictions based on biased data sources. We present a novel application of the popular structural causal modeling (SCM) framework to reliability estimation in an engineering application, illustrating how this framework can inform whether reliability is estimable and how to estimate reliability given a set of data and assumptions about the subject matter and data generating mechanism. When data are insufficient for estimation, sensitivity studies based on problem-specific knowledge can inform how much reliability estimates can change due to biases in the data and what information should be collected next to provide the most additional information. We apply the approach to a pedagogical example related to a real, but proprietary, engineering application, considering how two types of biases in data can influence a reliability calculation.
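To make the bias-correction idea concrete, the following is a minimal sketch of a backdoor-style adjustment on a toy, fully hypothetical system: a confounder ("harsh" environment) drives both selection into the data set and failure, so the naive reliability computed from tested units is biased; stratifying on the confounder and reweighting by its population distribution recovers the population reliability. All names and probabilities are illustrative, not the paper's application.

```python
# Toy illustration: reliability from biased (confounded) observational data.
# The data-generating mechanism and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounder: a harsh environment makes both testing and failure more likely.
harsh = rng.random(n) < 0.3
tested = rng.random(n) < np.where(harsh, 0.8, 0.2)   # biased selection into the data
fail = rng.random(n) < np.where(harsh, 0.10, 0.02)

# Naive estimate: reliability from units that happened to be tested (biased low).
naive = 1.0 - fail[tested].mean()

# Backdoor adjustment: stratify on the confounder, reweight by its population
# distribution (assumes the confounder is observed and sufficient for adjustment).
adj = 0.0
for h, w in [(True, 0.3), (False, 0.7)]:
    stratum = tested & (harsh == h)
    adj += w * (1.0 - fail[stratum].mean())

print(f"naive reliability:    {naive:.4f}")   # ~0.93, biased by selection
print(f"adjusted reliability: {adj:.4f}")     # ~0.956 = 0.3*0.90 + 0.7*0.98
```

When the adjustment variable is unobserved, the same machinery supports the sensitivity studies described above: sweep the assumed stratum weights and report the range of reliability values consistent with the data.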
We analyze the dynamics from microsecond-long, atomistic molecular dynamics (MD) simulations of a series of precise poly(ethylene-co-acrylic acid) ionomers neutralized with lithium, with three different spacer lengths between acid groups on the ionomers and at two temperatures. At short times, the intermediate structure factor calculated from the MD simulations is in reasonable agreement with quasi-elastic neutron scattering data for partially neutralized ionomers. For ionomers that are 100% neutralized with lithium, the simulations reveal three dynamic processes in the chain dynamics. The fast process corresponds to hydration librations, the medium-time process corresponds to local conformational motions of the portions of the chains between ionic aggregates, and the long-time process corresponds to relaxation of the ionic aggregates. At 600 K, the dynamics are sufficiently fast to observe the early stages of lithium-ion motion and ionic aggregate rearrangements. In the partially neutralized ionomers with isolated ionic aggregates, the Li-ion-containing aggregates rearrange by a process of merging and breaking up, similar to what has been observed in coarse-grained (CG) simulations. In the 100% neutralized ionomers that contain percolated ionic aggregates, the chains remain pinned by the percolated aggregate at long times, but the lithium ions are able to move along the percolated aggregate. Here, the ion dynamics are also qualitatively similar to those seen in previous CG simulations.
Here, molecular dynamics simulations are used to study relaxation of entangled polymer melts deformed far from equilibrium by uniaxial extensional flow. Melts are elongated to a Hencky strain of 6 at Rouse–Weissenberg numbers from 0.16 to 25, producing states with a wide range of chain alignment. Then flow is ceased and the systems are allowed to relax until twice the equilibrium disentanglement time. The relaxation of the stress is correlated with changes in the conformation of chains and the geometry of the tube confining them. Independent of initial alignment, chains relax to conformations consistent with the equilibrium tube length and diameter on the equilibrium Rouse time. Subsequent relaxation is the same for all systems and controlled by the equilibrium disentanglement time. These results are counter to prior work that indicates orientation causes a large, stretch-dependent reduction in the entanglement density that can only be recovered slowly by reptation on the equilibrium disentanglement time, raising fundamental questions about the nature of entanglement in aligned polymer melts.
The process of ductile fracture in metals often begins with void nucleation at second-phase particles and inclusions. Previous studies of rupture in high-purity face-centered-cubic metals, primarily aluminum (Al), concluded that second-phase particles are necessary for cavitation. A recent study of tantalum (Ta), a body-centered-cubic metal, demonstrated that voids nucleate readily at deformation-induced dislocation boundaries. These same features form in Al during plastic deformation. This study investigates why void nucleation was not previously observed at dislocation boundaries in Al. Here, we demonstrate that void nucleation is impeded in Al by room-temperature dynamic recrystallization (DRX), which erases these boundaries before voids can nucleate at them. If dislocation cells reform after DRX and before specimen separation by necking, void nucleation is observed. These results indicate that dislocation substructures likely play an important role in ductile rupture.
Jawaharram, Gowtham S.; Barr, Christopher M.; Monterrosa, Anthony M.; Hattar, Khalid M.; Averback, Robert S.; Dillon, Shen J.
Irradiation-induced creep (IIC) compliance in NiCoFeCrMn high entropy alloys is measured as a function of grain size (30–80 nm) and temperature (23–500 °C). For 2.6 MeV Ag3+ irradiation at a dose rate of 1.5 × 10⁻³ dpa/s, the transition from the recombination-limited to the sink-limited regime occurs at ~100 °C. In the sink-limited regime, the IIC compliance scales inversely with grain size, consistent with a recently proposed model for grain boundary IIC. The thermal creep rate is also measured; it does not become comparable to the IIC rate, however, until ~650 °C. Here, the results are discussed in the context of defect kinetics in irradiated HEA systems.
The focus of this document is to record the learning and process development achieved through the completion of the Nano-Engineering of Detector Surfaces to Offer Unprecedented Imager Sensitivity to Soft X-rays and Low Energy Electrons LDRD. The goal of this effort was to study different silicon detector surface preparation methods, such as ion implant parameters and the addition of a quantum 2-layer superlattice. Preparing the surface of silicon detectors (front-side illuminated or bonded backside illuminated) increases the responsivity of the diode to shallowly absorbed photons. This increased sensitivity in turn allows for greater fidelity in imaging events that emit soft X-rays or low-energy electrons. Prior work has focused on passivating the surface of silicon detectors with thin layers (tens of nm thick) of materials to reduce surface recombination sites. Measurements of visible-light quantum efficiency, electron responsivity, and pulsed X-ray response indicate that detectors with a 2-layer superlattice enjoy a significant benefit over equivalent detectors using an ion implant at the illuminated surface.
For many applications, the promises of additive manufacturing (AM), rapid development cycles and fabrication of ready-to-use, geometrically complex parts, cannot be realized because of cumbersome thermal postprocessing. This postprocessing is necessary when the nonequilibrium microstructures produced by AM lead to poor material properties. This study investigated whether electropulsing, the process of sending high-current-density electrical pulses through a metallic part, could be used to modify the material properties of AM parts. This process has been used to modify conventional wrought materials but had never been applied to AM materials. Two representative AM materials were examined: 316L stainless steel and AlSi10Mg. Two hours of annealing are needed to remove chemical microsegregation in AM 316L; using electropulsing, this was accomplished in 200 seconds. The ductility of AlSi10Mg parts was increased above that of the as-built material using electropulsing. This study demonstrated that electropulsing can be used to modify the microstructures of AM metals.
Background: Lignocellulosic biomass is recognized as a promising renewable feedstock for the production of biofuels. However, current methods for converting biomass into fermentable sugars are considered too expensive and inefficient due to the recalcitrance of the secondary cell wall. Biomass composition can be modified to create varieties that are efficiently broken down to release cell wall sugars. This study focused on identifying the key biomass components influencing plant cell wall recalcitrance that can be targeted for selection in sugarcane, an important and abundant source of biomass. Results: Biomass composition and the amount of glucan converted into glucose after saccharification were measured in leaf and culm tissues from seven sugarcane genotypes varying in fiber composition after no pretreatment and after dilute acid, hydrothermal, and ionic liquid pretreatments. In extractives-free sugarcane leaf and culm tissue, glucan, xylan, acid-insoluble lignin (AIL), and acid-soluble lignin (ASL) ranged from 20% to 32%, 15% to 21%, 14% to 20%, and 2% to 4%, respectively. The ratio of syringyl (S) to guaiacyl (G) content in the lignin ranged from 1.5 to 2.2 in the culm and from 0.65 to 1.1 in the leaf. Hydrothermal and dilute acid pretreatments predominantly reduced xylan content, while the ionic liquid (IL) pretreatment targeted AIL reduction. The amount of glucan converted into glucose after 26 h of pre-saccharification was highest after IL pretreatment (42% in culm and 63.5% in leaf) compared to the other pretreatments. Additionally, glucan conversion in leaf tissues was approximately 1.5-fold that in culm tissues. Percent glucan conversion varied between genotypes, but no genotype was superior to all others across the pretreatment groups. Path analysis revealed that S/G ratio, AIL, and xylan had the strongest negative associations with percent glucan conversion, while ASL and glucan content had strong positive influences. Conclusion: To improve saccharification efficiency of lignocellulosic biomass, breeders should focus on reducing S/G ratio, xylan, and AIL content and increasing ASL and glucan content. This will be key for the development of sugarcane varieties for bioenergy uses.
Fractures within the earth control rock strength and fluid flow, but their dynamic nature is not well understood. As part of a series of underground chemical explosions in granite in Nevada, we collected and analyzed microfracture density data sets prior to, and following, individual explosions. Our work shows an ~4-fold increase in both open and filled microfractures following the explosions. Based on the timing of core retrieval, filling of some new fractures occurs in as little as 6 weeks after fracture opening under shallow (<100 m) crustal conditions. These results suggest that near-surface fractures may fill quite rapidly, potentially changing permeability on time scales relevant to oil, gas, and geothermal energy production; carbon sequestration; seismic cycles; and radionuclide migration from nuclear waste storage and underground nuclear explosions.
Four-dimensional (x, y, z, t) x-ray computed tomography was demonstrated in an optically complex spray using an imaging system consisting of three x-ray sources and three high-speed detectors. The x-ray sources consisted of high-flux rotating anode x-ray tube sources that illuminated the spray from three lines of sight. The absorption, along each absorption path, was collected using a CsI phosphor plate and imaged by a high-speed intensified CMOS camera at 20 kHz. The radiographs were converted to a quantitative equivalent path length (EPL) of liquid using a variable attenuation coefficient to account for beam hardening. The EPL data were then reconstructed using the algebraic reconstruction technique into high-speed time sequences of the three-dimensional liquid mass distribution.
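The algebraic reconstruction technique named above is, at its core, a Kaczmarz iteration: each ray's measured path length defines a hyperplane, and the image estimate is projected onto each hyperplane in turn. The following minimal sketch uses a random stand-in weight matrix and toy sizes, not the authors' three-source measurement geometry:

```python
# Minimal algebraic reconstruction technique (Kaczmarz) sketch: recover a
# flattened pixel vector x from line-integral projections b = A @ x.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_rays = 64, 180
A = rng.random((n_rays, n_pix))          # ray-path weight matrix (stand-in geometry)
x_true = rng.random(n_pix)               # "equivalent path length" image, flattened
b = A @ x_true                           # measured projections

x = np.zeros(n_pix)
relax = 0.5                              # relaxation factor, 0 < relax <= 1
for sweep in range(50):
    for i in range(n_rays):              # project onto each ray's hyperplane in turn
        a = A[i]
        x += relax * (b[i] - a @ x) / (a @ a) * a

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the sparse-view setting of the experiment (three lines of sight), the system is severely underdetermined, and regularization plus the time-sequence redundancy does the remaining work.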
The roles of subgrains, texture, and surface energy during dynamic abnormal grain growth (DAGG) were examined in a commercial-purity Mo rod material. DAGG was observed in this material during tensile deformation at 2023 K (1750 °C). Cooling of specimens after tensile testing was sufficiently rapid to preserve both subgrain structures developed during deformation and several abnormal grains at early stages of growth. These and other microstructural features were characterized to evaluate how subgrains and boundary character influence the early stages of DAGG. Subgrains were observed in the deformed polycrystalline material but were generally absent in newly formed abnormal grains. This was identified as the cause of the sudden drop in flow stress observed at the initiation of DAGG. It is proposed that subgrain intersections with abnormal grain boundaries provide a driving pressure for DAGG. Subgrains within the deformed polycrystals were observed to locally change the boundary curvature at their intersections with abnormal grain boundaries, which likely encouraged growth of the abnormal grains into the deformed polycrystals. Abnormal grains produced by DAGG retained crystallographic orientations and boundary characters that closely resembled those of the polycrystalline material from which they grew. This suggests that neither differences in orientation nor boundary character were important to DAGG in this material.
To elucidate the damage mechanisms in syntactic foams with hollow glass microballoon (GMB) reinforcement and elastomer matrices, in situ X-ray computed tomography mechanical testing was performed on syntactic foams with increasing GMB volume fraction. Image processing and digital volume correlation techniques identified very different damage mechanisms compared to syntactic foams with brittle matrices. In particular, the prevailing mechanism transitioned from dispersed GMB collapse at low volume fraction to clustered GMB collapse at high volume fraction. Moreover, damage initiated and propagated earlier in closely-packed GMBs for all specimens. Both of these trends were attributed to increased interaction between closely-packed GMBs. This was confirmed by statistical analysis of GMB damage, which identified a consistent, inverse relationship between the probability of survival and the local coordination number (Nneighbor) across all specimens.
Carlberg, Kevin T.; Jameson, Antony; Kochenderfer, Mykel J.; Morton, Jeremy; Peng, Liqian; Witherden, Freddie D.
Data I/O poses a significant bottleneck in large-scale CFD simulations; thus, practitioners would like to significantly reduce the number of times the solution is saved to disk, yet retain the ability to recover any field quantity (at any time instance) a posteriori. The objective of this work is therefore to accurately recover missing CFD data a posteriori at any time instance, given that the solution has been written to disk at only a relatively small number of time instances. We consider in particular high-order discretizations (e.g., discontinuous Galerkin), as such techniques are becoming increasingly popular for the simulation of highly separated flows. To satisfy this objective, this work proposes a methodology consisting of two stages: 1) dimensionality reduction and 2) dynamics learning. For dimensionality reduction, we propose a novel hierarchical approach. First, the method reduces the number of degrees of freedom within each element of the high-order discretization by applying autoencoders from deep learning. Second, the methodology applies principal component analysis to compress the global vector of encodings. This leads to a low-dimensional state, which associates with a nonlinear embedding of the original CFD data. For dynamics learning, we propose to apply regression techniques (e.g., kernel methods) to learn the discrete-time velocity characterizing the time evolution of this low-dimensional state. A numerical example on a large-scale CFD example characterized by nearly 13 million degrees of freedom illustrates the suitability of the proposed method in an industrial setting.
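The two-stage pipeline can be sketched compactly. In the toy version below, per-element PCA stands in for the paper's deep convolutional autoencoders, and kernel ridge regression stands in for the kernel-method regression of the discrete-time velocity; all array sizes are illustrative:

```python
# Sketch of the two-stage pipeline: (1) compress per-element DOFs, then compress
# the global vector of encodings; (2) regress the discrete-time latent velocity.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
n_t, n_elem, dof_per_elem = 200, 50, 20
snaps = rng.standard_normal((n_t, n_elem, dof_per_elem)).cumsum(axis=0)  # fake history

# Stage 1: reduce each element's DOFs (one shared encoder across elements here).
elem_pca = PCA(n_components=4).fit(snaps.reshape(-1, dof_per_elem))
enc = elem_pca.transform(snaps.reshape(-1, dof_per_elem)).reshape(n_t, -1)

# Stage 2: compress the global vector of element encodings.
glob_pca = PCA(n_components=6).fit(enc)
z = glob_pca.transform(enc)                      # low-dimensional latent trajectory

# Dynamics learning: regress the discrete-time velocity z[k+1] - z[k] on z[k].
model = KernelRidge(kernel="rbf", alpha=1e-3).fit(z[:-1], z[1:] - z[:-1])

# Recover a missing time instance: roll the latent dynamics forward one step,
# then decode back through both stages.
z_pred = z[100] + model.predict(z[100][None])[0]
rec = elem_pca.inverse_transform(
    glob_pca.inverse_transform(z_pred[None]).reshape(-1, 4)).reshape(n_elem, dof_per_elem)
```

The saved snapshots anchor the latent trajectory; any unsaved time instance is recovered by integrating the learned latent dynamics from the nearest saved state and decoding.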
Dynamics of melts and solutions of high molecular weight polymers and biopolymers is controlled by topological constraints (entanglements) imposing a sliding chain motion along an effective confining tube. For linear chains, the tube size is determined by universal packing number Pe, the number of polymer strands within a confining tube that is required for chains to entangle. Here we show that in melts of brush-like (graft) polymers, consisting of linear chain backbones with grafted side chains, Pe is not a universal number and depends on the molecular architecture. In particular, we use coarse-grained molecular dynamics simulations to demonstrate that the packing number is a nonmonotonic function of the ratio Rsc/Rg of the size of the side chains (Rsc) to that of the backbone spacer between neighboring grafting points (Rg). This parameter characterizes the degree of mutual interpenetration between side chains of the same macromolecule. We show that Pe of brush-like polymers first decreases with increasing side chain grafting density in the dilute side chain regime (Rsc < Rg), then begins to increase in the regime of overlapping side chains (Rsc > Rg), approaching the value for linear chains in the limit of densely grafted side chains. This dependence of the packing number reflects a crossover from chain-like entanglements in systems with loosely grafted side chains (comb-like polymers) to entanglements between flexible filaments (bottlebrush-like polymers). Our simulation results are in agreement with the experimental data for the dependence of a plateau modulus on the molecular architecture of graft poly(n-butyl acrylates) and poly(norbornene)-graft-poly(lactide) melts.
The impact of dry-etch-induced defects on the electrical performance of regrown, c-plane, GaN p-n diodes where the p-GaN layer is formed by epitaxial regrowth using metal-organic, chemical-vapor deposition was investigated. Diode leakage increased significantly for etched-and-regrown diodes compared to continuously grown diodes, suggesting a defect-mediated leakage mechanism. Deep level optical spectroscopy (DLOS) techniques were used to identify energy levels and densities of defect states to understand etch-induced damage in regrown devices. DLOS results showed the creation of an emergent, mid-gap defect state at 1.90 eV below the conduction band edge for etched-and-regrown diodes. Reduction in both the reverse leakage and the concentration of the 1.90 eV mid-gap state was achieved using a wet chemical treatment on the etched surface before regrowth, suggesting that the 1.90 eV deep level contributes to increased leakage and premature breakdown but can be mitigated with proper post-etch treatments to achieve >600 V reverse breakdown operation.
Recently, we developed a new method for generating effective core potentials (ECPs) using valence energy isospectrality with explicitly correlated all-electron (AE) excitations and norm-conservation criteria. We apply this methodology to the 3rd-row main group elements, creating new correlation consistent ECPs (ccECPs) and also deriving additional ECPs to complete the ccECP table for H-Kr. For K and Ca, we develop Ne-core ECPs, and for the 4p main group elements, we construct [Ar]3d10-core potentials. Scalar relativistic effects are included in their construction. Our ccECPs reproduce AE spectra with significantly better accuracy than many existing pseudopotentials and show better overall consistency across multiple properties. The transferability of ccECPs is tested on monohydride and monoxide molecules over a range of molecular geometries. For the constructed ccECPs, we also provide optimized DZ-6Z valence Gaussian basis sets.
Metasurfaces are an emerging technology that may supplant many of the conventional optics found in imaging devices, displays, and precision scientific instruments. Here, we develop a method for designing optical systems composed of multiple unique metasurfaces aligned in sequence and separated by distances much larger than the design wavelengths. Our approach is based on computational inverse design, also known as the adjoint-gradient method. This technique enables thousands or millions of independent design variables (e.g., the shapes of individual meta-atoms) to be optimized in parallel, with little or no intervention required by the user. The assumptions underlying our method are as follows: we use the local periodic approximation to determine the phase-response of a given meta-atom, we use the scalar wave approximation to propagate light fields between metasurface layers, and we do not consider multiple reflections between metasurface layers (analogous to a sequential-optics ray-tracer). To demonstrate the broad applicability of our method, we use it to design an achromatic doublet metasurface lens, a spectrally-multiplexed holographic element, and an ultra-compact optical neural network for classifying handwritten digits.
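The adjoint-gradient idea can be illustrated for a single phase mask under the scalar-wave and sequential-propagation assumptions stated above. In this hypothetical sketch, the unitary matrices P1 and P2 are random stand-ins for angular-spectrum propagation operators, the objective (power focused onto one target pixel) is illustrative, and the analytic gradient plays the role of the adjoint computation:

```python
# Single-mask sketch of inverse design by adjoint gradient: optimize a phase
# profile phi so that scalar light focuses onto a target pixel.
import numpy as np

rng = np.random.default_rng(3)
n = 128
P1 = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))[0]
P2 = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))[0]
E_in = np.ones(n, dtype=complex) / np.sqrt(n)    # unit-power plane wave
c = np.zeros(n, dtype=complex); c[n // 2] = 1.0  # target: center pixel

u = P1 @ E_in                                    # field arriving at the metasurface
w = P2.conj().T @ c                              # adjoint field, propagated backward
phi = np.zeros(n)
print("initial power:", abs(np.vdot(w, np.exp(1j * phi) * u)) ** 2)

for it in range(1000):
    t = np.exp(1j * phi)                         # local-periodic phase response
    a = np.vdot(w, t * u)                        # complex amplitude at the target
    # dJ/dphi_k for J = |a|^2, derived analytically (the "adjoint gradient"):
    grad = 2.0 * np.real(np.conj(a) * 1j * np.conj(w) * t * u)
    phi += 30.0 * grad                           # ascent step, chosen heuristically

print("focused power:", abs(np.vdot(w, np.exp(1j * phi) * u)) ** 2)
```

Every element of phi is updated in parallel from one forward and one backward propagation, which is what makes the approach scale to the thousands or millions of meta-atom variables mentioned above.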
Van De Walle, Axel; Sabisch, Julian E.C.; Minor, Andrew M.; Asta, Mark
While rhenium has proven to be an ideal material in fast-cycling high-temperature applications such as rocket nozzles, its prohibitive cost limits its continued use and motivates a search for viable cost-effective substitutes. We show that a simple design principle that trades off average valence electron count and cost considerations proves helpful in identifying a promising pool of candidate substitute alloys: The Mo-Ru-Ta-W quaternary system. We demonstrate how this picture can be combined with a computational thermodynamic model of phase stability, based on high-throughput ab initio calculations, to further narrow down the search and deliver alloys that maintain rhenium's desirable hcp crystal structure. This thermodynamic model is validated with comparisons to known binary phase diagram sections and corroborated by experimental synthesis and structural characterization demonstrating multiprinciple-element hcp solid-solution samples selected from a promising composition range.
The interaction between multiple intense ultrashort laser pulses and solids is known to produce a regular nanoscale surface corrugation. A coupled mechanism has been identified that operates in a specific range of fluences in GaAs that exhibits transient loss of the imaginary part of the dielectric function and χ(2), which produces a unique corrugation known as high spatial frequency laser-induced periodic surface structures (HSFL). The final structures have 180 nm periods, and their alignment perpendicular to the laser polarization is first observed in an intermediate morphology with correlation distances of 150 ± 40 nm. Quantum molecular dynamics simulations suggest that HSFL self-assembly is initiated when the intense laser field softens the interatomic binding potential, which leads to an ultrafast generation of point defects. The morphological evolution begins as self-interstitial diffusion, driven by stress relaxation, to the surface, producing 1–2 nm tall islands. An ab initio calculation of excited electron concentration combined with a Drude-Lorentz model of the excited GaAs dielectric function is used to determine that the conditions for surface plasmon polariton (SPP) coupling at HSFL formation fluences are both satisfied and occur at wavelengths that are imprinted into the observed surface morphologies. The evolution of these morphologies is explained as the interplay between surface plasmon polaritons that localize defect generation within the structures present from the previous laser exposure and stress-relaxation-driven defect diffusion.
The Barrow Hydrogen Fill System is intended for use by authorized Sandia National Laboratories personnel and its contractors who are involved in research relating to the deployment of weather balloons for the Department of Energy's Atmospheric Radiation Measurement (ARM) Program, https://www.arm.gov/, and the National Weather Service (NWS), https://www.weather.gov/. Hydrogen is supplied from a hydrogen generation shelter and an outdoor storage tank. The balloons are filled and released by a remote balloon launcher (RBL) commercially available from Vaisala. The RBL was designed to be used with helium or hydrogen and has recently been inspected by Vaisala for hydrogen operation. This system is composed of a pressure regulator, ball valves, pressure relief valves, tubing, and hose. All pressurized components are off-the-shelf items with a minimum pressure rating of 300 psi, assembled using best practices by trained pressure-component installers. Non-pressurized components are industry-recommended components assembled using best-practice techniques.
Here, experiments were performed within Sandia National Labs' Multiphase Shock Tube to measure and quantify the shock-induced dispersal of a shock/dense particle curtain interaction. Following interaction with a planar travelling shock wave, schlieren imaging at 75 kHz was used to track the upstream and downstream edges of the curtain. Data were obtained for two particle diameter ranges (d_p = 106–125 and 300–355 μm) across Mach numbers ranging from 1.24 to 2.02. Using these data, along with data compiled from the literature, the dispersion of a dense curtain was studied for multiple Mach numbers (1.2–2.6), particle sizes (100–1000 μm), and volume fractions (9–32%). Data were non-dimensionalized according to two different scaling methods found within the literature, with time scales defined based on either particle propagation time or the pressure ratio across a reflected shock. The data reflect that spreading of the particle curtain is a function of the volume fraction, with the effectiveness of each time scale based on the proximity of a given curtain's volume fraction to the dilute mixture regime. It is observed that volume fraction corrections applied to a traditional particle propagation time scale result in the best collapse of the data between the two time scales tested here. In addition, a constant-thickness regime has been identified, which has not been noted within previous literature.
The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.
SIERRA/Aero is a compressible fluid dynamics program intended to solve a wide variety of compressible fluid flows, including transonic and hypersonic problems. This document describes the commands for assembling a fluid model for analysis with this module, henceforth referred to simply as Aero for brevity. Aero is an application developed using the SIERRA Toolkit (STK). The intent of STK is to provide a set of tools for handling common tasks that programmers encounter when developing a code for numerical simulation. For example, components of STK provide field allocation and management, and parallel input/output of field and mesh data. These services also allow the development of coupled mechanics analysis software for a massively parallel computing environment.
The SNL Sierra Mechanics code suite is designed to enable simulation of complex multiphysics scenarios. The code suite is composed of several specialized applications which can operate either in standalone mode or coupled with each other. Arpeggio is a supported utility that enables loose coupling of the various Sierra Mechanics applications by providing access to Framework services that facilitate the coupling. More importantly Arpeggio orchestrates the execution of applications that participate in the coupling. This document describes the various components of Arpeggio and their operability. The intent of the document is to provide a fast path for analysts interested in coupled applications via simple examples of its usage.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling support of manufacturing processing is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) and level set based free and moving boundary tracking in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled physics problems are solved in several ways including fully-coupled Newton's method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity and dynamic load balancing are some of Aria's more advanced capabilities.
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
Progress is reported on development and testing of the Fuel/Basket Degradation Model and the Waste Package Breach Model. The work follows on from the FY19 model development reported previously (SNL-ICG 2019). Modeling has addressed questions framed in the deliverable description: 1) how fuel/basket damage, including from seismic ground motion, could impact reactivity; 2) whether dynamic degradation processes (e.g., those caused by seismic ground motion) should be targets for further model development; and 3) what the key model parameters are and how they affect the likelihood and energy of criticality events. For basket structure made from aluminum-based metal-matrix composite material, basket collapse would coincide with loss of neutron absorbers and would occur within a few hundred years after waste package breach. Degradation models discussed in this report have thus far focused on a generic 32-PWR (pressurized water reactor fuel) basket with aluminum-based plates.
The effects of irradiation on 3C-silicon carbide (SiC) and amorphous SiC (a-SiC) are investigated using both in situ transmission electron microscopy (TEM) and complementary molecular dynamics (MD) simulations. The single ion strikes identified in the in situ TEM irradiation experiments, utilizing a 1.7 MeV Au3+ ion beam with nanosecond resolution, are contrasted to MD simulation results of the defect cascades produced by 10-100 keV Si primary knock-on atoms (PKAs). The MD simulations also investigated defect structures that could possibly be responsible for the observed strain fields produced by single ion strikes in the TEM ion beam irradiation experiments. Both MD simulations and in situ TEM experiments show evidence of radiation damage in 3C-SiC but none in a-SiC. Selected area electron diffraction patterns, based on the results of MD simulations and in situ TEM irradiation experiments, show no evidence of structural changes in either 3C-SiC or a-SiC.
Lundh, James S.; Chatterjee, Bikramjit; Song, Yiwen; Baca, Albert G.; Kaplar, Robert J.; Beechem, Thomas E.; Allerman, Andrew A.; Armstrong, Andrew M.; Klein, Brianna A.; Bansal, Anushka; Talreja, Disha; Pogrebnyakov, Alexej; Heller, Eric; Gopalan, Venkatraman; Redwing, Joan M.; Foley, Brian M.; Choi, Sukwon
Improvements in radio frequency and power electronics can potentially be realized with ultrawide bandgap materials such as aluminum gallium nitride (AlxGa1-xN). Multidimensional thermal characterization of an Al0.30Ga0.70N channel high electron mobility transistor (HEMT) was done using Raman spectroscopy and thermoreflectance thermal imaging to experimentally determine the lateral and vertical steady-state operating temperature profiles. An electrothermal model of the Al0.30Ga0.70N channel HEMT was developed to validate the experimental results and investigate potential device-level thermal management. While the low thermal conductivity of this III-N ternary alloy system results in more device self-heating at room temperature, the temperature insensitive thermal and electrical output characteristics of AlxGa1-xN may open the door for extreme temperature applications.
Trust in microelectronics-based systems can be characterized as the level of confidence that the system is free of subversive alterations inserted by a malicious adversary during system development. Outkin et al. recently developed GPLADD, a game-theoretic framework that enables trust analysis through a set of mathematical models that represent multi-step attack graphs and contention between system attackers and defenders. This paper extends GPLADD to include detection of attacks on development processes and defender decision processes that occur in response to detection events. The paper provides mathematical details for implementing attack detection and demonstrates the models on an example system. The authors further demonstrate how optimal defender strategies vary when solution concepts and objective functions are modified.
This work proposes an approach for latent dynamics learning that exactly enforces physical conservation laws. The method comprises two steps. First, we compute a low-dimensional embedding of the high-dimensional dynamical-system state using deep convolutional autoencoders. This defines a low-dimensional nonlinear manifold on which the state is subsequently enforced to evolve. Second, we define a latent dynamics model that associates with a constrained optimization problem. Specifically, the objective function is defined as the sum of squares of conservation-law violations over control volumes in a finite-volume discretization of the problem; nonlinear equality constraints explicitly enforce conservation over prescribed subdomains of the problem. The resulting dynamics model—which can be considered as a projection-based reduced-order model—ensures that the time-evolution of the latent state exactly satisfies conservation laws over the prescribed subdomains. In contrast to existing methods for latent dynamics learning, this is the only method that both employs a nonlinear embedding and computes dynamics for the latent state that guarantee the satisfaction of prescribed physical properties. Numerical experiments on a benchmark advection problem illustrate the method's ability to significantly reduce the dimensionality while enforcing physical conservation.
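A toy rendition of the second step may help fix ideas. In the sketch below, a fixed linear map stands in for the paper's convolutional decoder, the physics is 1D periodic upwind advection in a finite-volume discretization, and SciPy's SLSQP enforces a single global-mass equality constraint (the method described above enforces conservation over prescribed subdomains); all sizes are illustrative:

```python
# Toy conservation-enforced latent dynamics: advance a latent state z so the
# decoded finite-volume state minimizes squared FV residuals, subject to an
# equality constraint that total mass over the domain is exactly conserved.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_cells, n_latent = 50, 5
dx, dt, vel = 1.0 / 50, 0.01, 1.0

Phi = np.linalg.qr(rng.standard_normal((n_cells, n_latent)))[0]  # linear "decoder"
dec = lambda z: Phi @ z

def fv_residual(u_new, u_old):
    # Implicit upwind finite-volume residual for 1D periodic advection.
    flux_in = vel * np.roll(u_new, 1)            # upwind face flux
    return (u_new - u_old) / dt + (vel * u_new - flux_in) / dx

u_old = dec(rng.standard_normal(n_latent))       # previous decoded state
z0 = Phi.T @ u_old                               # warm start in latent space

obj = lambda z: np.sum(fv_residual(dec(z), u_old) ** 2)
cons = {"type": "eq",                            # exact global mass conservation
        "fun": lambda z: (np.sum(dec(z)) - np.sum(u_old)) * dx}

res = minimize(obj, z0, method="SLSQP", constraints=[cons])
u_new = dec(res.x)
print("mass drift:", abs(np.sum(u_new) - np.sum(u_old)) * dx)   # ~0 by construction
```

The essential point survives the simplifications: conservation holds because it is imposed as a hard constraint on the latent update, not because the learned embedding happens to respect it.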
Magnetic implosion of cylindrical metallic shells (liners) is an effective method for compressing preheated, premagnetized fusion fuel to thermonuclear conditions [M. R. Gomez et al., Phys. Rev. Lett. 113, 155003 (2014)] but suffers from magneto-Rayleigh–Taylor instabilities (MRTI) that limit the attainable fuel pressure, density, and temperature. A novel method proposed by Schmit et al. [Phys. Rev. Lett. 117, 205001 (2016)] uses a helical magnetic drive field with a dynamic polarization at the outer surface of the liner during implosion, reducing (linear) MRTI growth by one to two orders of magnitude via a solid liner dynamic screw pinch (SLDSP) effect. Our work explores the design features necessary for successful experimental implementation of this concept. Whereas typical experiments employ purely azimuthal drive fields to implode initially solid liners, SLDSP experiments establish a helical drive field at the liner outer surface, resulting in enhanced average magnetic pressure per unit drive current, mild spatial nonuniformities in the magnetic drive pressure, and augmented static initial inductance in the pulsed-power drive circuit. Each of these topics has been addressed using transient magnetic and magnetohydrodynamic simulations; the results have led to a credible design space for SLDSP experiments on the Z Facility. We qualitatively assess the stabilizing effects of the SLDSP mechanism by comparing MRTI growth in a liner implosion simulation driven by an azimuthal magnetic field vs one driven with a helical magnetic field; the results indicate an apparent reduction in MRTI growth when a helical drive field is employed.
Understanding the propagation of radiation damage in a material is paramount to predicting material damage effects. To date, no published work has investigated the threshold displacement energy (TDE) of Ca and F atoms in CaF2 through molecular dynamics with statistical analysis of the simulations. A set of interatomic potentials for Ca-Ca, F-F, and F-Ca was splined, fully characterizing a pure CaF2 simulation cell, using published Born-Mayer-Huggins, standard ZBL, and Coulomb potentials, with a resulting structure within 1% of standard density and published lattice constants. Using this simulation cell, molecular dynamics simulations were performed with LAMMPS: one set randomly generated 500 Ca and F PKA directions at each incremental energy, and another set ran 500 trials at each incremental energy in each of the [1 0 0], [1 1 0], and [1 1 1] directions. MD simulations of radiation damage in CaF2 were carried out using F and Ca PKAs with energies ranging from 2 to 200 eV. Probabilistic determinations of the TDE and threshold vacancy energy (TVE) of Ca and F atoms in CaF2 were performed, and vacancy, interstitial, and antisite production rates were examined over the range of PKA energies. Many more F atoms were displaced by both PKA species, and although F recombination appears more probable than Ca recombination, F vacancy numbers remain higher. The higher number of F vacancies than Ca vacancies suggests that F Frenkel pairs dominate CaF2 damage.
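One common way to turn per-energy trial outcomes into a probabilistic TDE is to fit a sigmoid displacement probability P(E) and read off a crossing energy. The sketch below uses synthetic counts, not the study's data, and the 50% crossing is only one of several defensible TDE conventions:

```python
# Sketch of a probabilistic TDE determination: fit a sigmoid displacement
# probability P(E) to per-energy trial outcomes and read off P = 0.5.
import numpy as np
from scipy.optimize import curve_fit

energies = np.arange(10, 101, 10)                # PKA energies (eV), illustrative
n_trials = 500                                   # random directions per energy
n_displaced = np.array([3, 12, 60, 160, 290, 390, 450, 480, 492, 497])  # synthetic

p_hat = n_displaced / n_trials
sigmoid = lambda E, E0, s: 1.0 / (1.0 + np.exp(-(E - E0) / s))
(E0, s), _ = curve_fit(sigmoid, energies, p_hat, p0=[50.0, 10.0])

print(f"TDE (P = 0.5 crossing): {E0:.1f} eV")
# Binomial error bars on p_hat could weight the fit via curve_fit's sigma argument.
```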
Recent opacity measurements have inspired a close study of the two-photon contributions to the opacity of hot plasmas. The absorption and emission of radiation is controlled by dipole matrix-elements of electrons in an atom or ion. This paper describes two independent methods to calculate matrix-elements needed for the two-photon opacity and tests the results by the f-sum rule. The usual f-sum rule is extended to a matrix f-sum that offers a rigorous test for bound-bound, bound-free and free-free transitions. An additional higher-order sum-rule for the two-photon transition amplitudes is described. In this work, we obtain a simple parametric representation of a key plasma density effect on the matrix-elements. The perturbation theory calculation of two-photon cross-sections is compared to an independent method based on the solution of the time-dependent Schrödinger equation for an atom or ion in a high-frequency electromagnetic field. This is described as a high frequency Stark effect or AC Stark effect. Two-photon cross sections calculated with the AC Stark code agree with perturbation theory to within about 5%. In addition to this cross check, the AC Stark code is well suited to evaluating important questions such as the variation of two-photon opacity for different elements.
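For reference, the ordinary single-photon f-sum rule that the matrix and two-photon generalizations extend is the Thomas–Reiche–Kuhn rule; for an N-electron system it reads (in the length-gauge form, with the dipole operator summed over electrons):

```latex
% Thomas–Reiche–Kuhn (f-sum) rule: oscillator strengths from state |0> sum to
% the electron number N, with f defined through dipole matrix elements.
\sum_{n} f_{n0} = N,
\qquad
f_{n0} = \frac{2 m_e}{3 \hbar^{2}}\,(E_n - E_0)\,
\Bigl|\bigl\langle n \bigm|\, \textstyle\sum_i \mathbf{r}_i \,\bigm| 0 \bigr\rangle\Bigr|^{2}.
```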
The purpose of this paper is to study a Helmholtz problem with a spectral fractional Laplacian, instead of the standard Laplacian. Recently, it has been established that such a fractional Helmholtz problem better captures the underlying behavior in geophysical electromagnetics. We establish the well-posedness and regularity of this problem. We introduce a hybrid finite element-spectral approach to discretize it and show well-posedness of the discrete system. In addition, we derive a priori discretization error estimates. Finally, we introduce an efficient solver that scales as well as the best possible solver for the classical integer-order Helmholtz equation. We conclude with several illustrative examples that confirm our theoretical findings.
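Schematically, and under the common assumption of homogeneous Dirichlet conditions (the paper's precise boundary setting may differ), the spectral fractional operator and the fractional Helmholtz problem take the form:

```latex
% With (\lambda_j, \varphi_j) the eigenpairs of -\Delta on \Omega, s in (0,1),
% u_j = (u, \varphi_j)_{L^2(\Omega)}, and k the wavenumber:
(-\Delta)^{s} u \;=\; \sum_{j=1}^{\infty} \lambda_j^{\,s}\, u_j\, \varphi_j,
\qquad
(-\Delta)^{s} u \;-\; k^{2} u \;=\; f \quad \text{in } \Omega.
```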
We extended the fundamental capabilities of tensor decomposition to a broader range of problems, handling new data types and larger problems. This has implications for data analysis across a range of applications in sensor monitoring, cybersecurity, treaty verification, signal processing, and more. Tensor decomposition identifies latent structure within data, enabling anomaly detection, process monitoring, and scientific discovery.
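As a minimal illustration of the underlying technique (using the open-source TensorLy package rather than the software developed under this effort, with illustrative sizes and rank):

```python
# CP (canonical polyadic) decomposition sketch: build a rank-3 tensor, recover
# its latent factors, and check the reconstruction error.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(5)
rank = 3
A, B, C = (rng.random((d, rank)) for d in (30, 40, 50))
X = tl.cp_to_tensor((np.ones(rank), [A, B, C]))      # ground-truth rank-3 tensor

cp = parafac(tl.tensor(X), rank=rank, n_iter_max=200)
X_hat = tl.cp_to_tensor(cp)
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
# Large residuals on new slices of data, measured against the recovered factors,
# are one way such latent structure supports anomaly detection.
```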
The development of stress in fluids as they transition to solids is not well understood. Computational models are needed to represent "birthing stress" for multiphysics applications such as polymer encapsulation around sensitive electronics and additive manufacturing, where these stresses can lead to defects such as cracking and voids. The local stress state is also critical for understanding and predicting the net shape of parts formed in the liquid phase. In this one-year exploratory LDRD, we have worked towards a novel experimental diagnostic to measure the fluid rheology, degree of solidification, and solid stress development simultaneously. We debugged and made viable a "first-generation" Rheo-Raman system and used it to characterize two types of solidifying systems: paraffin wax, which crystallizes as it solidifies, and thermoset polymers, which form a network of covalent bonds. We used the paraffin wax as a model system to perform flow visualization studies and carried out preliminary modeling of the experiment to demonstrate the inadequacy of current modeling approaches. This work will inform an advanced fluid constitutive equation that includes a yield stress, temperature dependence, and an evolving viscosity when we pursue the full proposal, which was funded for FY20.
This document describes the requirements for improvements to the RadResponder platform to meet the needs of FRMAC Lab Analysis and other users of the sample control and lab analysis modules. The report is broken down into specific sections and organized by the specific deliverables under the FY19 FEMA-NIRT project. This report describes requirements that go beyond what was originally funded under the FY19 FEMA-NIRT project, since auxiliary funding is being used on top of FEMA-NIRT funding through the DOE eFRMAC working group. This document describes all the lab analysis requirements for FRMAC Lab Analysis operations. Under each section the reader will find specific user "stories" or use cases along with specific technical requirements for each feature. Mock-ups and data models are provided as needed.
We have investigated the utility of femtosecond/picosecond (fs/ps) coherent anti-Stokes Raman scattering (CARS) for simultaneous measurement of temperature, pressure, and velocity in hypersonic flows. Experiments were conducted in underexpanded jets of air and molecular nitrogen to assess CARS diagnostic performance in terms of signal level scaling, measurement precision, and dynamic range. Pure-rotational CARS of the Raman S branch was applied for simultaneous measurement of temperature and pressure. Thermometry was performed by fitting CARS spectra acquired under nearly collision-free conditions by introducing a picosecond CARS probe pulse at zero delay from the femtosecond pump. Pressure could subsequently be obtained from a second CARS spectral acquisition with a picosecond probe introduced at a time delay chosen to sample molecular collisions. CARS velocimetry was attempted by monitoring the Doppler shift of the N2 vibrational Q-branch spectrum, with both direct spectral resolution and optical heterodyne detection schemes. Doppler shifts from the sub-1 km/s air jet flow proved too small to measure with this approach, prompting us to turn to femtosecond laser electronic excitation tagging (FLEET) for reliable single-laser-shot velocimetry alongside CARS temperature/pressure measurement. Scaling of the CARS signal level to the very low pressure and temperature conditions expected in the Sandia hypersonic wind tunnel (HWT) was performed. CARS measurements of temperature in the HWT appear to be very feasible, while prospects for HWT pressure measurements are reasonable.
We propose a novel method for generating anisotropic adaptive Voronoi meshes that conform to non-manifold curved boundaries. Our method modifies the sampling rules of the VoroCrust software to bring the VoroCrust seeds closer to the surface they represent. This enables the reconstruction of two surfaces bounding a narrow region while filling the space in between with stretched Voronoi cells.
We document azimuthally dependent seismic scattering at the Source Physics Experiment (SPE) using the large-N array. The large-N array recorded the seismic wavefield produced by the SPE-5 buried chemical explosion, which occurred in April 2016 at the Nevada National Security Site, U.S.A. By selecting a subset of vertical-component geophones from the large-N array, we formed 10 linear arrays, with different nominal source-receiver azimuths as well as six 2D arrays. For each linear array, we evaluate wavefield coherency as a function of frequency and interstation distance. For both the P arrival and post-P arrivals, the coherency is higher in the northeast propagation direction, which is consistent with the strike of the steeply dipping Boundary fault adjacent to the northwest side of the large-N array. Conventional array analysis using a suite of 2D arrays suggests that the presence of the fault may help explain the azimuthal dependence of the seismic-wave coherency for all wave types. This fault, which separates granite from alluvium, may be acting as a vertically oriented refractor and/or waveguide.
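The core coherency measurement can be sketched with standard tools. The snippet below uses scipy.signal.coherence on synthetic traces with a toy distance-decay model standing in for scattering; station spacing, band, and decay scale are illustrative, not the SPE-5 geometry:

```python
# Sketch of coherency vs interstation distance: magnitude-squared coherence
# between geophone pairs along a linear array, averaged over an analysis band.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs, n_sec, n_sta = 500.0, 20, 10
spacing = 100.0                                   # meters between stations (toy)
common = rng.standard_normal(int(fs * n_sec))     # shared wavefield component

traces = []
for k in range(n_sta):
    noise = rng.standard_normal(common.size)
    w = np.exp(-k * spacing / 400.0)              # coherent fraction decays with distance
    traces.append(w * common + (1 - w) * noise)

for k in (1, 4, 9):
    f, Cxy = coherence(traces[0], traces[k], fs=fs, nperseg=512)
    band = (f > 2) & (f < 20)
    print(f"pair 0-{k} ({k * spacing:.0f} m): mean coherence {Cxy[band].mean():.2f}")
```

Repeating the measurement for linear arrays at different azimuths is what exposes the directional dependence reported above.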
A mechanical model is introduced for predicting the initiation and evolution of complex fracture patterns without the need for a damage variable or law. The model, a continuum variant of Newton's second law, uses integral rather than partial differential operators, where the region of integration is a finite domain. The force interaction is derived from a novel nonconvex strain energy density function, resulting in a nonmonotonic material model. The resulting equation of motion is proved to be mathematically well-posed. The model has the capacity to simulate nucleation and growth of multiple, mutually interacting dynamic fractures. In the limit of a vanishing region of integration, the model reproduces the classic Griffith model of brittle fracture. The simplicity of the formulation avoids the need for supplemental kinetic relations that dictate crack growth or for an explicit damage evolution law.
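Schematically, an equation of motion consistent with the description above has the peridynamics-type nonlocal form (the paper's exact force density and notation may differ):

```latex
% Nonlocal balance of linear momentum: H_x is the finite neighborhood of x over
% which the integral operator acts, f is the pairwise force density derived from
% the (nonconvex) strain energy, and b is the prescribed body force density.
\rho(x)\,\ddot{u}(x,t)
  \;=\; \int_{H_x} f\bigl(u(x',t) - u(x,t),\; x' - x\bigr)\, dV_{x'} \;+\; b(x,t).
```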
In traditional molecular dynamics (MD) simulations, atoms and coarse-grained particles are modeled as point masses interacting via isotropic potentials. For studies where particle shape plays a vital role, more complex models are required. In this paper we describe a spectrum of approaches for modeling aspherical particles, all of which are now available (some recently) as options within the LAMMPS MD package. Broadly these include two classes of models. In the first, individual particles are aspherical, either via a pairwise anisotropic potential which implicitly assigns a simple geometric shape to each particle, or in a more general way where particles store internal state which can explicitly define a complex geometric shape. In the second class of models, individual particles are simple points or spheres, but rigid body constraints are used to create composite aspherical particles in a variety of complex shapes. We discuss parallel algorithms and associated data structures for both kinds of models, which enable dynamics simulations of aspherical particle systems across a wide range of length and time scales. We also highlight parallel performance and scalability and give a few illustrative examples of aspherical models in different contexts.
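The second class of models can be illustrated without any MD library. In the toy 2D sketch below (hypothetical force field and sizes, far simpler than the parallel algorithms described above), a rod-like composite of point spheres is advanced by integrating only its center of mass and orientation, so the rigid constraint is satisfied by construction:

```python
# Toy rigid composite particle in 2D: sum per-sphere forces into one net force
# and torque; integrate only center-of-mass and orientation.
import numpy as np

# Body-frame sphere positions forming a rod-like (aspherical) composite.
body = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
mass = 3.0
inertia = float(np.sum(body**2))                  # point-mass moment of inertia

com, vel = np.zeros(2), np.zeros(2)
theta, omega = 0.3, 0.0
dt = 1e-3

def world_positions(com, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return com + body @ R.T

for step in range(1000):
    pos = world_positions(com, theta)
    forces = -0.1 * pos                           # placeholder spring-like field
    F = forces.sum(axis=0)                        # net force on the composite
    r = pos - com
    tau = np.sum(r[:, 0] * forces[:, 1] - r[:, 1] * forces[:, 0])  # net z-torque
    vel += dt * F / mass; com += dt * vel         # semi-implicit Euler update;
    omega += dt * tau / inertia; theta += dt * omega  # constituents never separate

print("final orientation (rad):", theta)
```

In 3D, the orientation update is typically done with quaternions, and in a parallel MD code the per-sphere forces are computed by the ordinary pair loop before being reduced onto each composite body.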
This paper analyzes how two Kalman filter (KF) based frequency estimation algorithms react to phase steps. It is demonstrated that phase steps are interpreted as sharp changes in frequency. The paper studies whether the location of the phase step within the sinusoidal waveform has any effect on the frequency estimate. Because phase steps are not the product of a permanent change in the underlying frequency, the paper proposes an algorithm to correct frequency estimates deemed erroneous. The algorithm uses the residual of the KF to determine when an estimate is incorrect and to trigger a corrective action in which the frequency estimate is replaced by an average of the previous values that were considered accurate. Using synthesized and simulated data with distortions, the paper shows the effectiveness of the correction algorithm in fixing frequency estimates.
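The residual-gating idea can be demonstrated with a toy extended Kalman filter; the state model, gains, and threshold below are illustrative assumptions, not the paper's exact algorithms:

```python
# Toy demonstration of residual-gated frequency estimation: an EKF tracks the
# phase/frequency of a sinusoid; a phase step inflates the innovation, and the
# (spurious) frequency estimate is replaced by the recent-good average.
import numpy as np

fs, f0 = 1000.0, 60.0
t = np.arange(0, 1.0, 1.0 / fs)
phase = 2 * np.pi * f0 * t
phase[t >= 0.5] += np.pi / 2                    # phase step at t = 0.5 s
z_meas = np.sin(phase) + 0.01 * np.random.default_rng(7).standard_normal(t.size)

x = np.array([0.0, 2 * np.pi * 55.0])           # state: [theta, omega]
P, Q, Rm = np.eye(2), np.diag([1e-6, 1e-2]), 1e-2
dt = 1.0 / fs
good, est = [], []

for z in z_meas:
    # Predict: theta advances by omega*dt, omega is a random walk.
    x = np.array([x[0] + dt * x[1], x[1]])
    F = np.array([[1.0, dt], [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the nonlinear measurement h(x) = sin(theta).
    H = np.array([np.cos(x[0]), 0.0])
    S = H @ P @ H + Rm                          # innovation variance
    resid = z - np.sin(x[0])
    K = P @ H / S
    x = x + K * resid
    P = (np.eye(2) - np.outer(K, H)) @ P
    f_hat = x[1] / (2 * np.pi)
    # Residual gate (~3-sigma): flag the estimate, substitute the good average.
    if resid**2 / S > 9.0 and len(good) > 20:
        f_hat = np.mean(good[-20:])
    else:
        good.append(x[1] / (2 * np.pi))
    est.append(f_hat)

print("final frequency estimate (Hz):", est[-1])
```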
Credibility of end-to-end CompSim (Computational Simulation) models and their agile execution requires an expressive framework to describe, communicate, and execute the complex computational tool chains representing the model. All stakeholders, from system engineering and customers through model developers and V&V partners, need views and functionalities of the workflow representing the model in a manner that is natural to their discipline. In this milestone and report, we define a workflow as a network of computational simulation activities executed autonomously on a distributed set of computational platforms. The FY19 ASC L2 Milestone (6802) for the Integrated Workflow (IWF) project was designed to integrate and improve existing capabilities or develop new functionalities to provide a wide range of stakeholders a coherent and intuitive platform capable of defining and executing CompSim modeling from analysis workflow definition to complex ensemble calculations. The main goal of the milestone was to advance the integrated workflow capabilities to support weapon system analysts with a production deployment in FY20. Ensemble calculations supporting program decisions include sensitivity analysis, optimization, and uncertainty quantification. The goal of the L2 milestone, aligned with the ultimate goal of the IWF project, is to foster a cultural and technical shift toward an integrated CompSim capability based on automated workflows. Specific deliverables were defined in five broad categories: 1) infrastructure, including development of a distributed-computing workflow capability; 2) integration of Dakota (Sandia's sensitivity, optimization, and UQ engine) with SAW (Sandia Analysis Workbench); 3) ARG (Automatic Report Generator, which introspects analysis artifacts and generates human-readable, extensible, and archivable reports); 4) libraries and repositories aiding capability reuse; and 5) exemplars to support training, capture best practices, and stress test the platform. A set of exemplars was defined to represent typical weapon system qualification CompSim projects. Analyzing the required capabilities and using the findings to plan implementation ensured optimal allocation of development resources focused on production deployment after the L2 is completed. It was recognized early that end-to-end modeling applications pose a considerable number of diverse risks, and a formal risk-tracking process was implemented. The project leveraged products, capabilities, and development tasks of IWF partners: SAW, Dakota, Cubit, Sierra, Slycat, and NGA (NexGen Analytics, a small business) contributed to the integrated platform developed during this milestone effort. New products delivered include: a) NGW (Next Generation Workflow) for robust workflow definition and execution; b) Dakota wizards, editor, and results visualization; and c) the automatic report generator ARG. User engagement was initiated early in the development process, eliciting concrete requirements and actionable feedback to assure that the integrated CompSim capability will have high user acceptance and impact. The current integrated capabilities have been demonstrated and are continually being tested by a set of exemplars ranging from training scenarios to computationally demanding uncertainty analyses. The integrated workflow platform has been deployed on both the SRN (Sandia Restricted Network) and SCN (Sandia Classified Network).
Computational platforms on which the system has been demonstrated span from Windows (Creo, the CAD platform chosen by Sandia) to the Trinity HPC (the Sierra and CTH solvers). Follow-up work will focus on deployment at SNL and other sites in the nuclear enterprise (LLNL, KCNSC), along with training and consulting support to democratize the analysis agility, process health, and knowledge management benefits the NGW platform provides.
We present a scheme implementing an a posteriori refinement strategy in the context of a high-order meshless method for problems involving point singularities and fluid–solid interfaces. The generalized moving least squares (GMLS) discretization used in this work has previously been demonstrated to provide high-order compatible discretizations of the Stokes and Darcy problems, offering a high-fidelity simulation tool for problems with moving boundaries. The meshless nature of the discretization is particularly attractive for adaptive h-refinement, especially when resolving the near-field behavior of variables and the point singularities governing lubrication effects in fluid–structure interactions. We demonstrate that the resulting spatially adaptive GMLS method achieves optimal convergence in the presence of singularities for both the div-grad and Stokes problems. Further, we present a series of simulations of flows of colloid suspensions, in which the refinement strategy efficiently achieves highly accurate solutions, particularly for colloids with complex geometries.
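To illustrate the kind of indicator-driven h-refinement loop such a strategy implies, the sketch below performs one refinement pass on a two-dimensional point cloud; the bulk-style marking rule, the four-child splitting pattern, and the function name are assumptions for illustration, not the GMLS implementation.

    import numpy as np

    def refine(points, h, indicator, theta=0.5):
        # One pass of error-indicator-driven h-refinement on a 2-D point
        # cloud. points: (N, 2) node positions; h: (N,) local spacing;
        # indicator: (N,) a posteriori error estimates per node.
        # Nodes whose indicator exceeds theta * max (an assumed bulk-style
        # marking rule) are replaced by four children at half the spacing.
        mark = indicator > theta * indicator.max()
        keep_pts, keep_h = points[~mark], h[~mark]
        offsets = 0.25 * np.array([[1.0, 1.0], [1.0, -1.0],
                                   [-1.0, 1.0], [-1.0, -1.0]])
        child_pts = (points[mark][:, None, :]
                     + offsets[None, :, :] * h[mark][:, None, None]).reshape(-1, 2)
        child_h = np.repeat(h[mark] / 2.0, 4)
        return np.vstack([keep_pts, child_pts]), np.concatenate([keep_h, child_h])

Iterating such a pass until the indicator falls below a tolerance concentrates resolution near singularities and lubrication gaps while leaving smooth regions coarse, which is the essential advantage the meshless setting offers over conforming mesh refinement.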
Critical components in Sandia's stockpile, such as detonators, are composed of heterogeneous materials and rely on accurate constitutive models for each material used in the component. However, experiments to study materials at high pressure are challenging and time consuming, so we turn to modeling tools to refine and predict outcomes beforehand. Efficient models balance absolute physical accuracy against approximate but computationally lightweight constitutive inputs. By using a relatively small number of high-fidelity simulations and experiments, we have been able to broaden the predictive power of models of the shock response of metals and polymers. We have tailored the analysis of these simulations to determine a size-dependent material strength, which can be used as a constitutive model input for continuum hydrodynamics codes. For shocked Cu, MD simulations show a yield strength inferred from Richtmyer–Meshkov instability (RMI) jet growth of approximately 450 MPa that depends on the details of the free-surface geometry. This value is close to the yield strength of 500 MPa parameterized for an elastic-perfectly-plastic strength model from experiments at the Dynamic Compression Sector at Argonne National Laboratory. The same analysis applied to MD simulations of PMMA jetting yielded no clear determination of yield strength, implying a more complex RMI process in polymeric materials. Atomistic simulations of both materials are shown to provide valuable training metrics and demonstrate the need for explicit strain-rate-dependent strength in future improvements to strength models used in continuum codes.
The Department of Defense Science Board has stated that the United States is "not prepared to defend against cyber-attacks" and that the military could lose "trust in the information and ability to control U.S. systems and forces [including nuclear forces]." One potential weak spot in cyber-security is the storage of encryption keys in computer memory. This paper explores the use of hardware devices (so-called Physical Unclonable Functions, or PUFs) to generate, in the nuclear weapon itself, unique encryption keys each time they are needed. Not only do we find that this has the potential to mitigate a number of cyberthreats, but such hardware has the potential to greatly diminish the total uncertainty associated with radiation-based warhead authentication, a procedure many analysts feel will be key to future arms control regimes. After outlining the use of PUFs in nuclear command, control, and communications, and indicating some of the areas that still require further research, we discuss their application to arms control and warhead authentication.
Bedded salt contains interfaces between the host salt and other in situ materials, such as clay seams, or impurities such as anhydrite or polyhalite in contact with the salt. These inhomogeneities are thought to have first-order effects on the closure of nearby drifts and on potential roof collapses. Despite their importance, characterizations of the peak and residual shear strengths of interfaces in salt are extremely rare in the published literature. This paper presents results from laboratory experiments designed to measure the mechanical behavior of a bedding interface or clay seam as it is sheared. The series of laboratory direct shear tests reported in this paper was performed on several samples of materials from the Permian Basin in New Mexico. These tests were conducted at numerous normal and shear loads up to the expected in situ pre-mining stress conditions. Tests were performed on samples with a halite/clay contact, a halite/anhydrite contact, a halite/polyhalite contact, and on plain salt samples without an interface for comparison. Intact shear strength values were determined for all of the test samples, along with residual values for the majority of the tests. The results indicated only a minor variation in shear strength, at a given normal stress, across all samples. This result was surprising because sliding along clay seams is regularly observed underground, suggesting that clay seam interfaces should be weaker than plain salt. Post-test inspections of these samples noted that salt crystals were intrinsic to the structure of the seam, which probably increased the shear strength relative to a typical clay seam.