Sandia National Laboratories has tested and evaluated a digitizer, the SMART24B, manufactured by Geotech Instruments, LLC. These digitizers are used to record sensor output for seismic and infrasound monitoring applications. The purpose of the digitizer evaluation was to measure performance characteristics in such areas as power consumption, input impedance, sensitivity, full scale, self-noise, dynamic range, system noise, response, passband, and timing. The SMART24B is the datalogger in Geotech's digitizer product line intended for borehole deployment and is available with either 3 or 6 channels at 24-bit resolution. Because the digitizer is deployed in boreholes, only a minimum number of connections are required on the digitizer case; the datalogger utilizes a distribution panel, mounted up-hole, that serves to break out the power, GPS, serial communications, and Ethernet connections.
This document serves to guide a researcher through the process of running the Weather Research and Forecasting (WRF) model and incorporating observations into coarse-resolution reanalysis products to model atmospheric conditions at high (50 m) resolution. This documentation is specific to WRF and the WRF Preprocessing System (WPS) version 3.8.1 and the Objective Analysis (OBSGRID) code released on April 8, 2016. Output from WRF serves as an input to the Time-Domain Atmospheric Acoustic Propagation Suite (TDAAPS), which performs staggered-grid finite-difference modeling of the acoustic velocity-pressure system to produce Green's functions through these atmospheric models.
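For orientation, the standard WPS-to-WRF run sequence looks roughly like the sketch below, shown here as a Python wrapper over the stock executables. The install paths are placeholders, the namelists and input GRIB data are assumed to be prepared already, and OBSGRID, when used, runs between metgrid.exe and real.exe to blend observations into the metgrid output.

# Minimal sketch of the standard WPS -> WRF run sequence, assuming WPS/WRF
# 3.8.1 are already compiled and namelist.wps / namelist.input are configured.
# The directory paths are placeholders for a local installation.
import subprocess

WPS_DIR = "/path/to/WPS"     # hypothetical install locations
WRF_DIR = "/path/to/WRF/run"

def run(exe, cwd):
    """Run one preprocessing/model executable and stop on failure."""
    subprocess.run([f"./{exe}"], cwd=cwd, check=True)

# WPS: define domains, decode the GRIB reanalysis, interpolate to the grid
for exe in ("geogrid.exe", "ungrib.exe", "metgrid.exe"):
    run(exe, WPS_DIR)

# WRF: generate initial/boundary conditions, then integrate the model
for exe in ("real.exe", "wrf.exe"):
    run(exe, WRF_DIR)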
A series of experiments was performed with the objective of achieving an extreme thermal environment by creating a fire whirl in an enclosure at the Thermal Test Complex (TTC) at Sandia National Laboratories. The motivation for the experiments is based on results from previous experiments performed at Sandia in an igloo representing a mock weapons storage facility. In that test series, a fire whirl developed within the igloo, resulting in extremely high heat flux levels. That environment was created with a 4.6-m-diameter pool fire and was not produced under controlled, repeatable conditions. The objective of the current tests is to create this environment in a controlled manner at a smaller scale, namely with a pool fire no larger than 3 m in diameter, thereby allowing for repeatable, cost-effective testing. In FY15, six tests were conducted in the Crosswind Test Facility (XTF) using a 1.77-m square pan. In FY16, three tests were conducted in the Fire Laboratory for Accreditation of Modeling by Experiment (FLAME) using a 3-m-diameter pan. Both of these test series utilized the same enclosure. In FY17, a single test was performed in XTF using a 2.7-m square pan and a modified enclosure that included a ceiling. All tests used Jet-A as the fuel. The wind speed and gap width of the enclosure were varied for the FY15 XTF tests, and the gap width and the effect of insulation on the enclosure walls were varied for the FY16 FLAME tests. Fuel regression rates, heat flux, and gas velocity measurements were obtained. The results from the FY15 and FY16 test series indicate that fuel regression rates and peak heat flux levels are a factor of two higher than in non-fire-whirl pool fires of equivalent diameter. The FY17 test using an enclosure with a ceiling met the objective of the test series by achieving temperatures of nearly 1400°C and heat flux levels of 400 kW/m².
Multivariate time-series datasets are intrinsic to the study of dynamic, naturalistic behavior, such as in finance and motion video analysis. Statistical models provide the ability to identify event patterns in these data under conditions of uncertainty, but researchers must be able to evaluate how well a model uses the available information in a dataset for its clustering decisions and its uncertainty estimates. The Hidden Markov Model (HMM) is an established method for clustering time-series data, where the hidden states of the HMM are the clusters. We develop novel methods for quantifying and visualizing the clustering performance and uncertainty of fitting an HMM to multivariate time-series data. We explain how uncertainty quantification and visualization are useful in evaluating the performance of clustering models, and how they can enhance the exploitation of time-series datasets. We apply our methods to cluster patterns of scanpaths from raw eye-tracking data.
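As a minimal illustration of the general approach (using the open-source hmmlearn package, not necessarily the authors' tooling), a Gaussian HMM can be fit to a multivariate series and its posterior state probabilities read off as a per-timestep uncertainty signal:

# Sketch: cluster multivariate time-series samples with a Gaussian HMM and
# inspect per-timestep uncertainty via posterior state probabilities.
# Uses the open-source hmmlearn package; illustrative toy data only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Toy 2-D time series: two regimes with different means
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(3.0, 1.0, (200, 2))])

model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)            # hard cluster labels (hidden states)
posteriors = model.predict_proba(X)  # per-timestep state probabilities

# Entropy of the posterior is one simple per-timestep uncertainty measure
entropy = -(posteriors * np.log(posteriors + 1e-12)).sum(axis=1)
print(states[:5], entropy.max())

Timesteps where the posterior entropy is high are exactly the ones a visualization would flag as ambiguous cluster assignments.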
Social network graph models are data structures representing entities (often people, corporations, or accounts) as "vertices" and their interactions as "edges" between pairs of vertices. These graphs are most often total-graph models: the overall structure of edges and vertices in a bidirectional or directional graph is described in global terms, and the network is generated algorithmically. We are interested in "egocentric" or "agent-based" models of social networks, where the behavior of the individual participants is described and the graph itself is an emergent phenomenon. Our hope is that such graph models will allow us to ultimately reason from observations back to estimated properties of the individuals and populations, and will result not only in more accurate algorithms for link prediction and friend recommendation, but also in a more intuitive understanding of human behavior in such systems than is revealed by previous approaches. This report documents our preliminary work in this area; we describe several past graph models, two egocentric models of our own design, and our thoughts about the future direction of this research.
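To make the distinction concrete, here is a toy agent-based sketch (not one of the two models developed in the report): each agent acts on purely local information, and the global edge structure emerges.

# Toy agent-based (egocentric) network model: each agent has its own
# sociability and prefers friends-of-friends; the global graph emerges
# from these local decisions rather than from a global generation rule.
import random

random.seed(0)
N_AGENTS, N_STEPS = 50, 500
sociability = [random.uniform(0.05, 0.5) for _ in range(N_AGENTS)]
neighbors = {i: set() for i in range(N_AGENTS)}

for _ in range(N_STEPS):
    for i in range(N_AGENTS):
        if random.random() > sociability[i]:
            continue  # this agent stays idle this step
        # Prefer a friend-of-a-friend if one exists, else meet a stranger
        fof = {k for j in neighbors[i] for k in neighbors[j]} - neighbors[i] - {i}
        j = random.choice(sorted(fof)) if fof else random.randrange(N_AGENTS)
        if j != i:
            neighbors[i].add(j)
            neighbors[j].add(i)

n_edges = sum(len(s) for s in neighbors.values()) // 2
print(f"{n_edges} edges emerged from local agent behavior")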
We present single-sided 3D image reconstruction and a neutron spectrum of non-nuclear material interrogated with a deuterium-tritium neutron generator. The results presented here are a proof of principle of an existing technique, previously used for nuclear material, applied to non-nuclear material. While we do see excess signatures over background, they do not have the expected form and are currently unidentified.
Recent work done at the University of Florida (UF) revealed a tremendously enhanced germanium diffusion process along silicon/silicon dioxide interfaces during oxidizing anneals, allowing for the controlled formation of Si quantum wires. This project seeks to further explore this unusual germanium behavior during oxidation for the purpose of forming unique and useful nano- and quantum structures. Specifically, we propose to demonstrate for the first time that this phenomenon can be extended to realize 0D Si nanostructures through the oxidation of axially heterostructured vertical Si/SiGe pillars. Such structures could be of great interest for applications in integrated optoelectronics, beyond-Moore's-Law computing, and quantum computing.
The description and notes describe and scope the activities performed under this PHS. All hazards have been identified. Questions are answered correctly and, as necessary, rationale or clarification is provided. All hazards in the HA have been analyzed, including the identification of controls for each hazard. I have reviewed this PHS and concur that its contents are accurate and complete.
The description and notes describe and scope the activities performed under this PHS. All hazards have been identified. Questions are answered correctly and, as necessary, rationale or clarification is provided. All hazards in the HA have been analyzed, including the identification of controls for each hazard. I have performed the above reviews and concur that those items are complete and accurate.
Aerosol jet printing (AJP) is a promising microscale additive manufacturing technology for emerging applications in printed and flexible electronics. However, more widespread adoption of this emerging technique is hindered by a limited fundamental understanding of the process. This work focuses on a critical and underappreciated aspect of the process: the interaction between impaction and the drying induced by the sheath gas. Combining focused experiments with support from numerical modeling, it is shown that these effects have a dramatic impact on key outputs of the process, including deposition rate, resolution, and morphology. These effects can amplify minor changes in ink composition or atomization yield, increasing process sensitivity and drift. Moreover, these effects can confound strategies for in-line process monitoring and control based on empirical observables. Strategies to directly manipulate this annular drying phenomenon are presented, providing a viable tool to tailor and study the process. This work clarifies the coupled effects of printer design, ink formulation, and print parameters, establishing a more robust theoretical framework for understanding the AJP process and advancing the maturity of this promising technology.
Seismic signals are composed of the seismic waves (phases) that reach a sensor, similar to the way speech signals are composed of phonemes that reach a listener's ear. Large/small seismic events near/far from a sensor are analogous to loud/quiet speakers with high/low-pitched voices. We leverage ideas from speech recognition for the classification of seismic phases at a seismic sensor. Seismic phase identification is challenging because of the varying paths and distances an event's energy travels to reach a sensor, but there is consistent structure in the makeup (e.g., ordering) of the different phases arriving at the sensor.
Current loss in magnetically insulated transmission lines (MITLs) was investigated using data from experiments conducted on Z and Mykonos. Data from experiments conducted on Z were used to optimize an ion diode current loss model that has been implemented into the transmission line circuit model of Z. Details on the current loss model and comparisons to data from Z experiments have been previously published in a peer-reviewed journal [Hutsel, et al., Phys. Rev. Accel. Beams 21, 030401]. Dedicated power flow experiments conducted on Mykonos investigated current loss in a millimeter-scale anode-cathode gap MITL operated at lineal current densities greater than 410 kA/cm and with electric field stresses in excess of 240 kV/cm, where it is expected that both anode and cathode plasmas are formed. The experimental MITLs were exposed to varying vacuum conditions, including vacuum pressure at shot time, time under vacuum, and vacuum storage protocols. The results indicate that vacuum conditions have an effect on current loss in high-lineal-current-density MITLs.
GaN-on-Si combines the wide-bandgap advantages of GaN with the cost and scaling advantages of Si. Sputtered AlN is an attractive nucleation layer material because it reduces Al diffusion into the Si and eliminates a time-intensive preconditioning step in the GaN growth process, but is limited by the poor film quality of PVD AlN films deposited on Si substrates. Sputtering also offers a large degree of control over AlN film properties, including control of the intrinsic stress using substrate biasing. Doping the AlN films with Sc improves the lattice match to AlGaN and GaN films by expanding the a-axis and c-axis lattice parameters. AlN and Al0.88Sc0.12N films have been grown on silicon, metal, and sapphire substrates and characterized for properties such as stress, grain size, roughness, and film orientation for use as nucleation layers for MOCVD GaN growth.
Traditional Monte Carlo particle transport codes are expected to run inefficiently on next-generation architectures because they are memory-intensive and highly divergent. Because electrons and photons behave very differently, the outlook for coupled electron-photon radiation transport is even worse. This report describes preliminary efforts to improve the performance of Monte Carlo particle transport codes when using accelerators like the graphics processing unit (GPU). Two key issues are addressed: how to handle memory-intensive tallies, and how to reduce divergence. Tallying on the GPU can be done efficiently by post-processing particle data, or by using a feature called warp shuffle for summing scores in parallel during the simulation. Reducing divergence is possible by using an event-based algorithm for particle tracking instead of the traditional history-based one. Although performance tests presented in this work show that the history-based algorithm generally outperformed the event-based one for simple problems, this outcome will likely change as the complexity of the code increases.
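As a toy illustration of the event-based idea (plain numpy on a 1-D absorbing/scattering slab, not the project's GPU code), all live particles are advanced through the same event in lockstep rather than one history at a time, which is the structure that maps naturally onto wide SIMD/GPU hardware:

# Toy event-based tracking: every live particle passes through the same
# "event" together, instead of one full history at a time.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
sigma_t, absorb_prob, slab = 1.0, 0.3, 5.0

x = np.zeros(n)                  # particle positions
mu = rng.uniform(-1.0, 1.0, n)   # direction cosines
alive = np.ones(n, dtype=bool)

while alive.any():
    idx = np.flatnonzero(alive)
    # Event 1: move every live particle to its next collision site
    dist = rng.exponential(1.0 / sigma_t, idx.size)
    x[idx] += mu[idx] * dist
    leaked = (x[idx] < 0.0) | (x[idx] > slab)
    alive[idx[leaked]] = False
    # Event 2: collide the survivors (absorb or isotropically scatter)
    idx = np.flatnonzero(alive)
    absorbed = rng.random(idx.size) < absorb_prob
    alive[idx[absorbed]] = False
    scat = idx[~absorbed]
    mu[scat] = rng.uniform(-1.0, 1.0, scat.size)

print("all histories terminated (leaked or absorbed)")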
Stress corrosion cracks (SCC) represent a major concern for the structural integrity of engineered metal structures. In hazardous or restricted-access environments, remote detection of corrosion or SCC frequently relies on visual methods; however, with standard VT-1 visual inspection techniques, probabilities of SCC detection are low. Here, we develop and evaluate an improved optical sensor for SCC in restricted-access environments by combining a robotically controlled camera/fiber-optic-based probe with software-based super-resolution imaging (SRI) techniques to increase image quality and detection of SCC. SRI techniques combine multiple images taken at different viewing angles, locations, or rotations to produce a single higher-resolution composite image. We have created and tested an imaging system and algorithms that combine optimized, controlled camera movements with super-resolution imaging, improving SCC detection probabilities and potentially revolutionizing techniques for remote visual inspections of any type.
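A minimal sketch of the shift-and-add flavor of SRI (using standard scipy/scikit-image routines; the report's pipeline is more sophisticated) registers each frame to a reference and averages on an upsampled grid:

# Shift-and-add super-resolution: register low-resolution frames with
# subpixel accuracy, place each on an upsampled grid, and average.
import numpy as np
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

def super_resolve(frames, factor=2):
    """frames: list of 2-D arrays of the same scene with subpixel offsets."""
    ref = frames[0]
    accum = np.zeros((ref.shape[0] * factor, ref.shape[1] * factor))
    for frame in frames:
        # Subpixel offset of this frame relative to the reference
        offset, _, _ = phase_cross_correlation(ref, frame, upsample_factor=20)
        up = zoom(frame, factor, order=3)             # upsample the frame
        accum += shift(up, offset * factor, order=3)  # align onto fine grid
    return accum / len(frames)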
This report explores the potential for reducing the fields and the quality factor within a system cavity by introducing microwave-absorbing materials. Although the concept of introducing absorbing (lossy) materials within a cavity to drive the interior field levels down is well known, increasing the loading in a complex weapon cavity specifically to improve electromagnetic performance has not, in general, been considered; that is the subject of this work. We compare full-wave simulations to experimental results, demonstrating the applicability of the proposed method.
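For context, the standard power-balance relations for an overmoded cavity (generic textbook relations, not the report's geometry-specific results) show why added absorber loss drives the fields down:

\frac{1}{Q_{\text{loaded}}} = \frac{1}{Q_{\text{walls}}} + \frac{1}{Q_{\text{absorber}}},
\qquad
\langle |E|^2 \rangle \propto \frac{Q_{\text{loaded}}\, P_{\text{in}}}{\omega\, V}

The absorber adds a parallel loss channel, lowering the loaded quality factor and, at fixed input power, the mean-square interior field.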
Current designs for spent fuel transportation casks cannot ensure a cask's integrity during shipment, nor is there any verifiable means of maintaining continuity of knowledge (CoK) on a cask's contents. Spent fuel destined for encapsulation plants or geological repositories requires additional containment and surveillance (C/S) measures during shipment. Following final safeguards accountancy measurements on spent fuel assemblies, the shipment of verified assemblies will require unprecedented reliance on maintaining CoK on the fuel inside transport casks. Such increased reliance is due to the lack of reverification of spent fuel following encapsulation into disposal canisters and to the requirement for dual C/S measures during such fuel shipments, per recommendations made by the Application of Safeguards to Geological Repositories (ASTOR) International Atomic Energy Agency (IAEA) expert group. By designing spent fuel transportation casks with effective seals integrated into their design, CoK can be maintained more effectively than by ad hoc C/S measures because seal integration ensures that a cask has not been tampered with. Externally applied seals might not be able to provide such assurance for currently designed spent fuel transportation casks, although some combination of seals, detectors, and/or a technology that can verify canister integrity might provide this assurance. This paper examines the design criteria for integrating safeguards seals into transportation casks and provides recommendations for near-term applications.
The rise of low-power neuromorphic hardware has the potential to change high-performance computing; however, much of the focus on brain-inspired hardware has been on machine learning applications. A low-power solution for solving partial differential equations could radically change how we approach large-scale computing in the future. The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by grid cells in the brain. The second method tracks the densities of random walkers at each spatial location directly. We present the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions. Finally, we present implementations of these algorithms on neuromorphic hardware.
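The density-tracking view can be sketched in a few lines of plain numpy (a stand-in for the spiking-neuron implementation described in the report): rather than following individual walkers, the walker count at each lattice site is split stochastically between its neighbors at every step.

# Density-tracking random walk on a 1-D periodic lattice: each walker at a
# site independently moves right with probability 0.5, so the per-site
# counts update via binomial splits instead of per-walker bookkeeping.
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_steps = 64, 100
density = np.zeros(n_sites, dtype=np.int64)
density[n_sites // 2] = 10_000   # all walkers start at the center

for _ in range(n_steps):
    right = rng.binomial(density, 0.5)
    left = density - right
    density = np.roll(right, 1) + np.roll(left, -1)  # periodic boundaries

print("mean position:", (np.arange(n_sites) * density).sum() / density.sum())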
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section covariance matrices. The treatment allows one to assume the cross sections are distributed with a multivariate normal, lognormal, or truncated normal distribution.
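A minimal sketch of this kind of sampling (plain numpy, illustrative only, not the project's production code): factor the covariance by symmetric eigendecomposition, clip the tiny negative eigenvalues that make a plain Cholesky factorization fail, and exponentiate for the lognormal case.

# Draw correlated samples from an ill-conditioned covariance matrix via
# eigendecomposition with eigenvalue clipping.
import numpy as np

def correlated_samples(mean, cov, n_samples, rng, lognormal=False):
    w, V = np.linalg.eigh(cov)   # symmetric eigendecomposition
    w = np.clip(w, 0.0, None)    # drop small negative eigenvalues
    L = V * np.sqrt(w)           # factor with cov ~= L @ L.T
    z = rng.standard_normal((n_samples, len(mean)))
    x = mean + z @ L.T
    return np.exp(x) if lognormal else x

rng = np.random.default_rng(3)
# Nearly singular 3x3 covariance (energy groups almost perfectly correlated)
cov = np.array([[1.0, 0.999, 0.999],
                [0.999, 1.0, 0.999],
                [0.999, 0.999, 1.0]])
samples = correlated_samples(np.zeros(3), cov, 10_000, rng)
print(np.cov(samples.T).round(3))  # should approximate cov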