Synthetic biology is an interdisciplinary field that aims to engineer biological systems for useful purposes. Organism engineering often requires the optimization of individual genes and/or entire biological pathways (consisting of multiple genes). Advances in DNA sequencing and synthesis have recently made it feasible to evaluate thousands of gene variants and hundreds of thousands of gene combinations. However, such large-scale optimization experiments remain cost-prohibitive for researchers following traditional molecular biology practices, which are frequently labor-intensive and suffer from poor reproducibility. Liquid handling robotics may reduce labor and improve reproducibility, but are themselves expensive and thus inaccessible to most researchers. Microfluidic platforms offer a lower-cost entry point than robotics and maintain high throughput and reproducibility while further reducing operating costs through diminished reagent volume requirements. Droplet microfluidics has shown exceptional promise for synthetic biology experiments, including DNA assembly, transformation/transfection, culturing, cell sorting, phenotypic assays, artificial cells, and genetic circuits.
A major challenge in the commercialization of additively manufactured (AM) materials and processes is achieving acceptance of processes and products. Some progress toward acceptance has been made by adapting legacy qualification paradigms to the very limited process control and monitoring offered by AM machines. In-situ measurement offers process monitoring and control that could change the way we qualify parts, but it is limited by the lack of adequate process measurement methods. New measurement techniques, sensors, and correlations to relevant phenomena are needed to enable process control and monitoring for consistently producing high-quality articles. Beyond process data, we need to characterize uncertainties of performance in all aspects of the material, process, and final part. These are prerequisites to producing articles that are worthy of materials characterization efforts that establish a microstructural reference of desirable performance through process-structure-property relations. Only then can industry apply physics-based understanding of the material, part, and process to probabilistically predict the performance of an AM part. This paper provides a brief overview, a discussion of hurdles, and key areas where R&D investment is needed.
Here, the effect of a linear accelerator’s (LINAC’s) microstructure (i.e., train of narrow pulses) on devices and the associated transient photocurrent models are investigated. The data indicate that the photocurrent response of Si-based RF bipolar junction transistors and RF p-i-n diodes is considerably higher when the microstructure effects are taken into account. Similarly, the response of diamond, SiO2, and GaAs photoconductive detectors (standard radiation diagnostics) is higher when the microstructure is taken into account. This has obvious hardness assurance implications when assessing the transient response of devices, because the measured photocurrent and dose rate levels could be underestimated if microstructure effects are not captured. Indeed, the rate at which energy is deposited in a material during the microstructure peaks is much higher than the filtered rate that is traditionally measured. In addition, photocurrent models developed with filtered LINAC data may be inherently inaccurate if a device is able to respond to the microstructure.
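The underestimation can be illustrated with a toy calculation. The sketch below (all pulse parameters are assumed for illustration, not taken from the measurements above) compares the true peak dose rate of a micropulse train with the peak reported by a band-limited diagnostic:

```python
import numpy as np

# Minimal sketch (hypothetical numbers): compare the true peak dose rate of a
# LINAC micropulse train with the rate seen through a band-limited diagnostic.
dt = 1e-11                          # time step, s
t = np.arange(0.0, 2e-7, dt)        # 200 ns window
period, width = 350e-12, 30e-12     # micropulse spacing and width (assumed)
avg_rate = 1e9                      # time-averaged dose rate, rad(Si)/s (assumed)

train = ((t % period) < width).astype(float)
train *= avg_rate / train.mean()    # scale so the time average matches avg_rate

# Single-pole low-pass filter with ~100 MHz bandwidth, a stand-in for a
# diagnostic that cannot resolve the micropulse structure.
tau = 1.0 / (2 * np.pi * 100e6)
alpha = dt / (tau + dt)
filtered = np.zeros_like(train)
for i in range(1, len(train)):
    filtered[i] = filtered[i - 1] + alpha * (train[i] - filtered[i - 1])

print(f"true peak rate:     {train.max():.2e} rad/s")
print(f"filtered peak rate: {filtered.max():.2e} rad/s")
# The filtered diagnostic under-reports the instantaneous peak by roughly the
# inverse duty cycle, which is the hardness-assurance concern described above.
```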
Molecular simulations of the adsorption of representative organic molecules onto the basal surfaces of various clay minerals were used to assess the mechanisms of enhanced oil recovery associated with salinity changes and water flooding. Simulations at the density functional theory (DFT) and classical levels provide insights into the molecular structure, binding energy, and interfacial behavior of saturate, aromatic, and resin molecules near clay mineral surfaces. Periodic DFT calculations reveal binding geometries and ion pairing mechanisms at mineral surfaces while also providing a basis for validating the classical force field approach. Through classical molecular dynamics simulations, the influence of aqueous cations at the interface and the role of water solvation are examined to better evaluate the dynamical nature of cation-organic complexes and their coadsorption onto the clay surfaces. The extent of adsorption is controlled by the hydrophilic nature and layer charge of the clay mineral. All organic species studied showed preferential adsorption on hydrophobic mineral surfaces. However, the anionic form of the resin (decahydro-2-naphthoic acid), expected to be prevalent at near-neutral pH conditions in petroleum reservoirs, readily adsorbs to the hydrophilic kaolinite surface through a combination of cation pairing and hydrogen bonding with surface hydroxyl groups. Analysis of cation-organic pairing in both the adsorbed and desorbed states reveals a strong preference for organic anions to coordinate with divalent calcium ions rather than monovalent sodium ions, lending support to current theories regarding low-salinity water flooding.
Low-salinity water flooding, a method of enhanced oil recovery, consists of injecting low ionic strength fluids into an oil reservoir in order to detach oil from mineral surfaces in the underlying formation. Although highly successful in practice, the approach is not completely understood at the molecular scale. Molecular dynamics simulations have been used to investigate the effect of surface protonation on the adsorption of an anionic crude oil component on clay mineral edge surfaces. A set of interatomic potentials appropriate for edge simulations has been applied to the kaolinite (010) surface in contact with an aqueous nanopore. Decahydro-2-naphthoic acid in its deprotonated form (DHNA-) was used as a representative resin component of crude oil, with monovalent and divalent counterions, to test the observed trends in low-salinity water flooding experiments. Surface models include fully protonated (neutral) and deprotonated (negative) edge sites, which require implementation of a new deprotonation scheme. The surface adsorptive properties of the kaolinite edge under neutral and deprotonated conditions have been investigated for low and high DHNA- concentrations with Na+ and Ca2+ as counterions. The tendency of DHNA- ions to coordinate with divalent (Ca2+) rather than monovalent (Na+) ions greatly influences adsorption tendencies of the anion. Additionally, the formation of net positively charged surface sites due to Ca2+ at deprotonated sites results in increased DHNA- adsorption. Divalent cations such as Ca2+ are able to efficiently bridge surface sites and organic anions. Replacing those cations with monovalent cations such as Na+ diminishes the bridging mechanism, resulting in reduced adsorption of the organic species. A clear trend of decreased DHNA- adsorption is observed in the simulations as Ca2+ is replaced by Na+ for deprotonated surfaces, as would be expected for oil detachment from reservoir formations following a low-salinity flooding event.
James Wait was a pioneer in electromagnetic methods in geophysical exploration and wave propagation. In his career, he published more than 860 papers and wrote 8 books on subjects ranging from geo-electromagnetism to lightning. He left an undeniable mark on every subject he tackled, and his work on layered media, propagation along thin wires, and induced polarization is seminal and widely cited throughout the world.
Nanocrystals (NCs) can self-assemble into ordered superlattices with collective properties, but control over NC assembly toward a desired superlattice remains poorly understood. This work regulates several key variables of PbS NC assembly (e.g., NC concentration and solubility, solvent type, evaporation rate, seed mediation, and thermal treatment) and thoroughly examines the nucleation, growth, and subsequent transformation of NC superlattices and the underlying mechanisms. PbS NCs in toluene self-assemble into a single face-centered-cubic (fcc) or body-centered-cubic (bcc) superlattice at concentrations ≤17.5 and ≥70 mg/mL, respectively, but intermediate concentrations cause the two superlattices to coexist. In contrast, NCs in hexane or chloroform self-assemble into only a single bcc superlattice. Distinct controls of NC assembly in solvents with variable concentrations confirm the NC concentration/solubility-mediated nucleation and growth of the superlattice, in which an evaporation-induced local gradient of NC concentration causes simultaneous nucleation of the two superlattices. The observation of densely packed planes of NCs in the fast-growing fcc, but not the bcc, superlattice reveals the difference in entropic driving forces responsible for the two distinct superlattices. Decelerating the solvent evaporation does not change the superlattice symmetry but improves the superlattice crystallinity. In addition to shrinking the superlattice volume, thermal treatment also transforms the bcc into an fcc superlattice at 175 °C. In seed-mediated growth, the concentration-dependent superlattice does not change lattice symmetry over the course of continuous growth, whereas newly nucleated secondary small nuclei formed by a concentration change have relatively higher surface energy and quickly dissolve in solution, providing additional NC sources for the ripening of the primarily nucleated, larger, stable seeds. These observations under multiple controls of assembly parameters not only provide insights into the nucleation, growth, and transformation of various superlattice polymorphs but also lay the foundation for the controlled fabrication of desired superlattices with tailored properties.
This report develops and documents linear and nonlinear constitutive relations implemented in ALEGRA-FE (ferroelectric). A thermodynamic framework is created to describe the electromechanical system in the form of a free energy functional. Constitutive relations are derived by taking series expansions of the free energy functional with respect to the independent fields. First order expansion terms yield linear constitutive relations and higher order expansion terms yield non-linear constitutive relations. This document serves as supplement to Section 4 of Sandia Report SAND2013-7363, Rev 3. Methods for implementation of kinematic relations of piezoelectric models and rotation of material principal axes are covered in the supplemented report. Additional discussion on phase velocity calculation is also presented.
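As a brief illustration of the expansion described above (generic piezoelectric notation, not necessarily the symbols used in ALEGRA-FE), truncating the electric-enthalpy form of the free energy at second order in the strain $\mathbf{S}$ and electric field $\mathbf{E}$ gives

$$ H(\mathbf{S},\mathbf{E}) = \tfrac{1}{2}\,\mathbf{S}\!:\!c^{E}\!:\!\mathbf{S} \;-\; \mathbf{E}\cdot e\!:\!\mathbf{S} \;-\; \tfrac{1}{2}\,\mathbf{E}\cdot\varepsilon^{S}\cdot\mathbf{E}, $$

so that the first-order (linear) constitutive relations follow by differentiation:

$$ \mathbf{T} = \frac{\partial H}{\partial \mathbf{S}} = c^{E}\!:\!\mathbf{S} - e^{T}\cdot\mathbf{E}, \qquad \mathbf{D} = -\frac{\partial H}{\partial \mathbf{E}} = e\!:\!\mathbf{S} + \varepsilon^{S}\cdot\mathbf{E}, $$

where $c^{E}$ is the stiffness at constant field, $e$ the piezoelectric coupling, and $\varepsilon^{S}$ the permittivity at constant strain. Retaining cubic and higher terms of the expansion makes these moduli field- and strain-dependent, which is the origin of the nonlinear constitutive relations.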
Aryal, Dipak; Agrawal, Anupriya; Perahia, Dvora; Grest, Gary S.
Controlling the structure and dynamics of thin films of ionizable polymers at water interfaces is critical to their many applications. As the chemical diversity within one polymer is increased, controlling the structure and dynamics of the polymer, which is key to its use, becomes a challenge. Here, molecular dynamics (MD) simulations are used to obtain molecular insight into the structure and dynamics of thin films of one such macromolecule at the interface with water. The polymer consists of an ABCBA topology with randomly sulfonated polystyrene (C), tethered symmetrically to flexible poly(ethylene-r-propylene) blocks (B), and end-capped by a poly(t-butylstyrene) block (A). The compositions of the interfacial and bulk regions of thin films of the ABCBA polymers are followed as a function of exposure time to water. We find that interfacial rearrangements take place in which buried ionic segments migrate toward the water interface. The hydrophobic blocks collapse and rearrange to minimize their exposure to water. The water that initially drives the interfacial rearrangements breaks the ionic clusters within the film, forming a dynamic hydrophilic internal network within the hydrophobic segments.
This research presents a predictive engine that integrates into an on-line optimal control planner for electrical microgrids. This controller models the behavior of the underlying system over a specified time horizon and then solves for a control over this period. In an electrical microgrid, such predictions are challenging to obtain in the presence of errors in the sensor information. The likelihood of instrumentation errors increases as microgrids become more complex and cyber threats more common. In order to overcome these difficulties, details are provided about a predictive engine robust to errors.
A 6-mm by 6-mm by 50-mm bar of stilbene was coupled on both ends to silicon photomultipliers (SiPMs) to assess the detector's position sensitivity to interactions throughout the bar. A Na-22 gamma ray source was collimated with a pair of lead bricks to produce a source beam that was used to irradiate five positions along the length of the bar. A logarithmic relationship between the ratio of the pulse heights obtained from the two SiPMs and the position of the collimated source was established. The standard deviation of the distribution of ratios from each measurement was propagated through the functional form to determine position resolution. The position resolution along the length of the bar was determined to have an average value of 4.9 mm.
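As a sketch of the position reconstruction and error propagation described above (assuming a simple exponential light-attenuation model; the attenuation length below is hypothetical, not a fitted value from this work):

```python
import numpy as np

# Minimal sketch, assuming exponential light attenuation along the bar, so
# that ln(PH1/PH2) = -2x/lam for an interaction at position x from center.
L, lam = 50.0, 60.0            # bar length and attenuation length, mm (assumed)

def position_from_ratio(ratio):
    """Invert the logarithmic relationship for interaction position x (mm)."""
    return -0.5 * lam * np.log(ratio)

def position_resolution(ratio_mean, ratio_std):
    """First-order error propagation through x = -(lam/2) ln(R)."""
    return 0.5 * lam * ratio_std / ratio_mean

# Example: a collimated-source measurement yielding R = 1.25 +/- 0.20
x = position_from_ratio(1.25)
sigma_x = position_resolution(1.25, 0.20)
print(f"x = {x:+.1f} mm, sigma_x = {sigma_x:.1f} mm")
```

Propagating the measured spread of the pulse-height ratio through the fitted logarithmic form in this way is what yields a position resolution in millimeters.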
We investigate the feasibility of constructing a data-driven distance metric for use in null-hypothesis testing in the context of arms-control treaty verification. The distance metric is used to test the hypothesis that the available data are representative of a certain object, as opposed to the binary-classification tasks studied previously. The metric, being of strictly quadratic form, is essentially computed using projections of the data onto a set of optimal vectors. These projections can be accumulated in list mode. The relatively low number of projections hampers reconstruction of the object and thus limits access to sensitive information. The projection vectors that channelize the data are optimal in capturing the Mahalanobis squared distance of the data associated with a given object under varying nuisance parameters. The vectors are also chosen such that the resulting metric is insensitive to the difference between the trusted object and another object that is deemed to contain sensitive information. Data used in this study were generated using the GEANT4 toolkit to model gamma transport using a Monte Carlo method. For numerical illustration, the methodology is applied to synthetic data obtained using custom models for plutonium inspection objects. The resulting metric, based on a relatively low number of channels, shows moderate agreement with the Mahalanobis distance metric for the trusted object while enabling a capability to obscure sensitive information.
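A minimal sketch of the projection idea (synthetic stand-in data; simple eigenvector truncation here is a naive stand-in for the optimal, nuisance-aware vector selection described above):

```python
import numpy as np

# Build k projection vectors that capture the Mahalanobis squared distance of
# data x relative to a trusted-object template (mu, Sigma), then test new data
# using only k scalar projections.
rng = np.random.default_rng(0)
d, k = 64, 6                               # data dimension, number of channels
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d + 0.1 * np.eye(d)      # stand-in covariance (synthetic)
mu = rng.normal(size=d)                    # trusted-object template

# Whitening directions: with W = V diag(1/sqrt(lam)), ||W^T (x-mu)||^2 equals
# the Mahalanobis squared distance; truncating to k eigenvectors gives a
# low-channel-count approximation.
lam, V = np.linalg.eigh(Sigma)
order = np.argsort(lam)[::-1][:k]
W = V[:, order] / np.sqrt(lam[order])

def metric(x):
    z = W.T @ (x - mu)        # k projections -- accumulable in list mode
    return z @ z

x_test = rng.multivariate_normal(mu, Sigma)
print("d^2 (k channels):", metric(x_test))
print("d^2 (exact):     ", (x_test - mu) @ np.linalg.solve(Sigma, x_test - mu))
```

Because only the k projection values need to be retained, the full data vector never has to be stored, which is what limits reconstruction of the inspected object.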
Our work uses market analysis and simulation to explore the potential of public charging infrastructure to spur US battery electric vehicle (BEV) sales, increase national electrified mileage, and lower greenhouse gas (GHG) emissions. By employing both scenario and parametric analysis for policy-driven injection of public charging stations, we find the following: (1) For large deployments of public chargers, DC fast chargers are more effective than level 2 chargers at increasing BEV sales, increasing electrified mileage, and lowering GHG emissions, even if only one DC fast charging station can be built for every ten level 2 charging stations. (2) A national initiative to build DC fast charging infrastructure will see diminishing returns on investment at approximately 30,000 stations. (3) Some infrastructure deployment costs can be defrayed by passing them back to electric vehicle consumers, but once those costs to the consumer reach the equivalent of approximately 12¢/kWh for all miles driven, almost all gains in BEV sales and GHG emissions reductions from infrastructure construction are lost.
The discontinuous Petrov–Galerkin (DPG) methodology of Demkowicz and Gopalakrishnan (2010, 2011) guarantees the optimality of the solution in an energy norm and provides several features facilitating adaptive schemes. A key question that has not yet been answered in general – though there are some results, e.g., for Poisson – is how best to precondition the DPG system matrix so that iterative solvers may be used to allow solution of large-scale problems. In this paper, we detail a strategy for preconditioning the DPG system matrix using geometric multigrid, which we have implemented as part of Camellia (Roberts, 2014, 2016), and demonstrate through numerical experiments its effectiveness in the context of several variational formulations. We observe that in some of our experiments, the behavior of the preconditioner is closely tied to the discrete test space enrichment. We include experiments involving adaptive meshes with hanging nodes for lid-driven cavity flow, demonstrating that the preconditioners can be applied in the context of challenging problems. We also include a scalability study demonstrating that the approach – and our implementation – scales well to many MPI ranks.
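For orientation, the sketch below shows the basic geometric-multigrid V-cycle on a 1D Poisson model problem. It is a generic illustration of the preconditioning building block only, not Camellia's DPG-specific implementation; the function names and model problem are our own:

```python
import numpy as np

# Generic geometric-multigrid V-cycle for -u'' = f on [0,1], zero Dirichlet BCs.

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for the 3-point Laplacian."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = len(u) - 1                    # number of intervals (power of two)
    if n == 2:                        # coarsest grid: one interior unknown
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)               # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)         # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)              # prolongation by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                            # coarse-grid correction
    return smooth(u, f, h)            # post-smoothing

n = 256
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)    # exact solution: sin(pi x)
u = np.zeros(n + 1)
for it in range(10):                  # V-cycles as a stationary iteration
    u = v_cycle(u, f, h)
    print(it, np.abs(residual(u, f, h)).max())
```

In practice such a cycle is wrapped as a preconditioner inside a Krylov method (e.g., CG or GMRES) rather than iterated to convergence on its own.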
Peste des Petits Ruminants (PPR) is an infectious disease affecting goats and sheep. PPR has a mortality rate of 80% and a morbidity rate of 100% in naïve herds. This disease is currently of concern to Afghan goat and sheep herders, as conditions in Afghanistan are conducive to the disease becoming an epidemic. PPR is similar to Rinderpest, but is not as well studied. There is a lack of empirical data on how the disease spreads or effective large-scale mitigation strategies. We developed a herd-level, event-driven model of PPR, using memoryless state transitions, to study how the virus propagates through a herd, and to identify effective control strategies for disparate herd configurations and environments. This model allows us to perform Sensitivity Analyses (SA) on environmental and disease parameters for which we do not have empirical data and to simulate the effectiveness of various control strategies. We find that reducing the amount of time from the identification of PPR in a herd to the vaccination of the herd will radically reduce the number of deaths that result from PPR. The goal of this model is to give policy makers a tool to develop effective containment strategies for managing outbreaks of PPR.
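A minimal event-driven (Gillespie-style) sketch of a memoryless herd model of this kind is shown below; all rates, herd sizes, and the response-delay parameter are hypothetical placeholders, not fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def outbreak(n=200, beta=0.6, gamma=0.1, mortality=0.8, response_day=10.0):
    """Return PPR deaths in one herd; vaccination halts new infections at
    t = response_day (identification-to-vaccination delay, days)."""
    S, I, t, D = n - 1, 1, 0.0, 0
    while I > 0:
        infect = beta * S * I / n if t < response_day else 0.0
        remove = gamma * I
        rate = infect + remove
        t += rng.exponential(1.0 / rate)      # memoryless waiting time
        if rng.random() < infect / rate:      # choose next event
            S, I = S - 1, I + 1
        else:
            I -= 1
            if rng.random() < mortality:
                D += 1
    return D

for delay in (5, 10, 20, 40):
    deaths = np.mean([outbreak(response_day=delay) for _ in range(200)])
    print(f"response delay {delay:2d} d -> mean deaths {deaths:.0f}")
```

Sweeping the response delay in this manner is the kind of experiment that shows why shortening the identification-to-vaccination interval dominates the death toll.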
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static, and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the Sierra/SD User's Notes. Many of the constructs in Sierra/SD are pulled directly from published material; where possible, those materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation, and we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer's notes manual, the user's notes, and, of course, material in the open literature.
The purpose of this appendix is to provide a consistent approach for establishing the hazard category for a nuclear facility as required in 10 CFR 830, Nuclear Safety Management, Subpart B, "Safety Basis Requirements," Section 202 (b)(3). As defined, this approach is consistent with DOE-STD-1027-92 Change Notice No. 1, Hazard Categorization and Accident Analysis Techniques for Compliance with DOE Order 5480.23, Nuclear Safety Analysis Reports (hereafter DOE-STD-1027-92), and facilitates the use of updated dosimetry and release fractions as provided in NNSA SD G 1027 Admin Change 1, Guidance on Using Release Fraction and Modern Dosimetric Information Consistently with DOE STD 1027-92, Hazard Categorization and Accident Analysis Techniques for Compliance with DOE Order 5480.23, Nuclear Safety Analysis Reports, Change Notice No. 1 (hereafter NNSA SD G 1027).
National Technology & Engineering Solutions of Sandia, LLC, manages the facilities and infrastructure of Sandia National Laboratories (SNL) on behalf of the U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA). Activities in support of the SNL mission take place at primary facilities located in Albuquerque, New Mexico, and Livermore, California, as well as at supporting facilities located in Nevada, Hawaii, Alaska, Texas, and Washington, D.C. As required by the Management and Operating (M&O) Contract (DE-NA0003525) and DOE orders, standards, and guidance, the M&O contractor maintains a safety basis process, which helps to identify and evaluate hazards related to facilities, operations, and activities at both Sandia-controlled premises (i.e., onsite) and non-Sandia-controlled premises (i.e., offsite) to adequately protect workers, the public, and the environment. This manual describes the SNL safety basis process, which is managed by the Environment, Safety, and Health (ES&H) Planning Department. Safety basis personnel ensure that a centralized and consistent process is implemented for safety basis development, documentation, and evaluation.
The noise performance of infrared detectors can be improved through utilization of thinner detector layers which reduces thermal and generation-recombination noise currents. However, some infrared detector materials suffer from weak optical absorption and thinning the detector layer can lead to incomplete absorption of the incoming infrared photons which reduces detector quantum efficiency. Here, we show how subwavelength metallic nanoantennas can be used to boost the efficiency of photon absorption for thin detector layers, thereby achieving overall enhanced detector performance.
Electrochemistry is necessarily a science of interfacial processes, and understanding electrode/electrolyte interfaces is essential to controlling electrochemical performance and stability. Undesirable interfacial interactions hinder discovery and development of rational materials combinations. As an example, we examine an electrolyte, magnesium(II) bis(trifluoromethanesulfonyl)imide (Mg(TFSI)2) dissolved in diglyme, next to the Mg metal anode, a combination purported to have a wide window of electrochemical stability. However, even in the absence of any bias, using in situ tender X-ray photoelectron spectroscopy, we discovered an intrinsic interfacial chemical instability of both the solvent and the salt, further explained using first-principles calculations as driven by Mg2+ dication chelation and nucleophilic attack by hydroxide ions. The proposed mechanism appears general to the chemistry near or on metal surfaces in hygroscopic environments with chelation of hard cations and indicates possible synthetic strategies to overcome chemical instability within this class of electrolytes.
The Chemistry Science Investigation: Dognapping Workshop was designed to (i) target and inspire fourth grade students to view themselves as Junior Scientists before their career decisions are solidified; (ii) enable hands-on experience in fundamental scientific concepts; (iii) increase public interaction with science, technology, engineering, and mathematics (STEM) personnel by providing face-to-face opportunities; (iv) give teachers a pathway forward for scientific resources; (v) meet the New Mexico K-5 Science Benchmark Performance Standards; and (vi) most importantly, ensure everyone has fun! For this workshop, the students are told they will be going to see a Chemistry Magic Show, but the performance is stopped when the Chemistry Dog is reportedly stolen. The students first clear their names using a series of interactive stations and then apply a number of science experiments to solve the mystery. This report describes the workshop in detail, which is suitable for large (100 students per day) audiences but has the flexibility to be modified for much smaller groups. An identical survey was given three times (before, immediately after, and 2 months after the workshop) to determine the impact on the students' perception of science and scientists as well as to determine the effectiveness in relaying scientific concepts through retention time. Survey responses indicate that scientific information pertaining to the workshop is retained for up to 2 months.
In situ neutron diffraction measurements were completed for this study during tensile and compressive deformation of stainless steel 304L additively manufactured (AM) using a high power directed energy deposition process. Traditionally produced wrought 304L material was also studied for comparison. The AM material exhibited roughly 200 MPa higher flow stress relative to the wrought material. Crystallite size, crystallographic texture, dislocation density, and lattice strains were all characterized to understand the differences in the macroscopic mechanical behavior. The AM material’s initial dislocation density was about 10 times that of the wrought material, and the flow strength of both materials obeyed the Taylor equation, indicating that the AM material’s increased yield strength was primarily due to greater dislocation density. Finally, a ~50 MPa flow strength tension/compression asymmetry was observed in the AM material, and several potential causes were examined.
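The Taylor relation invoked above has the standard form (symbols as conventionally defined; no values here are specific to this study):

$$ \sigma_y = \sigma_0 + M \alpha G b \sqrt{\rho}, $$

where $\sigma_0$ is the friction stress, $M$ the Taylor factor, $\alpha$ a constant of order 0.2–0.4, $G$ the shear modulus, $b$ the Burgers vector magnitude, and $\rho$ the dislocation density. Because the dislocation contribution scales as $\sqrt{\rho}$, a roughly tenfold higher initial dislocation density implies a factor of about $\sqrt{10} \approx 3.2$ on that term, consistent with the elevated flow stress of the AM material reported above.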
Glick, Joseph A.; Khasawneh, Mazin A.; Niedzielski, Bethany M.; Loloee, Reza; Pratt, W.P.; Birge, Norman O.; Gingrich, E.C.; Kotula, Paul G.; Missert, Nancy
Josephson junctions containing ferromagnetic layers are of considerable interest for the development of practical cryogenic memory and superconducting qubits. Such junctions exhibit a ground-state phase shift of π for certain ranges of ferromagnetic layer thicknesses. We present studies of Nb based micron-scale elliptically shaped Josephson junctions containing ferromagnetic barriers of Ni81Fe19 or Ni65Fe15Co20. By applying an external magnetic field, the critical current of the junctions is found to follow characteristic Fraunhofer patterns and display sharp switching behavior suggestive of single-domain magnets. The high quality of the Fraunhofer patterns enables us to extract the maximum value of the critical current even when the peak is shifted significantly outside the range of the data due to the magnetic moment of the ferromagnetic layer. The maximum value of the critical current oscillates as a function of the ferromagnetic barrier thickness, indicating transitions in the phase difference across the junction between values of zero and π. We compare the data to previous work and to models of the 0-π transitions based on existing theories.
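For reference, the Fraunhofer pattern of a uniform rectangular junction has the textbook form

$$ I_c(\Phi) = I_{c0}\left|\frac{\sin(\pi\Phi/\Phi_0)}{\pi\Phi/\Phi_0}\right|, \qquad \Phi = \mu_0 H\, w\,(d + 2\lambda_L) + \Phi_M, $$

where $\Phi_0$ is the flux quantum, $w$ the junction dimension transverse to the field, $d$ the barrier thickness, $\lambda_L$ the London penetration depth, and $\Phi_M$ the flux contributed by the ferromagnetic layer's moment; elliptical junctions produce a closely related Airy-like pattern. The $\Phi_M$ term shifts the pattern along the field axis, which is why the central peak can lie well outside the measured field range as described above. (This generic form is given for orientation and is not necessarily the specific fitting function used in the paper.)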
Solar photovoltaic systems provide cost savings to the property owner in terms of avoided electricity costs that accrue over the system lifetime. From an investment standpoint, the equipment and the value of the energy generated can potentially increase the underlying property value. This first-of-a-kind study presents real market data collected from real estate appraisers using the PV Value® tool to develop a market value for solar as part of a property sale or refinance. Aggregated results at the state level are discussed for California, Arizona and Massachusetts, using 2015 and 2016 data where appraisers used the income capitalization approach to develop a market value for solar. Additional data collection using future transaction data could reveal market-specific trends and insights at the zip code, city and metropolitan statistical area (MSA) levels.
One of the major challenges of simulating flow and transport in the far field of a geologic repository in crystalline host rock is reproducing the properties of the fracture network over a large volume of rock with sparse fracture characterization data. Various approaches have been developed to simulate flow and transport through the fractured rock; they can be broadly divided into the Discrete Fracture Network (DFN) and the Equivalent Continuum Model (ECM). The DFN explicitly represents individual fractures, while the ECM uses fracture properties to determine equivalent continuum parameters. We compare DFN and ECM in terms of upscaled observed transport properties through generic fracture networks. The major effort was directed toward making the DFN and ECM approaches similar in their conceptual representations. This allows differences related to the interpretation of the test conditions and parameters to be separated from the differences between the DFN and ECM approaches. The two models are compared using a benchmark test problem that is constructed to represent the far field (1 × 1 × 1 km3) of a hypothetical repository in fractured crystalline rock. The test problem setting uses generic fracture properties that can be expected in crystalline rocks. The models are compared in terms of (1) the effective permeability of the domain and (2) nonreactive solute breakthrough curves through the domain. The principal differences between the models are mesh size, network connectivity, matrix diffusion, and anisotropy. We demonstrate how these differences affect the flow and transport, and we identify the factors that should be taken into consideration when selecting the approach most suitable for site-specific conditions.
Cubic zirconium tungstate (α-ZrW2O8), a well-known negative thermal expansion (NTE) material, has been investigated within the framework of density functional perturbation theory (DFPT), combined with experimental characterization to assess and validate the computational results. Spectroscopic, mechanical, and thermodynamic properties have been derived from DFPT calculations. A systematic comparison of DFPT-simulated infrared, Raman, and phonon density-of-state spectra with Fourier transform far-/mid-infrared and Raman data collected in this study, as well as with available inelastic neutron scattering measurements, shows the superior accuracy of the PBEsol exchange-correlation functional over standard PBE calculations. The thermal evolution of the Grüneisen parameter computed within the quasi-harmonic approximation exhibits negative values below the Debye temperature, consistent with the observed NTE characteristics of α-ZrW2O8. The standard molar heat capacity is predicted to be $C_P^0$ = 193.8 and 192.2 J mol$^{-1}$ K$^{-1}$ with PBE and PBEsol, respectively, ca. 7% lower than calorimetric data. In conclusion, these results demonstrate the accuracy of the DFPT/PBEsol approach for studying the spectroscopic, mechanical, and thermodynamic properties of materials with anomalous thermal expansion.
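In the quasi-harmonic approximation referenced above, the mode Grüneisen parameters and the volumetric expansion coefficient take the standard form

$$ \gamma_i = -\frac{V}{\omega_i}\frac{\partial \omega_i}{\partial V}, \qquad \alpha_V(T) = \frac{1}{K_T V}\sum_i \gamma_i\, c_{V,i}(T), $$

where $\omega_i$ are the phonon frequencies, $c_{V,i}$ the modal heat capacities, and $K_T$ the isothermal bulk modulus. When low-frequency modes with negative $\gamma_i$ dominate the heat-capacity-weighted sum, $\alpha_V$ becomes negative, which is the mechanism behind the NTE behavior discussed above.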
Directed energy deposition (DED) and forged austenitic stainless steels possess distinct microstructures, but may exhibit similar mechanical properties. In this study, annealing is used to evolve the microstructures of these materials, and scanning electron microscopy techniques are used to probe the similarities and differences of the microstructure-property relationships. A strong correlation between geometrically necessary dislocation (GND) density and hardness is observed for the forged material. Finally, a more complex relationship is observed in the DED material and is attributed to the thermally driven dissolution of the solidification microstructure.
Woven-fiber laminated composites allow the design engineer to create high-strength parts, but the effectiveness of the final processed part is greatly diminished by weak or nonexistent bonds between the composite and the substrate to which it is bonded. These layered laminates are commonly made by curing resin-infused carbon fiber fabrics in predefined layers and then bonding them to another composite or a metallic structure using either a pre-cure or a co-cure method. The focus of this study is the identification of the defect caused by a disbond or a delamination located at the interface between a composite laminate stack and the substrate to which it is bonded. We present a nondestructive approach that uses various ultrasonic methods to detect the bond condition at composite-to-composite and composite-to-metal interfaces. This paper explores contact and immersion pulse-echo ultrasound methods for evaluating the composite material and adhesive bondline, as well as the signal attenuation undergone by the wave as it propagates through the composite. A summary of the detection and analysis techniques developed to identify disbonds, including Fast Fourier Transform analysis of the immersion data, is presented. Each of the methods evaluated in this study is able to detect the transition from bonded to unbonded sections at the bondline from either side of the bonded part, with the immersion technique providing significantly higher resolution of the edge of the bondline.
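A minimal sketch of the gated-echo FFT idea (fully synthetic A-scans and assumed parameters, not the measured immersion data or the paper's exact pipeline):

```python
import numpy as np

# Gate the bondline echo in a pulse-echo A-scan and compare its FFT magnitude;
# a disbond typically returns a stronger (and phase-inverted) echo.
fs = 100e6                                   # sample rate, Hz (assumed)
t = np.arange(0.0, 20e-6, 1.0 / fs)

def echo(t0, amp, f0=5e6, bw=2e6):
    """Gaussian-windowed tone burst centered at time t0 (synthetic echo)."""
    return amp * np.exp(-((t - t0) * bw * np.pi) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

bonded = echo(4e-6, 1.0) + echo(9e-6, 0.15)      # weak bondline echo
disbonded = echo(4e-6, 1.0) - echo(9e-6, 0.60)   # strong, inverted echo

gate = (t > 8e-6) & (t < 10e-6)                  # window around the bondline
for name, sig in (("bonded", bonded), ("disbonded", disbonded)):
    spec = np.abs(np.fft.rfft(sig[gate]))
    print(f"{name}: gated echo spectral peak = {float(spec.max()):.2f}")
```

Scanning such a gate across the part and mapping the gated spectral amplitude is one common way the bonded-to-unbonded transition is imaged.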
Aerosol Deposition (AD) is a unique thick-film deposition technology capable of depositing ceramic, metallic, or composite films through the acceleration, impact, and consolidation of dry, fine (~0.1-1 μm) particle feedstock delivered by a carrier gas toward a substrate [akedo]. The use of fine particle feedstock is necessary in order for typically brittle materials (i.e., ceramics) to exhibit the sufficient plasticity and non-brittle fracturing that is the key mechanism of coating consolidation [Sarobol], resulting in a dense deposit with nano-crystalline grain size.
Titanium alloys such as Ti-6Al-4V and Ti-6Al-7Nb are widely used in the biomedical industry as structural implant materials. The strain-rate sensitivity of tensile strength is assessed here using a modified Kocks-Mecking formulation for hardening. The operative scale of the microstructural strengthening is designated by the coefficient cb, which can be determined from measurements of plastic strain, yield strength, and ultimate strength. It is found that although the strength varies slightly with strain rate, the microstructural scale cb remains nearly constant for each material. A low cb value of 14 is computed for Ti-6Al-4V, consistent with the refined two-phase microstructure needed to enhance both ductility beyond yielding and ultimate strength.
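The modified formulation itself is not reproduced in this abstract; for orientation only, the classical Kocks-Mecking evolution law on which such formulations build is

$$ \frac{d\rho}{d\varepsilon} = k_1\sqrt{\rho} - k_2(\dot{\varepsilon}, T)\,\rho, $$

where the athermal storage term $k_1\sqrt{\rho}$ competes with a dynamic-recovery term $k_2\rho$ whose rate and temperature dependence carries the strain-rate sensitivity; combined with Taylor-type scaling of stress with $\sqrt{\rho}$, this yields the hardening curves from which a microstructural coefficient such as cb can be extracted. (How cb enters the modified formulation is specific to the paper and is not assumed here.)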
In this study, we have made reasonable cookoff predictions of large-scale explosive systems by using pressure-dependent kinetics determined from small-scale experiments. Scale-up is determined by properly accounting for pressure generated from gaseous decomposition products and the volume that these reactive gases occupy, e.g., trapped within the explosive, within the system, or vented. The pressure effect on the decomposition rates has been determined for different explosives by using both vented and sealed experiments at low densities. Low-density explosives are usually permeable to decomposition gases and can be used in both vented and sealed configurations to determine pressure-dependent reaction rates. In contrast, explosives that are near the theoretical maximum density (TMD) are not as permeable to decomposition gases, and pressure-dependent kinetics are difficult to determine. Ignition in explosives at high densities can be predicted by using pressure-dependent rates determined from the low-density experiments as long as gas volume changes associated with bulk thermal expansion are also considered. In the current work, cookoff of the plastic-bonded explosives PBX 9501 and PBX 9502 is reviewed, and new experimental work on LX-14 is presented. Reactive gases are formed inside these heated explosives, causing large internal pressures. The pressure is released differently for each of these explosives. For PBX 9501, permeability is increased and internal pressure is relieved as the nitroplasticizer melts and decomposes. Internal pressure in PBX 9502 is relieved as the material is damaged by cracks and spalling. For LX-14, internal pressure is not relieved until the explosive thermally ignites. The current paper is an extension of work presented at the 26th ICDERS symposium [1].
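A hedged sketch of the rate form implied above (generic; the fitted parameters for PBX 9501, PBX 9502, and LX-14 are not reproduced here) is a pressure-dependent Arrhenius law,

$$ r = Z\,P^{n}\exp\!\left(-\frac{E_a}{RT}\right), $$

in which the prefactor $Z$ and activation energy $E_a$ come from conventional thermal analysis, while the pressure exponent $n$ is obtained by contrasting vented experiments ($P$ near ambient) with sealed experiments, where $P$ follows from the moles of product gas and the volume available to it (trapped porosity, system free volume, or vent path) via an equation of state.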
Abuse tests are designed to determine the safe operating limits of HEV/PHEV energy storage devices. Testing is intended to achieve certain worst-case scenarios to yield quantitative data on cell/module/pack response, allowing for failure mode determination and guiding developers toward improved materials and designs. Standard abuse tests with defined start and end conditions are performed on all devices to provide comparison between technologies. New tests and protocols are developed and evaluated to more closely simulate real-world failure conditions. When scaling from the cell to the battery level, a detailed understanding of cell interactions provides insight on safety performance. Single point failures from a cell or group of cells can be initiated by a number of triggers, including internal short circuit, misuse or abuse, or component failure at the battery or system level. Propagation of a single failure event (regardless of the initiation trigger) through an entire battery, system, or vehicle is an unacceptable outcome with regard to EV battery safety. In this fiscal year, our work has focused on evaluating the propagation of a single cell thermal runaway event through a battery using a variety of design considerations, with an emphasis on passive thermal management impacts. This has been coupled with thermal modeling by NREL for these testing conditions. In addition, alternative failure initiation methods have been evaluated to provide direct comparisons of possible energy injection between modes. These data were compiled to better identify what propagation test method is appropriate given certain battery designs. The analysis of short circuit current during failure propagation has been expanded for EV-relevant chemistries, and ongoing test development and validation to obtain these values has been achieved. While robust mechanical models for vehicles and vehicle components exist, there is a gap for mechanical modeling of EV batteries. The challenge with developing a mechanical model for a battery is the heterogeneous nature of the materials and components (polymers, metals, metal oxides, liquids). Our work will provide empirical data on the mechanical behavior of batteries under compressive load to understand how a battery may behave in a vehicle crash scenario. This work is performed in collaboration with the U.S. Council for Automotive Research (USCAR) and Computer Aided Engineering of Batteries (CAEBAT). These programs have supported the design and development of a drop tower testing apparatus to close the gap between cell/string level testing and full-scale crash testing with true dynamic rate effects.
Miljkovic, Nenad; Pilawa-Podgurski, Robert; Foulkes, Thomas; Oh, Junho; Birbarah, Patrick; Neely, Jason C.
Demand for enhanced cooling technologies within various commercial and consumer applications has increased in recent decades due to electronic devices becoming more energy dense. This study demonstrates jumping-droplet based electric-field-enhanced (EFE) condensation as a potential method to achieve active hot spot cooling in electronic devices. To test the viability of EFE condensation, we developed an experimental setup to remove heat via droplet evaporation from single and multiple high power gallium nitride (GaN) transistors acting as local hot spots (4.6 mm x 2.6 mm). An externally powered circuit was developed to direct jumping droplets from a copper oxide (CuO) nanostructured superhydrophobic surface to the transistor hot spots by applying electric fields between the condensing surface and the transistor. Heat transfer measurements were performed in ambient air (22-25°C air temperature, 20-45% relative humidity) to determine the effects of gap spacing (2-4 mm), electric field (50-250 V/cm), and heat flux (demonstrated to 13 W/cm2). EFE condensation was shown to enhance the heat transfer from the local hot spot by ≈ 200% compared to cooling without jumping and by 20% compared to non-EFE jumping. Dynamic switching of the electric field for a two-GaN system reveals the potential for active cooling of mobile hot spots. The opportunity for further cooling enhancement by the removal of non-condensable gases promises hot spot heat dissipation rates approaching 120 W/cm2. This work provides a framework for the development of active jumping droplet based vapor chambers and heat pipes capable of spatial and temporal thermal dissipation control.
One of the largest transitions in the power system today is the shift to a more sustainable and resilient power system, driven by public opinion, changes in regulatory policies, and advancements in smart grid technologies. The most noticeable change taking place is the integration of distributed energy resources (DERs); this study uses the term DER in the most general way, as a resource that can be manipulated to alter energy delivery and flow in the transmission and distribution networks. The focus here is on energy as the true need, while power is a function of the equipment rating. As such, wind and solar, demand that can be manipulated, electric vehicles, electric energy storage, thermal storage, and storage in water systems are all considered DERs. These additions to the distribution system are evolving the operation of distribution feeders into microgrids: communication-, computing-, and control-enabled resources that produce, transport, and utilize energy in a manner that provides cost, reliability, and resilience benefits. As this evolution progresses, planning and operational management (scheduling and control) must explicitly include the consideration of risk. The management of system risk is currently in the purview of the utility and will likely remain so in the future. However, as each microgrid, as well as each federation of microgrids, seeks autonomy in order to provide maximum benefits to its constituents, it must assume responsibility for managing its internal risk. The primary scope of this study is the scheduling of resources in distribution feeders operating as microgrids. The study explores a distributed algorithm to develop the transactive schedule for the DERs, to minimize cost and risk over a time horizon, and an initial laboratory-scale implementation on distributed hardware. Results from case studies are presented that show that solutions derived by the distributed algorithm are valid. This study also discusses continuing work on: 1) expanding the distributed algorithm from a deterministic to a stochastic optimization formulation, and 2) implementing the distributed algorithm in real-time simulation within the Power System laboratory at New Mexico State University (NMSU) and extending it to the Southwest Technology Development Institute located at NMSU, where actual solar, energy storage, and demand response resources are installed.
This report describes the application of an approach for determining grid modernization investments that can best improve the resilience of communities. Under the direction of the US Department of Energy's Grid Modernization Laboratory Consortium, Sandia National Laboratories (Sandia) and Los Alamos National Laboratory (Los Alamos) collaborated with community stakeholders in New Orleans, Louisiana, on grid modernization strategies for resilience. Past disruptions to the electric grid in New Orleans have contributed to an inability to provide citizens with adequate access to a wide range of infrastructure services. Using a performance-based resilience metric, Sandia and Los Alamos analyzed how to improve access to infrastructure services across New Orleans after a major disruption using a system of resilience nodes. Resilience nodes combine urban planning with grid investment planning for resilience in order to design clustered infrastructure assets with highly resilient electrical supply. The analysis led to the suggestion of 22 draft resilience node locations that can provide a wide range of infrastructure services equitably to New Orleans citizens. This report serves as a proof-of-concept for the Urban Resilience Planning Process and describes several gaps that should be overcome in order to integrate resilience planning between electric utilities and local governments.
This report summarizes the activities that Sandia National Laboratories undertook in support of the Si anode Fundamentals program managed by the Vehicle Technology Office of the Department of Energy. The program is led by the National Renewable Energy Laboratory, and Sandia is one of four laboratories (including Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory) included in the program. The initial set of activities included establishing the baseline protocols for cell assembly and testing, and executing a number of round-robin style tests to compare data collected under nominally identical conditions at each of the participating laboratories, to ensure that similar results were obtained and that no extraneous secondary factors were affecting the results. Because the nature of the interface between electrode and electrolyte was in question, as well as how the interface evolved over time and electrochemical cycling, an effort was begun to build “model” interfaces based upon previously observed lithium silicate structures within the native film.
Materials that incorporate hydrogen and helium isotopes are of great interest at Sandia. The Ion Beam Lab at SNL-NM has invented techniques that use micron- to mm-size MeV ion beams to recoil these light isotopes (Elastic Recoil Detection, or ERD) and can make such measurements very accurately. However, many measurements that would benefit NW and DOE require much better resolution. To address these and many other issues, this LDRD demonstrated that neutral H atoms could be recoiled through a thin film by 70 keV electrons and detected with a Channeltron electron multiplier (CEM). The electrons were steered away from the CEM by strong permanent magnets. This proved the feasibility that the high-energy electrons of a transmission electron microscope (TEM) can potentially be used to recoil and subsequently detect (e-ERD), quantify, and map the concentration of H and He isotopes with nm resolution.
This is the final report of the LDRD project entitled "Realizing the Power of Near-Term Quantum Technologies", which was tasked with laying a theoretical foundation and computational framework for quantum simulation on quantum devices, to support both future Sandia efforts and the broader academic research effort in this area. The unifying theme of the project has been the desire to delineate more clearly the interface between existing classical computing resources, which are vast and reliable, and emerging quantum computing resources, which will be scarce and unreliable for the foreseeable future. We seek to utilize classical computing resources to judge the efficacy of quantum devices for quantum simulation tasks and determine when they exceed the performance of classical devices, thereby achieving "quantum supremacy". This task was initially pursued by adapting the general concept of "parameter space compression" to quantum simulation. An inability to scale this analysis efficiently to large-scale simulations precipitated a shift in focus to assessing the quantum supremacy of a specific quantum device, a 1D Bose gas trapped in an optical lattice, which was more amenable to large-scale analysis. We also seek to reconstruct unobserved information from limited observations of a quantum device to enhance its utility. This task was initially pursued as an application of maximum entropy reconstruction. Initial attempts to improve entropy approximations for direct reconstruction by free energy minimization proved to be more difficult than expected, and the focus shifted to the development of a quantum thermostat to facilitate indirect reconstruction by evolving a quantum Markov process. An efficient quantum thermostat is broadly useful for quantum state preparation in almost any quantum simulation task. In the middle of the project, a small opportunistic investment was made in a high-risk experiment to build an analog quantum simulator out of hole quantum dots in Ge/SiGe heterostructures. While a useful simulator was not produced, hole quantum dots at a Ge/SiGe interface were successfully observed for the first time.
Several jurisdictions with critical tunnel infrastructure have expressed the need to understand the risks and implications of traffic incidents in tunnels involving hydrogen fuel cell vehicles. A risk analysis was performed to estimate which scenarios were most likely to occur in the event of a crash. The results show that the most likely consequence is no additional hazard from the hydrogen, although some factors need additional data and study for validation; these include minor crashes and scenarios with no release or ignition. When the hydrogen does ignite, the most likely outcome is a jet flame from the pressure relief device release triggered by a hydrocarbon fire. This scenario was considered in detailed modeling of specific tunnel configurations, along with consequence concerns raised by the Massachusetts Department of Transportation. Localized concrete spalling may result where the jet flame impinges on the ceiling, but this is not expected to occur with ventilation. Structural epoxy remains well below its degradation temperature. The total stress on the steel structure was significantly lower than the yield stress of stainless steel at the maximum steel temperature, even when the ventilation was not operational; as a result, the steel structure will not be compromised. It is important to note that the study took a conservative approach in several factors, so observed temperatures should be lower than predicted by the models.
The proposed use of the subject property is for the purpose of adding a parking lot to serve the increase in customer vehicles that is occurring as the 9940 Main Complex is more heavily utilized, and as the 2009 Expansion areas come online as operational training facilities. The subject property would be used only for parking, not for testing or training activities. The parking lot would have a gravel surface. Current and future work at the 9940 Main Complex involves arming, fuzing, and firing of explosives and the testing of explosive systems components in both terrestrial and aquatic settings. It also involves specialized training activities for a variety of first responder customers, both DOE and non-DOE agencies.
An assessment of two methodologies used at Sandia National Laboratories to model mechanical interfaces is performed on the Ministack finite element model. One method uses solid mechanics models that represent the contacting surfaces with Coulomb frictional contact to capture the interface physics. The other, termed the structural dynamics reduced order model, represents the interface with a simplified whole-joint model using four-parameter Iwan elements. The solid mechanics model resolves local kinematics at the interface, while the simplified structural dynamics model is significantly faster to simulate. One of the current challenges to using the whole-joint model is that it requires calibration to data. A novel approach is developed to calibrate the reduced structural dynamics model using data from the solid mechanics model to match the global dynamics of the system. This is achieved by calibrating to the amplitude-dependent frequency and damping of the system modes, which are estimated using three different approaches.
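As a hedged illustration of the mechanics underlying Iwan-type joint models, a joint can be discretized as parallel Jenkins (spring + slider) elements; the slip-force distribution below is a generic power law chosen for illustration, not a calibrated four-parameter Iwan fit from the Ministack study:

```python
import numpy as np

N, k = 50, 1.0e7                                   # elements; stiffness (N/m)
f_slip = 100.0 * (np.arange(1, N + 1) / N) ** 2    # slip thresholds (N), assumed

def joint_force(x_hist, k, f_slip):
    """March an imposed displacement history through the element states."""
    u = np.zeros_like(f_slip)           # elastic stretch held by each element
    F, x_prev = [], 0.0
    for x in x_hist:
        u += x - x_prev                              # elastic update...
        np.clip(u, -f_slip / k, f_slip / k, out=u)   # ...then enforce slip
        F.append(k * u.sum())
        x_prev = x
    return np.asarray(F)

x = 1e-5 * np.sin(np.linspace(0.0, 4.0 * np.pi, 401))   # two load cycles
F = joint_force(x, k, f_slip)
# Hysteresis-loop area over the second cycle = energy dissipated per cycle;
# its growth with amplitude is what amplitude-dependent damping calibrates.
W = float(np.sum(0.5 * (F[201:] + F[200:-1]) * np.diff(x[200:])))
print(f"dissipation per cycle ~ {W:.3e} J")
```

Sweeping the drive amplitude and extracting effective stiffness and loop area from such a model is the reduced-order analogue of the amplitude-dependent frequency and damping used for calibration above.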
Ehrmann, Robert S.; Wilcox, Benjamin; White, Edward B.; Maniaci, David C.
Wind farm operators observe production deficits as machines age. Quantifying deterioration on individual components is difficult, but one potential explanation is the accumulation of blade surface roughness. Historically, wind turbine airfoils were designed for lift to be insensitive to roughness by simulating roughness with trip strips. However, roughness was still shown to negatively affect performance, and experiments illustrated that distributed roughness is not properly simulated by trip strips. To understand how real-world roughness affects performance, field measurements of turbine-blade roughness were made and simulated on a NACA 633-418 airfoil in a wind tunnel. Insect roughness and paint chips were characterized and recreated as distributed roughness and a forward-facing step. Distributed roughness was tested in three heights and five density configurations. The model chord Reynolds number was varied between 0.8 × 10^6 and 4.8 × 10^6. Measurements of lift, drag, pitching moment, and boundary-layer transition were completed. Results indicate minimal effect from paint-chip roughness. As distributed roughness height and density increase, lift-curve slope, maximum lift, and lift-to-drag ratio decrease. As Reynolds number increases, bypass transition occurs earlier. The critical roughness Reynolds number varies between 178 and 318, within the historical range. Little sensitivity to pressure gradient is observed. At a chord Reynolds number of 3.2 × 10^6, the maximum lift-to-drag ratio decreases 40% for 140 μm roughness, corresponding to a 2.3% loss in annual energy production. Simulated performance loss compares well to measured performance loss on an in-service wind turbine.
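The roughness Reynolds number quoted above follows the standard definition (stated here for reference):

$$ Re_k = \frac{u(k)\,k}{\nu}, $$

where $k$ is the roughness element height, $u(k)$ the velocity in the undisturbed boundary layer at height $k$, and $\nu$ the kinematic viscosity; once $Re_k$ exceeds its critical value (178 to 318 as measured here), transition moves forward to the roughness location.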
Langel, Christopher M.; Chow, Raymond C.; Van Dam, C.P.; Maniaci, David C.
A computational investigation has been performed to better understand the impact of surface roughness on the flow over a contaminated surface. This report highlights the implementation and development of a roughness amplification model in the flow solver OVERFLOW-2. The model, originally proposed by Dassler, Kozulovic, and Fiala, introduces an additional scalar-field roughness amplification quantity. This value is explicitly set at rough wall boundaries using surface roughness parameters and local flow quantities. The additional transport equation allows non-local effects of surface roughness to be accounted for downstream of rough sections. This roughness amplification variable is coupled with the Langtry-Menter model and used to modify the criteria for transition. Results from flat-plate test cases show good agreement with experimental transition behavior for flows over varying sand grain roughness heights. Additional validation studies were performed on a NACA 0012 airfoil with leading edge roughness. The computationally predicted boundary layer development demonstrates good agreement with experimental results. New tests using varying roughness configurations have been carried out at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel to provide further calibration of the roughness amplification method. An overview and preliminary results of this concurrent experimental investigation are provided.
This report details work at Sandia National Laboratories in development of a lithium-ion battery management system (BMS) designed to detect the state of charge (SOC) and state of health (SOH) of a battery. The goal was to create a BMS that provides advanced SOH information without adding complexity to the hardware already required for monitoring battery safety. The hardware is designed to have low processor requirements and relatively low cost components, while offering several high end battery management options like communication, automatic SOC detection, capacity tracking, and multiple SOH characteristics. The methods for detecting capacity include coulomb counting and resistance-compensated voltage calculations. Several methods for assessing the SOH were also considered, including deviations in capacity using coulomb counting, DC resistance analysis, and time domain resistance analysis.
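A minimal sketch of the two SOC estimators named above (the OCV table and cell parameters below are hypothetical placeholders, not values from the Sandia BMS hardware):

```python
import numpy as np

Q_nom = 3600.0 * 2.0            # capacity: 2 Ah in coulombs (assumed)
R_int = 0.05                    # internal resistance, ohm (assumed)
ocv_soc = np.linspace(0.0, 1.0, 6)
ocv_v = np.array([3.0, 3.4, 3.6, 3.75, 3.95, 4.2])   # stand-in OCV curve

def soc_coulomb(soc, current_a, dt_s):
    """Coulomb counting: integrate current (positive = charging) into SOC."""
    return float(np.clip(soc + current_a * dt_s / Q_nom, 0.0, 1.0))

def soc_voltage(v_term, current_a):
    """Resistance-compensated voltage: back out OCV, then invert the table."""
    v_oc = v_term - current_a * R_int
    return float(np.interp(v_oc, ocv_v, ocv_soc))

soc = 0.50
soc = soc_coulomb(soc, current_a=-1.0, dt_s=60.0)    # 1 A discharge for 1 min
print("coulomb-count SOC:", round(soc, 4))
print("voltage-based SOC:", round(soc_voltage(3.70, current_a=-1.0), 4))
# Drift between the two estimates over many cycles is one way capacity fade
# and resistance growth (SOH indicators) can be tracked.
```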
Transient simulation in circuit simulation tools, such as SPICE and Xyce, depends on scalable and robust sparse LU factorizations for efficient numerical simulation of circuits and power grids. As the need for simulations of very large circuits grows, the prevalence of multicore architectures enables us to use shared memory parallel algorithms for such simulations. A parallel factorization is a critical component of such shared memory parallel simulations. We develop a parallel sparse factorization algorithm that can solve problems from circuit simulations efficiently and map well to architectural features. This new factorization algorithm exposes hierarchical parallelism to accommodate the irregular structure that arises in our target problems. It also uses a hierarchical two-dimensional data layout, which reduces synchronization costs and maps to the memory hierarchy found in multicore processors. We present an OpenMP based implementation of the parallel algorithm in a new multithreaded solver called Basker in the Trilinos framework. We present performance evaluations of Basker on the Intel SandyBridge and Xeon Phi platforms using circuit and power grid matrices taken from the University of Florida sparse matrix collection and from Xyce circuit simulation. Basker achieves a geometric mean speedup of 5.91× on CPU (16 cores) and 7.4× on Xeon Phi (32 cores) relative to the state-of-the-art solver KLU. Basker outperforms the Intel MKL Pardiso solver (PMKL) by as much as 30× on CPU (16 cores) and 7.5× on Xeon Phi (32 cores) for low fill-in circuit matrices. Furthermore, Basker provides a 5.4× speedup on a challenging matrix sequence taken from an actual Xyce simulation.
Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady-state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ∼10^−11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
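The structure of such an inversion can be sketched with a basic Metropolis sampler. Everything below is a placeholder: the forward model, observation, and prior are hypothetical stand-ins, and the actual inversion links multiple data sets (extrusion flux, earthquake depths, petrology) rather than a single flux observation:

```python
# Schematic Metropolis sampler for a conduit-model inversion.
# Forward model, data, and priors are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    """Stand-in for the 1-D conduit model: returns a predicted extrusion flux."""
    chamber_pressure, water_content = theta
    return 1e-3 * chamber_pressure * np.exp(-1.0 / max(water_content, 1e-6))

obs_flux, obs_sigma = 2.0, 0.2       # hypothetical observation and uncertainty

def log_posterior(theta):
    if np.any(theta <= 0):           # crude positivity prior
        return -np.inf
    resid = (forward_model(theta) - obs_flux) / obs_sigma
    return -0.5 * resid**2

theta = np.array([10.0, 4.0])        # initial guess (pressure, wt% water)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=[0.5, 0.2])
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta.copy())
# Histograms of `samples` approximate the posterior on the parameters.
```

The posterior samples directly express how well each parameter is constrained, which is how statements like "permeability scale well constrained, volatile content low" emerge from the analysis.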
This is a high-risk effort to leverage knowledge gained from previous work, which focused on detector development leading to improved energy resolution and reduced reconstruction errors. This work seeks to enable applications that require precise elemental characterization of materials, such as chemical munitions remediation, offering the potential to close current detection gaps.
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned, which poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from such cross-section covariance matrices.
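One common remedy for this ill-conditioning, sketched below, replaces the usual Cholesky factor with a truncated eigendecomposition so that the tiny negative eigenvalues introduced by round-off in processed covariance data do not break the sampling. This is a generic illustration of the failure mode and a standard fix, not necessarily the method the report develops:

```python
# Correlated sampling from an ill-conditioned covariance matrix via a
# truncated eigendecomposition (generic remedy; illustrative only).
import numpy as np

def correlated_samples(mean, cov, n_samples, eig_floor=0.0, seed=0):
    """Draw samples ~ N(mean, cov) even when cov is nearly singular."""
    rng = np.random.default_rng(seed)
    # Symmetric eigendecomposition; clip small/negative eigenvalues that
    # arise from round-off, where a Cholesky factorization would fail.
    w, v = np.linalg.eigh(cov)
    w = np.clip(w, eig_floor, None)
    L = v * np.sqrt(w)                       # cov ≈ L @ L.T
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ L.T

# Example: toy 3-group covariance with strong inter-group correlation.
cov = np.array([[1.0, 0.999, 0.5],
                [0.999, 1.0, 0.5],
                [0.5, 0.5, 1.0]])
draws = correlated_samples(np.zeros(3), cov, n_samples=10000)
```

The eigenvalue floor makes the factorization robust at the cost of slightly perturbing the covariance, which is usually acceptable given the uncertainty already present in evaluated cross-section data.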
A coupled electrochemical/thermochemical cycle was investigated to produce hydrogen from renewable resources. Like a conventional thermochemical cycle, this cycle leverages chemical energy stored in a thermochemical working material that is reduced thermally by solar energy. However, in this concept, the stored chemical energy need only be partially, not fully, capable of splitting steam to produce hydrogen. To complete the process, a proton-conducting membrane is driven to separate hydrogen as it is produced, thus shifting the thermodynamics toward further hydrogen production. This novel coupled-cycle concept provides several benefits. First, the required oxidation enthalpy of the reversible thermochemical material is reduced, enabling the process to occur at lower temperatures. Second, removing the requirement for spontaneous steam-splitting widens the scope of material compositions, allowing for less expensive and more abundant elements to be used. Lastly, thermodynamic calculations suggest that this concept can potentially reach higher efficiencies than photovoltaic-to-electrolysis hydrogen production methods. This Exploratory Express LDRD involved assessing the practical feasibility of the proposed coupled cycle. A test stand was designed and constructed, and proton-conducting membranes were synthesized. While the full proof of concept was not achieved, the individual components of the experiment were validated, and new capabilities that can be leveraged by a variety of programs were developed.
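The thermodynamic shift provided by the membrane can be made concrete with the standard Nernst relation for electrochemical hydrogen pumping (a textbook relation, not a result from this report): the minimum potential required to move hydrogen from partial pressure $p_{\mathrm{H_2,in}}$ to $p_{\mathrm{H_2,out}}$ is

$$
E_{\min} = \frac{RT}{2F}\,\ln\!\left(\frac{p_{\mathrm{H_2,out}}}{p_{\mathrm{H_2,in}}}\right),
$$

where $R$ is the gas constant, $T$ the absolute temperature, and $F$ the Faraday constant. Driving the membrane at or above this modest potential continuously removes product hydrogen, pulling the oxidation reaction forward even when spontaneous steam splitting is not thermodynamically favored.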
The intent of this guide is to provide a set of “best practices” for leaders to promote diversity and facilitate inclusion within their organization and throughout Sandia National Laboratories. These “best practices” are derived from personal experiences and build upon existing resources at Sandia to help effect change and realize an inclusive work environment. As leaders, we play a critical role in setting the vision and shaping the culture of the organization by communicating expectations and modeling inclusive behavior. The “best practices” in this guide are presented in the spirit of promoting a learning culture that values continuous improvement in the ongoing effort to make diversity and inclusion an integral part of all that we do at Sandia. This guide seeks to articulate the importance of leading through example, taking positive actions, raising awareness of practices that provide an inclusive environment, and creating a space that welcomes diverse perspectives and input.
Since the accidents at Fukushima Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency’s (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. To do this, a forensic approach is being used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses, with their timings informed by representative spikes or decreases in plant data. This approach is fundamentally different from the blind code assessment analyses often used in standard problem exercises. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, by using the source terms developed by MELCOR as input to the MACCS code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
An In-Situ Bioremediation (ISB) Pilot Test Treatability Study is planned at the Sandia National Laboratories, New Mexico (SNL/NM) Technical Area-V (TA-V) Groundwater Area of Concern. The Treatability Study is designed to gravity-inject an electron-donor substrate and bioaugmentation bacteria into groundwater using an injection well. The constituents of concern (COCs) are nitrate and trichloroethene (TCE). The Pilot Test Treatability Study will evaluate the effectiveness of bioremediation and COC treatment over a prescribed period of time. Results of the pilot test will provide data that will be used to evaluate the cost and effectiveness of a full-scale system.
Glass-ceramic seals may be the future of hermetic connectors at Sandia National Laboratories. They have been shown capable of surviving higher temperatures and pressures than amorphous glass seals. More advanced finite-element material models are required to enable model-based design and to provide evidence that hermetic connectors can meet design requirements. Glass-ceramics are composite materials with both crystalline and amorphous phases, the latter giving rise to (non-linearly) viscoelastic behavior. Given their complex microstructures, glass-ceramics may be thermorheologically complex, a behavior outside the scope of constitutive models currently implemented at Sandia. It was therefore desired to assess whether the Simplified Potential Energy Clock (SPEC) model is capable of capturing the material response. Available data for the SL 16.8 glass-ceramic were used to calibrate the SPEC model. Model accuracy was assessed by comparing model predictions with the measured temperature dependence of the shear modulus and with high-temperature 3-point bend creep data; the model is shown to predict both. Analysis of the results and suggestions for future experiments and model development are presented. Though further calibration is likely necessary, SPEC has been shown capable of modeling glass-ceramic behavior in the glass transition region, while behavior below the transition region requires further analysis.
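The thermorheological-simplicity question can be illustrated with a time-temperature superposition check: a thermorheologically simple material's relaxation curves collapse onto a single master curve after a horizontal shift. The sketch below uses the classic WLF shift function with hypothetical constants and data; it illustrates the test itself, not the SPEC formulation:

```python
# Time-temperature superposition check (illustrative; constants and data are
# hypothetical, and this is not the SPEC model itself).
import numpy as np

def wlf_log_shift(T, T_ref, c1=17.44, c2=51.6):
    """log10 of the WLF shift factor a_T about reference temperature T_ref."""
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

t = np.logspace(-2, 4, 50)        # measurement times, s (hypothetical)
T, T_ref = 510.0, 490.0           # test and reference temperatures, K
t_reduced = t / 10.0 ** wlf_log_shift(T, T_ref)
# Plotting modulus vs. t_reduced for several test temperatures should collapse
# the curves if the material is thermorheologically simple; persistent
# misalignment signals complexity beyond any single-shift-function model.
```

For a glass-ceramic, failure of this collapse would indicate that the crystalline and amorphous phases relax on different temperature-dependent clocks, which is precisely the behavior that would challenge SPEC below the transition region.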
Sealing glasses are ubiquitous in high-pressure, high-temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, in which a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 glass and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress, obtained as the material models evolved, are shown and compared to the measured data, and the validity of the finite-element predictions is discussed. It is shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.