Graphene, a planar, atomically thin form of carbon, has unique electrical and material properties that could enable new high-performance semiconductor devices. Graphene is of particular interest for the development of room-temperature, high-resolution semiconductor radiation spectrometers. Incorporating graphene into a field-effect transistor architecture could provide an extremely sensitive readout mechanism for sensing charge carriers in a semiconductor detector, thus enabling the fabrication of a sensitive radiation sensor. In addition, the field-effect transistor architecture allows us to sense only a single charge-carrier type, such as electrons. This is an advantage for room-temperature semiconductor radiation detectors, which often suffer from significant hole trapping. Here we report on initial efforts toward device fabrication and proof-of-concept testing. This work investigates the use of graphene transferred onto silicon and silicon carbide, and the response of the fabricated graphene field-effect transistor devices to stimuli such as light and alpha radiation.
Sustainability is a critical national security issue for the U.S. and other nations. Sandia National Laboratories (SNL) is already a global leader in sustainability science and technology (SS&T), as documented in this report, which describes the ongoing work conducted this year as part of the Sustainability Innovation Foundry (SIF). The efforts of the SIF support Sandia's national and international security missions related to sustainability and resilience, centered on energy use, water use, and materials, both on site at Sandia and externally. The SIF leverages existing Sandia research and development (R&D) in sustainability science and technology to support new solutions to complex problems. The SIF also builds on existing Sandia initiatives to support the transformation of Sandia into a fully sustainable entity in terms of materials, energy, and water use. In the long term, the SIF will demonstrate the efficacy of sustainability technology developed at Sandia through prototyping and test-bed approaches and will provide a common platform for supporting solutions to the complex problems surrounding sustainability. Highlights from this year include the Sustainability Idea Challenge, improvements in facilities energy use, lectures and presentations from relevant experts in sustainability (Dr. Barry Hughes, University of Denver), and significant development of the Institutional Transformation (IX) modeling tools to support evaluation of proposed modifications to the SNL infrastructure to realize energy savings.
Material response to dynamic loading is often dominated by microstructure (grain structure, porosity, inclusions, defects). An example critically important to Sandia's mission is dynamic strength of polycrystalline metals where heterogeneities lead to localization of deformation and loss of shear strength. Microstructural effects are of broad importance to the scientific community and several institutions within DoD and DOE; however, current models rely on inaccurate assumptions about mechanisms at the sub-continuum or mesoscale. Consequently, there is a critical need for accurate and robust methods for modeling heterogeneous material response at this lower length scale. This report summarizes work performed as part of an LDRD effort (FY11 to FY13; project number 151364) to meet these needs.
We review the edge element formulation for describing the kinematics of hyperelastic solids. This approach is used to frame the problem of remapping the inverse deformation gradient for Arbitrary Lagrangian-Eulerian (ALE) simulations of solid dynamics. For hyperelastic materials, the stress state is completely determined by the deformation gradient, so remapping this quantity effectively updates the stress state of the material. A method inspired by the constrained-transport remap used in electromagnetics is reviewed; it implicitly satisfies the zero-curl constraint on the inverse deformation gradient. Open issues related to the accuracy of this approach are identified. An optimization-based approach is implemented to enforce positivity of the determinant of the deformation gradient. The efficacy of this approach is illustrated with numerical examples.
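For reference, the kinematic constraints at issue can be stated compactly; the notation below (F for the deformation gradient, X for reference coordinates) is ours and is not necessarily that of the report.

```latex
% The inverse deformation gradient is the spatial gradient of the reference
% coordinates X(x), so each of its rows is curl-free, and invertibility of the
% motion requires a positive Jacobian determinant.
\[
  \big(F^{-1}\big)_{Ji} \;=\; \frac{\partial X_J}{\partial x_i},
  \qquad
  \nabla_{x} \times \nabla_{x} X_J \;=\; 0 \quad (J = 1,2,3),
  \qquad
  \det F \;>\; 0 .
\]
```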
Advances in technology for electrochemical energy storage require an increased understanding of electrolyte/electrode interfaces, including the structure of the electric double layer and the processes involved in charging the interface, as well as the incorporation of this understanding into quantitative models. Simplified models such as Helmholtz's electric double-layer (EDL) concept do not account for the molecular nature of ion distributions, solvents, and electrode surfaces and therefore cannot be used in predictive, high-fidelity simulations for device design. This report presents theoretical results from models that explicitly include the molecular nature of the electric double layer and predict critical electrochemical quantities such as interfacial capacitance. It also describes the development of experimental tools for probing molecular properties of electrochemical interfaces through optical spectroscopy. These optical experimental methods are designed to test our new theoretical models, which describe the electric double layer in unprecedented detail.
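For context, the Helmholtz picture referenced above treats the double layer as a parallel-plate capacitor; a minimal statement, with symbols of our choosing, is given below. Molecular-scale structure in the ion and solvent distributions is precisely what this expression omits.

```latex
% Helmholtz (parallel-plate) areal capacitance: d is the electrode-to-counter-ion-plane
% spacing and epsilon the relative permittivity of the intervening solvent layer.
\[
  C_{H} \;=\; \frac{\varepsilon\,\varepsilon_{0}}{d}
\]
```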
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic (PV) systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models and obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from POA irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; the choice among these models was not found to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology and both locations considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
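The residual-resampling propagation scheme described above can be sketched in a few lines. The stand-in models and residual values below are hypothetical placeholders (and the cell-temperature step is folded out for brevity); they are not the models or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(inputs, models, residuals, n_samples=1000):
    """Monte Carlo propagation of model-residual uncertainty through a model chain.

    models:    list of callables, each mapping the previous stage's output onward
    residuals: list of arrays of observed residuals for the corresponding stage
    Returns an (n_samples,) array of final outputs (e.g., daily DC energy).
    """
    outputs = np.empty(n_samples)
    for k in range(n_samples):
        x = inputs
        for model, res in zip(models, residuals):
            x = model(x) + rng.choice(res)   # perturb each stage by a sampled residual
        outputs[k] = x
    return outputs

# Hypothetical stand-ins for the modeling steps described above.
poa_irradiance = lambda ghi: 1.1 * ghi     # POA irradiance from measured irradiance
effective_irr  = lambda poa: 0.97 * poa    # effective irradiance from POA irradiance
dc_energy      = lambda e: 0.2 * e         # DC energy from effective irradiance

samples = propagate(
    800.0,
    [poa_irradiance, effective_irr, dc_energy],
    [np.array([-20.0, 0.0, 15.0]), np.array([-5.0, 2.0, 4.0]), np.array([-1.0, 0.0, 1.0])],
)
print("mean:", samples.mean(), "std:", samples.std())
```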
Special nuclear material (SNM) detection has applications in nuclear material control, treaty verification, and national security. The neutron and gamma-ray radiation signature of SNMs can be indirectly observed in scintillator materials, which fluoresce when exposed to this radiation. A photomultiplier tube (PMT) coupled to the scintillator material is often used to convert this weak fluorescence to an electrical output signal. The fluorescence produced by a neutron interaction event differs from that of a gamma-ray interaction event, leading to a slightly different pulse in the PMT output signal. The ability to distinguish between these pulse types, i.e., pulse shape discrimination (PSD), has enabled applications such as neutron spectroscopy, neutron scatter cameras, and dual-mode neutron/gamma-ray imagers. In this research, we explore the use of compressive sensing to guide the development of novel mixed-signal hardware for PMT output signal acquisition. Effectively, we explore smart digitizers that extract sufficient information for PSD while requiring a considerably lower sample rate than conventional digitizers. Provided these designs prove feasible to realize in custom low-power analog integrated circuits, this research would enable the incorporation of SNM detection into wireless sensor networks.
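A minimal sketch of the compressive-acquisition idea follows. The pulse shapes, measurement count, and tail gate are illustrative assumptions, not the hardware designs studied in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def pmt_pulse(t, slow_fraction):
    """Toy PMT pulse: fast plus slow exponential decay (illustrative shapes only)."""
    return (1.0 - slow_fraction) * np.exp(-t / 20.0) + slow_fraction * np.exp(-t / 150.0)

N = 512                                   # samples a conventional digitizer would acquire
t = np.arange(N)
gamma_pulse = pmt_pulse(t, 0.05)          # gamma-like: small slow component
neutron_pulse = pmt_pulse(t, 0.30)        # neutron-like: enhanced slow component

# "Smart digitizer": M << N linear measurements y = Phi @ x per pulse.
M = 16
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

# Random projections approximately preserve distances between pulse shapes, which is
# the property that lets a classifier separate the two classes from y alone.
print("||y_n - y_g|| =", round(np.linalg.norm(Phi @ (neutron_pulse - gamma_pulse)), 3),
      " ||x_n - x_g|| =", round(np.linalg.norm(neutron_pulse - gamma_pulse), 3))

# The classic charge-comparison PSD feature is a pair of linear functionals of the
# pulse, so a smart digitizer could also compute it directly with two dedicated rows.
tail, total = (t > 60).astype(float), np.ones(N)
for name, x in [("gamma", gamma_pulse), ("neutron", neutron_pulse)]:
    print(name, "tail/total =", round((tail @ x) / (total @ x), 3))
```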
We present the Quantum Computer Aided Design (QCAD) simulator that targets the modeling of quantum devices, particularly silicon double quantum dots (DQDs) developed for qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrodinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with the optimization engine Dakota, which enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of incomplete dopant ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrodinger-Poisson solver has achieved robust and monotonic convergence for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. Computed capacitances are found to be in reasonable agreement with experiment, and quantum confinement increases capacitance when the number of electrons in a quantum dot is fixed. In addition, the coupling of QCAD with Dakota allows rapid identification of which device layouts are most likely to lead to few-electron quantum dots. Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design comparison and optimization.
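The self-consistent iteration can be illustrated with a toy one-dimensional problem. The simple under-relaxed mixing below is a stand-in for the predictor-corrector scheme used in QCAD, and the units, boundary conditions, and parameters are arbitrary.

```python
import numpy as np

# Minimal 1D Schrodinger-Poisson iteration with simple potential mixing
# (illustrative sketch only; hard-wall boundaries, arbitrary units).
N, L = 200, 1.0
x = np.linspace(0.0, L, N)
h = x[1] - x[0]
n_electrons = 1.0                         # fixed number of electrons in the well

def solve_schrodinger(V):
    """Lowest eigenpair of -1/2 d^2/dx^2 + V on a finite-difference grid."""
    H = (np.diag(1.0 / h**2 + V)
         + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))
    E, psi = np.linalg.eigh(H)
    phi0 = psi[:, 0]
    phi0 /= np.sqrt(np.sum(phi0**2) * h)  # normalize the ground state
    return E[0], phi0

def solve_poisson(rho):
    """Solve d^2 V / dx^2 = -rho with V = 0 at both ends (finite differences)."""
    A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    V = np.zeros(N)
    V[1:-1] = np.linalg.solve(A[1:-1, 1:-1], -rho[1:-1])
    return V

V = np.zeros(N)                           # initial guess: empty well
for it in range(100):
    E0, phi = solve_schrodinger(V)
    rho = n_electrons * phi**2            # electron density from the ground state
    V_new = solve_poisson(rho)            # Hartree-like repulsive potential
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = 0.7 * V + 0.3 * V_new             # under-relaxed mixing for robust convergence
print("converged after", it + 1, "iterations, E0 =", E0)
```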
It is well known that the spectrum of a signal can be calculated with a Discrete Fourier Transform (DFT), with the best resolution achieved by processing the entire data set. However, in some situations it is advantageous to use a staged approach, in which data are first processed within subapertures and the results are then combined and further processed to a final result. An artifact of this approach is the creation of grating lobes in the final response. The nature of the grating lobes, including their amplitude and spacing, depends on the window taper functions, subaperture offsets, and subaperture processing parameters. We assess these factors and exemplify their effects.
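A small numerical sketch of the staged processing and the resulting grating lobes follows. The window, subaperture length, offset, and reference frequency are illustrative choices of ours, not parameters from the report.

```python
import numpy as np

# Two-stage (subaperture) processing of a tone: stage 1 forms a windowed sum per
# subaperture; stage 2 coherently combines the subaperture outputs matched to a
# reference frequency f0. Sweeping the true tone frequency exposes grating lobes
# at f0 + k/D (D = subaperture offset), shaped by the stage-1 window response.
N, P, D = 1024, 32, 32
f0 = 0.013                                     # reference frequency, cycles/sample
n = np.arange(N)
w1 = np.hanning(P)                             # stage-1 taper
starts = np.arange(0, N - P + 1, D)

def two_stage_response(f):
    x = np.exp(2j * np.pi * f * n)
    sub = np.array([np.sum(w1 * x[s:s + P]) for s in starts])    # stage 1
    steer = np.exp(-2j * np.pi * f0 * D * np.arange(len(sub)))   # stage 2, matched to f0
    return abs(np.sum(sub * steer))

freqs = np.linspace(f0 - 2.5 / D, f0 + 2.5 / D, 2001)
resp = np.array([two_stage_response(f) for f in freqs])
resp /= resp.max()

# Peaks (grating lobes) occur near f0, f0 +/- 1/D, f0 +/- 2/D, ...
for k in range(-2, 3):
    fk = f0 + k / D
    print(f"response at f0 + {k}/D = {fk:+.4f}: {resp[np.argmin(abs(freqs - fk))]:.3f}")
```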
The "location" of the radar is the reference location to which the radar measures range. This is typically the antenna's "phase center". However, the antenna's phase center is not generally obvious, and may not correspond to any seemingly obvious physical location, such as the focal point of a dish reflector. This report calculates the phase center of an offset-fed dish reflector antenna.
Capabilities are developed, verified, and validated to generate constitutive responses using material and geometric measurements with representative volume elements (RVEs). The geometrically accurate RVEs are used for determining elastic properties and for damage initiation and propagation analysis. Finite element modeling of the meso-structure over the distribution of characterizing measurements is automated, and various boundary conditions are applied. Plain and harness weave composites are investigated. Continuum yarn damage, softening behavior, and an elastic-plastic matrix are combined with known materials and geometries in order to estimate the macroscopic response as characterized by a set of orthotropic material parameters. Damage mechanics and coupling effects are investigated, and macroscopic material models are demonstrated and discussed. Prediction of the elastic, damage, and failure behavior of woven composites will aid in macroscopic constitutive characterization for modeling and optimizing advanced composite systems.
The wireless communications channel is innately insecure due to the broadcast nature of the electromagnetic medium. Many techniques have been developed and implemented in order to combat insecurities and ensure the privacy of transmitted messages. Traditional methods include encrypting the data via cryptographic methods, hiding the data in the noise floor as in wideband communications, or nulling the signal in the spatial direction of the adversary using array processing techniques. This work analyzes the design of signaling constellations, i.e., modulation formats, to prevent eavesdroppers from correctly decoding transmitted messages. It has been shown that in certain channel models the ability of an adversary to decode the transmitted messages can be degraded by a clever signaling constellation based on lattice theory. This work attempts to optimize certain lattice parameters in order to maximize the security of the data transmission. These techniques are of interest because they are orthogonal to, and can be used in conjunction with, traditional security techniques to create a more secure communication channel.
Digital in-line holography is an optical technique that can be applied to measure the size, three-dimensional position, and three-component velocity of disperse particle fields. This work summarizes recent developments at Sandia National Laboratories focused on improvement in measurement accuracy, experimental validation, and applications to multiphase flows. New routines are presented which reduce the uncertainty in measured position along the optical axis to a fraction of the particle diameter. Furthermore, application to liquid atomization highlights the ability to measure complex, three-dimensional structures. Finally, investigation of particles traveling at near-sonic conditions demonstrates accuracy despite significant experimental noise due to shock waves.
This report describes an FY13 effort to develop the latest version of the Sandia Cooler, a breakthrough technology for air-cooled heat exchangers that was developed at Sandia National Laboratories. The project was focused on fabrication, assembly, and demonstration of ten prototype systems for the cooling of high power density electronics, specifically the central processing units (CPUs) of high-performance desktop computers. In addition, computational simulation and experimentation were carried out to fully understand the performance characteristics of each of the key design aspects. This work culminated in a parameter and scaling study that now provides a design framework, including a number of design and analysis tools, for Sandia Cooler development for applications beyond CPU cooling.
In response to the challenges related to the increasing size and complexity of systems, organizations have recognized the need to integrate human considerations into the beginning stages of systems development. Human Systems Integration (HSI) seeks to accomplish this objective by incorporating human factors within systems engineering (SE) processes and methodologies, which is the focus of this paper. A representative set of HSI methods from multiple sources is organized, analyzed, and mapped to the systems engineering Vee-model. These methods are then consolidated and evaluated against the SE process and Model-Based Systems Engineering (MBSE) methodology to determine where and how they could be integrated within systems development activities in the form of specific enhancements. Overall conclusions based on these evaluations are presented and future research areas are proposed.
Power and Energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but to make the best use of those solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide the HPC community with a common understanding of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.
People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effects. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and are only secondarily, if at all, concerned with issues such as speed, memory use, or the ability to be incrementally updated. Thus, when new data arrive, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.
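A minimal sketch of the batch-versus-incremental contrast follows. The running-statistics estimator shown (Welford's update) is illustrative only and is not a model developed in this project.

```python
# A batch estimator recomputes from all stored data on every new observation,
# while an incremental estimator folds each observation in with an O(1) update.
class IncrementalStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's online update: constant work per new observation.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stream = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
inc, seen = IncrementalStats(), []
for x in stream:
    seen.append(x)
    inc.update(x)                               # incremental: O(1) per observation
    batch_mean = sum(seen) / len(seen)          # batch: touches all stored data each time
print("incremental mean:", inc.mean, " batch mean:", batch_mean, " variance:", inc.variance)
```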
This document describes the form and use of three supplemental capabilities added to Goma during 1998: augmenting conditions, automatic continuation, and linear stability analysis. Augmenting conditions allow the addition of constraints and auxiliary conditions that describe the relationship between unknowns, boundary conditions, material properties, and post-processing extracted quantities. Automatic continuation refers to a family of algorithms (zeroth and first order here, single and multi-parameter) that allow tracking steady-state solution paths as material parameters or boundary conditions are varied. The stability analysis capability in Goma uses the method of small disturbances and superposition of normal modes to test the stability of a steady-state flow, i.e., it determines whether the disturbance grows or decays in time.
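The continuation idea can be illustrated on a scalar residual. The example equation and step size below are ours; Goma applies the same zeroth- and first-order predictors with a Newton corrector to large finite-element systems.

```python
# Zeroth- and first-order natural continuation for a scalar residual R(u, p) = 0
# as the parameter p is stepped (illustrative only).
def R(u, p):          # example residual with a smooth solution branch
    return u**3 + u - p

def dR_du(u, p):
    return 3.0 * u**2 + 1.0

def dR_dp(u, p):
    return -1.0

def newton(u0, p, tol=1e-12, maxit=20):
    u = u0
    for _ in range(maxit):
        du = -R(u, p) / dR_du(u, p)
        u += du
        if abs(du) < tol:
            break
    return u

u, p, dp = 0.0, 0.0, 0.25
for _ in range(8):
    p_new = p + dp
    u_zeroth = u                                       # zeroth order: reuse the old solution
    u_first = u - dR_dp(u, p) / dR_du(u, p) * dp       # first order: tangent predictor du/dp * dp
    u = newton(u_first, p_new)                         # Newton corrector at the new parameter
    p = p_new
    print(f"p = {p:4.2f}  predictors = {u_zeroth: .4f} / {u_first: .4f}  converged u = {u: .4f}")
```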
The primary metric for the viability of next-generation nuclear power plants will be the cost of generated electricity. One important component in achieving this objective is the development of power conversion technologies that maximize the electrical power output of these advanced reactors for a given thermal power. More efficient power conversion systems can directly reduce the cost of nuclear-generated electricity, and advanced power conversion cycle research is therefore an important area of investigation for the Generation IV Program. Brayton cycles using inert or other gas working fluids have the potential to take advantage of the higher outlet temperature range of Generation IV systems, allowing substantial increases in nuclear power conversion efficiency and potentially reductions in power conversion system capital costs compared to the steam Rankine cycle used in current light water reactors. For the Very High Temperature Reactor (VHTR), helium Brayton cycles that can operate in the 900 to 950 °C range have been the focus of power conversion research. Previous Generation IV studies examined several options for He Brayton cycles that could increase efficiency with acceptable capital cost implications. At these high outlet temperatures, interstage heating and cooling (IHC) was shown to provide significant efficiency improvement (a few percent up to 12%) but required increased system complexity and therefore had the potential for increased costs. These scoping studies identified the potential for increased efficiency, but a more detailed analysis of turbomachinery and heat exchanger sizes and costs was needed to determine whether this approach could be cost effective. The purpose of this study is to examine the turbomachinery and heat exchanger implications of interstage heating and cooling configurations. In general, this analysis illustrates that these engineering considerations introduce new constraints to the design of IHC systems and may require different power conversion configurations to take advantage of the possible efficiency improvement. Very high efficiency gains can be achieved with the IHC approach, but doing so can require large low-pressure turbomachinery or heat exchanger components, whose cost may offset the efficiency gain. One stage of interstage cooling is almost always cost effective, but careful optimization of system characteristics is needed for more complex configurations. This report summarizes the primary factors that must be considered in evaluating this approach to more efficient cycles, and the results of the engineering analysis performed to explore these options for Generation IV high temperature reactors.
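A back-of-envelope illustration of why a single stage of intercooling reduces compressor work is given below; the working fluid properties and pressure ratio are generic assumptions, not values from the report's engineering analysis.

```python
# Ideal-gas, isentropic compression with and without one stage of perfect
# intercooling back to the inlet temperature (illustrative numbers only).
cp, gamma = 5193.0, 5.0 / 3.0        # helium, J/(kg K) and specific-heat ratio
T_in, rp = 300.0, 3.0                # inlet temperature (K), overall pressure ratio
k = (gamma - 1.0) / gamma

w_single = cp * T_in * (rp**k - 1.0)                   # one adiabatic compression
w_two = 2.0 * cp * T_in * ((rp**0.5)**k - 1.0)         # two equal stages, intercooled

print(f"single-stage work : {w_single / 1e3:.1f} kJ/kg")
print(f"intercooled work  : {w_two / 1e3:.1f} kJ/kg  "
      f"({100.0 * (1.0 - w_two / w_single):.1f}% less)")
```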
In lithium ion batteries, Li+ intercalation into electrodes is induced by applied voltages, which are in turn associated with free energy changes of Li+ transfer (ΔGt) between the solid and liquid phases. Using ab initio molecular dynamics (AIMD) and thermodynamic integration techniques, we compute ΔGt for the virtual transfer of a Li+ from a LiC6 anode slab, with pristine basal planes exposed, to liquid ethylene carbonate confined in a nanogap. The onset of delithiation, at ΔGt = 0, is found to occur on LiC6 anodes with negatively charged basal surfaces. These negative surface charges are evidently needed to retain Li+ inside the electrode and should affect passivation (“SEI”) film formation processes. Fast electrolyte decomposition is observed at even larger electron surface densities. By assigning the experimentally known voltage (0.1 V vs Li+/Li metal) to the predicted delithiation onset, an absolute potential scale is obtained. This enables voltage calibrations in simulation cells used in AIMD studies and paves the way for future prediction of voltage dependences in interfacial processes in batteries.
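The thermodynamic-integration relation and the onset-based voltage anchor described above can be summarized as follows (notation ours).

```latex
% Coupling parameter lambda switches the Li+ between the LiC6 slab and the liquid
% electrolyte; the delithiation onset anchors the absolute voltage scale.
\[
  \Delta G_{t} \;=\; \int_{0}^{1}
    \Big\langle \frac{\partial H(\lambda)}{\partial \lambda} \Big\rangle_{\lambda}\, d\lambda ,
  \qquad
  \Delta G_{t} = 0 \;\;\Longleftrightarrow\;\; V \approx 0.1\ \mathrm{V\ vs.\ Li^{+}/Li}.
\]
```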
22nd Annual Conference on Behavior Representation in Modeling and Simulation, BRiMS 2013 - Co-located with the International Conference on Cognitive Modeling
Modeling agent behaviors in complex task environments requires the agent to be sensitive to complex stimuli such as the positions and actions of varying numbers of other entities. Entity state updates may be received asynchronously rather than on a coordinated clock signal, so the world state must be estimated based on the most recent information available for each entity. The simulation environment is likely to be distributed across several computers over a network. This paper presents the Relational Blackboard (RBB), a framework developed to address these needs with clarity and efficiency. The purpose of this paper is to explain the concepts used to represent and process spatio-temporal data in the RBB framework so that researchers in related areas can apply the concepts and software to their own problems of interest; a detailed description of our own research can be found in other papers. The software is freely available under the BSD open-source license at http://rbb.sandia.gov.
The Whipple bumper is a space shield designed to protect a space station from the most hazardous orbital space debris environment. A series of numerical simulations has been performed using the multi-dimensional hydrodynamics code CTH to estimate the effectiveness of the thin Whipple bumper design. These simulations were performed for impact velocities of ~10 km/s, which are now accessible by experiments using the Sandia hypervelocity launcher facility. For a ~10 km/s impact by a 0.7 g aluminum flier plate, the experimental results indicate that the debris cloud resulting from impact of the bumper shield by the flier plate completely penetrates the sub-structure. The CTH simulations also predict complete penetration by the subsequent debris cloud.
A rezone stencil for ALE shock calculations has been developed based on a stabilized variant of the serendipity element. This rezone stencil is compared to the Winslow rezone stencil. Unlike the Winslow stencil, which equalizes element volumes as well as node angles, the serendipity stencil equalizes node angles only. This may be advantageous for calculations involving strong density gradients such as those associated with shock compression.
Hydrocode simulations constitute an important tool at Sandia National Laboratories and elsewhere for analyzing complex two- and three-dimensional systems. However, current vector supercomputers do not provide a growth path to enable fast, routine, and cost-effective simulations of large problems. Future massively parallel computers will provide a solution. Sandia has already developed simplified versions of the production hydrocode CTH for the Connection Machine and nCUBE massively parallel supercomputers. The parallel versions solve problems in two-dimensional, multi-fluid, shock-wave physics. Code development strategy, coding methodology, visualization techniques, and performance results for this work are described.
Computational and analytical investigations have been performed for laser-driven flyer plate experiments and compared with data. These investigations have resulted in a more fundamental understanding of the physical processes involved in laser absorption and in the acceleration of the flyers by the expansion of the laser-produced plasma. The analytical work describes the dependences of the final foil velocity, while the computational simulations permit a detailed look at energy coupling and foil acceleration issues.
Because of complications associated with temperature heterogeneities in shocked metal powders, time-resolved radiation pyrometer measurements of shock temperatures in powders with particle sizes greater than a few tens of microns cannot be made under normal laboratory conditions with uniaxial loading durations limited to about one microsecond. Fortunately, for highly porous, reactive powders, the difference between shock and postshock temperature is negligible. For loading conditions similar to those that have yielded reaction products in recovery experiments, there is no evidence of any chemical reaction in a coarse (-325 mesh) nickel/aluminum powder mixture within the first 6 μs of shock arrival, based on constraints on postshock temperatures provided by thermal radiation measurements. This result is in contrast to that for a micron-sized nickel/aluminum mixture, for which there is evidence of significant reaction on a time scale of 100 ns under similar shock loading conditions.
In this review we discuss recent research on driving the self-assembly of magnetic particle suspensions subjected to alternating magnetic fields. The variety of structures and effects that can be induced in such systems is remarkably broad due to the large number of variables involved. The alternating field can be uniaxial, biaxial, or triaxial; the particles can be spherical or anisometric; and the suspension can be dispersed throughout a volume or confined to a soft interface. In the simplest case the field drives the static or quasi-static assembly of unusual particle structures, such as sheets, networks, and open-cell foams. More complex, emergent collective behaviors evolve in systems that can follow the time-dependent field vector. In these cases energy is continuously injected into the system, and striking flow patterns and structures can arise. In fluid volumes these include the formation of advection and vortex lattices. At air-liquid and liquid-liquid interfaces striking dynamic particle assemblies emerge due to the particle-mediated coupling of the applied field to surface excitations. These out-of-equilibrium interface assemblies exhibit a number of remarkable phenomena, including self-propulsion and surface mixing. In addition to discussing various methods of driven self-assembly in magnetic suspensions, some of the remarkable properties of these novel materials are described.
This report summarizes the first year's effort on the Enceladus project, under which Sandia was asked to evaluate the potential advantages of adiabatic quantum computing for analyzing large data sets in the near future, 5 to 10 years from now. We were not specifically evaluating the machine being sold by D-Wave Systems, Inc.; we were asked to anticipate what future adiabatic quantum computers might be able to achieve. While the greatest potential anticipated from quantum computation is still far in the future, a special-purpose quantum computing capability, Adiabatic Quantum Optimization (AQO), is under active development and is maturing relatively rapidly; indeed, D-Wave Systems Inc. already offers an AQO device based on superconducting flux qubits. The AQO architecture solves a particular class of problem, namely unconstrained quadratic Boolean optimization, and this class includes many interesting and important instances. Because of this, further investigation is warranted into the applicability of this problem class to the challenges of analyzing big data sets and into the effectiveness of AQO devices for performing specific analyses on big data. It is also of interest to consider the potential effectiveness of anticipated special-purpose adiabatic quantum computers (AQCs), in general, for accelerating the analysis of big data sets. The objective of the present investigation is an evaluation of the potential of AQC to benefit the analysis of big data problems in the next five to ten years, with our main focus on AQO because of its relative maturity. We are not specifically assessing the efficacy of the D-Wave computing systems, though we do hope to perform some experimental calculations on that device in the sequel to this project, at least to provide some data to compare with our theoretical estimates.
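For concreteness, the problem class targeted by AQO hardware, unconstrained quadratic Boolean (QUBO) optimization, is stated below with a toy brute-force solution; the coefficient matrix is an arbitrary example, and exhaustive search is shown only to make the objective concrete.

```python
import itertools
import numpy as np

# QUBO: minimize x^T Q x over binary vectors x in {0, 1}^n.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])      # small example coefficient matrix

best_x, best_val = None, np.inf
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits, dtype=float)
    val = x @ Q @ x
    if val < best_val:
        best_x, best_val = bits, val

print("optimal assignment:", best_x, "objective:", best_val)
```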
The subject of this work is the development of models for the numerical simulation of matter, momentum, and energy balance in heterogeneous materials. These are materials that consist of multiple phases or species or that are structured on some (perhaps many) scale(s). By computational mechanics we refer generally to the standard type of modeling that is done at the level of macroscopic balance laws (mass, momentum, energy). We refer to the flow or flux of these quantities in a generalized sense as transport. At issue here are the forms of the governing equations in these complex materials, which are potentially strongly inhomogeneous below some correlation length scale yet homogeneous on larger length scales. The question then becomes how to model this behavior and what the proper multi-scale equations are to capture the transport mechanisms across scales. To address this we look to the generalized stochastic processes that underlie the transport processes in homogeneous materials, the archetypal example being the relationship between random walk or Brownian motion stochastic processes and the associated Fokker-Planck or diffusion equation. Here we are interested in how this classical setting changes when inhomogeneities or correlations in structure are introduced into the problem. Aspects of non-classical behavior need to be addressed, such as non-Fickian behavior of the mean-squared displacement (MSD) and non-Gaussian behavior of the underlying probability distribution of jumps. We present an experimental technique and apparatus built to investigate some of these issues. We also discuss diffusive processes in inhomogeneous systems, including the role of the chemical potential in the diffusion of hard spheres and the relevance to liquid metal solutions. Finally, we present an example of how inhomogeneities in material microstructure introduce fluctuations at the mesoscale for a thermal conduction problem. These fluctuations due to random microstructures also provide a means of characterizing the aleatory uncertainty in material properties at the mesoscale.
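For reference, the classical setting against which the non-Fickian and non-Gaussian behaviors above are measured can be summarized as follows (symbols ours).

```latex
% A Brownian random walk has a Gaussian propagator governed by the diffusion
% (Fokker-Planck) equation and a linearly growing mean-squared displacement;
% non-Fickian transport in heterogeneous media departs from the exponent alpha = 1.
\[
  \frac{\partial P(x,t)}{\partial t} \;=\; D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}},
  \qquad
  \langle \Delta x^{2}(t) \rangle \;=\; 2Dt ,
  \qquad
  \text{non-Fickian: } \langle \Delta x^{2}(t) \rangle \;\propto\; t^{\alpha},\; \alpha \neq 1 .
\]
```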