Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.
Concern is growing in the High-Performance Computing community regarding the reliability of proposed exascale systems. Current research has shown that the expected reliability of these machines will greatly reduce their scalability. In contrast to current fault tolerance methods, whose reliability focus is only the application, this project investigates the benefits of integrating reliability mechanisms into the operating system and runtime as well as the application. More specifically, this project makes three broad contributions to the field: First, using failure logs from current leadership-class high-performance computing systems, we outline the failures common on these large-scale systems. Second, we describe a novel memory protection mechanism capable of protecting against commonly observed failures that exploits the similarity inherent in much OS and application state, thereby reducing overheads. Finally, using an analogy with OS jitter, we develop a highly efficient simulator capable of predicting the performance of resilience methods at the scales expected for future extreme-scale systems.
Silver-containing mordenite (MOR) is a longstanding benchmark for radioiodine capture, reacting with molecular iodine (I2) to form AgI. However, the mechanisms of organoiodine capture are not well understood. Here we investigate the capture of methyl iodide from complex mixed gas streams by combining chemical analysis of the effluent gas stream with in-depth characterization of the recovered sorbent.
We are pursuing an understanding of the durability and materials processability of the low-temperature-sintering Bi-Si oxide Glass Composite Material (GCM) waste form for iodine capture materials. The chemical and physical controls over iodine release from candidate 129I waste forms must be quantified to predict long-term waste form effectiveness.
As computer systems grow in both size and complexity, the need for applications and run-time systems to adjust to their dynamic environment also grows. The goal of the RAAMP LDRD was to combine static architecture information and real-time system state with algorithms to conserve power, reduce communication costs, and avoid network contention. We developed new data collection and aggregation tools to extract static hardware information (e.g., node/core hierarchy, network routing) as well as real-time performance data (e.g., CPU utilization, power consumption, memory bandwidth saturation, percentage of used bandwidth, number of network stalls). We created application interfaces that allowed this data to be used easily by algorithms. Finally, we demonstrated the benefit of integrating system and application information for two use cases. The first used real-time power consumption and memory bandwidth saturation data to throttle concurrency to save power without increasing application execution time. The second used static or real-time network traffic information to reduce or avoid network congestion by remapping MPI tasks to allocated processors. Results from our work are summarized in this report; more details are available in our publications [2, 6, 14, 16, 22, 29, 38, 44, 51, 54].
This report summarizes the results generated in FY13 for cable insulation in support of the Department of Energy's Light Water Reactor Sustainability (LWRS) Program, in collaboration with the US-Argentine Binational Energy Working Group (BEWG). A silicone (SiR) cable, which was stored in benign conditions for ~30 years, was obtained from Comision Nacional de Energia Atomica (CNEA) in Argentina with the approval of NA-SA (Nucleoelectrica Argentina Sociedad Anonima). Physical property testing was performed on the as-received cable. This cable was then artificially aged to assess its behavior with additional analysis. SNL observed appreciable tensile elongation values for all cable insulations received, indicative of good mechanical performance. Of particular note, the work presented here provides correlations between measured tensile elongation and other physical properties that may potentially be leveraged as a form of condition monitoring (CM) for actual service cables. It is recognized at this point that the polymer aging community still lacks the number and types of field-returned materials that are desired, but Sandia National Laboratories (SNL) -- along with the help of others -- is continuing to work toward that goal. This work is an initial study that should be complemented with location-mapping of environmental conditions (dose and temperature) within the Argentinean plant as well as retrieval, analysis, and comparison with in-service cables.
An experiment platform has been designed to study vacuum power flow in magnetically insulated transmission lines (MITLs). The platform was driven by the 400-GW Mykonos-V accelerator. The experiments conducted quantify the current loss in a millimeter-gap MITL with respect to vacuum conditions in the MITL for two different gap distances, 1.0 and 1.3 mm. The current loss for each gap was measured for three different vacuum pump-down times. As a ride-along experiment, multiple shots were conducted with each set of hardware to determine whether there was a conditioning effect that increased current delivery on subsequent shots. The experimental results revealed large differences in performance for the 1.0 and 1.3 mm gaps. The 1.0 mm gap resulted in current loss of 40%-60% of peak current. The 1.3 mm gap resulted in current losses of less than 5% of peak current. Classical MITL models that neglect plasma expansion predict that there should be zero current loss, after magnetic insulation is established, for both of these gaps. The experimental results indicate that the vacuum pressure or pump-down time did not have a significant effect on the measured current loss at vacuum pressures between 1e-4 and 1e-5 Torr. Additionally, there was no repeatable evidence of a conditioning effect that reduced current loss on subsequent full-energy shots on a given set of hardware. It should be noted that the experiments conducted likely did not have large loss contributions due to ion emission from the anode, because the relatively small current densities (25-40 kA/cm) in the MITL limited the anode temperature rise due to ohmic heating. The results and conclusions from these experiments may therefore have limited applicability to high-current-density MITLs (>400 kA/cm), such as the convolute and load region of the Z machine, which experience temperature increases of >400 °C and generate ion emission from anode surfaces.
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
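As a minimal illustration of the importance-sampling idea that underpins the reliability analysis above, the sketch below estimates a rare failure probability by sampling from a density shifted toward the failure region; the limit-state function, shift, and sample counts are hypothetical and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Hypothetical limit state: failure when capacity (4.0) minus demand x is negative.
    return 4.0 - x

n = 100_000

# Crude Monte Carlo: sample the demand from its nominal standard-normal density.
x_mc = rng.standard_normal(n)
p_mc = np.mean(limit_state(x_mc) < 0.0)

# Importance sampling: sample from a density shifted toward the failure region
# and reweight by the ratio of the nominal to the sampling density.
mu_is = 4.0
x_is = rng.normal(mu_is, 1.0, n)
weights = np.exp(-0.5 * x_is**2) / np.exp(-0.5 * (x_is - mu_is)**2)
p_is = np.mean((limit_state(x_is) < 0.0) * weights)

print(f"crude MC: {p_mc:.2e}   importance sampling: {p_is:.2e}")
```

With the same number of samples, the shifted density places most samples near failure, so the weighted estimate resolves the small probability far more reliably than crude sampling.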
As part of the Light Water Reactor Sustainability Program, science-based engineering approaches were employed to address cable degradation behavior under a range of exposure environments. Experiments were conducted with the goal of providing the best guidance on aged material states, remaining life, and expected performance under specific conditions for a range of cable materials. Generic engineering tests, which focus on rapid accelerated aging and tensile elongation, were combined with complementary methods from polymer degradation science. Sandia's approach, building on previous years' efforts, enabled the generation of some of the necessary data supporting the development of improved lifetime prediction models, which incorporate known material behaviors and feedback from field-returned 'aged' cable materials. Oxidation rate measurements have provided access to material behavior under low dose rate thermal conditions, where slow degradation is not apparent in mechanical property changes. Such data have shown aging kinetics consistent with established radiation-thermal degradation models.
This project focused on the use of a sorbent, carbonated apatite, to immobilize selenium in the environment. It is known that apatite will sorb selenium, and based on the mechanism of sorption it is theorized that carbonated apatite will be more effective than pure apatite. Immobilization of selenium in the environment is achieved through the use of a sorbent in a permeable reactive barrier (PRB). A PRB can be constructed by trenching and backfilling with the sorbent or, in the case of apatite as the sorbent, by forming the apatite in situ using the apatite-forming solution of Moore (2003, 2004). There is very little data on selenium sorption by carbonated apatite in the literature. Therefore, in this work, the basic sorptive properties of carbonated apatite were investigated. Carbonated apatite was synthesized by a precipitation method and characterized. Batch selenium kinetic and equilibrium experiments were performed. The results indicate the carbonated apatite contained 9.4% carbonate and that uptake of selenium as selenite was rapid: 5 hours for complete uptake of selenium versus more than 100 hours for pure hydroxyapatite reported in the literature. Additionally, the carbonated apatite exhibited significantly higher distribution coefficients in equilibrium experiments than pure apatite under similar experimental conditions. The next phase of this work will be to seek additional funds to continue the research, with the goal of eventually demonstrating the technology in a field application.
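For reference, the distribution coefficients mentioned above follow the standard batch-sorption definition (a textbook relation, not a formula quoted from this report):

\[ K_d = \frac{C_0 - C_e}{C_e}\,\frac{V}{m}, \]

where \(C_0\) and \(C_e\) are the initial and equilibrium selenium concentrations in solution, \(V\) is the solution volume, and \(m\) is the mass of carbonated apatite.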
The present work addresses the need for solid-state, fast neutron discriminating scintillators that possess higher light yields and faster decay kinetics than existing organic scintillators. These respective attributes are of critical importance for improving the gamma-rejection capabilities and increasing the neutron discrimination performance under high-rate conditions. Two key applications that will benefit from these improvements include large-volume passive detection scenarios as well as active interrogation search for special nuclear materials. Molecular design principles were employed throughout this work, resulting in synthetically tailored materials that possess the targeted scintillation properties.
As interest in waterpower technologies has increased over the last few years, there has been a growing need for a public database of measured data for these devices. This would provide a basic understanding of the technology and means to validate analytic and numerical models. Through collaboration between Sandia National Laboratories, Penn State University Applied Research Laboratory, and University of California, Davis, a new marine hydrokinetic turbine rotor was designed, fabricated at 1:8.7-scale, and experimentally tested to provide an open platform and dataset for further study and development. The water tunnel test of this three-bladed, horizontal-axis rotor recorded power production, blade loading, near-wake characterization, cavitation effects, and noise generation. This report documents the small-scale model test in detail and provides a brief discussion of the rotor design and an initial look at the results with comparison against low-order modeling tools. Detailed geometry and experimental measurements are released to Sandia National Laboratories as a data report addendum.
Grain boundary complexions are distinct equilibrium structures and compositions of a grain boundary, and complexion transformations are transitions from a metastable to an equilibrium complexion under specific thermodynamic and geometric conditions. Previous work indicates that, in the case of doped alumina, a complexion transition that increased the mobility of transformed boundaries and resulted in abnormal grain growth also caused a decrease in the mean relative grain boundary energy as well as an increase in the anisotropy of the grain boundary character distribution (GBCD). The current work will investigate the hypothesis that the rates of complexion transitions that result in abnormal grain growth (AGG) depend on grain boundary character and energy. Furthermore, the current work expands upon this understanding and tests the hypothesis that it is possible to control when and where a complexion transition occurs by controlling the local grain boundary energy distribution.
Several radiation effects projects in the Ion Beam Lab (IBL) have recently required two disparate charged particle beams to simultaneously strike a single sample through a single port of the target chamber. Because these beams have vastly different mass–energy products (MEP), the low-MEP beam requires a large angle of deflection toward the sample by a bending electromagnet. A second electromagnet located farther upstream provides a means to compensate for the small-angle deflection experienced by the high-MEP beam during its path through the bending magnet. This paper derives the equations used to select the magnetic fields required by these two magnets to unite both beams at the target sample. A simple result was obtained when the separation of the two magnets was equivalent to the distance from the bending magnet to the sample: Bs = (1/2)(rc/rs)Bc, where Bs and Bc are the magnetic fields in the steering and bending magnets and rc/rs is the ratio of the radius of the bending magnet to that of the steering magnet. This result does not depend on the parameters of the high-MEP beam, i.e., energy, mass, or charge state. Therefore, once the field of the bending magnet is set for the low-MEP beam and the field in the steering magnet is set as indicated by the equation, the trajectory of any high-MEP beam will be directed into the sample.
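Written as a display equation, the stated relation for the special case in which the magnet separation equals the bending-magnet-to-sample distance is

\[ B_s = \frac{1}{2}\,\frac{r_c}{r_s}\,B_c, \]

where \(B_s\) and \(B_c\) are the steering- and bending-magnet fields and \(r_s\), \(r_c\) are the radii of the steering and bending magnets, respectively.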
This milestone was the second in a series of Tri-Lab Co-Design L2 milestones supporting ‘Co-Design’ efforts in the ASC program. It is a crucial step toward evaluating the effectiveness of proxy applications in exploring code performance on next-generation architectures. All three labs evaluated the performance of two proxy applications on modern architectures and/or testbeds for pre-production hardware. The results are captured in this document, along with annotated presentations from all three laboratories.
Experiments were conducted with a Backward Bent Duct Buoy (BBDB) oscillating water column wave energy conversion device at a scaling factor of 50 at HMRC at University College Cork, Ireland. Results were compared to numerical performance models. This work experimentally verified the migration of the natural resonance location of the water column due to hydrodynamic coupling for a floating non-axisymmetric device without a power conversion chain (PCC) present. In addition, the experimental results verified the performance model of the same non-axisymmetric device with a PCC present, both floating and grounded.
To reduce the price of the reference Backward Bent Duct Buoy (BBDB), a study was done analyzing the effects of reducing the mooring line length, and a new mooring design was developed. It was found that the overall length of the mooring lines could be reduced by 1290 meters, allowing a significant price reduction of the system. In this paper, we will first give a description of the model and the storm environment it will be subject to. We will then give a recommendation for the new mooring system, followed by a discussion of the severe weather simulation results, and an analysis of the conservative and aggressive aspects of the design.
The technical basis for salt disposal of nuclear waste resides in salt’s favorable physical, mechanical and hydrological characteristics. Undisturbed salt formations are impermeable. Upon mining, the salt formation experiences damage in the near-field rock proximal to the mined opening and salt permeability increases dramatically. The volume of rock that has been altered by such damage is called the disturbed rock zone (DRZ).
The safe transport of spent nuclear fuel and high-level radioactive waste is an important aspect of the waste management system of the United States. The Nuclear Regulatory Commission (NRC) currently certifies spent nuclear fuel rail cask designs based primarily on numerical modeling of hypothetical accident conditions augmented with some small scale testing. However, NRC initiated a Package Performance Study (PPS) in 2001 to examine the response of full-scale rail casks in extreme transportation accidents. The objectives of PPS were to demonstrate the safety of transportation casks and to provide high-fidelity data for validating the modeling. Although work on the PPS eventually stopped, the Blue Ribbon Commission on America’s Nuclear Future recommended in 2012 that the test plans be re-examined. This recommendation was in recognition of substantial public feedback calling for a full-scale severe accident test of a rail cask to verify evaluations by NRC, which find that risk from the transport of spent fuel in certified casks is extremely low. This report, which serves as the re-assessment, provides a summary of the history of the PPS planning, identifies the objectives and technical issues that drove the scope of the PPS, and presents a possible path for moving forward in planning to conduct a full-scale cask test. Because full-scale testing is expensive, the value of such testing on public perceptions and public acceptance is important. Consequently, the path forward starts with a public perception component followed by two additional components: accident simulation and first responder training. The proposed path forward presents a series of study options with several points where the package performance study could be redirected if warranted.
We applied Scanning Probe Microscopy and Density Functional Theory (DFT) to discover the basics of how adsorbates wet insulating substrates, addressing a key question in geochemistry. To allow experiments on insulating samples we added Atomic Force Microscopy (AFM) capability to our existing UHV Scanning Tunneling Microscope (STM). This was accomplished by integrating and debugging a commercial qPlus AFM upgrade. Examining up-to-40-nm-thick water films grown in vacuum we found that the exact nature of the growth spirals forming around dislocations determines what structure of ice, cubic or hexagonal, is formed at low temperature. DFT revealed that wetting of mica is controlled by how exactly a water layer wraps around (hydrates) the K+ ions that protrude from the mica surface. DFT also sheds light on the experimentally observed extreme sensitivity of the mica surface to preparation conditions: K atoms can easily be rinsed off by water flowing past the mica surface.
In 2012, Hurricane Sandy devastated much of the U.S. northeast coastal areas. Among those hardest hit was the small community of Hoboken, New Jersey, located on the banks of the Hudson River across from Manhattan. This report describes a city-wide electrical infrastructure design that uses microgrids and other infrastructure to ensure the city retains functionality should such an event occur in the future. The designs ensure that up to 55 critical buildings will retain power during blackout or flooded conditions and include analysis for microgrid architectures, performance parameters, system control, renewable energy integration, and financial opportunities (while grid connected). The results presented here are not binding and are subject to change based on input from the Hoboken stakeholders, the integrator selected to manage and implement the microgrid, or other subject matter experts during the detailed (final) phase of the design effort.
Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
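A schematic of the continuation-in-uncertainty strategy described above is sketched below; the stand-in nonlinear system, solver, and ramp schedule are illustrative assumptions, not the project's actual intrusive PC equations.

```python
import numpy as np
from scipy.optimize import fsolve

def galerkin_residual(u, sigma):
    # Stand-in nonlinear system whose coupling grows with the uncertainty level sigma;
    # this is NOT the project's intrusive PC system, only a placeholder.
    return u**3 + u - 1.0 - sigma * np.array([0.5, -0.2, 0.1]) * u[::-1]

u = np.full(3, 0.5)                         # start from the easy, low-uncertainty solution
for sigma in np.linspace(0.0, 1.0, 11):     # ramp the uncertainty up to the target level
    u = fsolve(galerkin_residual, u, args=(sigma,))  # warm-start from the previous solution
print("solution at target uncertainty level:", u)
```

The point of the ramp is that each solve starts from a nearby, already-converged state, so the strongly nonlinear target problem is reached through a sequence of well-behaved intermediate problems.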
TDI foams of nominal density from 10 to 45 pounds per cubic foot were decomposed within a heated stainless steel container. The pressure in the container and temperatures measured by thermocouples were recorded, with each test proceeding to an allowed maximum pressure before venting. Two replicate tests for each of four densities and two orientations in gravity produced very consistent pressure histories. Some thermal responses demonstrate random sudden temperature increases due to decomposition product movement. The pressurization of the container due to the generation of gaseous products is more rapid for denser foams. When heating in the inverted orientation, where gravity is in the opposite direction of the applied heat flux, the liquefied decomposition products move toward the heated plate and the pressure rises more rapidly than in the upright configuration. This effect is present at all the densities tested but becomes more pronounced as the density of the foam is decreased. A thermochemical material model, implemented in a transient conduction model solved with the finite element method, was compared to the test data. The expected uncertainty of the model was estimated using the mean value method, and importance factors for the uncertain parameters were estimated. The model that was assessed does not consider the effect of liquefaction or movement of gases. The result of the comparison is that the model uncertainty estimates do not account for the variation in orientation (no gravitational effects are in the model) and therefore the pressure predictions are not distinguishable by orientation. Temperature predictions were generally in good agreement with the experimental data. Predictions for response locations on the outside of the can benefit from reliable estimates associated with conduction in the metal. For the lighter foams, temperatures measured on the embedded component fall well within the estimated uncertainty intervals, indicating that the energy transport rate through the decomposed region is accurately estimated. The denser foam tests were terminated at the maximum allowed pressure earlier, resulting in only small responses at the component. For all densities the following statements are valid: the temperature response of the embedded component in the container depends on the effective conductivity of the foam, which attempts to model energy transport through the decomposed foam, and on the stainless steel specific heat; the pressure response depends on the activation energy of the reactions, the density of the foam, and the foam specific heat and effective conductivity; and the temperature responses of other container locations depend heavily on the boundary conditions and the stainless steel conductivity and specific heat.
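For context, the mean value (first-order) uncertainty estimate and importance factors referred to above take the standard form (textbook expressions, not equations quoted from this report):

\[ \sigma_Y^2 \approx \sum_i \left(\frac{\partial Y}{\partial X_i}\bigg|_{\boldsymbol{\mu}}\right)^2 \sigma_{X_i}^2, \qquad I_i = \frac{\left(\partial Y/\partial X_i\right)^2 \sigma_{X_i}^2}{\sigma_Y^2}, \]

where \(Y\) is a model response (e.g., pressure or a temperature), the \(X_i\) are the uncertain inputs with standard deviations \(\sigma_{X_i}\), and the sensitivities are evaluated at the mean input values \(\boldsymbol{\mu}\).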
Parylene C is used in a device because of its conformable deposition and other advantages. Techniques to study Parylene C aging were developed, and "lessons learned" that could be utilized for future studies are the result of this initial study. Differential Scanning Calorimetry yielded temperature ranges for Parylene C aging as well as post-deposition treatment. Post-deposition techniques are suggested to improve Parylene C performance. Sample preparation was critical to the aging regimen. Short-term (~40 days) aging experiments with free-standing and ceramic-supported Parylene C films highlighted "lessons learned" that stressed the need for further investigations to refine sample preparation (film thickness, single-sided uniform coating, machine versus laser cutting, annealing time, temperature) and testing issues ("necking") for robust accelerated aging of Parylene C.
Carrier recombination due to defects can have a major impact on device performance. The rate of defect-induced carrier recombination is determined by both defect levels and carrier capture cross-sections. Kohn-Sham density functional theory (DFT) has been widely and successfully used to predict defect levels in semiconductors and insulators, but only recently has work begun to focus on using DFT to determine carrier capture cross-sections. Lang and Henry worked out the fundamental theory of carrier-capture cross-sections in the 1970s and showed that, in most cases, room temperature carrier-capture cross-sections differ between defects primarily due to differences in the carrier capture activation energies. Here, we present an approach to using DFT to calculate carrier capture activation energies that does not depend on perturbation theory or an assumed configuration coordinate, and we demonstrate this approach for the -3/-2 level of the Ga vacancy in wurtzite GaN.
This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
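The weighted linear regression at the heart of the fit can be summarized with the short sketch below; the descriptor matrix, reference data, and weights are random stand-ins rather than actual bispectrum components or QM data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rows, n_bispectrum = 200, 30

# Stand-ins for the regression problem: rows correspond to energies/forces of training
# configurations, columns to bispectrum descriptors; values here are random placeholders.
A = rng.standard_normal((n_rows, n_bispectrum))
y = rng.standard_normal(n_rows)             # QM reference values (placeholder)
w = rng.uniform(0.5, 2.0, n_rows)           # per-row weights (e.g., energy vs. force rows)

# Weighted least squares: minimize || sqrt(W) (A beta - y) ||^2 for the SNAP-like coefficients.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
print("first few fitted coefficients:", beta[:5])
```

Because the problem is linear in the coefficients, the fit is a single direct solve, which is what makes the automated, large-data-set fitting described above practical.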
Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development in the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
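A minimal sketch of Latin hypercube propagation through a surrogate response is shown below; the stand-in model and parameter ranges are illustrative and are not the PMDI foam model's inputs.

```python
import numpy as np
from scipy.stats import qmc

def model(activation_energy, density, conductivity):
    # Placeholder response standing in for the foam pressurization model.
    return density * np.exp(-activation_energy) / conductivity

# Latin hypercube sample of three hypothetical input parameters over assumed ranges.
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=1000)
lower, upper = np.array([1.0, 160.0, 0.02]), np.array([2.0, 720.0, 0.06])
x = qmc.scale(unit, lower, upper)

response = model(x[:, 0], x[:, 1], x[:, 2])
print(f"propagated mean = {response.mean():.3g}, std = {response.std():.3g}")
```

Stratifying each input dimension in this way covers the parameter space far more evenly than random sampling for the same number of model evaluations, which matters when each evaluation is an expensive simulation.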
A wide spectrum of photonics activities at Sandia, such as solid-state lighting, photovoltaics, infrared imaging and sensing, and quantum sources, relies on nanoscale or ultra-subwavelength light-matter interactions (LMI). A fundamental understanding of how to confine electromagnetic power and enhance electric fields in ever smaller volumes is key to creating next-generation devices for these programs. The prevailing view is that a resonant interaction (e.g., in microcavities or surface-plasmon polaritons) is necessary to achieve the light confinement required for absorption or emission enhancement. Here we propose a new paradigm that is non-resonant, and therefore broadband, and can achieve light confinement and field enhancement in extremely small areas [~(λ/500)^2]. The proposal is based on theoretical work [1] performed at Sandia. The paradigm structure consists of a periodic arrangement of connected small and large rectangular slits etched into a metal film, named the double-groove (DG) structure. The degree of electric field enhancement and power confinement can be controlled by the geometry of the structure. The key operational principle is attributed to the quasistatic response of the metal electrons to the incoming electromagnetic field, which enables non-resonant broadband behavior. For this exploratory LDRD we have fabricated test double-groove structures to enable verification of the quasistatic electronic response in the mid-IR through IR optical spectroscopy. We have addressed some processing challenges in DG structure fabrication to enable future design of complex sensor and detector geometries that can utilize its non-resonant field enhancement capabilities.
The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identification that relies on full-field experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-field experimental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the implementation and evaluation of VFM primarily for finite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Correlation (DIC) optical technique that provides full-field displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitutive model parameter identification, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
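The identity that VFM exploits is the principle of virtual work, which in the quasi-static case with negligible body forces reads (standard form, not quoted from this report)

\[ -\int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^{*}\, \mathrm{d}V + \int_{\partial V} \mathbf{T}\cdot\mathbf{u}^{*}\, \mathrm{d}S = 0, \]

where \(\boldsymbol{\sigma}\) is the stress computed from the measured strains through the candidate constitutive model, \(\mathbf{T}\) is the applied traction, and \(\mathbf{u}^{*}\), \(\boldsymbol{\varepsilon}^{*}\) are a chosen virtual displacement field and its associated strain. Evaluating the identity for several independent virtual fields yields a system of equations in the unknown model parameters, which is why no iterative FEM solves are needed.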
We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and many-core architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
The work performed in this project has demonstrated the feasibility of using hydrodynamic focusing of two fluid streams to create a novel micro-printing technology for electronics and other high-performance applications. Initial efforts, focused solely on selective evaporation of the sheath fluid from the print stream, provided insight for developing a unique print head geometry that allows excess sheath fluid to be separated from the print flow stream for recycling/reuse. Fluid flow models suggest that more than 81 percent of the sheath fluid can be removed without affecting the print stream. Further development and optimization are required to demonstrate this capability in operation. Print results using two-fluid hydrodynamic focusing yielded a line 30 micrometers wide by 0.5 micrometers tall, which suggests that the cross-section of the printed feature leaving the print head was approximately 2 micrometers in diameter. Printing results also demonstrated that complete removal of the sheath fluid is not necessary for all material systems. The two-fluid printing technology could enable printing of insulated conductors and clad optical interconnects. Further development of this concept should be pursued.
The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS 304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.
From June 9 through June 13, 2014, members of the Federal Radiological Monitoring and Assessment Center (FRMAC), the Environmental Protection Agency (EPA), and the Department of Energy Radiological Assistance Program (DOE RAP) Region-3 participated in a joint nuclear incident emergency response exercise at the Savannah River Site (SRS) near Aiken, South Carolina. The purpose of this exercise was to strengthen the interoperability relationship between the FRMAC, RAP, and the EPA Mobile Environmental Radiation Laboratory (MERL) stationed in Montgomery, Alabama. The exercise was designed to allow members of the DOE RAP Region-3 team to collect soil, water, vegetation, and air samples from SRS and submit them through an established FRMAC hotline. Once received and processed through the hotline, FRMAC delivered the samples to the EPA MERL for sample preparation and laboratory radiological analysis. Upon completion of laboratory analysis, data were reviewed and submitted back to FRMAC via an electronic data deliverable (EDD). As part of the exercise, an evaluation was conducted to identify gaps and potential improvements in each step of the process. Additionally, noteworthy practices and potential future areas of interoperability between FRMAC and EPA were acknowledged. The exercise also provided a unique opportunity for FRMAC personnel to observe EPA sample receipt and sample preparation processes and to gain familiarity with the MERL laboratory instrumentation and radiation detection capabilities. The observations and lessons learned from this exercise will be critical for developing a more efficient, integrated response for future interactions between the FRMAC and EPA assets.
From June 24 through June 26, 2014, members of the Federal Radiological Monitoring and Assessment Center (FRMAC), the FRMAC Fly Away Laboratory, and the Environmental Protection Agency (EPA) participated in a joint nuclear incident emergency response/round robin exercise at the EPA facility in Las Vegas, Nevada. The purpose of this exercise was to strengthen the interoperability relationship between the FRMAC Fly Away Laboratory (FAL) and the EPA Mobile Environmental Radiation Laboratory (MERL) stationed in Las Vegas, Nevada. The exercise was designed to allow for immediate delivery of pre-staged, spiked samples to the EPA MERL and the FAL for sample preparation and radiological analysis. Upon completion of laboratory analysis, data were reviewed and submitted back to the FRMAC via an electronic data deliverable (EDD). In order to conduct a laboratory inter-comparison study, samples were then traded between the two laboratories and re-counted. As part of the exercise, an evaluation was conducted to identify gaps and potential areas for improvement in FRMAC, FAL, and EPA operations. Additionally, noteworthy practices and potential future interoperability opportunities between the FRMAC, FAL, and EPA were acknowledged. The exercise also provided a unique opportunity for FRMAC personnel to observe EPA sample receipt and sample preparation processes and to gain familiarity with the MERL laboratory instrumentation and radiation detection capabilities. The areas for potential improvement and interoperability identified in this exercise will be critical for developing a more efficient, integrated response for future interactions between the FRMAC and EPA MERL assets.
Here we investigated the microstructural response to implanted He of various physically vapor deposited Pd films and of Er and ErD2 samples prepared from neutron tube targets, via in situ ion irradiation transmission electron microscopy and subsequent in situ annealing experiments. Small bubbles formed in both systems during implantation but did not grow with increasing fluence or with short-duration room-temperature aging (weeks). Annealing produced large cavities with different densities in the two systems; the ErD2 showed increased cavity nucleation compared to Er. The spherical bubbles formed by high-fluence implantation and rapid annealing in both the Er and ErD2 cases differed from the microstructures of naturally aged tritiated samples. Further work is still underway to determine the transition in bubble shape in the Er samples, as well as the mechanism for the evolution in the Pd films.
Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.
Vigilance, or sustained attention, involves the ability to maintain focus and remain alert for prolonged periods of time. Problems associated with the ability to sustain attention were first identified in real-world combat situations during World War II, and they continue to abound and evolve as new and different types of situations requiring vigilance arise. This paper provides a review of the vigilance literature that describes the primary psychophysical, task, environmental, pharmacological, and individual factors that impact vigilance performance. The paper also describes how seminal findings from vigilance research apply specifically to the task of sentry duty. The strengths and weaknesses of a human sentry and options to integrate human and automated functions for vigilance tasks are discussed. Finally, techniques that may improve vigilance performance for sentry duty tasks are identified.
In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model, CMAQ, and a Lagrangian particle dispersion model, FLEXPART-WRF. These two models share the same WRF assimilated meteorology fields, making it possible to perform a hybrid simulation, in which the Eulerian model (CMAQ) can be used to compute the initial condition needed by the Lagrangian model, while the source-receptor relationships for a large state vector can be efficiently computed using the Lagrangian model in its backward mode. In addition, CMAQ has a complete treatment of atmospheric chemistry of a suite of traditional air pollutants, many of which could help attribute GHGs from different sources. The inference of emissions sources using atmospheric observations is cast as a Bayesian model calibration problem, which is solved using a variety of Bayesian techniques, such as the bias-enhanced Bayesian inference algorithm, which accounts for the intrinsic model deficiency, Polynomial Chaos Expansion to accelerate model evaluation and Markov Chain Monte Carlo sampling, and Karhunen-Loève (KL) Expansion to reduce the dimensionality of the state space. We have established an atmospheric measurement site in Livermore, CA and are collecting continuous measurements of CO2, CH4, and other species that are typically co-emitted with these GHGs. Measurements of co-emitted species can assist in attributing the GHGs to different emissions sectors. Automatic calibrations using traceable standards are performed routinely for the gas-phase measurements. We are also collecting standard meteorological data at the Livermore site as well as planetary boundary height measurements using a ceilometer.
The location of the measurement site is well suited to sample air transported between the San Francisco Bay area and the California Central Valley.
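A schematic of the Bayesian source-inference loop described above, with a cheap surrogate standing in for the PCE-accelerated transport model, is sketched below; the synthetic observations, noise level, and surrogate coefficients are invented for illustration and are not drawn from the project's data.

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(source):
    # Stand-in for a PCE surrogate of the transport model: predicted concentrations
    # at three receptors per unit source strength (coefficients are invented).
    return np.array([0.8, 1.2, 0.5]) * source

obs = surrogate(2.0) + rng.normal(0.0, 0.1, 3)       # synthetic observations

def log_post(source, sigma=0.1):
    if source <= 0.0:
        return -np.inf                               # positivity prior on the emission rate
    resid = obs - surrogate(source)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling of the posterior over the scalar source strength.
chain, current, lp = [], 1.0, log_post(1.0)
for _ in range(20_000):
    prop = current + rng.normal(0.0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    chain.append(current)
print("posterior mean source strength:", np.mean(chain[5_000:]))
```

Replacing the forward model inside the likelihood with a fast surrogate is what makes the many thousands of MCMC evaluations affordable, which is the role the Polynomial Chaos Expansion plays in the approach described above.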
Time dependent deformation in the form of creep and stress relaxation is not often considered a factor when designing structural alloy parts for use at room temperature. However, creep and stress relaxation do occur at room temperature (0.09-0.21 Tm for alloys in this report) in structural alloys. This report will summarize the available literature on room temperature creep, present creep data collected on various structural alloys, and finally compare the acquired data to equations used in the literature to model creep behavior. Based on evidence from the literature and fitting of various equations, the mechanism which causes room temperature creep is found to include dislocation generation as well as exhaustion.
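As an illustration of the kind of equation fitting described above, the sketch below fits a logarithmic creep law, one form commonly used for room-temperature creep, to synthetic strain-time data; the functional form chosen here and the data are illustrative assumptions, not the alloys or fits from this report.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_creep(t, eps0, A, t0):
    # Logarithmic creep law: strain = eps0 + A*ln(1 + t/t0); one common low-temperature form.
    return eps0 + A * np.log1p(t / t0)

# Synthetic strain-vs-time data (hours) with small measurement noise; illustrative only.
rng = np.random.default_rng(4)
t = np.linspace(1.0, 1.0e4, 200)
strain = log_creep(t, 2.0e-4, 5.0e-5, 50.0) + rng.normal(0.0, 2.0e-6, t.size)

params, _ = curve_fit(log_creep, t, strain, p0=(1.0e-4, 1.0e-5, 10.0))
print("fitted (eps0, A, t0):", params)
```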
III-nitride laser diodes (LDs) are an interesting light source for solid-state lighting (SSL). Modelling of LDs is performed to reveal the potential advantages over traditionally used light-emitting diodes (LEDs). The first, and most notable, advantage is LDs have higher efficiency at higher currents when compared to LEDs. This is because Auger recombination that causes efficiency droop can no longer grow after laser threshold. Second, the same phosphor-converted methods used with LEDs can also be used with LDs to produce white light with similar color rendering and color temperature. Third, producing white light from direct emitters is equally challenging for both LEDs and LDs, with neither source having a direct advantage. Lastly, the LD emission is directional and can be more readily captured and focused, leading to the possibility of novel and more compact luminaires. These advantages make LDs a compelling source for future SSL.
The potential for producing biofuels from algae has generated much excitement based on projections of large oil yields with relatively little land use. However, numerous technical challenges remain for achieving market parity with conventional non-renewable liquid fuel sources. Among these challenges, the energy intensive requirements of traditional cell rupture, lipid extraction, and residuals fractioning of microalgae biomass have posed significant challenges to the nascent field of algal biotechnology. Our novel approach to address these problems was to employ low cost solution-state methods and biochemical engineering to eliminate the need for extensive hardware and energy intensive methods for cell rupture, carbohydrate and protein solubilization and hydrolysis, and fuel product recovery using consolidated bioprocessing strategies. The outcome of the biochemical deconstruction and conversion process consists of an emulsion of algal lipids and mixed alcohol products from carbohydrate and protein fermentation for co-extraction or in situ transesterification.
Seismic attenuation is defined as the loss of seismic wave amplitude as the wave propagates, excluding losses strictly due to geometric spreading. Information gleaned from seismic waves can be utilized to solve for the attenuation properties of the earth. One method of solving for earth attenuation properties is called t*. This report starts by introducing the basic theory behind t* and then delves into inverse theory as it pertains to how the tstarTomog algorithm inverts t* observations for attenuation properties. This report also describes how to use the tstarTomog package to go from observed data to a 3-D model of attenuation structure in the earth.
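For orientation, t* is conventionally defined as the traveltime integral weighted by the inverse quality factor along the ray path, and it controls the high-frequency decay of the observed amplitude spectrum (textbook relations consistent with the usage here):

\[ t^{*} = \int_{\text{ray}} \frac{\mathrm{d}t}{Q(\mathbf{x})}, \qquad A(f) = A_0(f)\, e^{-\pi f t^{*}}, \]

where \(Q(\mathbf{x})\) is the quality factor of the medium, \(A_0(f)\) is the source (and geometric-spreading corrected) spectrum, and \(A(f)\) is the observed spectrum. Measuring \(t^{*}\) for many source-receiver pairs provides the data that the tomographic inversion maps into a 3-D \(Q\) structure.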
We have developed two neutron detector systems based on time-encoded imaging and demonstrated their applicability toward non-proliferation missions. The 1D-TEI system was designed for and evaluated against the ability to detect Special Nuclear Material (SNM) in very low signal-to-noise environments; in particular, very large stand-off and/or weak sources that may be shielded. We have demonstrated significant detection (>5 sigma) of a 2.8e5 n/s neutron fission source at 100 meters stand-off in 30 min. If scaled to an IAEA significant quantity of Pu, we estimate that this could be reduced to as little as ~5 minutes. In contrast to simple counting detectors, this was accomplished without the need for previous background measurements. The 2D-TEI system was designed for high-resolution spatial mapping of distributions of SNM and proved the feasibility of two-dimensional fast neutron imaging using the time-encoded modulation of rates on a single-pixel detector. Because of the simplicity of the TEI design, there is much lower systematic uncertainty in the detector response than is typical of coded apertures. Other imaging methods require either multiple interactions (e.g., neutron scatter cameras or Compton imagers), leading to intrinsically low efficiencies, or spatial modulation of the signal (e.g., the Neutron Coded Aperture Imager (Hausladen, 2012)), which requires a complicated, high channel count, and expensive position-sensitive detector. In contrast, a single detector using a time-modulated collimator can encode directional information in the time distribution of detected events. This is the first investigation of time-encoded imaging for nuclear nonproliferation applications.
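The single-detector decoding idea can be illustrated with the short sketch below, which cross-correlates a measured count-rate history against the expected modulation for each candidate source direction; the mask patterns, rates, and geometry are invented and much simpler than the actual 1D/2D-TEI systems.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bins, n_directions = 360, 360

# Invented open/closed mask pattern seen by the detector as a function of time bin,
# for each candidate source direction (1 = open, 0 = blocked).
mask_open = (rng.random((n_directions, n_bins)) > 0.5).astype(float)

true_dir = 120
rates = 5.0 + 20.0 * mask_open[true_dir]     # background plus modulated source counts/bin
counts = rng.poisson(rates)                  # measured counts per time bin

# Cross-correlate the measured history with each direction's expected modulation.
image = (mask_open - mask_open.mean(axis=1, keepdims=True)) @ (counts - counts.mean())
print("reconstructed direction:", int(np.argmax(image)), "| true direction:", true_dir)
```

The direction whose expected modulation best matches the measured time distribution dominates the correlation, which is how a single-pixel detector behind a moving collimator can recover an image.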
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows new types of analysis to be developed without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
The overall goal of the WATCHMAN project is to experimentally demonstrate the potential of water Cerenkov antineutrino detectors as a tool for remote monitoring of nuclear reactors. In particular, the project seeks to field a large prototype gadolinium-doped, water-based antineutrino detector to demonstrate sensitivity to a power reactor at ~10 kilometer standoff using a kiloton scale detector. The technology under development, when fully realized at large scale, could provide remote near-real-time information about reactor existence and operational status for small operating nuclear reactors.
A series of laboratory experiments were undertaken to demonstrate the feasibility of two dimensional time-encoded imaging. A prototype two-dimensional time encoded imaging system was designed and constructed. Results from imaging measurements of single and multiple point sources as well as extended source distributions are presented. Time encoded imaging has proven to be a simple method for achieving high resolution two-dimensional imaging with potential to be used in future arms control and treaty verification applications.
Future pulsed power systems may rely on linear transformer driver (LTD) technology. The LTDs will be the building blocks for a driver that can deliver higher current than the Z-Machine. The LTDs would require tens of thousands of low-inductance (< 85 nH), high-voltage (200 kV DC) switches with high reliability and long lifetime (10^4 shots). Sandia's Z-Machine employs 36 megavolt-class switches that are laser triggered by a single channel discharge. This is feasible for tens of switches, but the high inductance and short switch lifetime associated with the single channel discharge are undesirable for future machines. Thus the fundamental problem is how to lower inductance and losses while increasing switch lifetime and reliability. These goals can be achieved by increasing the number of current-carrying channels. The rail gap switch is ideal for this purpose. Although those switches have been extensively studied during the past decades, each effort has only characterized a particular switch. There is no comprehensive understanding of the underlying physics that would allow predictive capability for arbitrary switch geometry. We have studied rail gap switches via an extensive suite of advanced diagnostics in synergy with theoretical physics and advanced modeling capability. Design and topology of multichannel switches as they relate to discharge dynamics are investigated. This involves electrically and optically triggered rail gaps, as well as discrete multi-site switch concepts.
An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e., a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB), are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.
The goal of this Exploratory Express project was to expand the understanding of the physical properties of our recently discovered class of materials consisting of metal-organic frameworks with electroactive ‘guest’ molecules that together form an electrically conducting charge-transfer complex (molecule@MOF). Thin films of Cu3(BTC)2 were grown on fused silica using solution step-by-step growth and were infiltrated with the molecule tetracyanoquinodimethane (TCNQ). The infiltrated MOF films were extensively characterized using optical microscopy, scanning electron microscopy, Raman spectroscopy, and electrical conductivity and thermoelectric measurements. Thermopower measurements on TCNQ@Cu3(BTC)2 revealed a positive Seebeck coefficient of ~400 μV/K, indicating that holes are the primary carriers in this material. The high value of the Seebeck coefficient and the expected low thermal conductivity suggest that molecule@MOF materials may be attractive for thermoelectric power conversion applications requiring low-cost, solution-processable, and non-toxic active materials.
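To make concrete how the measured Seebeck coefficient feeds into the standard thermoelectric metrics, the sketch below evaluates the usual power factor S²σ and figure of merit zT = S²σT/κ. Only the ~400 μV/K Seebeck value comes from the report; the electrical and thermal conductivities are hypothetical placeholders, so the resulting numbers are illustrative rather than measured.

```python
# Rough illustration of how a Seebeck coefficient enters the usual
# thermoelectric metrics. Only S reflects the reported value; sigma and
# kappa below are hypothetical placeholders, not measurements.
S = 400e-6    # Seebeck coefficient, V/K (reported ~ +400 uV/K)
sigma = 10.0  # electrical conductivity, S/m  (hypothetical)
kappa = 0.3   # thermal conductivity, W/(m K) (hypothetical, "expected low")
T = 300.0     # temperature, K

power_factor = S**2 * sigma      # W/(m K^2)
zT = power_factor * T / kappa    # dimensionless figure of merit

print(f"power factor = {power_factor:.2e} W m^-1 K^-2, zT = {zT:.3f}")
```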
The ability to integrate ceramics with other materials has been limited by high-temperature (>800°C) ceramic processing. Recently, researchers demonstrated a novel process, aerosol deposition (AD), to fabricate ceramic films at room temperature (RT). In this process, carried out under vacuum, sub-micron-sized ceramic particles are accelerated by pressurized gas, impact the substrate, plastically deform, and form a dense film. This AD process eliminates high-temperature processing, thereby enabling new coatings and device integration in which ceramics can be deposited on metals, plastics, and glass. However, knowledge of the fundamental mechanisms by which ceramic particles deform and form a dense ceramic film is still needed and is essential to advancing this novel RT technology. In this work, a combination of experimentation and atomistic simulation was used to determine the deformation behavior of sub-micron-sized ceramic particles; this is the first fundamental step needed to explain coating formation in the AD process. High-purity, single-crystal, alpha-alumina particles with nominal sizes of 0.3 µm and 3.0 µm were examined. Particle characterization using transmission electron microscopy (TEM) showed that the 0.3 µm particles were relatively defect-free single crystals, whereas the 3.0 µm particles were highly defective single crystals or contained low-angle grain boundaries. The sub-micron-sized Al2O3 particles exhibited ductile failure in compression: in situ compression experiments showed that 0.3 µm particles deformed plastically, fractured, and became polycrystalline, and dislocation activity was observed within these particles during compression. These sub-micron-sized Al2O3 particles sustained large accumulated strain (2-3 times that of micron-sized particles) before first fracture. In agreement with the experimental findings, atomistic simulations of nano-Al2O3 particles showed dislocation slip and significant plastic deformation during compression. In contrast, the micron-sized Al2O3 particles exhibited brittle fracture in compression: in situ compression experiments showed that 3.0 µm Al2O3 particles fractured into pieces without observable plastic deformation. These particle deformation behaviors will be used to inform Al2O3 coating deposition parameters and the understanding of particle-particle bonding in consolidated Al2O3 coatings.
A series of design studies was performed to investigate the effects of flatback airfoils on blade performance and weight for large blades, using the Sandia 100-meter blade designs as a starting point. As part of the study, the effects of varying blade slenderness on blade structural performance were investigated. The advantages and disadvantages of blade slenderness with respect to tip deflection, flap-wise and edge-wise fatigue resistance, panel buckling capacity, flutter speed, manufacturing labor content, blade total weight, and aerodynamic design load magnitude are quantified. Following these design studies, a final blade design (SNL100-03) was produced, based on a highly slender design using flatback airfoils. The SNL100-03 design with flatback airfoils has a weight of 49 tons, about a 16% decrease from its SNL100-02 predecessor, which used conventional sharp trailing edge airfoils. Although not systematically optimized, the SNL100-03 design study provides an assessment of the benefits of flatback airfoils for large blades as well as insight into the limits and negative consequences of the high blade slenderness chosen for the final SNL100-03 planform. This document also provides a description of the final SNL100-03 design definition and is intended to be a companion document to the distribution of the NuMAD blade model files for SNL100-03, which are made publicly available. A summary of the major findings of the Sandia 100-meter blade development program, from the initial SNL100-00 baseline blade through the fourth blade study (SNL100-03), is provided. This summary includes the major findings and outcomes of the blade design studies, pathways to mitigate the identified large-blade design drivers, and the tools developed over the course of this five-year research program. A summary of large blade technology needs and research opportunities is also presented.
An MS Excel program has been written that calculates accidental, or unintentional, ion channeling in cubic bcc, fcc, and diamond lattice crystals or polycrystalline materials. This becomes an important issue when beams of ions are used to simulate the point displacement damage and extended defects created by energetic neutrons. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be looked up and read manually were programmed into Excel as convenient lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different powers of the argument. The program offers an extremely convenient way to calculate axial and planar half-angles and minimum yield or dechanneling probabilities, the effects of amorphous overlayers on half-angles, accidental channeling probabilities for randomly oriented crystals or crystallites, and, finally, a way to automatically generate stereographic projections of axial and planar channeling half-angles. The program can generate these projections and calculate these probabilities for axes and [hkl] planes with indices up to (555).
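For orientation, the axial half-angles tabulated in the handbooks are typically anchored to the classic Lindhard characteristic angle ψ₁ = √(2 Z₁Z₂e²/(E d)). The sketch below evaluates that textbook formula for a representative case; it is shown only to illustrate the kind of quantity involved and is not necessarily the exact parameterization used in the Excel tool.

```python
# Minimal sketch of the Lindhard axial characteristic angle
# psi_1 = sqrt(2 * Z1 * Z2 * e^2 / (E * d)), a standard starting point for
# the channeling half-angles the handbooks tabulate. This is illustrative
# only and may differ from the Excel tool's handbook-based parameterization.
import math

E2 = 14.4  # e^2 in eV*Angstrom (Gaussian units)

def lindhard_psi1_deg(z1, z2, energy_eV, d_angstrom):
    """Axial characteristic angle in degrees."""
    psi1_rad = math.sqrt(2.0 * z1 * z2 * E2 / (energy_eV * d_angstrom))
    return math.degrees(psi1_rad)

# Example: 1 MeV protons along Si <110> (atomic spacing ~3.84 Angstrom).
print(f"psi_1 ~ {lindhard_psi1_deg(1, 14, 1.0e6, 3.84):.2f} deg")
```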
This report summarizes the results of doctoral research that explored the cost impact of acquiring complex government systems jointly. The report begins by reviewing recent evidence that suggests that joint programs experience greater cost growth than non-joint programs. It continues by proposing an alternative approach for studying cost growth on government acquisition programs and demonstrates the utility of this approach by applying it to study the cost of jointness on three past programs that developed environmental monitoring systems for low-Earth orbit. Ultimately, the report concludes that joint programs' costs grow when the collaborating government agencies take action to retain or regain their autonomy. The report provides detailed qualitative and quantitative data in support of this conclusion and generalizes its findings to other joint programs that were not explicitly studied here. Finally, it concludes by presenting a quantitative model that assesses the cost impacts of jointness and by demonstrating how government agencies can more effectively architect joint programs in the future.
Sandia National Laboratories (SNL) was tasked to conduct an evaluation of the legacy computing systems of the now-closed Yucca Mountain Project (YMP) to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for total system performance assessment (TSPA) type analysis, in the event that the License Application (LA) review by the U.S. Nuclear Regulatory Commission (NRC) is re-started and involves further requests for additional information (RAIs). Six problem areas or components of the computing system were identified and subsequently resolved or improved to ensure the operational readiness of the TSPA-LA model capability on the server cluster. As part of this readiness review, the legacy TSPA computational cluster, which had been relocated from the SNL YMP Lead Lab Project Office in Las Vegas, Nevada, to the SNL offices in Albuquerque, New Mexico, was replaced with new hardware. Three floating licenses of Goldsim Version 9.60.300 were installed on the new cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. TSPA-LA modeling cases were tested and verified for model reproducibility on the current server cluster. All test runs were executed on multiple processors on the server cluster utilizing the Goldsim distributed processing capability, and all runs completed successfully. Model reproducibility was verified by two approaches, numerical value comparison and graphical comparison; the analysis demonstrated excellent reproducibility of the TSPA-LA model runs on the server cluster. The current server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
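The report does not give the scripts behind its numerical comparison, but the sketch below shows one simple way such a check could be set up: load corresponding result tables from a legacy baseline run and a re-run and require agreement within a relative tolerance. File names, column layout, and the tolerance are hypothetical.

```python
# One simple way a numerical reproducibility check like the one described
# above could be scripted. File names and tolerance are hypothetical; the
# report does not specify its comparison scripts.
import numpy as np

def compare_runs(baseline_csv, rerun_csv, rel_tol=1e-6):
    base = np.loadtxt(baseline_csv, delimiter=",", skiprows=1)
    new = np.loadtxt(rerun_csv, delimiter=",", skiprows=1)
    ok = np.allclose(new, base, rtol=rel_tol, atol=0.0)
    worst = np.max(np.abs(new - base) / np.maximum(np.abs(base), 1e-300))
    return ok, worst

if __name__ == "__main__":
    ok, worst = compare_runs("tspa_baseline.csv", "tspa_rerun.csv")
    print(f"reproduced within tolerance: {ok}, worst relative difference: {worst:.2e}")
```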
In this paper, we derive a new optimal change metric for use in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarms (declaring change where there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, is easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator has produced very good results on the CCD data collections we have tested.
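For context only, and not as a reproduction of the paper's new ML estimator, the sketch below computes the conventional windowed sample-coherence statistic that classical CCD relies on; its downward bias in low-CNR regions is what produces the false alarms described above. Window size and the toy data are arbitrary choices for illustration.

```python
# Conventional sample-coherence CCD statistic (the baseline the paper's new
# ML metric improves on; the new estimator itself is not reproduced here).
# In low clutter-to-noise areas this statistic is biased low, which is what
# drives the false "change" declarations mentioned above.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sample_coherence(img1, img2, win=5):
    """Magnitude of the windowed complex correlation of two co-registered
    complex SAR images (values near 1 = no change, near 0 = change/noise)."""
    w1 = sliding_window_view(img1, (win, win))
    w2 = sliding_window_view(img2, (win, win))
    num = np.abs((w1 * np.conj(w2)).sum(axis=(-2, -1)))
    den = np.sqrt((np.abs(w1) ** 2).sum(axis=(-2, -1)) *
                  (np.abs(w2) ** 2).sum(axis=(-2, -1)))
    return num / np.maximum(den, 1e-30)

# Toy example: two identical speckle realizations give coherence ~1.
rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
print(sample_coherence(img, img).mean())
```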
At Sandia National Laboratories in New Mexico (SNL/NM), the design, construction, operation, and maintenance of facilities are guided by industry standards, a graded approach, and the systematic analysis of life cycle benefits received for costs incurred. The design of the physical plant must ensure that the facilities are "fit for use" and provide conditions that effectively, efficiently, and safely support current and future mission needs. In addition, SNL/NM applies sustainable design principles, using an integrated whole-building design approach, from site planning to facility design, construction, and operation, to ensure building resource efficiency and the health and productivity of occupants. The safety and health of the workforce and the public, any possible effects on the environment, and compliance with building codes take precedence over project issues such as performance, cost, and schedule. These design standards generally apply to all disciplines on all SNL/NM projects. Architectural and engineering design must be both functional and cost-effective. Facility design must be tailored to fit its intended function while emphasizing low-maintenance, energy-efficient, and energy-conscious design. Design facilities that can be maintained easily, with readily accessible equipment areas and quality, low-maintenance systems. To promote an orderly and efficient appearance, architectural features of new facilities must complement and enhance the existing architecture at the site. As an Architectural and Engineering (A/E) professional, you must advise the Project Manager when this approach is prohibitively expensive. You are encouraged to use professional judgment and ingenuity to produce a coordinated interdisciplinary design that is cost-effective, easily contractible or buildable, high-performing, aesthetically pleasing, and compliant with applicable building codes. Close coordination and development of civil, landscape, structural, architectural, fire protection, mechanical, electrical, telecommunications, and security features are expected to ensure compatibility with planned functional equipment and to facilitate constructability. If portions of the design are subcontracted to specialists, delivery of the finished design documents must not be considered complete until the subcontracted portions are also submitted for review. You must, along with support consultants, perform functional analyses and programming in developing design solutions. These solutions must reflect coordination of the competing functional, budgetary, and physical requirements for the project. During design phases, meetings between you and the SNL/NM Project Team to discuss and resolve design issues are required; these meetings are a normal part of the design process. For specific design-review requirements, see the project-specific Design Criteria. In addition to the design requirements described in this manual, instructive information is provided to explain the sustainable building practice goals for the design, construction, operation, and maintenance of SNL/NM facilities. Please notify SNL/NM personnel of design best practices not included in this manual so they can be incorporated in future updates. You must convey all documents describing work to the SNL/NM Project Manager both in hard copy and in an electronic format compatible with the SNL/NM-prescribed CADD and other software packages, and in accordance with an SNL/NM-approved standard format.
Print all hard copy versions of submitted documents (excluding drawings and renderings) double-sided when practical.