Direct Digital Manufacturing techniques such as laser ablation are proposed for the fabrication of lower-cost, miniaturized, and lightweight integrated assemblies with high performance requirements. This paper investigates the laser ablation of a Ti/Cu/Pt/Au thin film metal stack on fired low temperature cofired ceramic (LTCC) surfaces using a 355 nm Nd:YAG diode-pumped laser ablation system. It further investigates laser ablation applications using unfired, or 'green', LTCC materials: (1) through one layer of a laminated stack of unfired LTCC tape to a buried thick film conductor ground plane, and (2) in unfired Au thick films. The UV laser power profile and part fixturing were optimized to address defects such as LTCC microcracking, thin film adhesion failures, and redeposition of Cu and Pt. An alternate design approach to minimize ablation time was tested for manufacturing efficiency. Multichip modules (MCMs) were tested for solderability, solder leach resistance, and wire bondability. Scanning electron microscopy (SEM), cross sections, and microanalytical techniques were used in this study.
Experiments on the UNR Zebra generator with the Load Current Multiplier (LCM) allow implosions of larger wire array loads than at the standard current of 1 MA. Advantages of larger planar wire array implosions include enhanced energy coupling to plasmas, better diagnostic access to observable plasma regions, and more complex geometries of the wire loads. The experiments with larger wire arrays were performed on Zebra with the LCM at 1.5 MA (the anode-cathode gap was 1 cm, half the gap used in the standard mode). In particular, larger multi-planar wire arrays had two outer wire planes of mid-atomic-number wires to create a global magnetic field (gmf) and plasma flow between them. A modified central plane with a few Al wires at the edges was placed midway between the outer planes to influence the gmf and to create Al plasma flow in the direction perpendicular to the outer-array plasma flow. This modified plane had a varying number of empty slots: it was increased from 6 up to 10, increasing the gap inside the middle plane from 4.9 to 7.7 mm, respectively. Such a load configuration allows a more independent study of the flows of L-shell mid-atomic-number plasma (between the outer planes) and K-shell Al plasma (which first fills the gap between the edge wires along the middle plane) and their radiation in space and time. We demonstrate that this configuration produces higher linear radiation yield and electron temperatures, offers better diagnostic access to observable plasma regions, and shows how the load geometry (the size of the gap in the middle plane) influences K-shell Al radiation. In particular, K-shell Al radiation was delayed compared to L-shell mid-atomic-number radiation when the gap in the middle plane was large enough (when the number of empty slots was increased up to ten).
In this study, the authors developed an approach for accurately quantifying the helium content in a gas mixture also containing hydrogen and methane using commercially available getters. The authors performed a systematic study to examine how both H2 and CH4 can be removed simultaneously from the mixture using two SAES St 172® getters operating at different temperatures. The remaining He within the gas mixture can then be measured directly using a capacitance manometer. The optimum combination involved operating one getter at 650 °C to decompose the methane, and the second at 110 °C to remove the hydrogen. This approach eliminated the need to reactivate the getters between measurements, thereby enabling multiple measurements to be made within a short time interval, with accuracy better than 1%. The authors anticipate that such an approach will be particularly useful for quantifying the He-3 in mixtures that include tritium, tritiated methane, and helium-3. The presence of tritiated methane, generated by tritium activity, often complicates such measurements.
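A minimal sketch of the final arithmetic implied above, assuming ideal-gas behavior at fixed volume and temperature and using hypothetical pressure readings (the authors' actual measurement procedure and numbers are not reproduced here):

```python
# Hypothetical readings for illustration only: with H2 and CH4 removed by the
# two getters, the residual pressure read by the capacitance manometer is taken
# as the He partial pressure of the original mixture.
p_initial_torr = 10.00   # total pressure of the He/H2/CH4 mixture before gettering
p_residual_torr = 2.47   # pressure after the 650 C (CH4) and 110 C (H2) getters act

he_mole_fraction = p_residual_torr / p_initial_torr   # ideal gas, constant V and T assumed
print(f"He mole fraction ~ {he_mole_fraction:.3f} ({100 * he_mole_fraction:.1f}%)")
```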
Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
Solid rocket propellant plume temperatures have been measured using spectroscopic methods as part of an ongoing effort to specify the thermal-chemical-physical environment in and around a burning fragment of an exploded solid rocket at atmospheric pressures. Such specification is needed for launch safety studies where hazardous payloads become involved with large fragments of burning propellant. The propellant burns in an off-design condition, producing a hot gas flame loaded with burning metal droplets. Each component of the flame (soot, droplets, and gas) has a characteristic temperature, and it is only through the use of spectroscopy that their temperatures can be independently identified.
Proceedings of ExaMPI 2014: Exascale MPI 2014 - held in conjunction with SC 2014: The International Conference for High Performance Computing, Networking, Storage and Analysis
Advances in node-level architecture and interconnect technology needed to reach extreme scale necessitate a reevaluation of long-standing models of computation, in particular bulk synchronous processing. The end of Dennard scaling and the subsequent increase in CPU core counts with each successive generation of general-purpose processors have made the ability to leverage parallelism for communication an increasingly critical aspect of future extreme-scale application performance. However, the use of massive multithreading in combination with MPI is an open research area, and many proposed approaches require code changes that can be infeasible for important large legacy applications already written in MPI. This paper covers the design and initial evaluation of an extension of a massive multithreading runtime system supporting dynamic parallelism that interfaces with MPI to handle fine-grain parallel communication and communication-computation overlap. Our initial evaluation of the approach uses the ubiquitous stencil computation, in three dimensions, with the halo exchange as the driving example that has a demonstrated tie to real code bases. The preliminary results suggest that even for a very well-studied and balanced workload and message exchange pattern, co-scheduling work and communication tasks is effective at significant levels of decomposition using up to 131,072 cores. Furthermore, we demonstrate useful communication-computation overlap when handling blocking send and receive calls, and show evidence suggesting that we can decrease the burstiness of network traffic, with a corresponding decrease in the rate of stalls (congestion) seen on the host link and network.
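A generic, hedged illustration of halo-exchange communication-computation overlap using nonblocking MPI calls via mpi4py; this is not the runtime system evaluated above, and the 1-D decomposition and averaging stencil are illustrative assumptions:

```python
# Run with: mpiexec -n <N> python halo_overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1 << 20
u = np.random.rand(n + 2)                 # owned points u[1:-1] plus one ghost on each side
recv_l, recv_r = np.empty(1), np.empty(1)
send_l, send_r = u[1:2].copy(), u[-2:-1].copy()

# Post nonblocking receives and sends for the ghost cells first...
reqs = [comm.Irecv(recv_l, source=left, tag=0),
        comm.Irecv(recv_r, source=right, tag=1),
        comm.Isend(send_l, dest=left, tag=1),
        comm.Isend(send_r, dest=right, tag=0)]

# ...then update the interior points that do not depend on ghost data,
# overlapping this computation with the in-flight messages.
u_new = np.empty_like(u)
u_new[2:-2] = 0.5 * (u[1:-3] + u[3:-1])

# Finally, wait for the halo and update the two boundary-adjacent points.
MPI.Request.Waitall(reqs)
u[0], u[-1] = recv_l[0], recv_r[0]
u_new[1] = 0.5 * (u[0] + u[2])
u_new[-2] = 0.5 * (u[-3] + u[-1])
```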
Cyr, Eric C.; Chaudhry, Jehanzeb H.; Liu, Kuo; Manteuffel, Thomas A.; Olson, Luke N.; Tang, Lei
In this paper we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this study deal with the computational examination of the four-point flexural characterization of a carbon fiber composite material. Utilizing a novel, orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior. Lastly, the results of the parameter study are combined with the orthotropic material model to estimate any relevant material properties that could not be determined through experimentation (e.g., in-plane compressive strength). Results indicate that a sensitivity analysis and parameter study can be used to optimize the material definition process. Furthermore, the discussed techniques are validated with experimental data provided for the flexural characterization of the described carbon fiber composite material.
Radiation magnetohydrodynamic r-z simulations of recent Ar shots on the refurbished Z generator are performed to examine the effective ion temperature as determined from the observed line width of the He-γ line. While many global radiation properties can be matched to experimental results, the Doppler shifts due to velocity gradients at stagnation cannot reproduce the large experimentally determined width, which corresponds to an effective ion temperature of 50 keV. Ion viscous heating and magnetic bubbles are considered, but understanding the width remains an unsolved challenge.
A new compact Z-pinch x-ray hohlraum design with parallel-driven x-ray sources was experimentally demonstrated in a full configuration with a central target and tailored shine shields (to provide a symmetric temperature distribution on the target) at the 1.7 MA Zebra generator. This presentation reports on the joint success of two independent lines of research. One of these was the development of new sources, planar wire arrays (PWAs), which turned out to be prolific radiators. The other was the drastic improvement in energy efficiency of pulsed-power systems, such as the Load Current Multiplier (LCM); the Zebra/LCM generator almost doubled the plasma load current to 1.7 MA. These two innovative approaches were used in combination to produce a new compact hohlraum design for ICF, as jointly proposed by SNL and UNR. Good agreement between simulated and measured radiation temperature of the central target is shown. Experimental comparisons of PWAs with planar foil liners (PFLs), another viable alternative to wire array loads at multi-MA generators, show promising data. Results of research at the University of Nevada, Reno allowed for the study of hohlraum coupling physics at university-scale generators. The advantages of the new hohlraum design for multi-MA facilities with W or Au double PWA or PFL x-ray sources are discussed.
Olson, Derek; Luskin, Mitchell; Shapeev, Alexander V.; Bochev, Pavel B.
We present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from existing AtC formulations. We present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.
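A schematic statement of the constrained optimization problem described above (the symbols are assumed notation for illustration, not taken from the paper): Ω_a and Ω_c are the atomistic and continuum subdomains, Ω_o their overlap, Γ_a and Γ_c the coupling interfaces carrying the virtual Dirichlet controls θ_a and θ_c, and F_a, F_c the respective force-balance operators.

```latex
\min_{\theta_a,\,\theta_c}\ \tfrac{1}{2}\,\bigl\| u_a - u_c \bigr\|_{\Omega_o}^{2}
\quad\text{subject to}\quad
\begin{cases}
  F_a(u_a) = 0 \ \text{in } \Omega_a, & u_a = \theta_a \ \text{on } \Gamma_a,\\[2pt]
  F_c(u_c) = 0 \ \text{in } \Omega_c, & u_c = \theta_c \ \text{on } \Gamma_c.
\end{cases}
```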
Resistive random access memory (ReRAM), or memristors, may be capable of significantly improving the efficiency of neuromorphic computing when used as a central component of an analog hardware accelerator. However, the significant electrical variation within a device and between devices degrades the maximum efficiency and accuracy that can be achieved by a ReRAM-based neuromorphic accelerator. In this report, the electrical variability is characterized, with a particular focus on that which is due to fundamental, intrinsic factors. Analytical and ab initio models are presented which offer some insight into the factors responsible for this variability.
We present PuLP, a parallel and memory-efficient graph partitioning method specifically designed to partition low-diameter networks with skewed degree distributions. Graph partitioning is an important Big Data problem because it impacts the execution time and energy efficiency of graph analytics on distributed-memory platforms. Partitioning determines the in-memory layout of a graph, which affects locality, intertask load balance, communication time, and overall memory utilization of graph analytics. A novel feature of our method PuLP (Partitioning using Label Propagation) is that it optimizes for multiple objective metrics simultaneously, while satisfying multiple partitioning constraints. Using our method, we are able to partition a web crawl with billions of edges on a single compute server in under a minute. For a collection of test graphs, we show that PuLP uses 8-39× less memory than state-of-the-art partitioners and is up to 14.5× faster, on average, than alternate approaches (with 16-way parallelism). We also achieve better partitioning quality results for the multi-objective scenario.
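A toy, hedged sketch of balance-constrained label propagation for partitioning (a single objective and a single vertex-balance constraint; this illustrates the general idea only and is not the PuLP algorithm itself):

```python
import random
from collections import Counter, defaultdict

def label_propagation_partition(adj, k, max_iters=20, imbalance=1.1):
    """Greedy label propagation: each vertex adopts the most common neighbor
    label whose part still has room under a simple vertex-count cap."""
    n = len(adj)
    cap = imbalance * n / k                      # vertex-balance constraint
    labels = [random.randrange(k) for _ in range(n)]
    sizes = Counter(labels)
    for _ in range(max_iters):
        moved = 0
        for v in random.sample(range(n), n):     # visit vertices in random order
            votes = Counter(labels[u] for u in adj[v])
            for lbl, _ in votes.most_common():
                if lbl == labels[v]:
                    break                        # already in the locally best part
                if sizes[lbl] + 1 <= cap:        # move only if balance allows
                    sizes[labels[v]] -= 1
                    sizes[lbl] += 1
                    labels[v] = lbl
                    moved += 1
                    break
        if moved == 0:
            break
    return labels

# Example: partition a small ring of four 4-cliques into two parts.
adj = defaultdict(list)
for c in range(4):
    base = 4 * c
    for i in range(4):
        for j in range(4):
            if i != j:
                adj[base + i].append(base + j)
    adj[base].append((base + 4) % 16)            # link the cliques into a ring
    adj[(base + 4) % 16].append(base)
print(label_propagation_partition(adj, k=2))
```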
Two of the more recent developments in thermal transport simulations are the incorporation of multiscale models and requirements for verification, validation, and uncertainty quantification to provide actionable simulation results. The aleatoric uncertainty is investigated for a two component mixture containing a high thermal conductivity and a low thermal conductivity material. The microstructure is varied from a coarse size of 1/8 the domain length to a fine scale of 1/256 the domain length and for volume fractions of high thermal conductivity material from 0 to 1. The uncertainty in the temperatures is greatest near the percolation threshold of around 0.4 and for the coarsest microstructures. Statistical representations of the aleatoric uncertainty for heterogeneous materials are necessary and need to be passed between scales in multiscale simulations of thermal transport.
Developing a big-picture understanding of a severe accident is extremely challenging. Operating crews and emergency response teams are faced with rapidly evolving circumstances, uncertain information, distributed expertise, and a large number of conflicting goals and priorities. Severe accident management guidelines (SAMGs) provide support for collecting information and assessing the state of a nuclear power plant during severe accidents. However, SAMG developers cannot anticipate every possible accident scenario. Advanced Probabilistic Risk Assessment (PRA) methods can be used to explore an extensive space of possible accident sequences and consequences. Using this advanced PRA to develop a decision support system can provide expanded support for diagnosis and response. In this paper, we present an approach that uses dynamic PRA to develop risk-informed "Smart SAMGs". Bayesian Networks form the basis of the faster-than-real-time decision support system. The approach leverages best-available information from plant physics simulation codes (e.g., MELCOR). Discrete Dynamic Event Trees (DDETs) are used to provide comprehensive coverage of the potential accident scenario space. This paper presents a methodology to develop Smart procedures and provides an example model created for diagnosing the status of the ECCS valves in a generic iPWR design.
Through the Department of Energy (DOE)/Office of Nuclear Energy (NE), Used Fuel Disposition Campaign (UFDC), numerous institutions are working to address issues associated with the extended storage and transportation of used nuclear fuel. In 2012, this group published a technical analysis which identified technical gaps that could be addressed to better support the technical basis for the extended storage and transportation of used nuclear fuel. This paper summarizes some of the current work being performed to close some of those high priority gaps. The areas discussed include: 1. developing thermal profiles of waste storage packages, 2. investigating the stresses experienced by fuel cladding and how that might affect cladding integrity, 3. understanding real environmental conditions that could lead to cask stress corrosion cracking, 4. quantifying the stress and strain fuel assemblies experience during normal truck transport and 5. performing a full-scale ten-year confirmatory demonstration of dry cask storage. Data from these R&D activities will reduce important technical gaps and allow us to better assess the risks associated with extended storage and transportation of used nuclear fuel.
Sandia and Semprius have partnered to evaluate the operational performance of a 3.5 kW (nominal) R&D system using 40 Semprius modules. Eight months of operational data has been collected and evaluated. Analysis includes determination of Pmp, Imp and Vmp at CSTC conditions, Pmp as a function of DNI, effect of wind speed on module temperature and seasonal variations in performance. As expected, on-sun Pmp and Imp of the installed system were found to be ~10% lower than the values determined from flash testing at CSTC, while Vmp was found to be nearly identical to the results of flash testing. The differences in the flash test and outdoor data are attributed to string mismatch, soiling, seasonal variation in solar spectrum, discrepancy in the cell temperature model, and uncertainty in the power and current reported by the inverter. An apparent limitation to the degree of module cooling that can be expected from wind speed was observed. The system was observed to display seasonal variation in performance, likely due to seasonal variation in spectrum.
The solar spectrum varies with atmospheric conditions and composition and can have significant impacts on the output power performance of each junction in a concentrating solar photovoltaic (CPV) system, with direct implications for the junction that is current-limiting. The effect of changing solar spectrum on CPV module power production has previously been characterized by various spectral performance parameters such as air mass (AM) for both single- and multi-junction module technologies. However, examinations of outdoor test results have shown substantial uncertainty contributions by many of these parameters, including air mass, for the determination of projected power and energy production. Using spectral data obtained from outdoor spectrometers with a spectral range of 336 nm to 1715 nm, this investigation examines the effects of precipitable water (PW), aerosol, and dust variability on incident spectral irradiance. This work then assesses air mass and other spectral performance parameters, including a new atmospheric component spectral factor (ACSF), to investigate iso-cell, stacked multi-junction, and single-junction c-Si module performance data directly against the measured spectrum. These data are then used with MODTRAN5® to determine whether spectral composition can account for daily and seasonal variability of the short-circuit current density Jsc and the maximum output power Pmp. For precipitable water, current results show good correspondence between the modeled atmospheric component spectral factor and measured data, with an average rms error of 0.013 for all three iso-cells tested during clear days over a one-week period. Results also suggest average variations in ACSF with respect to increasing precipitable water of 8.2%/cm H2O, 1.3%/cm H2O, 0.2%/cm H2O, and 1.8%/cm H2O for GaInP, GaAs, Ge, and c-Si cells, respectively, at solar noon and an AM value of 1.0. For ozone, the GaInP cell had the greatest sensitivity to increasing ozone levels, with an ACSF variation of 0.07%/cm O3. For the desert dust wind study, consistent ACSF behavior between all iso-cells and c-Si was found, with significant reductions only beyond 40 mph.
The State-of-the-Art Reactor Consequence Analyses (SOARCA) project for the Peach Bottom Atomic Power Station (the pilot boiling-water reactor) and Surry Power Station (the pilot pressurized-water reactor) represents the most complex deterministic MELCOR analyses performed to date. Uncertainty analyses focusing on input parameter uncertainty are now under way for one scenario at each pilot plant. Analyzing the uncertainty in parameters requires technical justification for the selection of each parameter to include in the analyses and defensible rationale for the associated distributions. This paper describes the methodology employed in the selection of parameters and corresponding distributions for the Surry uncertainty analysis, and insights from applying the methodology to the MELCOR parameters.
We have examined how different cleaning processes affect the laser induced damage threshold of antireflection coatings for large dimension, Z-Backlighter laser optics at Sandia National Laboratories. Laser damage thresholds were measured after the coatings were created, and again 4 months later to determine which cleaning processes were most effective. There is a nearly twofold increase in laser induced damage threshold between the antireflection coatings that were cleaned and those that were not cleaned. The laser-induced damage threshold results also revealed that every antireflection coating had a high defect density, despite the cleaning process used, which indicates that improvements to either the cleaning or deposition processes should provide even higher laser induced damage thresholds.
Spacecraft state-of-health (SOH) analysis typically consists of limit-checking to compare incoming measurand values against their predetermined limits. While useful, this approach requires significant engineering insight along with the ability to evolve limit values over time as components degrade and their operating environment changes. In addition, it fails to take into account the effects of measurand combinations, as multiple values together could signify an imminent problem. A more powerful approach is to apply data mining techniques to uncover hidden trends and patterns as well as interactions among groups of measurands. In an internal research and development effort, software engineers at Sandia National Laboratories explored ways to mine SOH data from a remote sensing spacecraft. Because our spacecraft uses variable sample rates and packetized telemetry to transmit values for 30,000 measurands across 700 unique packet IDs, our data is characterized by a wide disparity of time and value pairs. We discuss how we summarized and aligned this data so it could be efficiently applied to data mining algorithms. After the data preprocessing step, we apply supervised learning (decision trees and principal component analysis) and unsupervised learning (k-means and orthogonal partitioning clustering, and one-class support vector machines) to four different spacecraft SOH scenarios. Our experimental results show that data mining is a low-cost, high-payoff approach to SOH analysis and provides an excellent way to exploit vast quantities of time-series data among groups of measurands in different scenarios. Our scenarios show that the supervised cases were particularly useful in identifying key contributors to anomalous events, and the unsupervised cases were well-suited for automated analysis of the system as a whole. The developed underlying models can be updated over time to accurately represent a changing operating environment and ultimately to extend the mission lifetime of our valuable space assets.
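A hedged sketch of the kind of unsupervised screening described above, using scikit-learn on a synthetic measurand matrix (rows are aligned time bins, columns are resampled measurands); the data shapes, model settings, and injected anomaly are illustrative assumptions, not the actual telemetry pipeline:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))          # stand-in for aligned, resampled telemetry
X[4990:] += 6.0                          # inject a few anomalous time bins

Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=5).fit_transform(Xs)                       # dominant modes
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
outliers = OneClassSVM(nu=0.01, gamma="scale").fit_predict(scores)   # -1 = anomalous

print("suspect time bins:", np.flatnonzero(outliers == -1)[:10])
```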
When considering the future of offshore wind energy, developing cost-effective methods of harnessing the offshore wind resource represents a significant challenge which must be overcome to make offshore wind a viable option. As the majority of the capital investment in offshore wind is in the form of infrastructure and operation and maintenance costs, reducing these expenditures could greatly reduce the cost of energy (COE) for an offshore wind project. Sandia National Laboratories and its partners (TU Delft, University of Maine, Iowa State, and TPI Composites) believe that vertical axis wind turbines (VAWTs) offer multiple advantages over other rotor configurations considering this new COE breakdown. The unique arrangement of a VAWT allows the heavy generator and related components to be located at the base of the tower as opposed to the top, as is typical of a horizontal axis wind turbine (HAWT). This configuration lowers the topside CG, which reduces the platform stability requirements, leading to smaller and cheaper platforms. Additionally, it locates high-maintenance systems close to the ocean surface, thus improving maintainability. To support this project and the general wind research community, the Offshore Wind ENergy Simulation (OWENS) toolkit is being developed in conjunction with Texas A&M as an open source, modular aero-elastic analysis code with the capability to analyze floating VAWTs. The OWENS toolkit aims to establish a robust and flexible finite element framework and VAWT mesh generation utility, coupled with a modular interface that allows users to integrate easily with existing codes, such as aerodynamic and hydrodynamic codes.
Anthrax poses a significant threat to national security, as demonstrated by the terrorist attacks targeting the US Postal Service and the Hart Building. Anthrax outbreaks commonly occur in livestock. Consequently, Bacillus anthracis is routinely isolated, propagated, and maintained to diagnose the disease. This practice increases laboratories' repositories of the agent, escalating the risk that it could be stolen. We have developed BaDX (a 2014 R&D 100 Awardee), a credit-card-sized diagnostic device for use in ultra-low-resource environments that is low cost; requires no power, instrumentation, or equipment to operate; needs no cold chain; self-decontaminates post-assay; and is operable by individuals with little or no technical training.
Insights developed within the U.S. nuclear weapon system safety community may benefit system safety design, assessment, and management activities in other high-consequence domains. An approach to assured nuclear weapon safety has been developed that uses the Nuclear Safety Design Principles (NSDPs) of incompatibility, isolation, and inoperability to design safety features, organized into subsystems such that each subsystem contributes to safe system responses in independent and predictable ways given a wide range of environmental contexts. The central aim of the approach is to provide a robust technical basis for asserting that a system can meet quantitative safety requirements in the widest context of possible adverse or accident environments, while using the most concise arrangement of safety design features and the fewest number of specific adverse or accident environment assumptions. Rigor in understanding and applying the concept of independence is crucial for the success of the approach. This paper provides a basic description of the assured nuclear weapon safety approach, in a manner that illustrates potential application to other domains. There is also a strong emphasis on describing the process for developing a defensible technical basis for the independence assertions between integrated safety subsystems.
Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements conducted in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. Despite the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Therefore, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.
AIAA AVIATION 2014 -19th AIAA International Space Planes and Hypersonic Systems and Technologies Conference
Marineau, Eric C.; Moraru, C.G.; Lewis, Daniel R.; Norris, Joseph D.; Lafferty, John F.; Wagnild, Ross M.; Smith, Justin
Boundary-layer transition and stability data were obtained at Mach 10 in the Arnold Engineering Development Complex (AEDC) Hypervelocity Wind Tunnel 9 on a 1.5-m-long, 7-deg cone at unit Reynolds numbers between 1.8 and 31 million per meter. A total of 24 runs were performed at angles of attack between 0 and 10 deg on sharp and blunted cones with nose radii between 5.1 and 50.8 mm. The transition location was determined with coaxial thermocouples and temperature-sensitive paint, while stability measurements were obtained using high-frequency-response pressure sensors. Mean flow and boundary-layer stability computations were also conducted and compared with the experiment. The effects of angle of attack and bluntness on the transition location display trends similar to historical hypersonic wind tunnel data at similar Mach and Reynolds numbers. The N factor at the start of transition on sharp cones increases with unit Reynolds number; values between 4 and 7 were observed. The N factor at the start of transition decreases significantly as bluntness increases and is successfully correlated with the ratio of transition location to entropy-layer swallowing length. Good agreement between the computed and measured spatial amplification rates and most-amplified second-mode frequencies is obtained for sharp and moderately blunted cones. For large bluntness, where the ratio of transition to entropy swallowing length is below 0.1, second-mode waves were not observed before the start of transition on the frustum.
The discrete ordinates method is a popular and versatile technique for deterministically solving the radiative transport which governs the exchange of radiant energy within a fluid or gas mixture. It is the most common 'high fidelity' technique used to approximate the radiative contribution in combined-mode heat transfer applications. A major drawback of the discrete ordinates method is that the solution of the discretized equations may involve nonphysical oscillations due to the nature of the discretization in the angular space. These ray effects occur in a wide range of problems including those with steep temperature gradients either at the boundary or within the medium, discontinuities in the boundary emissivity due to the use of multiple materials or coatings, internal edges or corners in non-convex geometries, and many others. Mitigation of these ray effects either by increasing the number of ordinate directions or by filtering or smoothing the solution can yield significantly more accurate results and enhanced numerical stability for combined mode codes. When ray effects are present, the solution is seen to be highly dependent upon the relative orientation of the geometry and the global reference frame. This is an undesirable property. A novel ray effect mitigation technique is proposed. By averaging the computed solution for various orientations, the number of ordinate directions may be artificially increased in a trivially parallelizable way. This increases the frequency and decreases the amplitude of the ray effect oscillations. As the number of considered orientations increases a rotationally invariant solution is approached which is quite accurate. How accurate this solution is and how rapidly it is approached is problem dependent. Uncertainty in the smooth solution achieved after considering a relatively small number of orientations relative to the rotationally invariant solution may be quantified.
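A minimal sketch of the rotation-averaging idea described above; the transport solver is a hypothetical placeholder and the quadrature handling is schematic, not the actual implementation:

```python
import numpy as np
from scipy.stats import special_ortho_group   # uniform random rotations on SO(3)

def solve_discrete_ordinates(directions, weights):
    """Hypothetical placeholder for an S_N sweep; returns a scalar-flux field."""
    raise NotImplementedError

def rotation_averaged_flux(directions, weights, n_orientations=8, seed=0):
    rng = np.random.default_rng(seed)
    phi_sum = None
    for _ in range(n_orientations):
        R = special_ortho_group.rvs(3, random_state=rng)       # random rotation matrix
        phi = solve_discrete_ordinates(directions @ R.T, weights)
        phi_sum = phi if phi_sum is None else phi_sum + phi
    # Averaging the independently computed solutions raises the frequency and
    # lowers the amplitude of the ray-effect oscillations; the spread across
    # orientations can be used to quantify the residual uncertainty.
    return phi_sum / n_orientations
```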
The flow over aircraft bays exhibits many characteristics of cavity flows, namely resonant pressures that can create high structural loading. Most studies have represented these bays as rectangular cavities; however, this simplification neglects many features of the actual flight geometry which could affect the unsteady pressure field and resulting loading in the bay. To address this shortcoming, a complex cavity geometry was developed to incorporate more realistic aircraft-bay features including shaped inlets and internal cavity variations. A parametric study of these features at Mach 1.5, 2.0, and 2.5 was conducted to identify key differences from simple rectangular cavity flows. The frequency of the basic rectangular cavity modes could be predicted by theory; however, most complex geometries shifted these frequencies. Geometric changes that constricted the flow tended to enhance cavity modes and create higher pressure fluctuations. Other features, such as a leading edge ramp, lifted the shear layer higher with respect to the aft cavity wall and led to cavity tone suppression. Complex features that introduced spanwise non-uniformity into the shear layer also led to a reduction of cavity tones, especially at the aft end of the cavity.
Spent nuclear fuel reprocessing may involve some hazardous liquids that may explode under accident conditions. Explosive accidents may result in energetic dispersion of the liquid. The atomized liquid represents a major hazard of this class of event. The magnitude of the aerosol source term is difficult to predict, and historically has been estimated from correlations based on marginally relevant data. A technique employing a coupled finite element structural dynamics and control volume computational fluid dynamics has been demonstrated previously for a similar class of problems. The technique was subsequently evaluated for detonation events. Key to the calculations is the use of a Taylor Analogy Break-up (TAB) based model for predicting the aerodynamic break-up of the liquid drops in the air environment, and a dimensionless parameter for defining the chronology of the mass and momentum coupling. This paper presents results of liquid aerosolization from an explosive event.
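For reference, the standard TAB model evolves a nondimensional drop distortion y(t) through a forced, damped oscillator equation, with breakup assumed once y exceeds unity; the generic textbook form is shown below with model constants C_F, C_b, C_k, and C_d, and the specific variant and coupling parameter used in this work may differ:

```latex
\ddot{y} \;=\; \frac{C_F}{C_b}\,\frac{\rho_g\,u^2}{\rho_\ell\,r^2}
\;-\; \frac{C_k\,\sigma}{\rho_\ell\,r^3}\,y
\;-\; \frac{C_d\,\mu_\ell}{\rho_\ell\,r^2}\,\dot{y}
```

Here ρ_g and ρ_ℓ are the gas and liquid densities, u the relative velocity, r the drop radius, σ the surface tension, and μ_ℓ the liquid viscosity.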
43rd ASES National Solar Conference 2014, SOLAR 2014, Including the 39th National Passive Solar Conference and the 2nd Meeting of Young and Emerging Professionals in Renewable Energy
Northern Arizona University (NAU) and the Southwestern Indian Polytechnic Institute (SIPI) conducted a pre-feasibility study for utility-scale solar power on the Jemez Pueblo in New Mexico. Student groups at NAU and SIPI analyzed four different 40-MW solar power projects to understand whether or not such plants built on tribal lands are technically and financially feasible. The NREL System Advisor Model (SAM) was employed to analyze the following four alternatives: fixed, horizontal-axis photovoltaic (PV); fixed, tilted-at-latitude PV; horizontal, single-axis tracking PV; and a solar-thermal "power tower" plant. Under supervision from faculty, the student teams predicted the energy production and net present value for the four options. This paper presents details describing the solar power plants analyzed, the results of the SAM analyses, and a sensitivity analysis of the predicted performance to key input variables. Overall, solar power plants on the Jemez Pueblo lands appear to pass the test for financial feasibility.
Predicting the behavior of solid fuels in response to a fire is a complex endeavor. Heterogeneity, charring, and intumescence are a few examples of the many challenges presented by some common materials. If one desires to employ a three-dimensional computational fluid dynamics (CFD) model for fire, an accurate solid combustion model for materials at the domain boundary is often desirable. Methods for such modeling are not currently mature, and this is a current topic of research. For some practical problems, it may be acceptable to abstract the surface combustible material as a one-dimensional reacting boundary condition. This approach has the advantage of being a relatively simple model and may provide acceptably accurate predictions for problems of interest. Such a model has recently been implemented in Sandia's low-Mach-number CFD code for reacting flows, the SIERRA/FUEGO code. Theory for the implemented model is presented. The thermal transport component of the model is verified by approximating a 1-D conduction problem with a closed-form solution. The code is further demonstrated by predicting the fire behavior of a block of burning Plexiglas (PMMA). The predictions are compared to the reported data from a corresponding experimental program. The predictions are also used to evaluate the sensitivity of the results to model parameters through a sensitivity study using the same test configuration.
We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTLᵀPᵀ, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. The current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.
The objective is to calculate the probability, PF, that a device will fail when its inputs, x, are randomly distributed with probability density p(x), e.g., the probability that a device will fracture when subject to varying loads. Here failure is defined as some scalar function, y(x), exceeding a threshold, T. If evaluating y(x) via physical or numerical experiments is sufficiently expensive or PF is sufficiently small, then Monte Carlo (MC) methods to estimate PF will be infeasible due to the large number of function evaluations required for a specified accuracy. Importance sampling (IS), i.e., preferentially sampling from “important” regions in the input space and appropriately down-weighting to obtain an unbiased estimate, is one approach to assess PF more efficiently. The inputs are sampled from an importance density, pʹ(x). We present an adaptive importance sampling (AIS) approach which endeavors to adaptively improve the estimate of the ideal importance density, p*(x), during the sampling process. Our approach uses a mixture of component probability densities that each approximate p*(x). An iterative process is used to construct the sequence of improving component probability densities. At each iteration, a Gaussian process (GP) surrogate is used to help identify areas in the space where failure is likely to occur. The GPs are not used to directly calculate the failure probability; they are only used to approximate the importance density. Thus, our Gaussian process adaptive importance sampling (GPAIS) algorithm overcomes limitations involved in using a potentially inaccurate surrogate model directly in IS calculations. This robust GPAIS algorithm performs surprisingly well on a pathological test function.
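A hedged sketch of the underlying importance-sampling estimator with a Gaussian-mixture importance density (the GP-driven adaptation loop is omitted, and the limit-state function, threshold, and mixture components below are toy assumptions):

```python
import numpy as np
from scipy import stats

def y(x):                              # toy limit-state function (illustrative only)
    return np.sum(x, axis=-1)

T, dim, N = 6.0, 2, 5000               # failure threshold, input dimension, sample count
p = stats.multivariate_normal(mean=np.zeros(dim))              # nominal input density p(x)
comps = [stats.multivariate_normal(mean=np.full(dim, 3.0)),    # mixture components placed
         stats.multivariate_normal(mean=np.array([4.0, 2.0]))] # near the assumed failure region

rng = np.random.default_rng(1)
idx = rng.integers(len(comps), size=N)
x = np.stack([comps[i].rvs(random_state=rng) for i in idx])
q = np.mean([c.pdf(x) for c in comps], axis=0)                 # mixture density p'(x)
w = p.pdf(x) / q                                               # importance weights
pf_is = np.mean(w * (y(x) > T))                                # unbiased estimate of PF

pf_exact = stats.norm.sf(T / np.sqrt(dim))                     # exact for this toy y(x)
print(f"IS estimate {pf_is:.3e}  vs  exact {pf_exact:.3e}")
```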
A hybrid fs/ps pure-rotational CARS scheme is demonstrated in the product gases of premixed hydrogen/air and ethylene/air flat flames. Near-transform-limited, broadband femtosecond pump and Stokes pulses impulsively prepare a rotational Raman coherence, which is later probed by a high-energy, frequency-narrow picosecond pulse generated by sum-frequency mixing of linearly chirped broadband pulses with conjugate temporal phase. Spectral fitting is demonstrated for both shot-averaged and single-laser-shot spectra. Measurement accuracy is quantified by comparison to adiabatic-equilibrium calculations for the hydrogen/air flames, and by comparison to nanosecond CARS measurements for the ethylene/air flames. Temperature-measurement precision is 1-3% and O2/N2 precision is 2-10%, based on histograms constructed from 1000 single-shot measurements acquired at a data rate of 1 kHz. These results indicate that hybrid fs/ps rotational CARS is a quantitative tool for kHz-rate combustion temperature and species measurements.
52nd AIAA Aerospace Sciences Meeting - AIAA Science and Technology Forum and Exposition, SciTech 2014
Lietz, C.; Hassanaly, M.; Raman, V.; Kolla, Hemanth; Chen, J.; Gruber, A.
In the design of high-hydrogen-content gas turbines for power generation, flashback of the turbulent flame by propagation through the low-velocity boundary layers in the premixing region is an operationally dangerous event. Predictive models that could capture the onset of flashback would be indispensable in gas turbine design. For this purpose, modeling of the flashback process using the large eddy simulation (LES) approach is considered here. In particular, the goal is to understand the modeling requirements for predicting flashback in confined geometries. The flow configuration considered is a turbulent channel flow, for which high-fidelity direct numerical simulation (DNS) data already exist. A suite of LES calculations with different model formulations and filter widths is considered. It is shown that LES predicts certain statistical properties of the flame front reasonably well, but fails to capture the propagation velocity accurately. It is found that the flashback process is invariant to changes in the initial conditions and additional near-wall grid refinement, but the LES filter width as well as the subfilter models are found to be important even when the turbulence is almost fully resolved. From the computations, it is shown that for an LES model to predict flashback, sufficient resolution of the near-wall region, proper representation of the centerline acceleration caused by flame blockage, and appropriate modeling of the propagation of a wrinkled flame front near the center of the channel are the critical requirements.
The development and application of optically accessible engines to further our understanding of in-cylinder combustion processes is reviewed, spanning early efforts in simplified engines to the more recent development of high-pressure, high-speed engines that retain the geometric complexities of modern production engines. Limitations of these engines with respect to the reproduction of realistic metal test engine characteristics and performance are identified, as well as methods that have been used to overcome these limitations. Lastly, the role of the work performed in these engines in clarifying the fundamental physical processes governing the combustion process and in laying the foundation for predictive engine simulation is summarized.
A major theme in thermoelectric research is based on controlling the formation of nanostructures that occur naturally in bulk intermetallic alloys through various types of thermodynamic phase transformation processes (He et al., 2013). The question of how such nanostructures form and why they lead to a high thermoelectric figure of merit (zT) are scientifically interesting and worthy of attention. However, as we discuss in this opinion, any processing route based on thermodynamic phase transformations alone will be difficult to implement in thermoelectric applications where thermal stability and reliability are important. Attention should also be focused on overcoming these limitations through advanced post-processing techniques.
Model form error of the type considered here is error due to an approximate or incorrect representation of the physics by a computational model. Typical approaches to adjust a model based on observed differences between experiment and prediction are to calibrate the model parameters using the observed discrepancies and to develop parameterized additive corrections to the model output. These approaches are generally not suitable if significant physics is missing from the model and the desired quantities of interest for an application are different from those used for calibration. The approach developed here is to build a corrected surrogate solver through a multi-step process: 1) sampled simulation results are used to develop a surrogate computational solver that maintains the overall conservation principles of the unmodified governing equations, 2) the surrogate solver is applied to candidate linear and non-linear corrector terms to develop corrections that are consistent with the original conservation principles, 3) constant multipliers on these terms are calibrated using the experimental observations, and 4) the resulting surrogate solver is used to predict the application response for the quantity of interest. This approach and several other calibration-based approaches were applied to an example problem based on the diffusive Burgers' equation. While all the approaches provided some model correction when the measured/calibration quantity was the same as that for the application, only the present approach was able to adequately correct the CompSim results when the prediction quantity was different from the calibration quantity.
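A schematic of the corrected-surrogate form for the diffusive Burgers' example (the corrector terms g_i and the notation are generic assumptions for illustration; the specific correctors used in the study are not reproduced here):

```latex
\partial_t u + u\,\partial_x u
\;=\; \nu\,\partial_{xx} u \;+\; \sum_{i} c_i\, g_i\!\bigl(u,\ \partial_x u,\ \partial_{xx} u\bigr)
```

Here the constant multipliers c_i are calibrated against the experimental observations (step 3) and the corrected surrogate is then evaluated for the application quantity of interest (step 4).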
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
In the summer of 2020, the National Aeronautics and Space Administration (NASA) plans to launch a spacecraft as part of the Mars 2020 mission. One option for the rover on the proposed spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. An alternative option being considered is a set of solar panels for electrical power with up to 80 Light-Weight Radioisotope Heater Units (LWRHUs) for local component heating. Both the MMRTG and the LWRHUs use radioactive plutonium dioxide. NASA is preparing an Environmental Impact Statement (EIS) in accordance with the National Environmental Policy Act. The EIS will include information on the risks of mission accidents to the general public and on-site workers at the launch complex. This Nuclear Risk Assessment (NRA) addresses the responses of the MMRTG or LWRHU options to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks of both options for the EIS.
In this report, measurements of the prompt radiation-induced conductivity (RIC) in 3 mil samples of Pyralux® are presented as a function of dose rate, pulse width, and applied bias. The experiments were conducted with the Medusa linear accelerator (LINAC) located at the Little Mountain Test Facility (LMTF) near Ogden, UT. The nominal electron energy for the LINAC is 20 MeV. Prompt conduction current data were obtained for dose rates ranging from ~2 × 10⁹ rad(Si)/s to ~1.1 × 10¹¹ rad(Si)/s and for nominal pulse widths of 50 ns and 500 ns. At a given dose rate, the applied bias across the samples was stepped between -1500 V and 1500 V. Calculated values of the prompt RIC varied between 1.39 × 10⁻⁸ Ω⁻¹·m⁻¹ and 2.67 × 10⁻⁷ Ω⁻¹·m⁻¹, and the prompt RIC coefficient varied between 1.25 × 10⁻¹⁸ Ω⁻¹·m⁻¹/(rad/s) and 1.93 × 10⁻¹⁷ Ω⁻¹·m⁻¹/(rad/s).
A simple method for experimentally determining thermodynamic quantities for flow battery cell reactions is presented. Equilibrium cell potentials, temperature derivatives of cell potential (dE/dT), Gibbs free energies, and entropies are reported here for all-vanadium, iron–vanadium, and iron–chromium flow cells with state-of-the-art solution compositions. Proof is given that formal potentials and formal temperature coefficients can be used with modified forms of the Nernst Equation to quantify the thermodynamics of flow cell reactions as a function of state-of-charge. Such empirical quantities can be used in thermo-electrochemical models of flow batteries at the cell or system level. In most cases, the thermodynamic quantities measured here are significantly different from standard values reported and used previously in the literature. The data reported here are also useful in the selection of operating temperatures for flow battery systems. Because higher temperatures correspond to lower equilibrium cell potentials for the battery chemistries studied here, it can be beneficial to charge a cell at higher temperature and discharge at lower temperature. As a result, proof-of-concept of improved voltage efficiency with the use of such non-isothermal cycling is given for the all-vanadium redox flow battery, and the effect is shown to be more pronounced at lower current densities.
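For reference, the standard relations used to extract thermodynamic quantities from a measured equilibrium (formal) cell potential E and its temperature coefficient, with n the number of electrons transferred and F the Faraday constant:

```latex
\Delta G = -nFE, \qquad
\Delta S = nF\left(\frac{\partial E}{\partial T}\right)_{p}, \qquad
\Delta H = \Delta G + T\,\Delta S = -nF\!\left[E - T\left(\frac{\partial E}{\partial T}\right)_{p}\right]
```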
A simple demonstration of nonlocality in a heterogeneous material is presented. By analysis of the microscale deformation of a two-component layered medium, it is shown that nonlocal interactions necessarily appear in a homogenized model of the system. Explicit expressions for the nonlocal forces are determined. The way these nonlocal forces appear in various nonlocal elasticity theories is derived. The length scales that emerge involve the constituent material properties as well as their geometrical dimensions. A peridynamic material model for the smoothed displacement field is derived. It is demonstrated by comparison with experimental data that the incorporation of nonlocality in modeling dramatically improves the prediction of the stress concentration in an open hole tension test on a composite plate.
For over two decades the dominant means for enabling portable performance of computational science and engineering applications on parallel processing architectures has been the bulk-synchronous parallel programming (BSP) model. Code developers, motivated by performance considerations to minimize the number of messages transmitted, have typically pursued a strategy of aggregating message data into fewer, larger messages. Emerging and future high-performance architectures, especially those seen as targeting Exascale capabilities, provide motivation and capabilities for revisiting this approach. In this paper we explore alternative configurations within the context of a large-scale complex multi-physics application and a proxy that represents its behavior, presenting results that demonstrate some important advantages as the number of processors increases in scale.
NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.
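A hedged illustration of the final step described above; the Gaussian model for log amplitudes below is an assumption for illustration and is not necessarily NetMOD's statistical model:

```python
import numpy as np
from scipy.stats import norm

def detection_probability(log_signal, log_noise, sigma_db, threshold_db):
    """P(SNR in dB exceeds the station threshold), treating the predicted
    20*log10 amplitude ratio as Gaussian with combined spread sigma_db."""
    snr_db = 20.0 * (np.asarray(log_signal) - np.asarray(log_noise))
    return norm.sf(threshold_db, loc=snr_db, scale=sigma_db)

# Example: predicted log10 signal/noise amplitudes at three stations, 10 dB threshold.
print(detection_probability([1.8, 1.2, 0.9], [0.6, 0.8, 0.7],
                            sigma_db=6.0, threshold_db=10.0))
```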
We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high-performance computing. These accelerators represent significant challenges in updating our existing base of software. An intrinsic problem with this transition is a fundamental programming shift from message-passing processes to much finer-grained thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementation; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.
Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment, which can make accurate representations of the spectra difficult to obtain. By coupling the Monte Carlo N-Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters; here, the external scattering environment is instead modeled in MCNP and the results are used as input to the GADRAS detector response function. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.
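Conceptually, the coupling amounts to folding a source flux computed by the transport code through a detector response to obtain a pulse-height spectrum; the toy matrix-vector sketch below illustrates the idea and does not represent the actual MCNP or GADRAS interfaces:

    import numpy as np

    n_energy, n_channel = 200, 128                    # hypothetical binning
    R = np.random.rand(n_channel, n_energy) * 1e-3    # placeholder response matrix
    phi = np.random.rand(n_energy)                    # placeholder incident flux,
                                                      # e.g., from a transport tally
    spectrum = R @ phi                                # expected counts per channel

In the workflow described above, the incident flux includes the contribution of the external scattering environment computed in MCNP, so the resulting spectrum reflects both the detector and its surroundings.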
We develop a capability to simulate reduction-oxidation (redox) flow batteries in the Sierra Multi-Mechanics code base. Specifically, we focus on all-vanadium redox flow batteries; however, the capability is general in implementation and could be adapted to other chemistries. The electrochemical and porous flow models follow those developed in the recent publication [28]. We review the model implemented in this work and its assumptions, and we show several verification cases, including a binary electrolyte and a battery half-cell. We then compare our model implementation with the experimental results shown in [28] and find good agreement. Next, a sensitivity study is conducted for the major model parameters, which is useful for targeting specific features of the redox flow cell for improvement. Lastly, we simulate a three-dimensional version of the flow cell to determine the impact of plenum channels on the performance of the cell. Such channels are frequently seen in experimental designs where the current collector plates are borrowed from fuel cell designs, which use a serpentine channel etched into a solid collector plate.
To support higher fidelity modeling of residual stresses in glass-to-metal (GTM) seals and to demonstrate the accuracy of finite element analysis predictions, characterization and validation data have been collected for Sandia’s commonly used compression seal materials. The temperature dependence of the storage moduli, the shear relaxation modulus master curve, and the structural relaxation of the Schott 8061 glass were measured, and stress-strain curves were generated for SS304L VAR in the small-strain regimes typical of GTM seal applications, spanning temperatures from 20 to 500 °C. Material models were calibrated, and finite element predictions are being compared to measured data to assess the accuracy of the predictions.
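Master curves of the kind measured here are commonly represented by a Prony series together with a temperature-dependent shift factor; a generic form (not the calibrated Schott 8061 parameters) is

    \[
    G(t, T) \;=\; G_{\infty} \;+\; \sum_{i=1}^{N} G_{i}\,
    \exp\!\left(-\frac{t}{a_{T}(T)\,\tau_{i}}\right),
    \]

where the shift factor \(a_{T}(T)\) maps data measured at temperature T onto the reference-temperature master curve; the calibrated Prony terms and shift function are what the finite element material model consumes.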
There are multiple ways for a homeowner to obtain the electricity-generating and savings benefits offered by a photovoltaic (PV) system. These include purchasing a PV system through various financing mechanisms or leasing the PV system from a third party, with multiple options that may include purchase, lease renewal, or PV system removal. The different ownership options available to homeowners present a challenge to appraisal and real estate professionals during a home sale or refinance in terms of how to develop a value that reflects the PV system's operational characteristics, local market conditions, and lender and underwriter requirements. This paper presents these many PV system ownership options with a discussion of the considerations an appraiser must make when developing the contributory value of a PV system to a residential property.
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Typically, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. Following the development of a set of verification tests, the code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos LANSCE "Blue Room" facility. The results reveal that KMC calculations agree well with experiment once adjustments are made to significant defect parameters within the appropriate uncertainty bounds.
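The core of a KMC calculation of this kind is a residence-time (Gillespie-type) loop over thermally activated defect events; the sketch below is a minimal illustration with hypothetical attempt frequencies and barriers, not the report's code or parameters:

    import math, random

    KB = 8.617e-5                       # Boltzmann constant, eV/K
    T = 300.0                           # anneal temperature, K

    # event name -> (attempt frequency [1/s], activation barrier [eV]); placeholders
    events = {"VO_migrate": (1.0e13, 0.9), "VP_dissociate": (1.0e13, 1.1)}

    def rate(nu0, Ea):
        return nu0 * math.exp(-Ea / (KB * T))

    t = 0.0
    for _ in range(10):
        rates = {name: rate(*params) for name, params in events.items()}
        rtot = sum(rates.values())
        # choose an event with probability proportional to its rate
        r, acc = random.random() * rtot, 0.0
        for name, k in rates.items():
            acc += k
            if r <= acc:
                chosen = name
                break
        # advance the clock by an exponentially distributed residence time
        t += -math.log(1.0 - random.random()) / rtot
        print(f"t = {t:.3e} s: {chosen}")

In the actual calculations the event list evolves as defects migrate, cluster, and dissociate, and carrier injection modifies the event rates; only the selection-and-clock-advance kernel is shown here.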
This work was an early career LDRD investigating the idea of using a focused ion beam (FIB) to implant Ga into silicon to create embedded nanowires and/or fully suspended nanowires. The embedded Ga nanowires demonstrated an electrical resistivity of 5 mΩ-cm, conductivity down to 4 K, and Ohmic contact to silicon. The suspended nanowires achieved dimensions down to 20 nm x 30 nm x 10 µm with large sensitivity to pressure, and these structures performed well as Pirani gauges. Sputtered niobium was also developed in this research for use as a superconductive coating on the nanowire. Oxidation characteristics of Nb were detailed, and a technique to place the Nb under tensile stress resulted in the Nb resisting bulk atmospheric oxidation for periods extending to years.
The availability of efficient algorithms for long-range pairwise interactions is central to the success of numerous applications, ranging in scale from atomic-level modeling of materials to astrophysics. This report focuses on the implementation and analysis of the multilevel summation method for approximating long-range pairwise interactions. The computational cost of the multilevel summation method is proportional to the number of particles, N, which is an improvement over FFT-based methods whose cost is asymptotically proportional to N log N. In addition to approximating electrostatic forces, the multilevel summation method can be used to efficiently approximate convolutions with long-range kernels. As an application, we apply the multilevel summation method to a discretized integral equation formulation of the regularized generalized Poisson equation. Numerical results are presented using an implementation of the multilevel summation method in the LAMMPS software package. Preliminary results show that the computational cost of the method scales as expected, but there is still a need for further optimization.
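The essential idea of the multilevel summation method is a splitting of the interaction kernel into a short-range part that is summed directly and a hierarchy of successively smoother parts that are interpolated on successively coarser grids; schematically, with smoothed kernels g_k chosen so that each difference is slowly varying,

    \[
    \frac{1}{r} \;=\; \left(\frac{1}{r} - g_{1}(r)\right)
    \;+\; \big(g_{1}(r) - g_{2}(r)\big) \;+\; \cdots \;+\; g_{L}(r),
    \]

where the first term vanishes beyond a cutoff and is evaluated over nearby particle pairs, and the remaining terms are evaluated on grids of increasing spacing, which is what yields the O(N) cost noted above.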
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the Drekar::CFD code. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02, completed March 31, 2012; THM.CFD.P5.01, completed June 30, 2012; and THM.CFD.P5.01, completed October 31, 2012.
Radar coherence is an important concept for imaging radar systems such as synthetic aperture radar (SAR). This document quantifies some of the effects in SAR which modify the coherence. Although these effects can disrupt the coherence within a single SAR image, this report will focus on the coherence between separate images, such as for coherent change detection (CCD) processing. There have been other presentations on aspects of this material in the past. The intent of this report is to bring various issues that affect the coherence together in a single report to support radar engineers in making decisions about these matters.
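For reference, the coherence between two co-registered complex SAR images f and g is conventionally defined (and in practice estimated over a local window of pixels) as

    \[
    \gamma \;=\; \frac{\bigl|\,\mathrm{E}\{ f\, g^{*} \}\,\bigr|}
    {\sqrt{\mathrm{E}\{|f|^{2}\}\,\mathrm{E}\{|g|^{2}\}}},
    \qquad 0 \le \gamma \le 1,
    \]

and the effects catalogued in this report act by reducing \(\gamma\) between the two collections used for CCD processing.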
This report describes a system model that can be used to analyze three advanced small modular reactor (SMR) designs through their lifetime. The neutronics of these reactor designs were evaluated using Monte Carlo N-Particle eXtended (MCNPX/6). The system models were developed in Matlab and Simulink. A major thrust of this research was the initial scoping analysis of Sandia's concept of a long-life fast reactor (LLFR). The inherent characteristic of this conceptual design is to minimize the change in reactivity over the lifetime of the reactor, which allows the reactor to operate substantially longer at full power than traditional light water reactors (LWRs) or other SMR designs (e.g., the high temperature gas reactor (HTGR)). The system model has subroutines for lifetime reactor feedback and operation calculations, thermal hydraulic effects, load demand changes, and a simplified supercritical CO2 (sCO2) Brayton cycle for power conversion.
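Reactor feedback and operation subroutines of this kind are typically built around point-kinetics equations coupled to the thermal state of the core; a generic form (not necessarily the exact model implemented in this report) is

    \[
    \frac{dP}{dt} \;=\; \frac{\rho(t)-\beta}{\Lambda}\,P \;+\; \sum_{i}\lambda_{i} C_{i},
    \qquad
    \frac{dC_{i}}{dt} \;=\; \frac{\beta_{i}}{\Lambda}\,P \;-\; \lambda_{i} C_{i},
    \qquad
    \rho(t) \;=\; \rho_{\mathrm{ext}}(t) \;+\; \sum_{j}\alpha_{j}\,\big(T_{j}-T_{j,0}\big),
    \]

where P is the reactor power, the \(C_{i}\) are delayed-neutron precursor concentrations, and the \(\alpha_{j}\) are temperature feedback coefficients that couple the neutronics to the thermal-hydraulic and load-demand subroutines.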
Cyber attacks pose a major threat to modern organizations. Little is known about the social aspects of decision making among organizations that face cyber threats, nor do we have empirically-grounded models of the dynamics of cooperative behavior among vulnerable organizations. The effectiveness of cyber defense can likely be enhanced if information and resources are shared among organizations that face similar threats. Three models were created to begin to understand the cognitive and social aspects of cyber cooperation. The first simulated a cooperative cyber security program between two organizations. The second focused on a cyber security training program in which participants interact (and potentially cooperate) to solve problems. The third built upon the first two models and simulates cooperation between organizations in an information-sharing program.
Early 2010 saw a significant change in adversarial techniques aimed at network intrusion: a shift from malware delivered via email attachments toward the use of hidden, embedded hyperlinks to initiate sequences of downloads and interactions with web sites and network servers containing malicious software. Enterprise security groups were well poised and experienced in defending against the former attacks, but the new types of attacks were larger in number, more challenging to detect, dynamic in nature, and required the development of new technologies and analytic capabilities. The Hybrid LDRD project was aimed at delivering new capabilities in large-scale data modeling and analysis to enterprise security operators and analysts and at understanding the challenges of detecting and preventing emerging cybersecurity threats. Leveraging previous LDRD research efforts and capabilities in large-scale relational data analysis, large-scale discrete data analysis and visualization, and streaming data analysis, new modeling and analysis capabilities were quickly brought to bear on the problems of email phishing and spear phishing attacks in the Sandia enterprise security operational groups at the onset of the Hybrid project. As part of this project, a software development and deployment framework was created within the security analyst workflow tool sets to facilitate the delivery and testing of new capabilities as they became available, and machine learning algorithms were developed to address the challenge of dynamic threats. Furthermore, researchers from the Hybrid project were embedded in the security analyst groups for almost a full year, engaged in daily operational activities and routines, creating an atmosphere of trust and collaboration between the researchers and security personnel. The Hybrid project has altered the way that research ideas can be incorporated into the production environments of Sandia's enterprise security groups, reducing time to deployment from months and years to hours and days for the application of new modeling and analysis capabilities to emerging threats. The development and deployment framework has been generalized into the Hybrid Framework and incorporated into several LDRD, WFO, and DOE/CSL projects and proposals. Most importantly, the Hybrid project has provided Sandia security analysts with new, scalable, extensible analytic capabilities that have resulted in alerts not detectable using their previous workflow tool sets.