Anoxic Corrosion of Steel and Lead in Na-Cl ± Mg-Dominated Brines in Atmospheres Containing CO₂
Abstract not provided.
Abstract not provided.
An efficient numerical algorithm for treating earth models composed of fluid and solid portions is obtained via straightforward modifications to a 3D time-domain finite-difference algorithm for simulating isotropic elastic wave propagation.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Modelling and Simulation in Materials Science and Engineering
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Before a spacecraft can be considered for launch, it must first survive environmental testing that simulates the launch environment. Typically, these simulations include vibration testing performed using an electro-dynamic shaker. For some spacecraft, however, acoustic excitation may provide a more severe loading environment than base shaker excitation. Because this was the case for a Sandia Flight System, it was necessary to perform an acoustic test prior to launch in order to verify survival of an acoustic environment. Typically, acoustic tests are performed in acoustic chambers, but because of scheduling, transportation, and cleanliness concerns, this was not possible. Instead, the test was performed as a direct field acoustic test (DFAT). This type of test consists of surrounding a test article with a wall of speakers and controlling the acoustic input using control microphones placed around the test item, with a closed-loop control system. Obtaining the desired acoustic input environment (proto-flight random noise input with an overall sound pressure level, OASPL, of 146.7 dB) with this technique presented a challenge due to several factors. An acoustic profile with this high an OASPL had not knowingly been obtained using the DFAT technique prior to this test. In addition, the test was performed in a high-bay, where floor space and existing equipment constrained the speaker circle diameter. Finally, the Flight System had to be tested without contamination of the unit, which required a contamination bag enclosure of the test unit. This paper describes in detail the logistics, challenges, and results encountered while performing a high-OASPL, direct-field acoustic test on a contamination-sensitive Flight System in a high-bay environment.
Combustion Theory and Modelling
Abstract not provided.
The Predictive Capability Maturity Model (PCMM) is a communication tool that must include a discussion of the supporting evidence. PCMM is a tool for managing risk in the use of modeling and simulation (M&S), and it serves to organize evidence that helps tell the M&S story. The PCMM table describes what activities within each element are undertaken at each level of maturity. Target levels of maturity can be established based on the intended application. The assessment informs what level has been achieved compared to the desired level, helps prioritize verification, validation, and uncertainty quantification activities, and guides the allocation of resources.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We have developed a novel modular automated processing system (MAPS) that enables reliable, high-throughput analysis as well as sample-customized processing. This system comprises a set of independent modules that carry out individual sample processing functions: cell lysis, protein concentration (based on hydrophobic, ion-exchange, and affinity interactions), interferent depletion, buffer exchange, and enzymatic digestion of proteins of interest. Taking advantage of its unique capacity for enclosed processing of intact bioparticulates (viruses, spores) and complex serum samples, we have used MAPS for analysis of BSL1 and BSL2 samples to identify specific protein markers through integration with the portable microChemLab™ and MALDI.
Abstract not provided.
Abstract not provided.
Abstract not provided.
A low-power modulator is monolithically integrated with a radiation-hardened CMOS driver. This integrated optoelectronic device demonstrates 1.68 mW power consumption at 2 Gbps.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Most climate scientists believe that climate geoengineering is best considered as a potential complement to the mitigation of CO₂ emissions, rather than as an alternative to it. Strong mitigation could achieve the equivalent of up to −4 W m⁻² radiative forcing on the century timescale, relative to a worst-case scenario for rising CO₂. However, to tackle the remaining 3 W m⁻², which is likely even in a best-case scenario of strongly mitigated CO₂ releases, a number of geoengineering options show promise. Injecting stratospheric aerosols is one of the least expensive and, potentially, most effective approaches, and for that reason an examination of the possible unintended consequences of atmospheric injections of sulphate aerosols was made. Chief among these are: reductions in rainfall, slowing of atmospheric ozone rebound, and differential changes in weather patterns. At the same time, there will be an increase in plant productivity. Lastly, because atmospheric sulphate injection would not mitigate ocean acidification, another side effect of fossil fuel burning, it would provide only a partial solution. Future research should aim at ameliorating the possible negative unintended consequences of atmospheric sulphate injections. This might include modeling the optimum rate, particle type, and particle size of aerosol injection, as well as the latitude, longitude, and altitude of injection sites, to balance radiative forcing and decrease negative regional impacts. Similarly, future research might include modeling the optimum rate of decrease and which injection sites to close, to reduce or slow the rapid warming expected upon cessation of aerosol injection. A fruitful area for future research might be system modeling to enhance the possible positive increases in agricultural productivity. All such modeling must be supported by data collection and laboratory and field testing to enable iterative modeling that increases the accuracy and precision of the models while reducing epistemic uncertainties.
The current state of the art in antineutrino detection is such that it is now possible to remotely monitor the operational status, power levels, and fissile content of nuclear reactors in real time. This non-invasive and incorruptible technique has been demonstrated at civilian power reactors in both Russia and the United States and has been of interest to the IAEA Novel Technologies Unit for several years. Experts' meetings were convened at IAEA headquarters in 2003 and again in 2008. The latter produced a report in which antineutrino detection was called a 'highly promising technology for safeguards applications' at nuclear reactors, and several near-term goals and suggested developments were identified to facilitate wider applicability. Over the last few years, we have been working to achieve some of these goals and improvements. Specifically, we have already demonstrated the successful operation of non-toxic detectors, and most recently we are testing a transportable, above-ground detector system that is fully contained within a standard 6-meter ISO container. If successful, such a system could allow easy deployment at any reactor facility around the world. In addition, our previously demonstrated ability to remotely monitor the data and respond in real time to reactor operational changes could allow the verification of operator declarations without the need for costly site visits. As the global nuclear power industry expands around the world, the burden of maintaining operational histories and safeguarding inventories will increase greatly. Such a system for providing remote data to verify operator declarations could greatly reduce the need for frequent site inspections while still providing a robust warning of anomalies requiring further investigation.
A brief overview of Sandia National Laboratories will be presented, highlighting the mission of the Engineering Science Center. The Engineering Science Center provides a wide range of capabilities to support the lab's missions. As part of the Engineering Science Center, the Aeroscience Department provides research, development, and application expertise in both experimental and computational compressible fluid mechanics. The role of Aeroscience at Sandia National Labs will be discussed, with a focus on current research and development activities within the Aeroscience Department. These activities will be presented within the framework of a current program to highlight the synergy between computational and experimental work. The research effort includes computational and experimental activities covering the fluid and structural dynamics disciplines. The presentation will touch on: probable excitation sources that yield the level of random vibration observed during flight; methods that have been developed to model the random pressure fields in the turbulent boundary layer using a combination of CFD codes and a model of turbulent boundary layer pressure fluctuations; experimental measurement of boundary layer fluctuations; and methods of translating the random pressure fields to time-domain, spatially correlated pressure fields.
During cavern leaching in the Strategic Petroleum Reserve (SPR), injected raw water mixes with resident brine and eventually interacts with the cavern salt walls. This report provides a record of data acquired during a series of experiments designed to measure the leaching rate of salt walls in a lab-scale simulated cavern, along with discussion of the data. These results should be of value for validating the computational fluid dynamics (CFD) models used to simulate leaching applications. Three experiments were run in the transparent 89-cm (35-inch) inner-diameter vessel previously used for several related projects. Diagnostics included tracking the salt wall dissolution rate using ultrasonics, an underwater camera to view pre-installed markers, and pre- and post-test weighing and measuring of the salt blocks that comprise the walls. In addition, profiles of the local brine/water conductivity and temperature were acquired at three locations by traversing conductivity probes to map out the mixing of injected raw water with the surrounding brine. The data are generally as expected, with stronger dissolution when the salt walls were exposed to water with lower salt saturation, and overall reasonable wall shape profiles. However, there are significant block-to-block variations, even between neighboring salt blocks, so the averaged data are considered more useful for model validation. The remedial leach tests clearly showed that less mixing and longer exposure time to unsaturated water led to higher levels of salt wall dissolution. The data for all three tests showed a dividing line between upper and lower regions, roughly above and below the fresh water injection point, with higher salt wall dissolution above the line in all cases, and concentration gradients above the line that were stronger for the remedial leach cases and weaker for the standard leach configuration.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We present quantitative fluid inclusion gas analysis on a suite of violently-formed glasses. We used the incremental crush mass spectrometry method (Norman & Blamey, 2001) to analyze eight pieces of Libyan Desert Glass (LDG). As potential analogues we also analyzed trinitite, three impact crater glasses, and three fulgurites. The 'clear' LDG has the lowest CO₂ content, and O₂/Ar ratios are two orders of magnitude lower than atmospheric. The 'foamy' glass samples have heterogeneous CO₂ contents and O₂/Ar ratios. N₂/Ar ratios are similar to atmospheric (83.6). H₂ and He are elevated, but it is difficult to confirm whether they are of terrestrial or meteoritic origin. Combustion cannot account for oxygen depletion that matches the amount of CO₂ produced. An alternative mechanism is required that removes oxygen without producing CO₂. Trinitite has exceedingly high CO₂, which we attribute to carbonate breakdown of the caliche at ground zero. The O₂/Ar ratio for trinitite is lower than atmospheric but higher than all LDG samples. N₂/Ar ratios closely match atmospheric. Samples from the Lonar, Henbury, and Aouelloul impact craters have atmospheric N₂/Ar ratios. O₂/Ar ratios at Lonar and Henbury are 9.5 to 9.9, whereas the O₂/Ar ratio is 0.1 for the Aouelloul sample. In most fulgurites the N₂/Ar ratio is higher than atmospheric, possibly due to interference from CO. Oxygen ranges from 1.3 to 19.3%. Gas signatures of LDG inclusions match neither those from the craters, trinitite, nor fulgurites. It is difficult to explain both the observed depletion of oxygen in the LDG and a CO₂ level that is lower than it would be if the CO₂ were simply a product of hydrocarbon combustion in air. One possible mechanism for oxygen depletion is that as air turbulently mixed with a hot jet of vaporized asteroid from an airburst and expanded, the atmospheric oxygen reacted with the metal vapor to form metal oxides that condensed. This observation is compatible with the model of Boslough & Crawford (2008), who suggest that an airburst incinerates organic materials over a large area, melting surface materials that then quench to form glass. Bubbles would contain a mixture of pre-existing atmosphere with combustion products from organic material and products of the reaction between vaporized cosmic materials (including metals) and the terrestrial surface and atmosphere.
Abstract not provided.
The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM detects anomalies and diagnoses problems that require maintenance. PHM tracks damage growth, predicts time to failure, and manages subsequent maintenance and operations so as to optimize overall system utility against cost. The benefits of CM/PHM are: (1) power conversion systems can be operated in ways that preclude predicted failures; (2) unscheduled downtime, and thereby cost, is reduced; and (3) reliability practice is pioneered for SiC and GaN devices.
Abstract not provided.
Keigwin (Science 274:1504-1508, 1996) reconstructed the sea surface temperature (SST) record in the northern Sargasso Sea to document natural climate variability in recent millennia. The annual average SST proxy used δ¹⁸O in planktonic foraminifera in a radiocarbon-dated 1990 Bermuda Rise box core. Keigwin's Fig. 4B (K4B) shows a 50-year-averaged time series along with four decades of SST measurements from Station S near Bermuda, demonstrating that the Sargasso Sea is now at its warmest in more than 400 years, and well above the most recent box-core temperature. Taken together, Station S and paleo-temperatures suggest there was an acceleration of warming in the 20th century, though this was not an explicit conclusion of the paper. Keigwin concluded that anthropogenic warming may be superposed on a natural warming trend. In an unpublished paper circulated with the anti-Kyoto 'Oregon Petition,' Robinson et al. ('Environmental Effects of Increased Atmospheric Carbon Dioxide,' 1998) reproduced K4B but (1) omitted Station S data, (2) incorrectly stated that the time series ended in 1975, (3) conflated Sargasso Sea data with global temperature, and (4) falsely claimed that Keigwin showed global temperatures 'are still a little below the average for the past 3,000 years.' Keigwin's Fig. 2 showed that δ¹⁸O has increased over the past 6000 years, so SSTs calculated from those data would have a long-term decrease. Thus, it is inappropriate to compare present-day SST to a long-term mean unless the trend is removed. Slight variations of Robinson et al. (1998) have been repeatedly published with different author rotations. Various mislabeled, improperly-drawn, and distorted versions of K4B have appeared in the Wall Street Journal, in weblogs, and even as an editorial cartoon, all supporting baseless claims that current temperatures are lower than the long-term mean, and all traceable to Robinson's misrepresentation with Station S data removed. In 2007, Robinson added a fictitious 2006 temperature that is significantly lower than the measured data. This doctored version of K4B with fabricated data was reprinted in a 2008 Heartland Institute advocacy report, 'Nature, Not Human Activity, Rules the Climate.'
The goals of this event are: (1) Discuss the next-generation issues and emerging risks in cyber security for control systems; (2) Review and discuss common control system architectures; (3) Discuss the role of policy, standards, and supply chain issues; (4) Interact to determine the most pertinent risks and most critical areas of the architecture; and (5) Merge feedback from Control System Managers, Engineers, IT, and Auditors.
Abstract not provided.
ASME Journal of Fuel Cell Science and Technology
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Google Earth and Google Maps are incredibly useful for researchers looking for easily-digestible displays of data. This presentation will provide a step-by-step tutorial on how to begin using Google Earth to create tools that further the mission of the DOE national lab complex.
ASME Journal of Fuel Cell Science and Technology
Abstract not provided.
Recently, molecular dynamics simulations (e.g., Gröger et al., Acta Materialia, vol. 56) have uncovered new insights into dislocation motion associated with plastic deformation of BCC metals. Those results indicate that the stress necessary for glide on {110}⟨111⟩ crystallographic slip systems, plus additional shear stresses along non-glide directions, may accurately characterize plastic flow in BCC crystals. Further, they are readily adaptable to micromechanical formulations used in crystal plasticity models. This presentation will discuss an adaptation into a classical mechanics framework for use in a large-scale, rate-dependent crystal plasticity model. The effects of incorporating the non-glide influences on an otherwise associative flow model are profound. Comparisons will be presented that show the effect of the non-glide stress components on tension-compression yield stress asymmetry and the evolution of texture in BCC crystals.
Physics of Fluids
Abstract not provided.
Abstract not provided.
Abstract not provided.
Sandia National Laboratories has developed several models to analyze potential consequences of homeland security incidents. Two of these models (the National Infrastructure Simulation and Analysis Center Agent-Based Laboratory for Economics, N-ABLE™, and Loki) simulate detailed facility- and product-level consequences of simulated disruptions to supply chains. Disruptions in supply chains are likely to reduce production of some commodities, which may reduce economic activity across many other types of supply chains throughout the national economy. The detailed nature of Sandia's models means that simulations are limited to specific supply chains in which detailed facility-level data has been collected, but policymakers are often concerned with the national-level economic impacts of supply-chain disruptions. A preliminary input-output methodology has been developed to estimate national-level economic impacts based upon the results of supply-chain-level simulations. This methodology overcomes two primary challenges. First, the methodology must be relatively simple to integrate successfully with existing models; it must be easily understood, easily applied to the supply-chain models without user intervention, and run quickly. The second challenge is more fundamental: the methodology must account for both upstream and downstream impacts that result from supply-chain disruptions. Input-output modeling typically estimates only upstream impacts, but shortages resulting from disruptions in many supply chains (for example, energy, communications, and chemicals) are likely to have large downstream impacts. In overcoming these challenges, the input-output methodology makes strong assumptions about technology and substitution. This paper concludes by applying the methodology to chemical supply chains.
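For illustration, the upstream half of such an input-output calculation can be sketched with the standard Leontief inverse; the three-sector coefficient matrix and disruption size below are invented for the example, not data from the models described, and the downstream (shortage-driven) impacts discussed above would require a separate supply-side treatment.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: entry A[i][j] is
# the input from sector i required per dollar of output from sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.20]])

# Direct output loss (dollars) from a simulated supply-chain disruption,
# here concentrated in sector 0.
delta_d = np.array([1.0e6, 0.0, 0.0])

# Total (direct plus upstream indirect) impact via the Leontief inverse:
# x = (I - A)^{-1} delta_d
leontief_inverse = np.linalg.inv(np.eye(3) - A)
total_impact = leontief_inverse @ delta_d
print(total_impact)  # per-sector output reduction
```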
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Antineutrino detection using inverse beta decay conversion has demonstrated the capability to measure nuclear reactor power and fissile material content for nuclear safeguards. Current efforts focus on aboveground deployment scenarios, for which highly efficient capture and identification of neutrons is needed to measure the anticipated antineutrino event rates in an elevated background environment. In this submission, we report on initial characterization of a new scintillation-based segmented design that uses layers of ZnS:Ag/⁶LiF and an integrated readout technique to capture and identify neutrons created in the inverse beta decay reaction. Laboratory studies with multiple organic scintillator and ZnS:Ag/⁶LiF configurations reliably identify ⁶Li neutron captures in 60 cm-long segments using pulse shape discrimination.
Springer Journal of Scientific Computing
Abstract not provided.
Polymer Preprints
Abstract not provided.
Abstract not provided.
Film durability is a primary factor governing the use of emerging thin film flexible substrate devices where compressive stresses can lead to delamination and buckling. It is of particular concern in gold film systems found in many submicron and nanoscale applications. We are therefore studying these effects in gold on PMMA systems using compressively stressed tungsten overlayers to force interfacial failure and simulations employing cohesive zone elements to model the fracture process. Delamination and buckling occurred spontaneously following deposition with buckle morphologies that differed significantly from existing model predictions. Moreover, use of thin adhesive interlayers had no discernible effect on performance. In this presentation we will use observations and simulations to show how substrate compliance and yielding affect the susceptibility to buckling of gold films on compliant substrates. We will also compare the fracture energies and buckle morphologies of this study with those of gold films on sapphire substrates to show how changing substrate compliance affects buckle formation.
We initiate the study of property testing of submodularity on the boolean hypercube. Submodular functions come up in a variety of applications in combinatorial optimization. For a vast range of algorithms, the existence of an oracle to a submodular function is assumed. But how does one check if this oracle indeed represents a submodular function? Consider a function f: {0, 1}ⁿ → ℝ. The distance to submodularity is the minimum fraction of values of f that need to be modified to make f submodular. If this distance is more than ε > 0, then we say that f is ε-far from being submodular. The aim is to have an efficient procedure that, given input f that is ε-far from being submodular, certifies that f is not submodular. We analyze a very natural tester for this problem, and prove that it runs in subexponential time. This gives the first non-trivial tester for submodularity. On the other hand, we prove an interesting lower bound (that is, unfortunately, quite far from the upper bound) suggesting that this tester cannot be very efficient in terms of ε. This involves non-trivial examples of functions which are far from submodular and yet do not exhibit too many local violations. We also provide some constructions indicating the difficulty in designing a tester for submodularity. We construct a partial function defined on exponentially many points that cannot be extended to a submodular function, but any strict subset of these values can be extended to a submodular function.
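A minimal sketch of a tester of this flavor, which rejects upon finding a local "square" violation of submodularity, is given below; the sampling distribution and trial count are illustrative placeholders, since the actual choices are part of the paper's analysis.

```python
import random

def square_violation(f, x, i, j):
    """Check one local submodularity inequality on the square above point x
    (where x[i] == x[j] == 0): f(x+e_i) + f(x+e_j) >= f(x) + f(x+e_i+e_j)."""
    xi = x[:i] + (1,) + x[i + 1:]
    xj = x[:j] + (1,) + x[j + 1:]
    xij = xi[:j] + (1,) + xi[j + 1:]
    return f(xi) + f(xj) < f(x) + f(xij)

def natural_tester(f, n, trials):
    """Sample random squares; reject (False) on any witnessed violation."""
    for _ in range(trials):
        i, j = random.sample(range(n), 2)
        x = tuple(0 if k in (i, j) else random.randint(0, 1)
                  for k in range(n))
        if square_violation(f, x, i, j):
            return False  # certificate: f is not submodular
    return True  # consistent with submodularity on the sampled squares

# Example: f(x) = -(sum(x))^2 is submodular (a concave function of |x|),
# so the tester should accept it.
print(natural_tester(lambda x: -sum(x) ** 2, n=8, trials=1000))
```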
The 'dual etalon frequency comb spectrometer' is a novel low-cost spectrometer with limited moving parts. A broad band light source (pulsed laser, LED, lamp ...) is split into two beam paths. One travels through an etalon and a sample gas, while the second arm is just an etalon cavity, and the two beams are recombined onto a single detector. If the free spectral ranges (FSR) of the two cavities are not identical, the intensity pattern at the detector will consist of a series of heterodyne frequencies. Each mode out of the sample arm etalon will have a unique frequency in the RF (radio-frequency) range, where modern electronics can easily record the signals. By monitoring these RF beat frequencies we can then determine when an optical frequency is absorbed. The resolution is set by the FSR of the cavity, typically 10 MHz, with a bandwidth up to 100s of cm⁻¹. In this report, the new spectrometer is described in detail and demonstration experiments on iodine absorption are carried out. Further, we discuss powerful potential next-generation steps toward developing this into a point sensor for monitoring combustion by-products, environmental pollutants, and warfare agents.
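A rough numerical sketch of the optical-to-RF mapping is shown below; all values (FSRs, FSR offset, mode numbers) are assumed for illustration and are not taken from the report. An absorbed optical mode would then appear as an attenuated tone at its corresponding RF beat frequency.

```python
# Two etalons with slightly different free spectral ranges (assumed values).
fsr_sample = 10.0e6          # Hz, sample-arm etalon FSR
fsr_ref = 10.0e6 + 2.0       # Hz, reference-arm FSR, offset by 2 Hz

# Etalon mode m sits near m * FSR. After recombination on one detector,
# mode m of the sample arm beats against mode m of the reference arm at
# m * |FSR_sample - FSR_ref|, tagging each optical mode with a unique RF.
delta_fsr = abs(fsr_sample - fsr_ref)
for m in range(30_000_000, 30_000_005):        # modes near ~300 THz
    optical_hz = m * fsr_sample
    beat_hz = m * delta_fsr
    print(f"mode {m}: {optical_hz / 1e12:.6f} THz -> "
          f"beat {beat_hz / 1e6:.3f} MHz")
```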
The analysis of continuous systems with nonlinearities in their domain has previously been limited to either numerical approaches, or analytical methods that are constrained in the parameter space, boundary conditions, or order of the system. The present analysis develops a robust method for studying continuous systems with arbitrary boundary conditions and nonlinearities using the assumption that the nonlinear constraint can be modeled with a piecewise-linear force-deflection constitutive relationship. Under this assumption, a superposition method is used to generate homogeneous boundary conditions, and modal analysis is used to find the displacement of the system in each state of the piecewise-linear nonlinearity. In order to map across each nonlinearity in the piecewise-linear force-deflection profile, a variational calculus approach is taken that minimizes the L2 energy norm between the previous and current states. To illustrate this method, a leaf spring coupled with a connector pin immersed in a viscous fluid is modeled as a beam with a piecewise-linear constraint. From the results of the convergence and parameter studies, a high correlation between the finite-time Lyapunov exponents and the contact time per period of the excitation is observed. The parameter studies also indicate that when the system's parameters are changed in order to reduce the magnitude of the velocity impact between the leaf spring and connector pin, the extent of the regions over which a chaotic response is observed increases.
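As a schematic of the ingredients named above, a one-break bilinear contact law and the L2 matching condition across a break might be written as follows; the symbols (k₁, k₂, δ_c, φₙ) are generic placeholders, not notation from the paper.

```latex
% Assumed piecewise-linear (bilinear) force-deflection constraint:
F(\delta) =
\begin{cases}
  k_1 \delta, & \delta \le \delta_c \\[2pt]
  k_1 \delta_c + k_2 (\delta - \delta_c), & \delta > \delta_c
\end{cases}
% Mapping across the break: choose the incoming modal amplitudes a_n to
% minimize the L2 energy norm between outgoing state u^- and incoming
% expansion u^+:
\min_{\{a_n\}} \; \big\lVert u^{+}(x) - u^{-}(x) \big\rVert_{L^2}^{2},
\qquad u^{+}(x) = \sum_n a_n \, \phi_n^{+}(x)
```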
Abstract not provided.
We present sublinear-time (randomized) algorithms for finding simple cycles of length at least k ≥ 3 and tree-minors in bounded-degree graphs. The complexity of these algorithms is related to the distance of the graph from being C_k-minor-free (resp., free from having the corresponding tree-minor). In particular, if the graph is far (i.e., Ω(1)-far) from being cycle-free, i.e. if one has to delete a constant fraction of edges to make it cycle-free, then the algorithm finds a cycle of polylogarithmic length in time Õ(√N), where N denotes the number of vertices. This time complexity is optimal up to polylogarithmic factors. The foregoing results are the outcome of our study of the complexity of one-sided error property testing algorithms in the bounded-degree graphs model. For example, we show that cycle-freeness of N-vertex graphs can be tested with one-sided error within time complexity Õ(poly(1/ε) · √N). This matches the known Ω(√N) query lower bound, and contrasts with the fact that any minor-free property admits a two-sided error tester of query complexity that only depends on the proximity parameter ε. For any constant k ≥ 3, we extend this result to testing whether the input graph has a simple cycle of length at least k. On the other hand, for any fixed tree T, we show that T-minor-freeness has a one-sided error tester of query complexity that only depends on the proximity parameter ε. Our algorithm for finding cycles in bounded-degree graphs extends to general graphs, where distances are measured with respect to the actual number of edges. Such an extension is not possible with respect to finding tree-minors in o(√N) complexity.
Abstract not provided.
Experiments in terrestrial laboratories can be used to evaluate the physical models that interpret astronomical observations. The properties of matter in astrophysical objects are essential components of these models, but terrestrial laboratories struggle to reproduce the extreme conditions that often exist. Megajoule-class DOE/NNSA facilities such as the National Ignition Facility and Z can create unprecedented amounts of matter at extreme conditions, providing new capabilities to test astrophysical models with high accuracy. Experiments at these large facilities are challenging, and access is very competitive. However, the cylindrically-symmetric Z source emits radiation in all directions, enabling multiple physics experiments to be driven with a single Z discharge. This helps ameliorate access limitations. This article describes research efforts under way at Sandia National Laboratories' Z facility investigating radiation transport through stellar interior matter, population kinetics of atoms exposed to the intense radiation emitted by accretion-powered objects, and spectral line formation in white dwarf (WD) photospheres. Opacity quantifies the absorption of radiation by matter and strongly influences stellar structure and evolution, since radiation dominates energy transport deep inside stars. Opacity models have become highly sophisticated, but laboratory tests at the conditions existing inside stars have not been possible until now. Z research is presently focused on measuring iron absorption at conditions relevant to the base of the solar convection zone, where the electron temperature and density are 190 eV and 9 × 10²² e/cc, respectively. Creating these conditions in a sample that is sufficiently large, long-lived, and uniform is extraordinarily challenging. A source of radiation that streams through the relatively large samples can produce volumetric heating and thus uniform conditions, but to achieve high temperatures a strong source is required. Z dynamic hohlraums provide such a megajoule-class source. Initial Z experiments measured transmission through iron samples ionized to the same charge states that exist at the solar convection zone base. The resulting data made it possible to test challenging aspects of the opacity calculations such as the ionization balance and the completeness and accuracy of the atomic energy level description. However, the density was too low to provide a definitive test of the physics at the solar convection zone base. Recent experiments have reached higher densities, and opacity model tests for stellar interiors now appear within reach. Accretion-powered objects, including active galactic nuclei, x-ray binaries, and black hole accretion disks, are the most luminous objects in the universe. Astrophysical models for these objects rely largely on comparing spectroscopic predictions with observations. A dilemma arises because the spectra originate from plasmas that are bathed in the enormous photon flux from the accretion disk, and photoionization dominates the atomic ionization and energy level populations. Thus, constraining astrophysical models depends on accurate atomic models for photoionized plasmas. Unfortunately, to date the ionization in almost all laboratory experiments is collision-dominated, and very few tests of photoionized plasma atomic kinetics exist. Megajoule-class high-energy-density facilities can help because they generate higher x-ray fluence over larger spatial scales and longer times.
Expanded iron foils and pre-filled neon gas cells have been used in experiments at Z to study photoionized atomic kinetics in two elements commonly observed in astrophysical objects. In these experiments, low density samples are exposed to a measured intense x-ray spectrum, emergent emission or absorption spectra are recorded, and the results are compared to predictions made with spectral synthesis codes used by astrophysicists. Initial experiments focused on testing models used to interpret spectra from the 'warm absorber,' a plasma observed in the vicinity of some active galactic nuclei. In future experiments, spectra from plasmas exposed to higher radiation intensities will be measured, possibly leading to improved understanding of the plasma in the immediate vicinity of the accretion disk. WDs are the oldest stars and can serve as cosmic clocks, since the universe must be at least as old as the objects within it. Astrophysicists determine WD ages using stellar models combined with effective temperature (T_eff) and mass inferred from spectral observations of WD photospheres. Many line profiles in the observed spectra are dominated by Stark broadening, a process sensitive to the photosphere density and related to the total mass through the stellar model. Accurate Stark broadening theory is, therefore, critical to the precise determination of the WD properties and the inferred ages.
Abstract not provided.
Abstract not provided.
Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned/contractor operated facility. Sandia Corporation (Sandia), a wholly owned subsidiary of Lockheed Martin Corporation (LMC), manages and operates the laboratory for the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA). The DOE/NNSA, Sandia Site Office (SSO) administers the contract and oversees contractor operations at the site. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2009. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and implementation of the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 450.1A, Environmental Protection Program (DOE 2008a) and DOE Manual 231.1-1A, Environment, Safety, and Health Reporting (DOE 2007).
Applied Physics Letters
In this work, we model the carrier recombination mechanisms in GaInN/GaN light-emitting diodes as R = An + Bn² + Cn³ + f(n), where f(n) represents carrier leakage out of the active region. The term f(n) is expanded into a power series and shown to have higher-than-third-order contributions to the recombination. The total third-order nonradiative coefficient (which may include an f(n) leakage contribution and an Auger contribution) is found to be 8×10⁻²⁹ cm⁶ s⁻¹. Finally, comparison of the theoretical ABC+f(n) model with experimental data shows that a good fit requires the inclusion of the f(n) term.
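Schematically, the fitted model can be restated as below; the grouping of the third-order terms is an illustrative reading of the abstract, not the paper's exact expansion.

```latex
% ABC + f(n) model with the leakage term expanded as a power series:
R(n) = A n + B n^2 + C_{3,\mathrm{tot}}\, n^3 + \sum_{k \ge 4} c_k\, n^k ,
\qquad
C_{3,\mathrm{tot}} = C_{\mathrm{Auger}} + c_{3,\mathrm{leak}}
\approx 8 \times 10^{-29}\ \mathrm{cm^6\,s^{-1}}
% where c_{3,leak} is the third-order part of the leakage term f(n) and
% the k >= 4 terms are its higher-than-third-order contributions.
```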
Journal of the American Chemical Society
We have demonstrated pressure-directed assembly for preparation of a new class of chemically and mechanically stable gold nanostructures through high pressure-driven sintering of nanoparticle assemblies at room temperature. We show that under a hydrostatic pressure field, the unit cell dimension of a 3D ordered nanoparticle array can be reversibly manipulated allowing fine-tuning of the interparticle separation distance. In addition, 3D nanostructured gold architecture can be formed through high pressure-induced nanoparticle sintering. This work opens a new pathway for engineering and fabrication of different metal nanostructured architectures. © 2010 American Chemical Society.
Proceedings of the International Conference on Dependable Systems and Networks
For future parallel-computing systems with as few as twenty-thousand nodes, we propose redundant computing to reduce the number of application interrupts. The frequency of faults in exascale systems will be so high that traditional checkpoint/restart methods will break down. Applications will experience interruptions so often that they will spend more time restarting and recovering lost work than computing the solution. We show that redundant computation at large scale can be cost effective and allows applications to complete their work in significantly less wall-clock time. On truly large systems, redundant computing can increase system throughput by an order of magnitude. © 2010 IEEE.
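A back-of-envelope sketch of why replication helps at this scale is given below; the MTBF, node count, and repair window are assumed round numbers for illustration, not the paper's model or data.

```python
# Without redundancy, any node failure interrupts the whole application.
node_mtbf_h = 43_800.0       # ~5 years per node (assumed)
nodes = 20_000
mtti_h = node_mtbf_h / nodes
print(f"plain:     application MTTI ~ {mtti_h:.2f} h")

# With 2-way replication (double the hardware, same number of ranks),
# a rank is lost only if both replicas fail within the same
# repair/rejuvenation window (assumed here to be 24 h).
window_h = 24.0
p_node = window_h / node_mtbf_h      # P(one node fails in the window)
p_pair = p_node ** 2                 # both replicas fail in the window
pairs = nodes
interrupts = pairs * p_pair          # expected interrupts per window
print(f"redundant: ~{interrupts:.4f} interrupts per {window_h:.0f} h window")
```

With these illustrative numbers the interrupt rate drops by roughly three orders of magnitude, which is the qualitative effect the abstract describes.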
SPEEDAM 2010 - International Symposium on Power Electronics, Electrical Drives, Automation and Motion
This paper discusses the modeling, analysis, and testing in a real-time simulation environment of the Lanai power grid system for the integration and control of photovoltaic (PV) distributed generation. The island of Lanai in Hawaii is part of the Hawaii Clean Energy Initiative (HCEI) to transition to 30% renewable green energy penetration by 2030. In Lanai the primary loads come from two Castle and Cooke resorts, in addition to residential needs. The distribution voltage is 12,470 V, with a total peak load of 5.5 MW. Currently, several diesel generators meet these loading requirements. As part of the HCEI, Lanai has initially installed 1.2 MW of PV generation. The goal of this study has been to evaluate the impact of the PV with respect to the conventional carbon-based diesel generation in real-time simulation. For intermittent PV distributed generation, the overall stability and transient responses are investigated. A simple Lanai-like model has been developed in the Matlab/Simulink environment [1] (see Fig. 1), and to accommodate real-time simulation of the hybrid power grid system the Opal-RT Technologies RT-Lab environment [2] is used. The diesel generators have been modelled using the SimPowerSystems toolbox [3] swing equations, and a custom Simulink module has been developed for the high-level PV generation. All of the loads have been characterized primarily as distribution lines with series resistive load banks, with one VAR load bank. Three-phase faults are implemented for each bus. Both conventional and advanced control architectures will be used to evaluate the integration of the PV onto the current power grid system. The baseline numerical results include the stable performance of the power grid during varying cloud cover (PV generation ramping up/down) scenarios. The importance of assessing the real-time scenario is included. © 2010 IEEE.
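For reference, the classical swing equation underlying such synchronous-generator rotor-dynamics models is shown below in its standard per-unit form; the parameter values used in the Lanai model are not reproduced here.

```latex
% Classical swing equation for a synchronous (diesel) generator:
\frac{2H}{\omega_s}\,\frac{d^2\delta}{dt^2}
  = P_m - P_e - D\,\frac{d\delta}{dt}
% H: inertia constant, \omega_s: synchronous speed, \delta: rotor angle,
% P_m, P_e: mechanical and electrical power, D: damping coefficient.
```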
Proceedings of the International Conference on Dependable Systems and Networks
Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. © 2010 IEEE.
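One very simple cost-benefit figure for a predictor can be sketched as follows; the accounting (compute-hours saved per day) and the parameter values are illustrative assumptions, whereas the paper develops fuller, data-driven methodologies.

```python
def expected_benefit(precision, recall, failures_per_day,
                     lost_work_h=0.5, migrate_cost_h=0.05):
    """Net compute-hours saved per day by acting on a failure predictor.

    True positives avoid redoing lost work (minus the migration cost);
    false positives pay the migration cost for nothing.
    """
    tp = recall * failures_per_day                       # failures caught
    fp = tp * (1.0 - precision) / max(precision, 1e-9)   # false alarms
    saved = tp * (lost_work_h - migrate_cost_h)
    wasted = fp * migrate_cost_h
    return saved - wasted

# e.g., a predictor with 70% precision and 40% recall on 10 failures/day:
print(f"{expected_benefit(0.7, 0.4, 10):.2f} compute-hours saved per day")
```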
Inorganic Chemistry
A synthesis of the bis(borano)hypophosphite anion with various counterions has been developed to make use of more benign and commercially available reagents. This method avoids the use of potentially dangerous reagents used by previous methods and gives the final products in good yield. Details of the crystal structure determination of the sodium salt in space group Ama2 are given using a novel computational technique combined with Rietveld refinement. © 2010 American Chemical Society.
International Congress on Advances in Nuclear Power Plants 2010, ICAPP 2010
High and very-high temperature gas-cooled reactors bring about unique challenges such as hot spots in the lower plate and thermal stratification in the lower plenum (LP). Analysis performed using Sandia National Laboratories' (SNL) Fuego computational fluid dynamics (CFD) code shows that these issues can be mitigated using static swirling inserts at the exit of the helium coolant channels to the LP. A full-scale, half-symmetry LP section is modeled using a numerical mesh that consists of 5.5 million hexahedral elements. The LP model includes the graphite support posts, the helium flow channel jets, the bottom plate, and the exterior walls. Calculations are performed for both conventional jets and clockwise and counter-clockwise swirling jets and varying swirl number, S, from 0 to 2.49. Calculations show that increasing S increases mixing and enhances heat transfer, thus reducing the likelihood of forming hot spots and thermal stratification in the LP.
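For context, the swirl number S used to characterize such jets is conventionally defined as the ratio of the axial flux of angular momentum to the axial flux of axial momentum scaled by the jet radius; this is the generic textbook form, and the paper's exact convention may differ.

```latex
S = \frac{G_\theta}{R\,G_x}
  = \frac{\displaystyle\int_0^R \rho\, u\, w\, r^2 \, dr}
         {R \displaystyle\int_0^R \rho\, u^2\, r \, dr}
% u: axial velocity, w: tangential velocity, r: radial coordinate,
% R: jet (channel) radius, \rho: density.
```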
International Congress on Advances in Nuclear Power Plants 2010, ICAPP 2010
A fission product release and transport model for High Temperature Gas-cooled Reactors (HTGRs) is being developed for the MELCOR code. HTGRs use fuel in the form of TRISO coated fuel particles embedded in a graphitized matrix. The HTGR fission product model for MELCOR is being developed to calculate the released amounts and distribution of fission products during normal operation and during accidents. The fission product release and transport model considers the important phenomena for fission product behavior in HTGRs, including the recoil and release of fission products from the fuel kernel, transport through the coating layers, transport through the surrounding fuel matrix, release into the circulating helium coolant, settling and plate-out on structural surfaces, adsorption by graphite dust in the primary system, and resuspension. The fraction of failed particles versus time is input via a response surface giving particle failure fraction as a function of fuel temperature and, potentially, fuel burn-up. Fission product release from the fuel kernel and transport through the particle coating layers are calculated using diffusion-based release models. The models account for fission product release from uranium contamination in the graphitized matrix, and adsorption of fission products in the reactor system. The dust and its distribution can be determined either from MELCOR calculations of the reactor system during normal operation, or provided by other sources as input. The distribution of fission products is then normalized using the ORIGEN inventory to provide initial conditions for accident calculations. For the initial releases during an accident, the existing MELCOR aerosol transport models, with appropriate modifications, are being explored for calculating dust and fission product transport in the reactor system and in the confinement. For the delayed releases during the accident, which occur over many hours and even days, fission product release is calculated by combining the diffusion-based release rate with the failure fraction response surface input via a convolution integral. The decay of fission products is also included in the modeling.
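The convolution mentioned above can be written schematically as follows; this is an illustrative form combining the failure-fraction response surface with the per-particle diffusion release, and the code's exact formulation may differ.

```latex
% Cumulative delayed release R(t): particles failing at time \tau, at rate
% dF/d\tau from the failure-fraction response surface F, each contribute
% the diffusion-based release r evaluated over their time since failure:
R(t) = \int_0^t \frac{dF}{d\tau}(\tau)\; r(t - \tau)\, d\tau
```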
Physical Review B - Condensed Matter and Materials Physics
The subsystem functional scheme is a promising approach recently proposed for constructing exchange-correlation density functionals. In this scheme, the physics in each part of real materials is described by mapping to a characteristic model system. The "confinement physics," an essential physical ingredient that has been left out in present functionals, is studied by employing the harmonic-oscillator (HO) gas model. By performing the potential→density and the density→exchange energy per particle mappings based on two model systems characterizing the physics in the interior (uniform electron-gas model) and surface regions (Airy gas model) of materials for the HO gases, we show that the confinement physics emerges when only the lowest subband of the HO gas is occupied by electrons. We examine the approximations of the exchange energy by several state-of-the-art functionals for the HO gas, and none of them produces adequate accuracy in the confinement dominated cases. A generic functional that incorporates the description of the confinement physics is needed. © 2010 The American Physical Society.
Validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes requires improved understanding, and temporally and spatially resolved integral-scale validation data, of the heat flux incident to a complex object, in addition to measurements of the thermal response of that object within the fire plume. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs to the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Sandia National Laboratories is investigating advanced Brayton cycles using supercritical working fluids for use with solar, nuclear, or fossil heat sources. The focus of this work has been on the supercritical CO₂ (S-CO₂) cycle, which has the potential for high efficiency in the temperature range of interest for these heat sources, and is also very compact, with the potential for lower capital costs. The first step in the development of these advanced cycles was the construction of a small-scale Brayton cycle loop, funded by the Laboratory Directed Research & Development program, to study the key issue of compression near the critical point of CO₂. This document outlines the design of the small-scale loop, describes the major components, presents models of system performance, including losses, leakage, windage, compressor performance, and flow map predictions, and finally describes the experimental results that have been generated.
Journal of Micromechanics and Microengineering
Mechanical stresses on microsystems die induced by packaging processes and varying environmental conditions can affect the performance and reliability of microsystems devices. Thermal microactuators and stress gauges were fabricated using the Sandia five-layer SUMMiT surface micromachining process and diced to fit in a four-point bending stage. The sample dies were tested under tension and compression at stresses varying from −250 MPa (compressive) to 200 MPa (tensile). Stress values were validated by both on-die stress gauges and micro-Raman spectroscopy measurements. Thermal microactuator displacement is measured for applied currents up to 35 mA as the mechanical stress is systematically varied. Increasing tensile stress decreases the initial actuator displacement. In most cases, the incremental thermal microactuator displacement from the zero-current value for a given applied current decreases when the die is stressed. Numerical model predictions of thermal microactuator displacement versus current agree with the experimental results. Quantitative information on the reduction in thermal microactuator displacement as a function of stress provides validation data for MEMS models and can guide future designs to be more robust to mechanical stresses. © 2010 IOP Publishing Ltd.
Progress in Surface Science
The development of scanning force microscopes that maintain precise control of the tip position using displacement control (DC-SFM) has allowed significant progress in understanding the relationships between the chemical and mechanical properties of soft interfaces. Here, developments in DC-SFM techniques and their applications are reviewed. Examples of material systems that have been investigated are discussed and compared to measurements with other techniques involving nanoprobe geometries to illustrate the achievements and promise in this area. Specifically discussed are applications to soft interfaces, including SAMs, lipid bilayers, confined fluids, polymer surfaces, ligand-receptor bonds, and soft metallic films. © 2010 Elsevier Ltd. All rights reserved.
Abstract not provided.
Abstract not provided.
This report assesses current public domain cyber security practices with respect to cyber indications and warnings. It describes cybersecurity industry and government activities, including cybersecurity tools, methods, practices, and international and government-wide initiatives known to be impacting current practice. Of particular note are the U.S. Government's Trusted Internet Connection (TIC) and 'Einstein' programs, which are serving to consolidate the Government's internet access points and to provide some capability to monitor and mitigate cyber attacks. Next, this report catalogs activities undertaken by various industry and government entities. In addition, it assesses the benchmarks of HPC capability and other HPC attributes that may lend themselves to assist in the solution of this problem. This report draws few conclusions, as it is intended to assess current practice in preparation for future work; notably, no explicit references to HPC usage for the purpose of analyzing cyber infrastructure in near-real-time were found in current practice. This report and the related report SAND2010-4766, National Cyber Defense High Performance Computing and Analysis: Concepts, Planning and Roadmap, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.
There is a national cyber dilemma that threatens the very fabric of government, commercial, and private-use operations worldwide. Much is written about 'what' the problem is; though the basis for this paper is an assessment of that problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation, and analysis, with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning, and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high-value, wide-area cybersecurity solutions. This report and the related report SAND2010-4765, Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.
As the capabilities of numerical simulations increase, decision makers are increasingly relying upon simulations rather than experiments to assess risks across a wide variety of accident scenarios, including fires. There are still, however, many aspects of fires that are either not well understood or are difficult to treat from first principles due to the computational expense. For a simulation to be truly predictive and to provide decision makers with information that can be reliably used for risk assessment, the remaining physical processes must be studied and suitable models developed for the effects of the physics. A set of experiments is outlined in this report that will provide soot volume fraction/temperature data and heat flux (intensity) data for the validation of models for the radiative transfer equation. In addition, a complete set of boundary condition measurements will be taken to allow full fire predictions for validation of the entire fire model. The experiments will be performed with a lightly-sooting liquid hydrocarbon fuel fire in the fully turbulent scale range (2 m diameter).
An expert opinion elicitation has been used to evaluate phenomena that could affect releases of radionuclides during accidents at sodium-cooled fast reactors. The intent was to identify research needed to develop a mechanistic model of radionuclide release for licensing and risk assessment purposes. Experts from the USA, France, the European Union, and Japan identified phenomena that could affect the release of radionuclides under hypothesized accident conditions. They qualitatively evaluated the importance of these phenomena and the need for additional experimental research. The experts identified seven phenomena that are of high importance and have a high need for additional experimental research: high temperature release of radionuclides from fuel during an energetic event, energetic interactions between molten reactor fuel and sodium coolant and associated transfer of radionuclides from the fuel to the coolant, entrainment of fuel and sodium bond material during the depressurization of a fuel rod with breached cladding, rates of radionuclide leaching from fuel by liquid sodium, surface enrichment of sodium pools by dissolved and suspended radionuclides, thermal decomposition of sodium iodide in the containment atmosphere, and reactions of iodine species in the containment to form volatile organic iodides. Other issues of high importance were identified that might merit further research as development of the mechanistic model of radionuclide release progressed.
The optoelectronic and excitonic properties in a series of linear acenes are investigated using range-separated methods within time-dependent density functional theory (TDDFT). In these highly-conjugated systems, we find that the range-separated formalism provides a substantially improved description of excitation energies compared to conventional hybrid functionals, which surprisingly fail for the various low-lying valence transitions. Moreover, we find that even if the percentage of Hartree-Fock exchange in conventional hybrids is re-optimized to match wavefunction-based CC2 benchmark calculations, they still yield serious errors in excitation energy trends. Based on an analysis of electron-hole transition density matrices, we also show that conventional hybrid functionals overdelocalize excitons and underestimate quasiparticle energy gaps in the acene systems. The results of the present study emphasize the importance of a range-separated and asymptotically-correct contribution of exchange in TDDFT for investigating optoelectronic and excitonic properties, even for these simple valence excitations.
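For reference, the range separation underlying such functionals is the standard erf/erfc partition of the Coulomb operator shown below; μ is the range-separation parameter, and the specific functionals used in the study are not reproduced here.

```latex
\frac{1}{r_{12}} \;=\;
\underbrace{\frac{\operatorname{erfc}(\mu r_{12})}{r_{12}}}_{\text{short range}}
\;+\;
\underbrace{\frac{\operatorname{erf}(\mu r_{12})}{r_{12}}}_{\text{long range}}
% Short-range exchange is treated semilocally; long-range exchange is
% treated with Hartree-Fock, restoring the correct asymptotic behavior
% of the exchange potential.
```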
The performance of a Ground Moving Target Indicator (GMTI) radar system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to 'get your arms around' the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall GMTI radar system. While the information herein is not new to the literature, collecting it into a single report may offer some value by reducing the 'seek time'.
This report summarizes the results of a one-year, feasibility-scale LDRD project that was conducted with the goal of developing new plastic scintillators capable of pulse shape discrimination (PSD) for neutron detection. Copolymers composed of matrix materials such as poly(methyl methacrylate) (PMMA) and blocks containing trans-stilbene (tSB) as the scintillator component were prepared and tested for gamma/neutron response. Block copolymer synthesis utilizing tSBMA proved unsuccessful so random copolymers containing up to 30% tSB were prepared. These copolymers were found to function as scintillators upon exposure to gamma radiation; however, they did not exhibit PSD when exposed to a neutron source. This project, while falling short of its ultimate goal, demonstrated the possible utility of single-component, undoped plastics as scintillators for applications that do not require PSD.
Abstract not provided.
Abstract not provided.