Publications

Direct-field acoustic testing of a flight system: logistics, challenges, and results

Stasiunas, Eric C.; Skousen, Troy J.; Babuska, Vit B.; Gurule, David J.

Before a spacecraft can be considered for launch, it must first survive environmental testing that simulates the launch environment. Typically, these simulations include vibration testing performed using an electro-dynamic shaker. For some spacecraft, however, acoustic excitation may provide a more severe loading environment than base shaker excitation. Because this was the case for a Sandia Flight System, it was necessary to perform an acoustic test prior to launch in order to verify survival in an acoustic environment. Typically, acoustic tests are performed in acoustic chambers, but because of scheduling, transportation, and cleanliness concerns, this was not possible. Instead, the test was performed as a direct field acoustic test (DFAT). This type of test consists of surrounding a test article with a wall of speakers and controlling the acoustic input, using control microphones placed around the test item with a closed-loop control system. Obtaining the desired acoustic input environment (proto-flight random noise input with an overall sound pressure level, OASPL, of 146.7 dB) with this technique presented a challenge due to several factors. An acoustic profile with this high an OASPL had not, to our knowledge, been obtained using the DFAT technique prior to this test. In addition, the test was performed in a high-bay, where floor space and existing equipment constrained the speaker circle diameter. And finally, the Flight System had to be tested without contamination of the unit, which required enclosing the test unit in a contamination bag. This paper describes in detail the logistics, challenges, and results encountered while performing a high-OASPL, direct-field acoustic test on a contamination-sensitive Flight System in a high-bay environment.


Predictive Capability Maturity Model (PCMM)

Swiler, Laura P.; Knupp, Patrick K.

The Predictive Capability Maturity Model (PCMM) is a communication tool that must include a discussion of the supporting evidence. PCMM is a tool for managing risk in the use of modeling and simulation (M&S): it organizes evidence to help tell the M&S story. The PCMM table describes which activities within each element are undertaken at each level of maturity, and target levels of maturity can be established based on the intended application. The assessment informs what level has been achieved compared to the desired level, helping to prioritize VU activities and to allocate resources.


Modular Automated Processing System (MAPS) for analysis of biological samples

Gil, Geun-Cheol G.; Throckmorton, Daniel J.; Brennan, James S.; Schoeniger, Joseph S.; VanderNoot, Victoria A.; Fruetel, Julia A.; Branda, Steven B.

We have developed a novel modular automated processing system (MAPS) that enables reliable, high-throughput analysis as well as sample-customized processing. The system comprises a set of independent modules that carry out individual sample processing functions: cell lysis, protein concentration (based on hydrophobic, ion-exchange, and affinity interactions), interferent depletion, buffer exchange, and enzymatic digestion of proteins of interest. Taking advantage of its unique capacity for enclosed processing of intact bioparticulates (viruses, spores) and complex serum samples, we have used MAPS for analysis of BSL1 and BSL2 samples to identify specific protein markers through integration with the portable microChemLab™ and MALDI.


Unintended consequences of atmospheric injection of sulphate aerosols

Goldstein, Barry G.; Kobos, Peter H.

Most climate scientists believe that climate geoengineering is best considered as a potential complement to the mitigation of CO₂ emissions, rather than as an alternative to it. Strong mitigation could achieve the equivalent of up to −4 W m⁻² radiative forcing on the century timescale, relative to a worst-case scenario for rising CO₂. However, to tackle the remaining 3 W m⁻², which is likely even in a best-case scenario of strongly mitigated CO₂ releases, a number of geoengineering options show promise. Injecting stratospheric aerosols is one of the least expensive and, potentially, most effective approaches, and for that reason we examined the possible unintended consequences of implementing atmospheric injections of sulphate aerosols. Chief among these are reductions in rainfall, slowing of atmospheric ozone rebound, and differential changes in weather patterns. At the same time, there will be an increase in plant productivity. Lastly, because atmospheric sulphate injection would not mitigate ocean acidification, another side effect of fossil fuel burning, it would provide only a partial solution. Future research should aim at ameliorating the possible negative unintended consequences of sulphate aerosol injection. This might include modeling the optimum rate, particle type, and particle size of aerosol injection, as well as the latitude, longitude, and altitude of injection sites, to balance radiative forcing while decreasing negative regional impacts. Similarly, future research might include modeling the optimum rate of decrease, and which injection sites to close, to reduce or slow rapid warming upon cessation of aerosol injection. A fruitful area for future research might be system modeling to enhance the possible positive increases in agricultural productivity. All such modeling must be supported by data collection and by laboratory and field testing, enabling iterative modeling that increases the accuracy and precision of the models while reducing epistemic uncertainties.


Remote safeguards and monitoring of reactors with antineutrinos

Reyna, David R.

The current state of the art in antineutrino detection is such that it is now possible to remotely monitor the operational status, power levels, and fissile content of nuclear reactors in real time. This non-invasive and incorruptible technique has been demonstrated at civilian power reactors in both Russia and the United States and has been of interest to the IAEA Novel Technologies Unit for several years. Experts' meetings were convened at IAEA headquarters in 2003 and again in 2008. The latter produced a report in which antineutrino detection was called a 'highly promising technology for safeguards applications' at nuclear reactors, and several near-term goals and suggested developments were identified to facilitate wider applicability. Over the last few years, we have been working to achieve some of these goals and improvements. Specifically, we have already demonstrated the successful operation of non-toxic detectors, and most recently we are testing a transportable, above-ground detector system that is fully contained within a standard 6-meter ISO container. If successful, such a system could allow easy deployment at any reactor facility around the world. In addition, our previously demonstrated ability to remotely monitor the data and respond in real time to reactor operational changes could allow the verification of operator declarations without the need for costly site visits. As the global nuclear power industry expands, the burden of maintaining operational histories and safeguarding inventories will increase greatly. Such a system for providing remote data to verify operators' declarations could greatly reduce the need for frequent site inspections while still providing a robust warning of anomalies requiring further investigation.


Aerosciences at Sandia National Labs

Payne, Jeffrey L.

A brief overview of Sandia National Laboratories will be presented, highlighting the mission of the Engineering Science Center. The Engineering Science Center provides a wide range of capabilities to support the laboratory's missions. Within the center, the Aerosciences department provides research, development, and application expertise in both experimental and computational compressible fluid mechanics. The role of aerosciences at Sandia National Labs will be discussed, with a focus on current research and development activities within the department. These activities will be presented within the framework of a current program to highlight the synergy between computational and experimental work. The research effort includes computational and experimental activities covering the fluid and structural dynamics disciplines. The presentation will touch on: probable excitation sources that yield the level of random vibration observed during flight; methods that have been developed to model the random pressure fields in the turbulent boundary layer using a combination of CFD codes and a model of turbulent boundary-layer pressure fluctuations; experimental measurement of boundary-layer fluctuations; and methods of translating the random pressure fields to time-domain, spatially correlated pressure fields.


SPR salt wall leaching experiments in lab-scale vessel: data report

Webb, Stephen W.

During cavern leaching in the Strategic Petroleum Reserve (SPR), injected raw water mixes with resident brine and eventually interacts with the cavern salt walls. This report provides a record of data acquired during a series of experiments designed to measure the leaching rate of salt walls in a lab-scale simulated cavern, along with discussion of the data. These results should be of value for validating computational fluid dynamics (CFD) models used to simulate leaching applications. Three experiments were run in the transparent 89-cm (35-inch) inner-diameter vessel previously used for several related projects. Diagnostics included tracking the salt wall dissolution rate using ultrasonics, an underwater camera to view pre-installed markers, and pre- and post-test weighing and measurement of the salt blocks that comprise the walls. In addition, profiles of the local brine/water conductivity and temperature were acquired at three locations by traversing conductivity probes to map out the mixing of injected raw water with the surrounding brine. The data are generally as expected, with stronger dissolution when the salt walls were exposed to water with lower salt saturation, and overall reasonable wall shape profiles. However, there are significant block-to-block variations, even between neighboring salt blocks, so the averaged data are considered more useful for model validation. The remedial leach tests clearly showed that less mixing and longer exposure time to unsaturated water led to higher levels of salt wall dissolution. The data for all three tests showed a dividing line between upper and lower regions, roughly above and below the fresh water injection point, with higher salt wall dissolution above the dividing line in all cases, and with stronger (for the remedial leach cases) or weaker (for the standard leach configuration) concentration gradients above that line.


Quantitative fluid inclusion gas analysis of airburst, nuclear, impact and fulgurite glasses

We present quantitative fluid inclusion gas analysis on a suite of violently formed glasses. We used the incremental crush mass spectrometry method (Norman & Blamey, 2001) to analyze eight pieces of Libyan Desert Glass (LDG). As potential analogues we also analyzed trinitite, three impact crater glasses, and three fulgurites. The 'clear' LDG has the lowest CO₂ content, and its O₂/Ar ratios are two orders of magnitude lower than atmospheric. The 'foamy' glass samples have heterogeneous CO₂ contents and O₂/Ar ratios. N₂/Ar ratios are similar to atmospheric (83.6). H₂ and He are elevated, but it is difficult to confirm whether they are of terrestrial or meteoritic origin. Combustion cannot account for an oxygen depletion that matches the amount of CO₂ produced; an alternative mechanism is required that removes oxygen without producing CO₂. Trinitite has exceedingly high CO₂, which we attribute to carbonate breakdown of the caliche at ground zero. The O₂/Ar ratio for trinitite is lower than atmospheric but higher than in all LDG samples. N₂/Ar ratios closely match atmospheric. Samples from the Lonar, Henbury, and Aouelloul impact craters have atmospheric N₂/Ar ratios. O₂/Ar ratios at Lonar and Henbury are 9.5 to 9.9, whereas the O₂/Ar ratio is 0.1 for the Aouelloul sample. In most fulgurites the N₂/Ar ratio is higher than atmospheric, possibly due to interference from CO. Oxygen ranges from 1.3 to 19.3%. Gas signatures of LDG inclusions match neither those from the craters nor those from trinitite or fulgurites. It is difficult to explain both the observed depletion of oxygen in the LDG and a CO₂ level that is lower than it would be if the CO₂ were simply a product of hydrocarbon combustion in air. One possible mechanism for oxygen depletion is that as air turbulently mixed with a hot, expanding jet of vaporized asteroid from an airburst, the atmospheric oxygen reacted with the metal vapor to form metal oxides that condensed. This observation is compatible with the model of Boslough & Crawford (2008), who suggest that an airburst incinerates organic materials over a large area, melting surface materials that then quench to form glass. Bubbles would contain a mixture of pre-existing atmosphere with combustion products from organic material and products of the reaction between vaporized cosmic materials (including metals) and the terrestrial surface and atmosphere.


Power electronics reliability

Smith, Mark A.; Kaplar, Robert K.; Marinella, Matthew J.; Stanley, James B.

The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM detects anomalies and diagnoses problems that require maintenance. PHM tracks damage growth, predicts time to failure, and manages subsequent maintenance and operations in such a way as to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operating power conversion systems in ways that preclude predicted failures; (2) reducing unscheduled downtime and thereby reducing costs; and (3) pioneering reliability in SiC and GaN.


Misrepresentations of Sargasso Sea temperatures by Arthur B. Robinson et al.

Boslough, Mark B.

Keigwin (Science 274:1504-1508, 1996) reconstructed the sea surface temperature (SST) record in the northern Sargasso Sea to document natural climate variability in recent millennia. The annual average SST proxy used δ¹⁸O in planktonic foraminifera in a radiocarbon-dated 1990 Bermuda Rise box core. Keigwin's Fig. 4B (K4B) shows a 50-year-averaged time series along with four decades of SST measurements from Station S near Bermuda, demonstrating that the Sargasso Sea is now at its warmest in more than 400 years, and well above the most recent box-core temperature. Taken together, Station S and the paleo-temperatures suggest there was an acceleration of warming in the 20th century, though this was not an explicit conclusion of the paper. Keigwin concluded that anthropogenic warming may be superposed on a natural warming trend. In an unpublished paper circulated with the anti-Kyoto 'Oregon Petition,' Robinson et al. ('Environmental Effects of Increased Atmospheric Carbon Dioxide,' 1998) reproduced K4B but (1) omitted the Station S data, (2) incorrectly stated that the time series ended in 1975, (3) conflated Sargasso Sea data with global temperature, and (4) falsely claimed that Keigwin showed global temperatures 'are still a little below the average for the past 3,000 years.' Keigwin's Fig. 2 showed that δ¹⁸O has increased over the past 6000 years, so SSTs calculated from those data would have a long-term decrease. Thus, it is inappropriate to compare present-day SST to a long-term mean unless the trend is removed. Slight variations of Robinson et al. (1998) have been repeatedly published with different author rotations. Various mislabeled, improperly drawn, and distorted versions of K4B have appeared in the Wall Street Journal, in weblogs, and even as an editorial cartoon, all supporting baseless claims that current temperatures are lower than the long-term mean, and all traceable to Robinson's misrepresentation with the Station S data removed. In 2007, Robinson added a fictitious 2006 temperature that is significantly lower than the measured data. This doctored version of K4B with fabricated data was reprinted in a 2008 Heartland Institute advocacy report, 'Nature, Not Human Activity, Rules the Climate.'
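
The detrending point can be illustrated numerically: on any series with a long-term trend, the most recent value sits below the raw mean even when it lies exactly on the trend line. A toy sketch with synthetic numbers (not Keigwin's actual δ¹⁸O record):

```python
import numpy as np

# Toy series with a steady long-term cooling trend, standing in for the
# 6000-year proxy-derived SST decline described above (synthetic data).
t = np.arange(100.0)
series = 20.0 - 0.02 * t          # steadily cooling, no noise

# Naive comparison: the present value sits below the raw long-term mean,
# even though nothing anomalous is happening.
present_below_mean = series[-1] < series.mean()

# After removing the linear trend, the present value is right on trend.
trend = np.polyval(np.polyfit(t, series, 1), t)
on_trend = abs((series - trend)[-1]) < 1e-9

print(present_below_mean, on_trend)   # True True
```

This is why a "below the 3,000-year average" claim is meaningless for a trending record unless the trend is removed first.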


Critical issues in process control system security: DHS spares project

McIntyre, Annie M.; Hernandez, Jacquelynne H.

The goals of this event are: (1) Discuss the next-generation issues and emerging risks in cyber security for control systems; (2) Review and discuss common control system architectures; (3) Discuss the role of policy, standards, and supply chain issues; (4) Interact to determine the most pertinent risks and most critical areas of the architecture; and (5) Merge feedback from Control System Managers, Engineers, IT, and Auditors.


A micromechanical framework for simulating the deformation response of BCC metals

Battaile, Corbett C.; Weinberger, Christopher R.

Recently, molecular dynamics simulations (e.g., Groger et al., Acta Materialia, vol. 56) have uncovered new insights into the dislocation motion associated with plastic deformation of BCC metals. Those results indicate that the stress necessary for glide on {110}⟨111⟩ crystallographic systems, plus additional shear stresses along non-glide directions, may accurately characterize plastic flow in BCC crystals. Further, they are readily adaptable to the micromechanical formulations used in crystal plasticity models. This presentation will discuss an adaptation into a classical mechanics framework for use in a large-scale, rate-dependent crystal plasticity model. The effects of incorporating the non-glide influences on an otherwise associative flow model are profound. Comparisons will be presented that show the effect of the non-glide stress components on tension-compression yield stress asymmetry and on the evolution of texture in BCC crystals.


An input-output procedure for calculating economy-wide economic impacts in supply chains using homeland security consequence analysis tools

Warren, Drake E.; Vargas, Vanessa N.; Smith, Braeton J.; Vugrin, Eric D.

Sandia National Laboratories has developed several models to analyze potential consequences of homeland security incidents. Two of these models (the National Infrastructure Simulation and Analysis Center Agent-Based Laboratory for Economics, N-ABLE™, and Loki) simulate detailed facility- and product-level consequences of simulated disruptions to supply chains. Disruptions in supply chains are likely to reduce production of some commodities, which may reduce economic activity across many other types of supply chains throughout the national economy. The detailed nature of Sandia's models means that simulations are limited to specific supply chains for which detailed facility-level data have been collected, but policymakers are often concerned with the national-level economic impacts of supply-chain disruptions. A preliminary input-output methodology has been developed to estimate national-level economic impacts based upon the results of supply-chain-level simulations. This methodology overcomes two primary challenges. First, the methodology must be relatively simple to integrate successfully with existing models; it must be easily understood, easily applied to the supply-chain models without user intervention, and quick to run. The second challenge is more fundamental: the methodology must account for both upstream and downstream impacts that result from supply-chain disruptions. Input-output modeling typically estimates only upstream impacts, but shortages resulting from disruptions in many supply chains (for example, energy, communications, and chemicals) are likely to have large downstream impacts. In overcoming these challenges, the input-output methodology makes strong assumptions about technology and substitution. This paper concludes by applying the methodology to chemical supply chains.
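
The upstream side of the input-output calculation referred to above is the standard Leontief relation x = (I − A)⁻¹ d. A minimal sketch with a made-up three-sector coefficients matrix (illustrative values, not data from the Sandia models):

```python
import numpy as np

# Hypothetical technical-coefficients matrix A: column j gives the dollars
# of input required from each sector per dollar of sector j's output.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])

# Leontief inverse (I - A)^-1 maps a final-demand change to the total
# output change, capturing upstream (supplier) ripple effects only.
L = np.linalg.inv(np.eye(3) - A)

delta_demand = np.array([-10.0, 0.0, 0.0])   # e.g., a shortfall in sector 0
delta_output = L @ delta_demand
print(delta_output)   # losses propagate to all three sectors
```

Note the multiplier effect: the total output loss in sector 0 exceeds the direct 10-unit demand shortfall, and the other sectors also contract. Capturing the downstream impacts described in the abstract requires going beyond this standard demand-driven formulation.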


Integrated readout of organic scintillator and ZnS:Ag/⁶LiF for segmented antineutrino detectors

Kiff, Scott D.; Reyna, David R.

Antineutrino detection using inverse beta decay conversion has demonstrated the capability to measure nuclear reactor power and fissile material content for nuclear safeguards. Current efforts focus on above-ground deployment scenarios, for which highly efficient capture and identification of neutrons is needed to measure the anticipated antineutrino event rates in an elevated background environment. In this submission, we report on initial characterization of a new scintillation-based segmented design that uses layers of ZnS:Ag/⁶LiF and an integrated readout technique to capture and identify neutrons created in the inverse beta decay reaction. Laboratory studies with multiple organic scintillator and ZnS:Ag/⁶LiF configurations reliably identify ⁶Li neutron captures in 60-cm-long segments using pulse shape discrimination.


Substrate compliance effects on buckle driven delamination in thin gold film systems

Moody, Neville R.; Reedy, Earl D.; Corona, Edmundo C.; Adams, David P.

Film durability is a primary factor governing the use of emerging thin film flexible substrate devices, where compressive stresses can lead to delamination and buckling. It is of particular concern in gold film systems found in many submicron and nanoscale applications. We are therefore studying these effects in gold on PMMA systems using compressively stressed tungsten overlayers to force interfacial failure, together with simulations employing cohesive zone elements to model the fracture process. Delamination and buckling occurred spontaneously following deposition, with buckle morphologies that differed significantly from existing model predictions. Moreover, use of thin adhesive interlayers had no discernible effect on performance. In this presentation we will use observations and simulations to show how substrate compliance and yielding affect the susceptibility to buckling of gold films on compliant substrates. We will also compare the fracture energies and buckle morphologies of this study with those of gold films on sapphire substrates to show how changing substrate compliance affects buckle formation.


Is submodularity testable?

We initiate the study of property testing of submodularity on the boolean hypercube. Submodular functions come up in a variety of applications in combinatorial optimization. For a vast range of algorithms, the existence of an oracle to a submodular function is assumed. But how does one check whether this oracle indeed represents a submodular function? Consider a function f: {0, 1}ⁿ → ℝ. The distance to submodularity is the minimum fraction of values of f that need to be modified to make f submodular. If this distance is more than ε > 0, then we say that f is ε-far from being submodular. The aim is to have an efficient procedure that, given an input f that is ε-far from being submodular, certifies that f is not submodular. We analyze a very natural tester for this problem, and prove that it runs in subexponential time. This gives the first non-trivial tester for submodularity. On the other hand, we prove an interesting lower bound (that is, unfortunately, quite far from the upper bound) suggesting that this tester cannot be very efficient in terms of ε. This involves non-trivial examples of functions which are far from submodular and yet do not exhibit too many local violations. We also provide some constructions indicating the difficulty of designing a tester for submodularity. We construct a partial function defined on exponentially many points that cannot be extended to a submodular function, even though any strict subset of these values can be extended to a submodular function.
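
For concreteness, the property being tested is that f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T) for all S, T ⊆ [n]. An exhaustive checker (exponential in n, in contrast to the subexponential tester the paper analyzes) can be sketched as:

```python
def is_submodular(f, n, tol=1e-9):
    """Brute-force check that f: {0,1}^n -> R is submodular, i.e.
    f(S | T) + f(S & T) <= f(S) + f(T) for all S, T.
    Sets are encoded as n-bit masks; runs in O(4^n) evaluations."""
    for s in range(1 << n):
        for t in range(1 << n):
            if f(s | t) + f(s & t) > f(s) + f(t) + tol:
                return False
    return True

# |S| is modular (equality holds), hence submodular; |S|^2 is not:
# for disjoint singletons, f(union) + f(inter) = 4 > 1 + 1 = f(S) + f(T).
card = lambda s: bin(s).count("1")
print(is_submodular(card, 4))                 # True
print(is_submodular(lambda s: card(s) ** 2, 4))  # False
```

The tester studied in the paper avoids this exhaustive sweep by querying only a small sample of values, which is exactly why functions that are far from submodular yet have few local violations make the analysis hard.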


Dual-etalon, cavity-ring-down, frequency comb spectroscopy

Chandler, D.W.; Strecker, Kevin S.

The 'dual etalon frequency comb spectrometer' is a novel, low-cost spectrometer with few moving parts. A broadband light source (pulsed laser, LED, lamp, etc.) is split into two beam paths. One travels through an etalon and a sample gas, while the second arm contains only an etalon cavity, and the two beams are recombined onto a single detector. If the free spectral ranges (FSRs) of the two cavities are not identical, the intensity pattern at the detector will consist of a series of heterodyne frequencies. Each mode out of the sample-arm etalon will have a unique beat frequency in the RF (radio-frequency) range, where modern electronics can easily record the signals. By monitoring these RF beat frequencies we can then determine when an optical frequency is absorbed. The resolution is set by the FSR of the cavity, typically 10 MHz, with a bandwidth of up to hundreds of cm⁻¹. In this report, the new spectrometer is described in detail and demonstration experiments on iodine absorption are carried out. Further, we discuss powerful potential next-generation steps toward developing this into a point sensor for monitoring combustion by-products, environmental pollutants, and warfare agents.
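
The optical-to-RF mapping described above can be sketched numerically: a small FSR mismatch dF tags the m-th pair of cavity modes with a unique beat at m·dF, so absorption at one optical mode appears as a missing RF tone. The FSR values below are assumed for illustration, not the instrument's actual parameters:

```python
# Hypothetical FSRs near the ~10 MHz regime quoted in the abstract.
FSR_SAMPLE = 10.000e6   # Hz, sample-arm etalon (assumed value)
FSR_REF    = 10.001e6   # Hz, reference-arm etalon (assumed value)

def beat_frequency(m):
    """RF heterodyne frequency produced by the m-th pair of cavity modes."""
    return m * abs(FSR_SAMPLE - FSR_REF)

def optical_offset(m):
    """Optical frequency offset of sample-arm mode m from the comb origin."""
    return m * FSR_SAMPLE

# Mode 500 beats at 0.5 MHz while sitting 5 GHz up the optical comb:
print(beat_frequency(500), optical_offset(500))
```

With these numbers the entire optical comb is compressed into an RF band that ordinary digitizers can record, which is the central idea of the technique.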


A hybrid approach for the modal analysis of continuous systems with localized nonlinear constraints

Brake, Matthew R.

The analysis of continuous systems with nonlinearities in their domain has previously been limited either to numerical approaches or to analytical methods that are constrained in the parameter space, boundary conditions, or order of the system. The present analysis develops a robust method for studying continuous systems with arbitrary boundary conditions and nonlinearities, using the assumption that the nonlinear constraint can be modeled with a piecewise-linear force-deflection constitutive relationship. Under this assumption, a superposition method is used to generate homogeneous boundary conditions, and modal analysis is used to find the displacement of the system in each state of the piecewise-linear nonlinearity. In order to map across each transition in the piecewise-linear force-deflection profile, a variational calculus approach is taken that minimizes the L2 energy norm between the previous and current states. To illustrate this method, a leaf spring coupled with a connector pin immersed in a viscous fluid is modeled as a beam with a piecewise-linear constraint. From the results of the convergence and parameter studies, a high correlation between the finite-time Lyapunov exponents and the contact time per period of the excitation is observed. The parameter studies also indicate that when the system's parameters are changed to reduce the magnitude of the impact velocity between the leaf spring and the connector pin, the extent of the regions over which a chaotic response is observed increases.
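
A piecewise-linear force-deflection constitutive relationship of the kind assumed above can be illustrated with a simple gap-contact law; the gap size and contact stiffness below are hypothetical values, not parameters from the paper:

```python
def piecewise_linear_force(x, gap=1e-3, k_contact=1e5):
    """Piecewise-linear force-deflection law: zero force until the
    deflection x (m) closes the gap, then a linear contact stiffness
    k_contact (N/m). Values are illustrative only."""
    if x <= gap:
        return 0.0          # state 1: no contact
    return k_contact * (x - gap)   # state 2: linear restoring force

print(piecewise_linear_force(0.5e-3))   # inside the gap: no contact force
print(piecewise_linear_force(2.0e-3))   # gap closed: linear contact force
```

Each linear branch corresponds to one "state" of the nonlinearity in which ordinary modal analysis applies; the method's work lies in stitching the modal solutions together at the transitions between branches.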


Finding cycles and trees in sublinear time

We present sublinear-time (randomized) algorithms for finding simple cycles of length at least k ≥ 3 and tree-minors in bounded-degree graphs. The complexity of these algorithms is related to the distance of the graph from being C_k-minor-free (resp., free from having the corresponding tree-minor). In particular, if the graph is far (i.e., Ω(1)-far) from being cycle-free, i.e., if one has to delete a constant fraction of edges to make it cycle-free, then the algorithm finds a cycle of polylogarithmic length in time Õ(√N), where N denotes the number of vertices. This time complexity is optimal up to polylogarithmic factors. The foregoing results are the outcome of our study of the complexity of one-sided error property testing algorithms in the bounded-degree graphs model. For example, we show that cycle-freeness of N-vertex graphs can be tested with one-sided error within time complexity Õ(poly(1/ε) · √N). This matches the known Ω(√N) query lower bound, and contrasts with the fact that any minor-free property admits a two-sided error tester of query complexity that depends only on the proximity parameter ε. For any constant k ≥ 3, we extend this result to testing whether the input graph has a simple cycle of length at least k. On the other hand, for any fixed tree T, we show that T-minor-freeness has a one-sided error tester of query complexity that depends only on the proximity parameter ε. Our algorithm for finding cycles in bounded-degree graphs extends to general graphs, where distances are measured with respect to the actual number of edges. Such an extension is not possible with respect to finding tree-minors in o(√N) complexity.


Radiative properties of astrophysical matter: a quest to reproduce astrophysical conditions on Earth

Bailey, James E.

Experiments in terrestrial laboratories can be used to evaluate the physical models that interpret astronomical observations. The properties of matter in astrophysical objects are essential components of these models, but terrestrial laboratories struggle to reproduce the extreme conditions that often exist. Megajoule-class DOE/NNSA facilities such as the National Ignition Facility and Z can create unprecedented amounts of matter at extreme conditions, providing new capabilities to test astrophysical models with high accuracy. Experiments at these large facilities are challenging, and access is very competitive. However, the cylindrically symmetric Z source emits radiation in all directions, enabling multiple physics experiments to be driven with a single Z discharge. This helps ameliorate the access limitations. This article describes research efforts under way at Sandia National Laboratories' Z facility investigating radiation transport through stellar interior matter, the population kinetics of atoms exposed to the intense radiation emitted by accretion-powered objects, and spectral line formation in white dwarf (WD) photospheres. Opacity quantifies the absorption of radiation by matter and strongly influences stellar structure and evolution, since radiation dominates energy transport deep inside stars. Opacity models have become highly sophisticated, but laboratory tests at the conditions existing inside stars have not been possible, until now. Z research is presently focused on measuring iron absorption at conditions relevant to the base of the solar convection zone, where the electron temperature and density are 190 eV and 9 × 10²² e/cc, respectively. Creating these conditions in a sample that is sufficiently large, long-lived, and uniform is extraordinarily challenging.
A source of radiation that streams through the relatively large samples can produce volumetric heating and thus uniform conditions, but to achieve high temperatures a strong source is required. Z dynamic hohlraums provide such a megajoule-class source. Initial Z experiments measured transmission through iron samples ionized to the same charge states that exist at the solar convection zone base. The resulting data made it possible to test challenging aspects of the opacity calculations, such as the ionization balance and the completeness and accuracy of the atomic energy level description. However, the density was too low to provide a definitive test of the physics at the solar convection zone base. Recent experiments have reached higher densities, and opacity model tests for stellar interiors now appear within reach. Accretion-powered objects, including active galactic nuclei, x-ray binaries, and black hole accretion disks, are the most luminous objects in the universe. Astrophysical models for these objects rely largely on comparing spectroscopic predictions with observations. A dilemma arises because the spectra originate from plasmas that are bathed in the enormous photon flux from the accretion disk, and photoionization dominates the atomic ionization and energy level populations. Thus, constraining astrophysical models depends on accurate atomic models for photoionized plasmas. Unfortunately, to date the ionization in almost all laboratory experiments is collision-dominated, and very few tests of photoionized plasma atomic kinetics exist. Megajoule-class high-energy-density facilities can help because they generate higher x-ray fluence over larger spatial scales and longer times. Expanded iron foils and pre-filled neon gas cells have been used in experiments at Z to study photoionized atomic kinetics in two elements commonly observed in astrophysical objects.
In these experiments, low density samples are exposed to a measured intense x-ray spectrum, emergent emission or absorption spectra are recorded, and the results are compared to predictions made with spectral synthesis codes used by astrophysicists. Initial experiments focused on testing models used to interpret spectra from the 'warm absorber,' a plasma observed in the vicinity of some active galactic nuclei. In future experiments, spectra from plasmas exposed to higher radiation intensities will be measured, possibly leading to improved understanding of the plasma in the immediate vicinity of the accretion disk. WDs are the oldest stars and can serve as cosmic clocks, since the universe must be at least as old as the objects within it. Astrophysicists determine WD ages using stellar models combined with effective temperature (T{sub eff}) and mass inferred from spectral observations of WD photospheres. Many line profiles in the observed spectra are dominated by Stark broadening, a process sensitive to the photosphere density and related to the total mass through the stellar model. Accurate Stark broadening theory is, therefore, critical to the precise determination of the WD properties and the inferred ages.

More Details

Calendar Year 2009 Annual Site Environmental Report for Sandia National Laboratories, New Mexico

Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned/contractor-operated facility. Sandia Corporation (Sandia), a wholly owned subsidiary of Lockheed Martin Corporation (LMC), manages and operates the laboratory for the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA). The DOE/NNSA Sandia Site Office (SSO) administers the contract and oversees contractor operations at the site. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2009. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and implementation of the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 450.1A, Environmental Protection Program (DOE 2008a) and DOE Manual 231.1-1A, Environment, Safety, and Health Reporting (DOE 2007).

More Details

Carrier recombination mechanisms and efficiency droop in GaInN/GaN light-emitting diodes

Applied Physics Letters

Crawford, Mary H.; Koleske, Daniel K.

In this work, we model the carrier recombination mechanisms in GaInN/GaN light-emitting diodes as R = An + Bn² + Cn³ + f(n), where f(n) represents carrier leakage out of the active region. The term f(n) is expanded into a power series and shown to have higher-than-third-order contributions to the recombination. The total third-order nonradiative coefficient (which may include an f(n) leakage contribution and an Auger contribution) is found to be 8 × 10⁻²⁹ cm⁶ s⁻¹. Finally, comparison of the theoretical ABC + f(n) model with experimental data shows that a good fit requires the inclusion of the f(n) term.
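The droop implied by this model can be sketched numerically. In the Python sketch below, C is the paper's total third-order coefficient, while the A and B coefficients are illustrative assumptions chosen only to show the characteristic rise and droop of the internal quantum efficiency:

```python
# Illustrative sketch of the ABC + f(n) recombination model.
# A and B are assumed values for illustration; C is the paper's
# total third-order nonradiative coefficient.
A = 1.0e7      # s^-1, Shockley-Read-Hall coefficient (assumed)
B = 1.0e-11    # cm^3 s^-1, radiative coefficient (assumed)
C = 8.0e-29    # cm^6 s^-1, total third-order coefficient (from the paper)

def iqe(n):
    """Internal quantum efficiency at carrier density n (cm^-3)."""
    radiative = B * n**2
    total = A * n + radiative + C * n**3
    return radiative / total

# Efficiency peaks near n = sqrt(A/C) and droops at higher densities.
densities = [10**e for e in range(15, 21)]
effs = [iqe(n) for n in densities]
```

In this toy model the peak sits at n = √(A/C) ≈ 3.5 × 10¹⁷ cm⁻³; any leakage term f(n) with higher-than-third-order density dependence would steepen the droop beyond what the cubic term alone produces.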

More Details

Nanostructured gold architectures formed through high pressure-driven sintering of spherical nanoparticle arrays

Journal of the American Chemical Society

Wu, Huimeng; Bai, Feng; Sun, Zaicheng; Haddad, Raid E.; Boye, Daniel M.; Wang, Zhongwu; Huang, Jian Y.; Fan, Hongyou

We have demonstrated pressure-directed assembly for preparation of a new class of chemically and mechanically stable gold nanostructures through high pressure-driven sintering of nanoparticle assemblies at room temperature. We show that under a hydrostatic pressure field, the unit cell dimension of a 3D ordered nanoparticle array can be reversibly manipulated allowing fine-tuning of the interparticle separation distance. In addition, 3D nanostructured gold architecture can be formed through high pressure-induced nanoparticle sintering. This work opens a new pathway for engineering and fabrication of different metal nanostructured architectures. © 2010 American Chemical Society.

More Details

See applications run and throughput jump: The case for redundant computing in HPC

Proceedings of the International Conference on Dependable Systems and Networks

Riesen, Rolf; Ferreira, Kurt; Stearley, Jon S.

For future parallel-computing systems with as few as twenty thousand nodes, we propose redundant computing to reduce the number of application interrupts. The frequency of faults in exascale systems will be so high that traditional checkpoint/restart methods will break down. Applications will experience interruptions so often that they will spend more time restarting and recovering lost work than computing the solution. We show that redundant computation at large scale can be cost effective and allows applications to complete their work in significantly less wall-clock time. On truly large systems, redundant computing can increase system throughput by an order of magnitude. © 2010 IEEE.
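The order-of-magnitude benefit of redundancy can be illustrated with a toy birthday-problem model (an assumption of this sketch, not the authors' analysis): without redundancy the first node failure interrupts the application, while with dual redundancy an interrupt requires both replicas of some rank to fail.

```python
import random

def failures_until_interrupt(n_ranks, rng):
    # Each rank runs two replicas. An application interrupt occurs only
    # when BOTH replicas of some rank have failed; without redundancy a
    # single failure would interrupt the application immediately.
    hits = [0] * n_ranks
    count = 0
    while True:
        count += 1
        r = rng.randrange(n_ranks)   # a node failure strikes a random rank
        hits[r] += 1
        if hits[r] == 2:
            return count

rng = random.Random(1)
trials = [failures_until_interrupt(20_000, rng) for _ in range(100)]
mean = sum(trials) / len(trials)
# For 20,000 ranks the expected count is ~sqrt(pi * n / 2) ~ 177: the
# replicated system absorbs ~two orders of magnitude more failures per
# interrupt than the non-redundant system (which absorbs exactly one).
```

This ignores repair, correlated failures, and the doubled node count, but it captures why the mean time to interrupt grows so dramatically under replication.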

More Details

Nonlinear Power Flow Control applications to conventional generator swing equations subject to variable generation

SPEEDAM 2010 - International Symposium on Power Electronics, Electrical Drives, Automation and Motion

Wilson, David G.; Robinett, R.D.

In this paper, the swing equations for renewable generators are formulated as a natural Hamiltonian system with externally applied non-conservative forces. A two-step process referred to as Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) is used to analyze and design feedback controllers for the renewable generator system. This formulation extends previous results on the analytical verification of the Potential Energy Boundary Surface (PEBS) method to nonlinear control analysis and design, and justifies the decomposition of the system into conservative and non-conservative systems to enable a two-step, serial analysis and design procedure. In particular, this approach extends the work done in [1] by developing a formulation that applies to a larger set of Hamiltonian systems, which has nearly Hamiltonian systems as a subset. The results of this research include the determination of the required performance of a proposed Flexible AC Transmission System (FACTS)/storage device to enable the maximum power output of a wind turbine while meeting the power system constraints on frequency and phase. The FACTS/storage device is required to operate as both a generator and a load (energy storage) on the power system in this design. The Second Law of Thermodynamics is applied to the power flow equations to determine the stability boundaries (limit cycles) of the renewable generator system and enable design of feedback controllers that meet stability requirements while maximizing the power generation and flow to the load. Necessary and sufficient conditions for stability of renewable generator systems are determined based on the concepts of Hamiltonian systems, power flow, exergy (the maximum work that can be extracted from an energy flow) rate, and entropy rate. © 2010 IEEE.
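The conservative/non-conservative decomposition applies to the classical swing equation itself. A minimal sketch, assuming illustrative single-machine-infinite-bus parameters rather than the paper's renewable-generator model, shows the damped rotor angle settling at its equilibrium:

```python
import math

# Classical single-machine-infinite-bus swing equation (all parameters
# are illustrative assumptions, not values from the paper):
#   M * d2(delta)/dt2 = Pm - Pmax*sin(delta) - D * d(delta)/dt
# The sin term is the conservative (Hamiltonian) part; the damping D and
# mechanical input Pm are the non-conservative power flows.
M, D, Pm, Pmax = 1.0, 0.5, 0.5, 1.0
delta, omega, dt = 0.0, 0.0, 0.001

for _ in range(200_000):                # 200 s of simulated time
    domega = (Pm - Pmax * math.sin(delta) - D * omega) / M
    delta += dt * omega
    omega += dt * domega

# With positive damping the trajectory settles at asin(Pm/Pmax).
equilibrium = math.asin(Pm / Pmax)      # ~0.5236 rad
```

In HSSPFC terms, the stability boundary question is whether the non-conservative power flows (here, D and Pm) add or remove energy relative to the Hamiltonian surface; this damped case removes it, so the limit cycle collapses to the equilibrium point.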

More Details

PhotoVoltaic distributed generation for Lanai power grid real-time simulation and control integration scenario

SPEEDAM 2010 - International Symposium on Power Electronics, Electrical Drives, Automation and Motion

Schenkman, Benjamin L.; Wilson, David G.; Robinett, R.D.; Kukolich, Keith

This paper discusses the modeling, analysis, and testing in a real-time simulation environment of the Lanai power grid system for the integration and control of PhotoVoltaic (PV) distributed generation. Lanai Island in Hawaii is part of the Hawaii Clean Energy Initiative (HCEI) to transition to 30% renewable green energy penetration by 2030. On Lanai the primary loads come from two Castle and Cook resorts, in addition to residential needs. The total peak load is 5.5 MW on the 12,470 V distribution system. Currently there are several diesel generators that meet these loading requirements. As part of the HCEI, Lanai has initially installed 1.2 MW of PV generation. The goal of this study has been to evaluate the impact of the PV with respect to the conventional carbon-based diesel generation in real-time simulation. For intermittent PV distributed generation, the overall stability and transient responses are investigated. A simple Lanai-like model has been developed in the Matlab/Simulink environment [1] (see Fig. 1), and to accommodate real-time simulation of the hybrid power grid system the Opal-RT Technologies RT-Lab environment [2] is used. The diesel generators have been modelled using the SimPowerSystems toolbox [3] swing equations, and a custom Simulink module has been developed for the high-level PV generation. All of the loads have been characterized primarily as distribution lines with series resistive load banks with one VAR load bank. Three-phase faults are implemented for each bus. Both conventional and advanced control architectures will be used to evaluate the integration of the PV onto the current power grid system. The baseline numerical results include the stable performance of the power grid during varying cloud cover (PV generation ramping up/down) scenarios. The importance of assessing the real-time scenario is included. © 2010 IEEE.

More Details

Quantifying effectiveness of failure prediction and response in HPC systems: Methodology and example

Proceedings of the International Conference on Dependable Systems and Networks

Brandt, James M.; Chen, Frank X.; De Sapio, Vincent D.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, Philippe; Roe, Diana C.; Thompson, David; Wong, Matthew H.

Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. © 2010 IEEE.
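The accuracy/cost-benefit trade-off for a predictor can be sketched with a simple expected-value model (the model and numbers are illustrative assumptions, not the paper's methodology):

```python
def net_benefit_per_event(precision, recall, work_saved, response_cost):
    """Expected net compute-hours gained per actual failure, for a failure
    predictor with the given precision and recall (precision must be > 0).
    Illustrative model: each correct prediction saves the lost work, while
    every prediction-triggered response (right or wrong) costs a
    migration/checkpoint action."""
    true_pos = recall                                   # failures predicted
    false_pos = true_pos * (1.0 - precision) / precision  # false alarms
    return true_pos * work_saved - (true_pos + false_pos) * response_cost

# A mediocre predictor can still pay off if responses are cheap relative
# to the work a missed failure destroys...
good_case = net_benefit_per_event(0.5, 0.8, 10.0, 1.0)   # positive
# ...but a low-precision predictor drowns the benefit in false alarms.
bad_case = net_benefit_per_event(0.05, 0.8, 10.0, 1.0)   # negative
```

The break-even precision for given costs falls out directly: the benefit is positive when precision > response_cost / work_saved, which is the kind of threshold a production deployment decision would hinge on.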

More Details

Improved synthesis of bis(borano)hypophosphite salts

Inorganic Chemistry

Anstey, Mitchell A.; Corbett, Michael T.; Majzoub, Eric H.; Cordaro, Joseph G.

A synthesis of the bis(borano)hypophosphite anion with various counterions has been developed to make use of more benign and commercially available reagents. This method avoids the use of potentially dangerous reagents used by previous methods and gives the final products in good yield. Details of the crystal structure determination of the sodium salt in space group Ama2 are given using a novel computational technique combined with Rietveld refinement. © 2010 American Chemical Society.

More Details

On enhancing VHTR lower plenum heat transfer and mixing via swirling jets

International Congress on Advances in Nuclear Power Plants 2010, ICAPP 2010

Rodriguez, Sal B.; El-Genk, Mohamed S.

High- and very-high-temperature gas-cooled reactors present unique challenges such as hot spots in the lower plate and thermal stratification in the lower plenum (LP). Analysis performed using Sandia National Laboratories' (SNL) Fuego computational fluid dynamics (CFD) code shows that these issues can be mitigated using static swirling inserts at the exit of the helium coolant channels to the LP. A full-scale, half-symmetry LP section is modeled using a numerical mesh that consists of 5.5 million hexahedral elements. The LP model includes the graphite support posts, the helium flow channel jets, the bottom plate, and the exterior walls. Calculations are performed for conventional jets and for clockwise and counter-clockwise swirling jets, with the swirl number, S, varied from 0 to 2.49. Calculations show that increasing S increases mixing and enhances heat transfer, thus reducing the likelihood of forming hot spots and thermal stratification in the LP.
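The swirl number here is the standard ratio of the axial flux of angular momentum to the axial momentum flux times the outlet radius. A minimal quadrature sketch for an assumed uniform axial jet with solid-body rotation (the profile is an illustrative assumption, not the Fuego model; density cancels) recovers the analytic value:

```python
# Swirl number S = (axial flux of angular momentum) / (R * axial momentum flux),
# evaluated by midpoint-rule quadrature for a uniform axial jet with
# solid-body rotation, w(r) = omega * r (illustrative profile).
R, N = 1.0, 100_000
dr = R / N
u = 1.0        # uniform axial velocity
omega = 1.0    # solid-body rotation rate

num = sum(u * (omega * r) * r**2 * dr
          for r in (dr * (i + 0.5) for i in range(N)))
den = R * sum(u**2 * r * dr
              for r in (dr * (i + 0.5) for i in range(N)))
S = num / den  # analytic value for this profile: (omega * R / u) / 2 = 0.5
```

For this profile S = (ΩR/u)/2, so sweeping S from 0 to 2.49 as in the study corresponds to tangential tip speeds up to about five times the axial jet velocity.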

More Details

MELCOR fission product release model for HTGRs

International Congress on Advances in Nuclear Power Plants 2010, ICAPP 2010

Young, Michael F.; Esmaili, Hossein; Gauntt, Randall O.; Basu, Sudhamay; Lee, Richard; Rubin, Stuart

A fission product release and transport model for High Temperature Gas-cooled Reactors (HTGRs) is being developed for the MELCOR code. HTGRs use fuel in the form of TRISO-coated fuel particles embedded in a graphitized matrix. The HTGR fission product model for MELCOR is being developed to calculate the released amounts and distribution of fission products during normal operation and during accidents. The fission product release and transport model considers the important phenomena for fission product behavior in HTGRs, including the recoil and release of fission products from the fuel kernel, transport through the coating layers, transport through the surrounding fuel matrix, release into circulating helium coolant, settling and plate-out on structural surfaces, adsorption by graphite dust in the primary system, and resuspension. The fraction of failed particles versus time is input as a response surface giving particle failure fraction as a function of fuel temperature and, potentially, fuel burn-up. Fission product release from the fuel kernel and transport through the particle coating layers is calculated using diffusion-based release models. The models account for fission product release from uranium contamination in the graphitized matrix, and adsorption of fission products in the reactor system. The dust and its distribution can be determined from either MELCOR calculations of the reactor system during normal operation, or provided by other sources as input. The distribution of fission products is then normalized using the ORIGEN inventory to provide initial conditions for accident calculations. For the initial releases during an accident, the existing MELCOR aerosol transport models, with appropriate modifications, are being explored for calculating dust and fission product transport in the reactor system and in the confinement. 
For the delayed releases during the accident, which occur over many hours and even days, fission product release is calculated by combining the diffusion-based release rate with the failure fraction response surface input via a convolution integral. The decay of fission products is also included in the modeling.
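The convolution of the failure-fraction history with a diffusion-type release function can be sketched in discretized form; the time constant and failure history below are illustrative assumptions, not MELCOR inputs:

```python
import math

# Discretized form of the convolution described above:
#   release(t) = sum over failure times j of dF_j * g(t - j),
# where dF_j is the increment of the particle-failure fraction at hour j
# and g(s) = 1 - exp(-s/tau) is an illustrative diffusion-type fractional
# release from a particle that failed s hours earlier (tau assumed).
tau = 5.0                                  # hours, assumed time constant
hours = list(range(49))                    # 48-hour accident transient
F = [min(0.02 * t, 0.5) for t in hours]    # assumed failure-fraction history

dF = [F[0]] + [F[j] - F[j - 1] for j in range(1, len(F))]
release = [sum(dF[j] * (1.0 - math.exp(-(t - j) / tau))
               for j in range(t + 1)) for t in hours]
# release[t] is nondecreasing and never exceeds the failed fraction F[t]:
# late-failing particles have had less time to diffuse out their inventory.
```

Radioactive decay, which the model also includes, would enter as an additional exp(-lambda*(t-j)) factor inside the same sum.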

More Details

Subsystem functionals and the missing ingredient of confinement physics in density functionals

Physical Review B - Condensed Matter and Materials Physics

Hao, Feng H.; Armiento, Rickard; Mattsson, Ann E.

The subsystem functional scheme is a promising approach recently proposed for constructing exchange-correlation density functionals. In this scheme, the physics in each part of real materials is described by mapping to a characteristic model system. The "confinement physics," an essential physical ingredient that has been left out in present functionals, is studied by employing the harmonic-oscillator (HO) gas model. By performing the potential→density and the density→exchange energy per particle mappings based on two model systems characterizing the physics in the interior (uniform electron-gas model) and surface regions (Airy gas model) of materials for the HO gases, we show that the confinement physics emerges when only the lowest subband of the HO gas is occupied by electrons. We examine the approximations of the exchange energy by several state-of-the-art functionals for the HO gas, and none of them produces adequate accuracy in the confinement dominated cases. A generic functional that incorporates the description of the confinement physics is needed. © 2010 The American Physical Society.

More Details

Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes

Jernigan, Dann A.

To validate the SIERRA/FUEGO/SYRINX fire codes and the SIERRA/CALORE thermal response code, it is necessary to improve understanding and to develop temporally and spatially resolved, integral-scale validation data of the heat flux incident to a complex object, in addition to measuring the thermal response of that object within the fire plume. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

More Details

Operation and analysis of a supercritical CO2 Brayton cycle

Radel, Ross R.; Vernon, Milton E.; Rochau, Gary E.; Pickard, Paul S.

Sandia National Laboratories is investigating advanced Brayton cycles using supercritical working fluids for use with solar, nuclear or fossil heat sources. The focus of this work has been on the supercritical CO{sub 2} cycle (S-CO2) which has the potential for high efficiency in the temperature range of interest for these heat sources, and is also very compact, with the potential for lower capital costs. The first step in the development of these advanced cycles was the construction of a small scale Brayton cycle loop, funded by the Laboratory Directed Research & Development program, to study the key issue of compression near the critical point of CO{sub 2}. This document outlines the design of the small scale loop, describes the major components, presents models of system performance, including losses, leakage, windage, compressor performance, and flow map predictions, and finally describes the experimental results that have been generated.

More Details

Effects of mechanical stress on thermal microactuator performance

Journal of Micromechanics and Microengineering

Phinney, Leslie M.; Spletzer, Matthew A.; Baker, Michael S.; Serrano, Justin R.

Mechanical stresses on microsystems die induced by packaging processes and varying environmental conditions can affect the performance and reliability of microsystems devices. Thermal microactuators and stress gauges were fabricated using the Sandia five-layer SUMMiT surface micromachining process and diced to fit in a four-point bending stage. The sample dies were tested under tension and compression at stresses varying from −250 MPa (compressive) to 200 MPa (tensile). Stress values were validated by both on-die stress gauges and micro-Raman spectroscopy measurements. Thermal microactuator displacement is measured for applied currents up to 35 mA as the mechanical stress is systematically varied. Increasing tensile stress decreases the initial actuator displacement. In most cases, the incremental thermal microactuator displacement from the zero current value for a given applied current decreases when the die is stressed. Numerical model predictions of thermal microactuator displacement versus current agree with the experimental results. Quantitative information on the reduction in thermal microactuator displacement as a function of stress provides validation data for MEMS models and can guide future designs to be more robust to mechanical stresses. © 2010 IOP Publishing Ltd.

More Details

Mechanics of soft interfaces studied with displacement-controlled scanning force microscopy

Progress in Surface Science

Goertz, Matt G.; Moore, Nathan W.

The development of scanning force microscopes that maintain precise control of the tip position using displacement control (DC-SFM) has allowed significant progress in understanding the relationships between the chemical and mechanical properties of soft interfaces. Here, developments in DC-SFM techniques and their applications are reviewed. Examples of material systems that have been investigated are discussed and compared to measurements with other techniques involving nanoprobe geometries to illustrate the achievements and promise in this area. Specifically discussed are applications to soft interfaces, including SAMs, lipid bilayers, confined fluids, polymer surfaces, ligand-receptor bonds, and soft metallic films. © 2010 Elsevier Ltd. All rights reserved.

More Details

Assessment of current cybersecurity practices in the public domain : cyber indications and warnings domain

Keliiaa, Curtis M.; Hamlet, Jason H.

This report assesses current public domain cybersecurity practices with respect to cyber indications and warnings. It describes cybersecurity industry and government activities, including cybersecurity tools, methods, practices, and international and government-wide initiatives known to be impacting current practice. Of particular note are the U.S. Government's Trusted Internet Connection (TIC) and 'Einstein' programs, which are serving to consolidate the Government's internet access points and to provide some capability to monitor and mitigate cyber attacks. Next, this report catalogs activities undertaken by various industry and government entities. In addition, it assesses the benchmarks of HPC capability and other HPC attributes that may assist in the solution of this problem. This report draws few conclusions, as it is intended to assess current practice in preparation for future work; notably, however, no explicit references to HPC usage for the purpose of analyzing cyber infrastructure in near-real-time were found in current practice. This report and the related report SAND2010-4766, National Cyber Defense High Performance Computing and Analysis: Concepts, Planning and Roadmap, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.

More Details

National cyber defense high performance computing and analysis : concepts, planning and roadmap

Keliiaa, Curtis M.; Hamlet, Jason H.

There is a national cyber dilemma that threatens the very fabric of government, commercial, and private operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation, and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning, and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high-value, wide-area cybersecurity solutions. This report and the related report SAND2010-4765, Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain, are intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.

More Details

Test plan for validation of the radiative transfer equation

Kearney, S.P.; Ricks, Allen J.; Grasser, Thomas W.; Jernigan, Dann A.

As the capabilities of numerical simulations increase, decision makers are increasingly relying upon simulations rather than experiments to assess risks across a wide variety of accident scenarios, including fires. There are still, however, many aspects of fires that are either not well understood or are difficult to treat from first principles due to the computational expense. For a simulation to be truly predictive and to provide decision makers with information that can be reliably used for risk assessment, the remaining physical processes must be studied and suitable models developed for their effects. A set of experiments is outlined in this report that will provide soot volume fraction/temperature data and heat flux (intensity) data for the validation of models for the radiative transfer equation. In addition, a complete set of boundary condition measurements will be taken to allow full fire predictions for validation of the entire fire model. The experiments will be performed with a lightly-sooting liquid hydrocarbon fuel fire in the fully turbulent scale range (2 m diameter).

More Details

Advanced sodium fast reactor accident source terms

Powers, Dana A.

An expert opinion elicitation has been used to evaluate phenomena that could affect releases of radionuclides during accidents at sodium-cooled fast reactors. The intent was to identify research needed to develop a mechanistic model of radionuclide release for licensing and risk assessment purposes. Experts from the USA, France, the European Union, and Japan identified phenomena that could affect the release of radionuclides under hypothesized accident conditions. They qualitatively evaluated the importance of these phenomena and the need for additional experimental research. The experts identified seven phenomena that are of high importance and have a high need for additional experimental research: (1) high-temperature release of radionuclides from fuel during an energetic event; (2) energetic interactions between molten reactor fuel and sodium coolant and the associated transfer of radionuclides from the fuel to the coolant; (3) entrainment of fuel and sodium bond material during the depressurization of a fuel rod with breached cladding; (4) rates of radionuclide leaching from fuel by liquid sodium; (5) surface enrichment of sodium pools by dissolved and suspended radionuclides; (6) thermal decomposition of sodium iodide in the containment atmosphere; and (7) reactions of iodine species in the containment to form volatile organic iodides. Other issues of high importance were identified that might merit further research as development of the mechanistic model of radionuclide release progresses.

More Details

Optoelectronic and excitonic properties of oligoacenes and one-dimensional nanostructures

Wong, Bryan M.

The optoelectronic and excitonic properties in a series of linear acenes are investigated using range-separated methods within time-dependent density functional theory (TDDFT). In these highly-conjugated systems, we find that the range-separated formalism provides a substantially improved description of excitation energies compared to conventional hybrid functionals, which surprisingly fail for the various low-lying valence transitions. Moreover, we find that even if the percentage of Hartree-Fock exchange in conventional hybrids is re-optimized to match wavefunction-based CC2 benchmark calculations, they still yield serious errors in excitation energy trends. Based on an analysis of electron-hole transition density matrices, we also show that conventional hybrid functionals overdelocalize excitons and underestimate quasiparticle energy gaps in the acene systems. The results of the present study emphasize the importance of a range-separated and asymptotically-correct contribution of exchange in TDDFT for investigating optoelectronic and excitonic properties, even for these simple valence excitations.

More Details

Performance limits for exo-clutter Ground Moving Target Indicator (GMTI) radar

Doerry, Armin

The performance of a Ground Moving Target Indicator (GMTI) radar system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to 'get your arms around' the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall GMTI radar system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the 'seek time'.
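One such physics-dictated limit can be sketched with the standard exo-clutter relations for a side-looking radar: the mainbeam clutter occupies a Doppler band set by platform speed and azimuth beamwidth, which in turn bounds the minimum detectable velocity. All numbers below are illustrative assumptions, not values from the report:

```python
# Standard exo-clutter GMTI relations (illustrative parameters):
# the mainbeam clutter Doppler spread is B_c = (2*v/wavelength)*theta_az
# for a side-looking geometry, so a mover is exo-clutter only if its
# radial velocity exceeds roughly MDV = v * theta_az / 2.
v = 100.0            # platform speed, m/s (assumed)
wavelength = 0.03    # ~X-band, m (assumed)
theta_az = 0.01      # azimuth beamwidth, rad (assumed)

clutter_bw = 2.0 * v * theta_az / wavelength   # mainbeam clutter spread, Hz
mdv = v * theta_az / 2.0                       # minimum detectable radial velocity, m/s
```

The interdependence the abstract mentions is visible even here: a narrower beam (larger antenna) lowers the MDV but trades against area coverage rate, and a longer wavelength shrinks the clutter bandwidth in Hz without changing the velocity limit at all.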

More Details

Final LDRD report : advanced plastic scintillators for neutron detection

O'Bryan, Gregory O.; Mrowka, Stanley M.; Mascarenhas, Nicholas M.

This report summarizes the results of a one-year, feasibility-scale LDRD project that was conducted with the goal of developing new plastic scintillators capable of pulse shape discrimination (PSD) for neutron detection. Copolymers composed of matrix materials such as poly(methyl methacrylate) (PMMA) and blocks containing trans-stilbene (tSB) as the scintillator component were prepared and tested for gamma/neutron response. Block copolymer synthesis utilizing tSBMA proved unsuccessful, so random copolymers containing up to 30% tSB were prepared. These copolymers were found to function as scintillators upon exposure to gamma radiation; however, they did not exhibit PSD when exposed to a neutron source. This project, while falling short of its ultimate goal, demonstrated the possible utility of single-component, undoped plastics as scintillators for applications that do not require PSD.
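The PSD capability being sought can be illustrated with the common charge-comparison method on synthetic two-component pulses; the decay constants and slow-component fractions below are illustrative assumptions, not measurements from this project:

```python
import math

# Charge-comparison pulse shape discrimination on synthetic scintillation
# pulses: neutron (proton-recoil) events excite a larger slow decay
# component than gamma (electron) events, so their tail/total charge
# ratio is higher. A material with no slow-component difference (like the
# copolymers here under neutron exposure) shows no separation.
def pulse(t, slow_frac, tau_fast=3.0, tau_slow=30.0):
    """Normalized two-exponential pulse at time t (ns); fractions assumed."""
    return ((1.0 - slow_frac) * math.exp(-t / tau_fast) / tau_fast
            + slow_frac * math.exp(-t / tau_slow) / tau_slow)

def psd_ratio(slow_frac, gate=10.0, t_max=300.0, dt=0.01):
    """Tail-to-total charge ratio with the tail gate opening at `gate` ns."""
    ts = [i * dt for i in range(int(t_max / dt))]
    total = sum(pulse(t, slow_frac) * dt for t in ts)
    tail = sum(pulse(t, slow_frac) * dt for t in ts if t >= gate)
    return tail / total

gamma_ratio = psd_ratio(0.05)    # small slow component (assumed)
neutron_ratio = psd_ratio(0.30)  # larger slow component (assumed)
```

Plotting these ratios event-by-event gives the familiar two-band PSD scatter plot; the absence of a second band is how the report's copolymers were judged not to exhibit PSD.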

More Details

Development of a new generation of waste form for entrapment and immobilization of highly volatile and soluble radionuclides

Wang, Yifeng

The United States is now re-assessing its nuclear waste disposal policy and re-evaluating the option of moving away from the current once-through open fuel cycle to a closed fuel cycle. In a closed fuel cycle, used fuels will be reprocessed and useful components such as uranium or transuranics will be recovered for reuse. During this process, a variety of waste streams will be generated. Immobilizing these waste streams into appropriate waste forms for either interim storage or long-term disposal is technically challenging. Highly volatile or soluble radionuclides such as iodine ({sup 129}I) and technetium ({sup 99}Tc) are particularly problematic, because both have long half-lives and can exist as gaseous or anionic species that are highly soluble and poorly sorbed by natural materials. Under the support of Sandia National Laboratories (SNL) Laboratory-Directed Research & Development (LDRD), we have developed a suite of inorganic nanocomposite materials (SNL-NCP) that can effectively entrap various radionuclides, especially for {sup 129}I and {sup 99}Tc. In particular, these materials have high sorption capabilities for iodine gas. After the sorption of radionuclides, these materials can be directly converted into nanostructured waste forms. This new generation of waste forms incorporates radionuclides as nano-scale inclusions in a host matrix and thus effectively relaxes the constraint of crystal structure on waste loadings. Therefore, the new waste forms have an unprecedented flexibility to accommodate a wide range of radionuclides with high waste loadings and low leaching rates. Specifically, we have developed a general route for synthesizing nanoporous metal oxides from inexpensive inorganic precursors. More than 300 materials have been synthesized and characterized with x-ray diffraction (XRD), BET surface area measurements, and transmission electron microscope (TEM). 
The sorption capabilities of the synthesized materials have been quantified using stable isotopes I and Re as analogs to ¹²⁹I and ⁹⁹Tc. The results have confirmed our original finding that nanoporous Al oxide and its derivatives have high I sorption capabilities due to the combined effects of surface chemistry and nanopore confinement. We have developed a suite of techniques for the fixation of radionuclides in metal oxide nanopores. The key to this fixation is to chemically convert a target radionuclide into a less volatile or soluble form. We have developed a technique to convert a radionuclide-loaded nanoporous material into a durable glass-ceramic waste form through calcination. We have shown that mixing a radionuclide-loaded getter material with a Na-silicate solution can effectively seal the nanopores in the material, thus enhancing radionuclide retention during waste form formation. Our leaching tests have demonstrated the existence of an optimal vitrification temperature for the enhancement of waste form durability. Our work also indicates that silver may not be needed for I immobilization and encapsulation.

More Details

Uncertainty quantification in the presence of limited climate model data with discontinuities

Sargsyan, Khachik S.; Safta, Cosmin S.; Debusschere, Bert D.; Najm, H.N.

Uncertainty quantification in climate models is challenged by the prohibitive cost of a large number of model evaluations for sampling. Another feature that often prevents classical uncertainty analysis from being readily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. In order to propagate uncertainties from model parameters to model output, we use polynomial chaos (PC) expansions to represent the maximum overturning stream function in terms of the uncertain climate sensitivity and CO2 forcing parameters. Since the spectral methodology assumes a certain degree of smoothness, the presence of discontinuities suggests that separate PC expansions on each side of the discontinuity will lead to more accurate descriptions of the climate model output compared to global PC expansions. We propose a methodology that first finds a probabilistic description of the discontinuity given a number of data points. Assuming the discontinuity curve is a polynomial, the algorithm is based on Bayesian inference of its coefficients. Markov chain Monte Carlo sampling is used to obtain joint distributions for the polynomial coefficients, effectively parameterizing the distribution over all possible discontinuity curves. Next, we apply the Rosenblatt transformation to the irregular parameter domains on each side of the discontinuity. This transformation maps a space of uncertain parameters with specific probability distributions to a space of i.i.d. standard random variables where orthogonal projections can be used to obtain PC coefficients. In particular, we use uniform random variables that are compatible with PC expansions based on Legendre polynomials.
The Rosenblatt transformation and the corresponding PC expansions for the model output on either side of the discontinuity are applied successively for several realizations of the discontinuity curve. The climate model output and its associated uncertainty at specific design points are then computed by taking a quadrature-based integration average over PC expansions corresponding to possible realizations of the discontinuity curve.
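The orthogonal-projection step described in this abstract can be sketched numerically. The following is a minimal illustration (not the authors' code) of computing Legendre PC coefficients for a model output depending on a single uniform random variable on [-1, 1], using Gauss-Legendre quadrature; the test function is hypothetical.

```python
import numpy as np

def legendre_pc_coeffs(f, order, nquad=32):
    """Non-intrusive projection of f(xi), xi ~ U(-1, 1), onto Legendre
    polynomials: c_k = (2k + 1)/2 * int f(xi) P_k(xi) d(xi), evaluated by
    Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(nquad)
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        pk = np.polynomial.legendre.Legendre.basis(k)(x)
        # int P_k^2 d(xi) = 2/(2k + 1), hence the normalization factor
        coeffs.append((2 * k + 1) / 2.0 * np.sum(w * fx * pk))
    return np.array(coeffs)

def pc_eval(coeffs, xi):
    """Evaluate the PC surrogate at xi."""
    return np.polynomial.legendre.legval(xi, coeffs)
```

For f(xi) = xi^2 this recovers the exact expansion xi^2 = 1/3 + (2/3)P_2(xi), since 32-point quadrature is exact for these polynomial integrands.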

More Details

Advanced methods for uncertainty quantification in tail regions of climate model predictions

Sargsyan, Khachik S.; Safta, Cosmin S.; Debusschere, Bert D.; Najm, H.N.

Conventional methods for uncertainty quantification are generally challenged in the 'tails' of probability distributions. This is specifically an issue for many climate observables, since extensive sampling to obtain reasonable accuracy in tail regions is especially costly in climate models. Moreover, the accuracy of spectral representations of uncertainty is weighted in favor of more probable ranges of the underlying basis variable, which, for conventional bases, does not particularly target tail regions. Therefore, what is ideally desired is a methodology that requires only a limited number of full computational model evaluations while remaining accurate enough in the tail region. To develop such a methodology, we explore the use of surrogate models based on non-intrusive Polynomial Chaos expansions and Galerkin projection. We consider non-conventional and custom basis functions, orthogonal with respect to probability distributions that exhibit fat-tailed regions. We illustrate how the use of non-conventional basis functions, and surrogate model analysis, improves the accuracy of the spectral expansions in the tail regions. Finally, we demonstrate these methodologies using precipitation data from CCSM simulations.
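A custom orthogonal basis of the kind described above can be constructed numerically. The sketch below (not the authors' method) applies Gram-Schmidt to monomials under a trapezoid-rule inner product weighted by an arbitrary probability density, which could include a fat-tailed one; the grid and normalization choices are illustrative.

```python
import numpy as np

def inner(f, g, w, x):
    """Weighted inner product int f(x) g(x) pdf(x) dx by the trapezoid
    rule on grid x, with pdf values w."""
    y = f * g * w
    return float(np.dot(0.5 * (y[:-1] + y[1:]), np.diff(x)))

def custom_orthobasis(pdf, x, nbasis):
    """Gram-Schmidt on monomials to build polynomials orthonormal with
    respect to an arbitrary probability density `pdf` sampled on grid x."""
    w = pdf(x)
    basis = []
    for k in range(nbasis):
        p = x ** k
        for q in basis:
            p = p - inner(p, q, w, x) * q   # remove earlier components
        p = p / np.sqrt(inner(p, p, w, x))  # normalize
        basis.append(p)
    return basis
```

With a uniform density on [-1, 1] this reproduces (normalized) Legendre polynomials; substituting a heavy-tailed density yields a basis whose accuracy is weighted toward the tails.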

More Details

Separations and safeguards model integration

Cipiti, Benjamin B.; Zinaman, Owen R.

Research and development of advanced reprocessing plant designs can greatly benefit from a reprocessing plant model capable of transient solvent extraction chemistry. This type of model can be used to optimize the operations of a plant as well as the designs for safeguards, security, and safety. Previous work integrated a transient solvent extraction simulation module, based on the Solvent Extraction Process Having Interaction Solutes (SEPHIS) code developed at Oak Ridge National Laboratory, with the Separations and Safeguards Performance Model (SSPM) developed at Sandia National Laboratories, as a first step toward creating a more versatile design and evaluation tool. The goal of this work was to strengthen the integration by linking more variables between the two codes. The results from this integrated model show expected operational performance through plant transients. Additionally, ORIGEN source term files were integrated into the SSPM to provide concentrations, radioactivity, neutron emission rate, and thermal power data for various spent fuels. These data were used to generate measurement blocks that can determine the radioactivity, neutron emission rate, or thermal power of any stream or vessel in the plant model. This work also examined how the code could be expanded to integrate other separation steps and to benchmark the results against other data. Recommendations for future work will be presented.

More Details

Influence of the Richtmyer-Meshkov instability on the kinetic energy spectrum

Weber, Christopher R.

The fluctuating kinetic energy spectrum in the region near the Richtmyer-Meshkov instability (RMI) is experimentally investigated using particle image velocimetry (PIV). The velocity field is measured at a high spatial resolution in the light gas to observe the effects of turbulence production and dissipation. It is found that the RMI acts as a source of turbulence production near the unstable interface, where energy is transferred from the scales of the perturbation to smaller scales until dissipation. The interface also has an effect on the kinetic energy spectrum farther away by means of the distorted reflected shock wave. The energy spectrum far from the interface initially has a higher energy content than that of similar experiments with a flat interface. These differences are quick to disappear as dissipation dominates the flow far from the interface.
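As an illustration of the kind of quantity measured here, the sketch below computes a one-dimensional fluctuating kinetic-energy spectrum from a uniformly sampled velocity record. It is not the paper's PIV processing chain, and the (bin-integrated, one-sided) normalization convention is one of several in common use.

```python
import numpy as np

def energy_spectrum(u, dx):
    """Bin-integrated one-dimensional fluctuating kinetic-energy spectrum
    E(k) from a uniformly sampled velocity record u(x)."""
    uprime = u - np.mean(u)        # remove the mean flow
    n = len(uprime)
    uhat = np.fft.rfft(uprime)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
    E = np.abs(uhat) ** 2 / n ** 2
    E[1:] *= 2.0                   # one-sided spectrum: double non-DC bins...
    if n % 2 == 0:
        E[-1] /= 2.0               # ...except the Nyquist bin
    E *= 0.5                       # kinetic-energy convention 0.5*<u'^2>
    return k, E
```

By Parseval's theorem, summing E over all bins recovers half the mean-square velocity fluctuation, so turbulence production or dissipation shows up directly as a redistribution of this sum across wavenumbers.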

More Details

Quantum information processing : science & technology

Carroll, Malcolm; Tarman, Thomas D.

Qubits have been demonstrated using GaAs double quantum dots (DQDs). The qubit basis states are the (1) singlet and (2) triplet stationary states. Long spin decoherence times in silicon spur translation of the GaAs qubit into silicon. In the near term the goals are: (1) Develop surface-gate enhancement-mode double quantum dots (MOS & strained-Si/SiGe) to demonstrate few electrons and spin read-out, and to examine impurity-doped quantum dots as an alternative architecture; (2) Use mobility, C-V, ESR, quantum-dot performance & modeling as feedback to improve processing, including development of atomic-precision fabrication at SNL; (3) Examine integrated electronics approaches to the RF-SET; (4) Use combinations of numerical packages for multi-scale simulation of quantum-dot systems (NEMO3D, EMT, TCAD, SPICE); and (5) Continue micro-architecture evaluation for different device and transport architectures.

More Details

Risk analysis for truck transportation of high consequence cargo

Waters, Robert D.

Fixed facilities control everything they can to drive down risk: the environment, work processes, work pace, and workers. The transportation sector drives the state and US highways with high kinetic energy and less-controllable risks such as: (1) other drivers (beginners, impaired, distracted, etc.); (2) other vehicles (tankers, hazmat, super-heavies); (3) road environments (bridges/tunnels/abutments/construction); and (4) degraded weather.

More Details

LDRD final report : energy conversion using chromophore-functionalized carbon nanotubes

Leonard, Francois L.; Wong, Bryan M.; Krafcik, Karen L.; Zifer, Thomas Z.; Katzenmeyer, Aaron M.; Kane, Alexander K.

With the goal of studying the conversion of optical energy to electrical energy at the nanoscale, we developed and tested devices based on single-walled carbon nanotubes functionalized with azobenzene chromophores, where the chromophores serve as photoabsorbers and the nanotube as the electronic read-out. By synthesizing chromophores with specific absorption windows in the visible spectrum and anchoring them to the nanotube surface, we demonstrated the controlled detection of visible light of low intensity in narrow ranges of wavelengths. Our measurements suggested that upon photoabsorption, the chromophores isomerize to give a large change in dipole moment, changing the electrostatic environment of the nanotube. All-electron ab initio calculations were used to study the chromophore-nanotube hybrids, and show that the chromophores bind strongly to the nanotubes without disturbing the electronic structure of either species. Calculated values of the dipole moments supported the notion of dipole changes as the optical detection mechanism.

More Details

Adversary phase change detection using S.O.M. and text data

Speed, Ann S.; Warrender, Christina E.

In this work, we developed a self-organizing map (SOM) technique for using web-based text analysis to forecast when a group is undergoing a phase change. By 'phase change', we mean that an organization has fundamentally shifted attitudes or behaviors. For instance, when ice melts into water, the characteristics of the substance change. A formerly peaceful group may suddenly adopt violence, or a violent organization may unexpectedly agree to a ceasefire. SOM techniques were used to analyze text obtained from organization postings on the world-wide web. Results suggest it may be possible to forecast phase changes, and determine if an example of writing can be attributed to a group of interest.
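A minimal SOM of the sort used for such analysis can be sketched as follows. The map size, learning-rate schedule, and Gaussian neighborhood below are generic textbook choices, not those of this study, and in practice the input rows would be feature vectors derived from web text rather than raw numbers.

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small 2-D self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each map unit, used by the neighborhood function
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                 # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighborhood radius
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        theta = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian neighborhood
        weights += lr * theta[:, None] * (x - weights)
    return weights
```

After training, documents from distinct behavioral "phases" should map to different regions of the grid, which is the property the forecasting idea above relies on.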

More Details

Temperature-dependent mechanical property testing of nitrate thermal storage salts

Broome, Scott T.

Three salt compositions for potential use in trough-based solar collectors were tested to determine their mechanical properties as a function of temperature. The mechanical properties determined were unconfined compressive strength, Young's modulus, Poisson's ratio, and indirect tensile strength. Seventeen uniaxial compression and indirect tension tests were completed. It was found that as test temperature increases, unconfined compressive strength and Young's modulus decrease for all salt types. Empirical relationships were developed quantifying the aforementioned behaviors. Poisson's ratio tends to increase with increasing temperature, except for one salt type where there is no obvious trend. The variability in measured indirect tensile strength is large, but not atypical for this index test. The average tensile strength for all salt types tested is substantially higher than the upper range of tensile strengths for naturally occurring rock salts. Interest in raising the operating temperature of concentrating solar technologies and in incorporating thermal storage has motivated studies on the implementation of molten salt as the system working fluid. Recently, salt has been considered for use in trough-based solar collectors and has been shown to offer a reduction in levelized cost of energy as well as increased availability (Kearney et al., 2003). Concerns regarding the use of molten salt are often related to issues with salt solidification and recovery from freeze events. Differences among salts used for convective heat transfer and storage are typically designated by a comparison of thermal properties. However, the potential for a freeze event necessitates an understanding of salt mechanical properties in order to characterize and mitigate possible detrimental effects, including the stress imparted by the expanding salt.
Samples of solar salt, HITEC salt (Coastal Chemical Co.), and a low melting point quaternary salt were cast for characterization tests to determine unconfined compressive strength, indirect tensile strength, coefficient of thermal expansion (CTE), Young's modulus, and Poisson's ratio. Experiments were conducted at multiple temperatures below the melting point to determine temperature dependence.

More Details

Hydrostatic compaction of Microtherm HT

Broome, Scott T.

Two samples of jacketed Microtherm® HT were hydrostatically pressurized to maximum pressures of 29,000 psi to evaluate both pressure-volume response and change in bulk modulus as a function of density. During testing, each of the two samples exhibited large irreversible compactive volumetric strains with only small increases in pressure; however, at volumetric strains of approximately 50%, the Microtherm® HT stiffened noticeably at ever increasing rates. At the maximum pressure of 29,000 psi, the volumetric strains for both samples were approximately 70%. Bulk modulus, as determined from hydrostatic unload/reload loops, increased by more than two orders of magnitude (from about 4,500 psi to over 500,000 psi) from an initial material density of ~0.3 g/cc to a final density of ~1.1 g/cc. An empirical fit to the density vs. bulk modulus data is K = 492769·ρ^4.6548, where K is the bulk modulus in psi and ρ is the material density in g/cm³. The porosity decreased from 88% to ~20%, indicating that much higher pressures would be required to compact the material fully.
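A power-law correlation of the reported form K = a·ρ^b can be recovered from density/modulus pairs by linear least squares in log-log space, since log K = log a + b·log ρ. The fitting routine below is an illustrative sketch, and any data fed to it here is synthetic, generated from the reported fit itself.

```python
import numpy as np

def fit_power_law(rho, K):
    """Fit K = a * rho**b by linear least squares on
    log(K) = log(a) + b * log(rho)."""
    b, loga = np.polyfit(np.log(rho), np.log(K), 1)
    return np.exp(loga), b

def bulk_modulus_psi(rho):
    """The reported empirical correlation for Microtherm HT
    (K in psi, rho in g/cm^3)."""
    return 492769.0 * rho ** 4.6548
```

Feeding densities spanning the tested range (~0.3 to ~1.1 g/cc) through the correlation and refitting reproduces the reported coefficients, which is a quick consistency check on the fit's form.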

More Details

Anatomy of a transparent optical circulator

Podsednik, Jason P.

An optical circulator is a multi-port, nonreciprocal device that routes light from one specific port to another. Optical circulators have at least 3 or 4 ports, with up to 6 ports possible (JDS Uniphase, Huihong Fiber). Circulators do not disregard backward-propagating light, but direct it to another port. Optical circulators are commonly found in bi-directional transmission systems, WDM networks, fiber amplifiers, and optical time domain reflectometers (OTDRs). 3-port optical circulators are commonly used in PDV systems: 1550 nm laser light is launched into Port 1 and exits out of Port 2 to the target; Doppler-shifted light off the moving surface is reflected back into Port 2 and exits out of Port 3. Surprisingly, a circulator requires a large number of parts to operate efficiently. Transparent circulators offer higher isolation than those of the reflective style using PBSs. A lower PMD is obtained using birefringent crystals rather than PBSs due to the similar path lengths between e and o rays. Many circulator designs exist, but all achieve the same non-reciprocal results.

More Details

Easy system call tracing for Plan 9

Minnich, Ronald G.

Tracing system calls makes debugging easy and fast. On Plan 9, traditionally, system call tracing has been implemented with Acid. New systems do not always implement all the capabilities needed for Acid, particularly the ability to rewrite the process code space to insert breakpoints. Architecture support libraries are not always available for Acid, or may not work even on a supported architecture. The requirement that Acid's libraries be available can be a problem on systems with a very small memory footprint, such as High Performance Computing systems where every Kbyte counts. Finally, Acid tracing is inconvenient in the presence of forks, which makes tracing shell pipelines particularly troublesome. The strace program available on most Unix systems is far more convenient to use and more capable than Acid for system call tracing. A similar system on Plan 9 can simplify troubleshooting. We have built a system call tracing capability into the Plan 9 kernel. It has proven more convenient than strace in terms of programming effort: one can implement tracing with a shell script, and the C code to implement an strace equivalent is several orders of magnitude smaller.

More Details

What is the limiting performance of PDV (really)?

Dolan, Daniel H.

The limiting performance of PDV is determined by power-spectrum location resolution: the uncertainty principle overestimates error, and peak-fit confidence underestimates error. Simulations indicate that PDV is: (1) inaccurate and imprecise at low frequencies; (2) accurate and (potentially) precise otherwise; and (3) limited chiefly by sampling rate, noise fraction, and analysis duration. Frequency conversion is a good thing. PDV is competitive with VISAR, despite the wavelength difference.
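Peak fitting of the kind referred to above can be sketched as a windowed FFT followed by three-point parabolic interpolation around the spectral peak, which locates the beat frequency to a fraction of a bin. The window choice and parameters below are illustrative, not those of the study.

```python
import numpy as np

def peak_frequency(signal, fs, start, length):
    """Estimate the beat frequency in one analysis window of a PDV-style
    record: Hann-windowed FFT, then three-point parabolic interpolation
    of the magnitude peak for sub-bin resolution."""
    seg = signal[start:start + length] * np.hanning(length)
    spec = np.abs(np.fft.rfft(seg))
    k = int(np.argmax(spec[1:])) + 1        # skip the DC bin
    if 0 < k < len(spec) - 1:
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)   # parabolic refinement
    return k * fs / length
```

The raw bin spacing fs/length illustrates the trade-off named in the abstract: longer analysis windows sharpen the frequency (velocity) estimate but smear it in time.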

More Details

Mesoscale to plant-scale models of nuclear waste reprocessing

Rao, Rekha R.; Pawlowski, Roger P.; Brotherton, Christopher M.; Cipiti, Benjamin B.; Domino, Stefan P.; Jove Colon, Carlos F.; Moffat, Harry K.; Nemer, Martin N.; Noble, David R.; O'Hern, Timothy J.

Imported oil exacerbates our trade deficit and funds anti-American regimes. Nuclear Energy (NE) is a demonstrated technology with high efficiency. NE's two biggest political detriments are possible accidents and nuclear waste disposal. For NE policy, proliferation is the biggest obstacle. Nuclear waste can be reduced through reprocessing, where fuel rods are separated into various streams, some of which can be reused in reactors. The current process, developed in the 1950s, is dirty and expensive; U/Pu separation is the most critical step. Fuel rods are sheared and dissolved in acid, and fissile material is extracted in a centrifugal contactor. Plants have many contactors in series, along with other separation steps. We have taken a science- and simulation-based approach to developing a modern reprocessing plant. Models of reprocessing plants are needed to support nuclear materials accountancy, nonproliferation, plant design, and plant scale-up.

More Details

The BROOM system

The Building Restoration Operations Optimization Model (BROOM) is a software product developed to assist in the restoration of major transport facilities in the event of an attack involving chemical or biological materials. As shown in Figure 3-1, the objective of this work is to replace a manual, paper-based data entry and tracking system with an electronic system that should be much less error-prone. It will also manage the sampling data efficiently and produce contamination maps in a more timely manner.

More Details

Multiscale model development of pattern nano-imprinting processes

Schunk, Randy

Nano-imprinting is an increasingly popular method of creating structured, nanometer scale patterns on a variety of surfaces. Applications are numerous, including non-volatile memory devices, printed flexible circuits, light-management films for displays and sundry energy-conversion devices. While there have been many extensive studies of fluid transport through the individual features of a pattern template, computational models of the entire machine-scale process, where features may number in the trillions per square inch, are currently computationally intractable. In this presentation we discuss a multiscale model aimed at addressing machine-scale issues in a nano-imprinting process. Individual pattern features are coarse-grained and represented as a structured porous medium, and the entire process is modeled using lubrication theory in a two-dimensional finite element method simulation. Machine pressures, optimal initial liquid distributions, pattern fill fractions (shown in figure 1), and final coating distributions of a typical process are investigated. This model will be of interest to those wishing to understand and carefully design the mechanics of nano-imprinting processes.

More Details

Modeling and simulation of soft-particle colloids under dynamic environmental gradients

Schunk, Randy; Brinker, C.J.

Controlled assembly in soft-particle colloidal suspensions is a technology poised to advance manufacturing methods for nano-scale templating, coating, and bio-conjugate devices. Applications for soft-particle colloids include photovoltaics, nanoelectronics, functionalized thin-film coatings, and a wide range of bio-conjugate devices such as sensors, assays, and bio-fuel cells. This presentation covers the topics of modeling and simulation of soft-particle colloidal systems over dewetting, evaporation, and irradiation gradients, including deposition of particles to surfaces. By tuning particle/solvent and environmental parameters, we transition from the regime of self-assembly to that of controlled assembly, and enable finer resolution of features at both the nano-scale and meso-scale. We report models of interparticle potentials and order parametrization techniques including results from simulations of colloids utilizing soft-particle field potentials. Using LAMMPS (Large-Scale Atomic/Molecular Massively Parallel Simulator), we demonstrate effects of volume fraction, shear and drag profiles, adsorbed and bulk polymer parameters, solvent chi parameter, and deposition profiles. Results are compared to theoretical models and correlation to TEM images from soft-particle irradiation experiments.

More Details

Analysis of advanced biofuels

Taatjes, Craig A.; Dec, John E.; Yang, Yi Y.; Welz, Oliver W.

Long-chain alcohols possess major advantages over ethanol as bio-components for gasoline, including higher energy content, better engine compatibility, and less water solubility. Rapid developments in biofuel technology have made it possible to produce C₄-C₅ alcohols efficiently. These higher alcohols could significantly expand the biofuel content and potentially replace ethanol in future gasoline mixtures. This study characterizes some fundamental properties of a C₅ alcohol, isopentanol, as a fuel for homogeneous-charge compression-ignition (HCCI) engines. Wide ranges of engine speed, intake temperature, intake pressure, and equivalence ratio are investigated. The elementary autoignition reactions of isopentanol are investigated by analyzing product formation from laser-photolytic Cl-initiated isopentanol oxidation. Carbon-carbon bond-scission reactions in the low-temperature oxidation chemistry may provide an explanation for the intermediate-temperature heat release observed in the engine experiments. Overall, the results indicate that isopentanol has good potential as an HCCI fuel, either neat or blended with gasoline.

More Details

Constructing and sampling graphs with a given joint degree distribution

Stanton, Isabelle S.; Pinar, Ali P.

One of the most influential recent results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real-life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e., the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that understanding the relationship between network structure and the joint degree distribution of graphs is an interesting avenue of further research. Important tools for such studies are algorithms that can generate random instances of graphs with the same joint degree distribution. This is the main topic of this paper, and we study the problem from both a theoretical and practical perspective. We provide an algorithm for constructing simple graphs from a given joint degree distribution, and a Markov chain Monte Carlo method for sampling them. We also show that the state space of simple graphs with a fixed joint degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov chain by using experiments based on the autocorrelation of each edge. These experiments show that our Markov chain mixes quickly on real graphs, allowing for utilization of our techniques in practice.
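The endpoint-switch move underlying such a Markov chain can be sketched as follows: pick two edges (u, v) and (x, y) whose endpoints v and y have equal degree, and rewire to (u, y) and (x, v). The swap preserves every node's degree and the number of edges between degree classes, and moves that would create a self-loop or parallel edge are skipped. This is a simplified sketch, not the paper's implementation, and the proposal details are illustrative.

```python
import random

def joint_degree_mcmc(edges, steps, seed=0):
    """Endpoint-switch chain on a simple graph: rewire (u, v), (x, y) with
    deg(v) == deg(y) to (u, y), (x, v), skipping moves that would create
    a self-loop or parallel edge."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    for _ in range(steps):
        i = rng.randrange(len(edges))
        j = rng.randrange(len(edges))
        if i == j:
            continue
        u, v = edges[i]
        x, y = edges[j]
        if rng.random() < 0.5:     # consider either endpoint of edge i
            u, v = v, u
        if rng.random() < 0.5:     # ...and of edge j
            x, y = y, x
        if deg[v] != deg[y] or u == y or x == v:
            continue               # wrong degree class, or would self-loop
        e1, e2 = frozenset((u, y)), frozenset((x, v))
        if e1 in edge_set or e2 in edge_set:
            continue               # would create a parallel edge
        edge_set -= {frozenset((u, v)), frozenset((x, y))}
        edge_set |= {e1, e2}
        edges[i], edges[j] = (u, y), (x, v)
    return edges
```

Because each applied swap moves one endpoint between two nodes of equal degree, the degree sequence (and joint degree distribution) of the graph is invariant along the chain.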

More Details

Structure-property relations in negative permittivity reststrahlen materials for IR metamaterial applications

Ihlefeld, Jon I.; Ginn, James C.; Rodriguez, Marko A.; Kotula, Paul G.; Clem, Paul G.; Sinclair, Michael B.

We will present a study of the structure-property relations in Reststrahlen materials that possess a band of negative permittivities in the infrared. It will be shown that sub-micron defects strongly affect the optical response, resulting in significantly diminished permittivities. This work has implications on the use of ionic materials in IR-metamaterials.

More Details

PAT-1 safety analysis report addendum

Yoshimura, Richard H.; Morrow, Charles W.; Weiner, Ruth F.; Harding, David C.; Heitman, Lili A.; Kalan, Robert K.; Lopez Mestre, Carlos L.; Miller, David R.; Schmale, David T.; Knorovsky, Gerald A.

The Plutonium Air Transportable Package, Model PAT-1, is certified under Title 10, Code of Federal Regulations Part 71 by the U.S. Nuclear Regulatory Commission (NRC) per Certificate of Compliance (CoC) USA/0361B(U)F-96 (currently Revision 9). The purpose of this SAR Addendum is to incorporate plutonium (Pu) metal as a new payload for the PAT-1 package. The Pu metal is packed in an inner container (designated the T-Ampoule) that replaces the PC-1 inner container. The documentation and results from analysis contained in this addendum demonstrate that the replacement of the PC-1 and associated packaging material with the T-Ampoule and associated packaging with the addition of the plutonium metal content are not significant with respect to the design, operating characteristics, or safe performance of the containment system and prevention of criticality when the package is subjected to the tests specified in 10 CFR 71.71, 71.73 and 71.74.

More Details

PAT-1 safety analysis report addendum author responses to request for additional information

Yoshimura, Richard H.; Knorovsky, Gerald A.; Morrow, Charles W.; Weiner, Ruth F.; Harding, David C.; Heitman, Lili A.; Lopez Mestre, Carlos L.; Kalan, Robert K.; Miller, David R.; Schmale, David T.

The Plutonium Air Transportable Package, Model PAT-1, is certified under Title 10, Code of Federal Regulations Part 71 by the U.S. Nuclear Regulatory Commission (NRC) per Certificate of Compliance (CoC) USA/0361B(U)F-96 (currently Revision 9). The National Nuclear Security Administration (NNSA) submitted SAND Report SAND2009-5822 to NRC that documented the incorporation of plutonium (Pu) metal as a new payload for the PAT-1 package. NRC responded with a Request for Additional Information (RAI), identifying information needed in connection with its review of the application. The purpose of this SAND report is to provide the authors responses to each RAI. SAND Report SAND2010-6106 containing the proposed changes to the Addendum is provided separately.

More Details

Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia

Moreland, Kenneth D.; Fabian, Nathan D.

This report documents the completion of the Sandia portion of the ASC Level II Visualization on the Platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate, and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program.
Scientific simulation on parallel supercomputers is traditionally performed in four sequential steps: meshing, partitioning, solver, and visualization. Not all of these components are necessarily run on the supercomputer. In particular, the meshing and visualization typically happen on smaller but more interactive computing resources. However, the previous decade has seen a growth in both the need and ability to perform scalable parallel analysis, and this gives motivation for coupling the solver and visualization.

More Details

The application of quaternions and other spatial representations to the reconstruction of re-entry vehicle motion

De Sapio, Vincent D.

The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield RV trajectory. Actual processed flight data will be presented to demonstrate the implementation of these methods.
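The incremental quaternion composition described here can be sketched directly: each gyroscope sample over a step dt contributes a small rotation quaternion, and the running attitude is renormalized each step to stay well-conditioned, which is precisely the advantage claimed over other representations. Sampling details and sensor conventions below are illustrative.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in [w, x, y, z] convention."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(omega, dt):
    """Compose incremental rotations from body-rate samples omega
    (N x 3, rad/s) into an attitude quaternion, renormalizing each
    step to avoid drift from accumulated round-off."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for w in omega:
        angle = np.linalg.norm(w) * dt
        if angle > 0:
            axis = w / np.linalg.norm(w)
            dq = np.concatenate(([np.cos(angle / 2)],
                                 np.sin(angle / 2) * axis))
            q = quat_mul(q, dq)
            q /= np.linalg.norm(q)   # guard against ill-conditioning
    return q
```

A constant body rate about one axis integrates, step by step, to the exact total-angle quaternion, which is the property that makes long time series of incremental rotations numerically stable in this representation.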

More Details

Modeling algae growth in an open-channel raceway

James, Scott C.

Cost-effective implementation of microalgae as a solar-to-chemical energy conversion platform requires extensive system optimization, a task well suited to computer modeling. This work uses modified versions of the U.S. Environmental Protection Agency's (EPA's) Environmental Fluid Dynamics Code (EFDC) in conjunction with the U.S. Army Corps of Engineers water-quality code (CE-QUAL) to simulate hydrodynamics coupled to growth kinetics of algae (Phaeodactylum tricornutum) in open-channel raceways. The model allows the flexibility to manipulate a host of variables associated with raceway design, algal growth, water quality, hydrodynamics, and atmospheric conditions. The model provides realistic results wherein growth rates follow the diurnal fluctuations of solar irradiation and temperature. The greatest benefit that numerical simulation of the flow system offers is the ability to design the raceway before construction, saving considerable cost and time. Moreover, experiment operators can evaluate the impacts of various changes to system conditions (e.g., depth, temperature, flow speeds) without risking the algal biomass under study.
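The diurnal coupling of growth rate to irradiation can be illustrated with a minimal sketch, far simpler than the EFDC/CE-QUAL system the abstract describes. All rate constants below are illustrative placeholders, not measured P. tricornutum kinetics: a Monod-type light limitation term drives growth during a half-sine photoperiod, and a constant respiration loss draws biomass down at night.

```python
import math

def irradiance(t_hours, peak=1500.0):
    """Toy diurnal irradiance (umol photons/m^2/s): a half-sine over a
    06:00-18:00 photoperiod, zero at night."""
    h = t_hours % 24.0
    if 6.0 <= h <= 18.0:
        return peak * math.sin(math.pi * (h - 6.0) / 12.0)
    return 0.0

def simulate_growth(x0=0.1, mu_max=0.06, k_i=200.0, resp=0.005,
                    hours=48.0, dt=0.1):
    """Forward-Euler integration of dX/dt = X*(mu_max*I/(I+k_i) - resp):
    Monod-type light limitation with a constant respiration loss.
    Returns a list of (time, biomass) pairs."""
    x, t, history = x0, 0.0, []
    while t < hours:
        i = irradiance(t)
        mu = mu_max * i / (i + k_i) - resp
        x += x * mu * dt
        t += dt
        history.append((t, x))
    return history
```

Even this toy model reproduces the qualitative behavior the abstract reports: biomass rises during daylight hours, dips slightly overnight, and shows net growth over successive diurnal cycles.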

More Details

Characterization of the surface changes during the activation of erbium/erbium oxide for hydrogen storage

Zavadil, Kevin R.; Snow, Clark S.; Ohlhausen, J.A.

Erbium is known to load effectively with hydrogen when held at high temperature in a hydrogen atmosphere. To make the storage of hydrogen kinetically feasible, a thermal activation step is required. Activation is a routine practice, but very little is known about the physical, chemical, and/or electronic processes that occur during it. This work presents in situ characterization of erbium activation using variable-energy photoelectron spectroscopy at various stages of the activation process. Modification of the passive surface oxide plays a significant role in activation. The chemical and electronic changes observed in core-level and valence-band spectra will be discussed, along with corroborating ion scattering spectroscopy measurements.

More Details

Understanding large scale HPC systems through scalable monitoring and analysis

Brandt, James M.; Gentile, Ann C.; Roe, Diana C.; Pebay, Philippe P.; Wong, Matthew H.

As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate large-scale HPC system understanding. OVIS incorporates an intuitive graphical user interface, an extensive and extendable data analysis suite, and a 3-D visualization engine that allows visual inspection of both raw and derived data on a geometrically correct representation of an HPC system. This talk will cover system instrumentation, data collection (including log files and the complications of meaningful parsing), analysis, visualization of both raw and derived information, and how these data can be combined to increase system understanding and efficiency.
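The abstract flags meaningful log parsing as a complication in its own right. The minimal sketch below shows one common approach to the problem: syslog-style lines vary constantly in their volatile fields, so messages are reduced to templates (numbers and hex IDs masked) and counted per host before analysis. The line format, regular expressions, and function names here are illustrative, not OVIS's actual parser.

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <host> <message...>".
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+) (?P<host>\S+) (?P<msg>.*)$")

def normalize(msg):
    """Mask volatile tokens so identical events collapse to one template."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
    msg = re.sub(r"\d+", "<NUM>", msg)
    return msg

def summarize(lines):
    """Count normalized message templates per host."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            # Unparseable lines are a real complication; tally them too.
            counts[("<unparsed>", "<raw>")] += 1
            continue
        counts[(m.group("host"), normalize(m.group("msg")))] += 1
    return counts
```

With this normalization, two ECC-error lines that differ only in address and count collapse to a single template, so a per-host spike in one template becomes visible as a candidate failure signature.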

More Details

Nanomechanics and nanometallurgy of boundaries

Boyce, Brad B.; Clark, Blythe C.; Foiles, Stephen M.; Hattar, Khalid M.; Holm, Elizabeth A.; Knapp, J.A.

One of the tenets of nanotechnology is that the electrical, optical, chemical, and biological properties of a material may change profoundly when the material is reduced to sufficiently small dimensions, and these new properties can be exploited to achieve novel or greatly improved materials performance. However, there may be mechanical or thermodynamic driving forces that hinder the synthesis of the structure, impair its stability, or reduce its intended performance. Examples of these phenomena include de-wetting of films due to high surface tension, thermally driven instability of nano-grain structure, and defect-related internal dissipation. With fundamental knowledge of mechanical processes at small length scales, we can exploit these new properties to achieve robust nanodevices. Stated simply, the goal of this program is a fundamental understanding of the mechanical properties of materials at small length scales. The research embodied by this program lies at the heart of modern materials science, with a guiding focus on structure-property relationships. We have divided the program into three tasks, summarized as follows: (1) Mechanics of Nanostructured Materials (PI Blythe Clark). This task aims to develop a fundamental understanding of the mechanical properties and thermal stability of nanostructured metals, and of the relationship between nano/microstructure and bulk mechanical behavior, through a combination of special materials synthesis methods, nanoindentation coupled with finite-element modeling, detailed electron microscopic characterization, and in-situ transmission electron microscopy experiments. (2) Theory of Microstructures and Ensemble Controlled Deformation (PI Elizabeth A. Holm). The goal of this task is to combine experiment, modeling, and simulation to construct, analyze, and utilize three-dimensional (3D) polycrystalline nanostructures. These full 3D models are critical for elucidating the complete structural geometry, topology, and arrangements that control experimentally observed phenomena such as abnormal grain growth, grain rotation, and internal dissipation measured in nanocrystalline metal. (3) Mechanics and Dynamics of Nanostructured and Nanoscale Materials (PI John P. Sullivan). The objective of this task is to develop atomic-scale understanding of dynamic processes, including internal dissipation in nanoscale and nanostructured metals, and phonon transport and boundary scattering in nanoscale structures, via internal friction measurements.

More Details

Enhanced Performance Assessment System (EPAS) for carbon sequestration

Wang, Yifeng; McNeish, Jerry M.; Dewers, Thomas D.; Jove Colon, Carlos F.; Sun, Amy C.; Hadgu, Teklu H.

Carbon capture and sequestration (CCS) is an option to mitigate impacts of atmospheric carbon emission. Numerous factors are important in determining the overall effectiveness of long-term geologic storage of carbon, including leakage rates, volume of storage available, and system costs. Recent efforts have been made to apply an existing probabilistic performance assessment (PA) methodology developed for deep nuclear waste geologic repositories to evaluate the effectiveness of subsurface carbon storage (Viswanathan et al., 2008; Stauffer et al., 2009). However, to address the most pressing management, regulatory, and scientific concerns with subsurface carbon storage (CS), the existing PA methodology and tools must be enhanced and upgraded. For example, in the evaluation of a nuclear waste repository, a PA model is essentially a forward model that samples input parameters and runs multiple realizations to estimate future consequences and determine important parameters driving the system performance. In the CS evaluation, however, a PA model must be able to run both forward and inverse calculations to support optimization of CO₂ injection and real-time site monitoring as an integral part of the system design and operation. The monitoring data must be continually fused into the PA model through model inversion and parameter estimation. Model calculations will in turn guide the design of optimal monitoring and carbon-injection strategies (e.g., in terms of monitoring techniques, locations, and time intervals). Under the support of Laboratory-Directed Research & Development (LDRD), a late-start LDRD project was initiated in June of Fiscal Year 2010 to explore the concept of an enhanced performance assessment system (EPAS) for carbon sequestration and storage.
In spite of the tight time constraints, significant progress has been made on the project: (1) Following the general PA methodology, a preliminary Feature, Event, and Process (FEP) analysis was performed for a hypothetical CS system. Through this FEP analysis, relevant scenarios for CO₂ release were defined. (2) A prototype of EPAS was developed by wrapping an existing multi-phase, multi-component reservoir simulator (TOUGH2) with an uncertainty quantification and optimization code (DAKOTA). (3) For demonstration, a probabilistic PA analysis was successfully performed for a hypothetical CS system based on an existing project in a brine-bearing sandstone. The work lays the foundation for the development of a new generation of PA tools for effective management of CS activities. At a top level, the work supports energy security and climate change/adaptation by furthering the capability to effectively manage proposed carbon capture and sequestration activities (both research and development as well as operational), and it greatly enhances the technical capability to address this national problem. The next phase of the work will include (1) full capability demonstration of the EPAS, especially for data fusion, carbon storage system optimization, and process optimization of CO₂ injection, and (2) application of the EPAS to actual carbon storage systems.
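The forward half of the PA loop described above (sample uncertain inputs, run the simulator per realization, collect consequence measures) can be sketched as follows. This is a generic illustration only: the closed-form leakage function stands in for the TOUGH2 reservoir simulator, the sampling loop stands in for DAKOTA, and the parameter names and ranges are hypothetical placeholders.

```python
import random

def forward_leakage_model(perm_m2, n_wells, years=100.0):
    """Toy stand-in for the reservoir simulator: cumulative leakage
    grows with caprock permeability and the number of leaky abandoned
    wells. The scaling is purely illustrative, not physical."""
    return perm_m2 * 1e18 * n_wells * years

def pa_forward_sweep(n_realizations=1000, seed=1):
    """Sample uncertain inputs and run the forward model once per
    realization, collecting the consequence measure for each."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_realizations):
        perm = 10.0 ** rng.uniform(-19.0, -16.0)  # caprock permeability, m^2
        wells = rng.randint(0, 5)                 # leaky abandoned wells
        results.append(forward_leakage_model(perm, wells))
    return results

def exceedance_probability(results, threshold):
    """The kind of probabilistic statement a PA delivers: the fraction
    of realizations in which leakage exceeds a regulatory threshold."""
    return sum(r > threshold for r in results) / len(results)
```

The inverse half of the loop the abstract calls for would close this sketch in the other direction, conditioning the sampled parameter distributions on monitoring data before the next forward sweep.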

More Details
Results 69801–70000 of 96,771