Publications

Regulatory standards for permanent disposal of spent nuclear fuel and high-level radioactive waste

Swift, Peter N.

This paper provides a summary of observations drawn from twenty years of personal experience in working with regulatory criteria for the permanent disposal of radioactive waste for both the Waste Isolation Pilot Plant repository for transuranic defense waste and the proposed Yucca Mountain repository for spent nuclear fuel and high-level wastes. Rather than providing specific recommendations for regulatory criteria, my goal here is to provide a perspective on topics that are fundamental to how high-level radioactive waste disposal regulations have been implemented in the past. What are the main questions raised relevant to long-term disposal regulations? What has proven effective in the past? Where have regulatory requirements perhaps had unintended consequences? New regulations for radioactive waste disposal may prove necessary, but the drafting of these regulations may be premature until a broad range of policy issues are better addressed. In the interim, the perspective offered here may be helpful for framing policy discussions.

Simulated combined abnormal environment fire calculations for aviation impacts

Aircraft impacts at flight speeds are relevant environments for aircraft safety studies. This type of environment pertains not only to normal environments such as wildlife impacts and rough landings, but also to the abnormal environment more recently evidenced in cases such as the Pentagon and World Trade Center events of September 11, 2001, and the FBI building impact in Austin. For more severe impacts, the environment is combined because it involves not just the structural mechanics, but also the release of the fuel and the subsequent fire. Impacts normally last on the order of milliseconds to seconds, whereas the fire dynamics may last for minutes to hours, or longer. This presents a serious challenge for physical models that employ discrete time stepping to model the dynamics with accuracy. Another challenge is that the capabilities to model the fire and the structural impact are seldom found in a common simulation tool. Sandia National Laboratories maintains two codes under a common architecture that have been used to model the dynamics of aircraft impact and fire scenarios. Only recently have these codes been coupled directly to provide a fire prediction that is better informed by a detailed structural calculation. To enable this technology, several facilitating models are necessary, as is a methodology for determining and executing the transfer of information from the structural code to the fire code. Such a methodology has been developed and implemented. Previous test programs at the Sandia National Laboratories sled track provide unique data for the dynamic response of an aluminum tank of liquid water impacting a barricade at flight speeds. These data are used to validate the modeling effort and suggest reasonable accuracy for the dispersion of a non-combustible fluid in an impact environment. The capability is also demonstrated with a notional impact of a fuel-filled container at flight speed. Both of these scenarios are used to evaluate numeric approximations and help provide an understanding of the quantitative accuracy of the modeling methods.

Robustness of optimally controlled unitary quantum gates

Grace, Matthew G.

A unitary quantum gate is the basic functioning element of a quantum computer. Summary of results: (1) robustness of a general n-qubit gate: 1 - F ∝ 2^n; (2) robustness of a universal gate with complete isolation of one- and two-qubit subgates: 1 - F ∝ n; and (3) robustness of a universal gate with small unwanted couplings between the qubits is unclear.

Multi-MA reflex triode research

Mikkelson, Kenneth A.; Harper-Slaboszewicz, V.H.

The reflex triode can efficiently produce and transmit medium-energy (10-100 keV) x-rays. Perfect reflexing through a thin converter can increase transmission of 10-100 keV x-rays. In a Gamble II experiment at 1 MV, 1 MA, 60 ns, the maximum dose was obtained with 25-micron tantalum. Electron orbits depend on the foil thickness. Electron orbits from LSP were used to calculate the path length inside the tantalum. A simple formula predicts the optimum foil thickness for reflexing converters. The I(V) characteristics of the diode can be understood using simple models: critical current dominates high-voltage triodes, while bipolar current is more important at low voltage. Higher-current (2.5 MA), lower-voltage (250 kV) triodes are being tested on Saturn at Sandia. Small, precise anode-cathode gaps enable low-impedance operation. Sample Saturn results at 2.5 MA, 250 kV indicate that the Saturn dose rate could be about two times greater. A cylindrical triode may improve x-ray transmission; the cylindrical triode design will be tested at half scale on Gamble II. For higher current on Saturn, two cylindrical triodes could be used in parallel; three triodes in parallel require positive-polarity operation. 'Triodes in series' would improve matching of low-impedance triodes to the generator.
Conclusions of this presentation are: (1) Physics of reflex triodes from Gamble II experiments (1 MA, 1 MV) - (a) Converter thickness 1/20 of CSDA range optimizes x-ray dose; (b) Simple model based on electron orbits predicts optimum thickness from LSP/ITS calculations and experiment; (c) I(V) analysis: beam dynamics different between 1 MV and 250 kV; (2) Multi-MA triode experiments on Saturn (2.5 MA, 250 kV) - (a) Polarity inversion in vacuum, (b) No-convolute configuration, accurate gap settings, (c) About half of current produces useful x-rays, (d) Cylindrical triode one option to increase x-ray transmission; and (3) Potential to increase Saturn current toward 10 MA, maintaining voltage and outer diameter - (a) 2 (or 3) cylindrical triodes in parallel, (b) Triodes in series to improve matching, (c) These concepts will be tested first on Gamble II.

Pore-lining composition and capillary breakthrough pressure of mudstone caprocks: sealing efficiency of geologic CO2 storage sites

Dewers, Thomas D.; Kotula, Paul G.

Subsurface containment of CO2 is predicated on effective caprock sealing. Many previous studies have relied on macroscopic measurements of capillary breakthrough pressure and other petrophysical properties without direct examination of the solid phases that line pore networks and directly contact fluids. However, pore-lining phases strongly contribute to sealing behavior through interfacial interactions among CO2, brine, and the mineral or non-mineral phases. Our high-resolution (i.e., sub-micron) examination of the composition of pore-lining phases of several continental and marine mudstones indicates that sealing efficiency (i.e., breakthrough pressure) is governed by pore shapes and pore-lining phases that are not identifiable except through direct characterization of pores. Bulk X-ray diffraction data do not indicate which phases line the pores and may be especially lacking for mudstones with organic material. Organics can line pores and may represent once-mobile phases that modify the wettability of an originally clay-lined pore network. For shallow formations (i.e., < ~800 m depth), interfacial tension and contact angles result in breakthrough pressures that may be as high as those needed to fracture the rock; thus, in the absence of fractures, capillary sealing efficiency is indicated. Deeper seals have poorer capillary sealing if mica-like wetting dominates the wettability.
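
The breakthrough pressures discussed above follow from the Young-Laplace relation for capillary entry into a pore throat. A minimal numerical sketch, with the function name and all input values chosen for illustration rather than taken from the study:

```python
import math

def capillary_entry_pressure(ift, contact_angle_deg, pore_radius):
    """Young-Laplace capillary entry pressure (Pa) for a cylindrical
    pore throat: P_c = 2 * gamma * cos(theta) / r."""
    return 2.0 * ift * math.cos(math.radians(contact_angle_deg)) / pore_radius

# Illustrative inputs (not from the paper): CO2/brine interfacial
# tension ~30 mN/m, a strongly water-wet clay lining (theta ~ 0),
# and a 10 nm pore throat.
p_c = capillary_entry_pressure(0.030, 0.0, 10.0e-9)
print(f"{p_c / 1e6:.1f} MPa")  # 6.0 MPa
```

The sensitivity to the contact angle is the point of the abstract's wettability argument: an organic- or mica-lined pore with a larger contact angle gives a smaller cosine term and hence a lower entry pressure for the same pore size.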

90Sr liquid scintillation urine analysis utilizing different approaches for tracer recovery

Piraner, Olga; Preston, Rose T.; Shanks, Sonoya T.; Jones, Robert

90Sr is one of the isotopes most commonly produced by nuclear fission. This medium-lived isotope presents serious challenges to radiation workers, the environment, and, following a nuclear event, the general public. Methods of identifying this nuclide have been in existence for a number of years (e.g., Horwitz, E.P. [1], Maxwell, S.L. [2], EPA 905.0 [3]), but they are time consuming, requiring a month or more for full analysis. This time frame is unacceptable in the present security environment, so it is important to have a dependable and rapid method for the determination of Sr. The purpose of this study is to reduce analysis time to less than half a day by utilizing a single method of radiation measurement while continuing to yield precise results. This paper presents findings on three methods that can meet these criteria: (1) stable Sr carrier, (2) 85Sr by gamma spectroscopy, and (3) 85Sr by LSC. Two methods of analyzing and calculating the 85Sr tracer recovery were investigated (gamma spectroscopy and a low-energy window, Sr85LEBAB, by LSC), as well as the use of two different types of Sr tracer (85Sr and stable Sr carrier). Three separate stock blank urine samples were spiked with various activity levels of 239Pu, 137Cs, and 90Sr/90Y to determine the effectiveness of the Eichrom Sr-spec resin 2 mL extractive columns. The objective was to compare the recoveries of 85Sr versus a stable strontium carrier, to compare the rate at which samples can be processed by evaluating evaporation and neutralization, and to remove the need for another instrument (a gamma spectrometer) by using the LSC spectrometer to obtain the 85Sr recovery. It was found that when using a calibration curve comprised of a different cocktail and a non-optimum discriminator setting, reasonable results (bias of < 25%) were achieved.
The results from spiked samples containing 85Sr demonstrated that a more accurate recovery is obtained when using gamma spectroscopy (89-95%) than when using the LEB window from LSC (120-470%). The high recovery for 85Sr by LSC analysis may be due to interference/cross talk from the alpha region, since alpha counts were observed in all sample sets. After further investigation it was determined that the alpha counts were due to 239Pu breakthrough on the Sr-spec column. This requires further development to purify the Sr before an accurate tracer recovery determination can be made. Sample preparation times varied from 4 to 6 hours depending on the specific sample preparation process. The results from the spiked samples containing stable strontium nitrate, Sr(NO3)2, carrier demonstrate that gravimetric analysis yields the most consistent high recoveries (97-101%) when evaporation is carefully performed. Since this method did not vary the tracer recovery method, the samples were counted in (1) LEB/alpha/beta mode optimized for 90Sr, (2) DPM mode for 90Sr, and (3) general LEB/alpha/beta mode. The results (relative to the known values) ranged from 79-104%, 107-177%, and 85-89% for modes 1, 2, and 3, respectively. Counting the prepared samples in a generic low-energy beta/alpha/beta protocol yielded more accurate and consistent results and also yielded the shortest sample preparation turnaround time of 3.5 hours.

Electromagnetic rotational actuation

Hogan, Alexander H.

There are many applications that need a meso-scale rotational actuator, but these applications have been left by the wayside because of the lack of actuation at this scale. Sandia National Laboratories has many unique fabrication technologies that could be used to create an electromagnetic actuator at this scale, and there are also many designs to be explored. This internship explored those designs and fabrication technologies to find an inexpensive design that can be used for prototyping the electromagnetic rotational actuator.

Thermal-stress modeling of an optical microphone at high temperature

Keane, Casey B.

To help determine the capability range of a MEMS optical microphone design in harsh conditions, computer simulations were carried out. Thermal-stress modeling was performed at temperatures up to 1000 °C. Of particular concern were the stress and strain profiles due to the coefficient-of-thermal-expansion mismatch between the polysilicon device and the alumina packaging. Preliminary results with simplified models indicate acceptable levels of deformation within the device.

Preliminary characterization of active MEMS valves

Keane, Casey B.

Partial characterization of a series of electrostatically actuated active microfluidic valves was performed. Tests were performed on a series of 24 valves from two different MEMS sets, focusing on the physical deformation of the structures under variable pressure loadings and voltage levels. Other issues that inhibit proper performance of the valves were observed, addressed, and documented as well. Many microfluidic applications need the distribution of gases at finely specified pressures and times; to this end, a series of electrostatically actuated active valves has been fabricated. Eight separate silicon die are discussed, each with a series of four active valves. The devices are designed such that the valve boss is held at ground, with a voltage applied to lower contacts. The resulting electrostatic forces pull the boss down against a series of stops, intended to create a seal as well as prevent accidental shorting of the device. The valves have been uniquely packaged atop a stack of material layers with inlaid channels for application of fluid flow to the backside of the valve. Electrical contact is supplied from the underlying printed circuit board, attached to external supplies and along traces on the silicon. Pressure is supplied from a reservoir of house compressed air, up to 100 psig, routed through a Norgren R07-200-RGKA pressure regulator rated to 150 psig. From there, flow passes a manually operated ball valve and then a flow meter. Two flow meters were utilized: initially an Omega FMA1802 rated at 10 sccm, followed by a Flocat model for higher flow rates up to 100 sccm. An Omega DPG4000-500 pressure gauge produced pressure measurements. Optical measurements were returned via a WYKO interferometry probe station, allowing determination of the physical deformations of the device under a variety of voltage and pressure loads. This knowledge could lead to insight into the failure mechanisms of the device, yielding improvements for subsequent fabrications.

Early warning analysis for social diffusion events

ISI 2010 - 2010 IEEE International Conference on Intelligence and Security Informatics: Public Safety and Security

Colbaugh, Richard C.; Glass, Kristin

There is considerable interest in developing predictive capabilities for social diffusion processes, for instance enabling early identification of contentious "triggering" incidents that are likely to grow into large, self-sustaining mobilization events. Recently we have shown, using theoretical analysis, that the dynamics of social diffusion may depend crucially upon the interactions of social network communities, that is, densely connected groupings of individuals that have only relatively few links to other groups. This paper presents an empirical investigation of two hypotheses which follow from this finding: (1) the presence of even just a few inter-community links can make diffusion activity in one community a significant predictor of activity in otherwise disparate communities, and (2) very early dispersion of a diffusion process across network communities is a reliable early indicator that the diffusion will ultimately involve a substantial number of individuals. We explore these hypotheses with case studies involving the emergence of the Swedish Social Democratic Party at the turn of the 20th century, the spread of SARS in 2002-2003, and blogging dynamics associated with potentially incendiary real-world occurrences. These empirical studies demonstrate that network community-based diffusion metrics do indeed possess predictive power, and in fact can be significantly more predictive than standard measures. © 2010 IEEE.

Energy loss due to eddy current in linear transformer driver cores

Physical Review Special Topics - Accelerators and Beams

Kim, A.A.; Mazarakis, M.G.; Manylov, V.I.; Vizir, V.A.; Stygar, William A.

In linear transformer drivers, as in any other linear induction accelerator cavities, ferromagnetic cores are used to prevent the current from flowing along the induction cavity walls, which are in parallel with the load. But if the core is made of conductive material, the applied voltage pulse generates an eddy current in the core itself, which heats the core and therefore reduces the overall linear transformer driver (LTD) efficiency. The energy loss due to generation of the eddy current in the cores depends on the specific resistivity of the core material, the design of the core, and the distribution of the eddy current in the core tape during the remagnetizing process. In this paper we investigate how the eddy current is distributed in a core tape with an arbitrarily shaped hysteresis loop. Our model is based on the textbook treatment of eddy current generation in ferromagnetics with a rectangular hysteresis loop and in ordinary conductors; for the reader's convenience, we reproduce the most important details of that treatment in our paper. The model predicts that the same core behaves differently depending on how fast the applied voltage pulse is: in the high-frequency limit, the equivalent resistance of the core decreases during the pulse, whereas in the low-frequency limit it is constant. An important inference is that the energy loss due to eddy current generation can be reduced by increasing the cross section of the core beyond the minimum value required to avoid saturation. The conclusions of the model are confirmed by experimental observations presented at the end of the paper. © 2010 The American Physical Society.
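
For orientation, the scaling behind these conclusions can be sketched with the classical loss formula for a thin conducting lamination under a sinusoidal flux swing. This is the standard textbook limit, not the paper's arbitrary-hysteresis-loop model, and all numbers below are illustrative:

```python
import math

def eddy_loss_density(freq_hz, b_peak_t, tape_thickness_m, resistivity_ohm_m):
    """Classical eddy-current loss per unit volume (W/m^3) for a thin
    lamination carrying a sinusoidal flux density:
        P = (pi * f * B_peak * d)^2 / (6 * rho)
    Loss scales as d^2 / rho, which is why induction-cavity cores use
    thin, high-resistivity tape."""
    return (math.pi * freq_hz * b_peak_t * tape_thickness_m) ** 2 / (
        6.0 * resistivity_ohm_m
    )

# Doubling the tape thickness quadruples the loss density:
ratio = eddy_loss_density(1e6, 1.0, 50e-6, 1.3e-6) / eddy_loss_density(
    1e6, 1.0, 25e-6, 1.3e-6
)
print(ratio)  # 4.0
```

The quadratic dependence on B_peak is consistent with the paper's inference: enlarging the core cross section reduces the flux-density swing needed to support the volt-seconds of the pulse, and hence the eddy loss.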

A configurable-hardware document-similarity classifier to detect web attacks

Proceedings of the 2010 IEEE International Symposium on Parallel and Distributed Processing, Workshops and Phd Forum, IPDPSW 2010

Ulmer, Craig D.; Gokhale, Maya

This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric [11] to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex-5 LX implementation requires two orders of magnitude less memory than the original algorithm. At 166 MB/s (80X the software rate), the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
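
As background, the TFIDF weighting and cosine similarity at the heart of such a classifier can be sketched in a few lines. This is a plain software version of the metric, not the paper's streaming hardware design, and the toy HTTP tokens are invented:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TFIDF weights per document: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))  # document frequency counts each doc once
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term->weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy token streams (invented): two benign requests, one suspicious.
docs = [["GET", "index.html"], ["GET", "index.html"],
        ["GET", "..%2f..%2fetc%2fpasswd"]]
vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[1]), 2))  # 1.0
print(round(cosine(vecs[0], vecs[2]), 2))  # 0.0
```

A classifier then compares an incoming request's vector against labeled attack and normal examples; the hardware contribution of the paper is making that comparison streaming and compact.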

Hashing strategies for the Cray XMT

Proceedings of the 2010 IEEE International Symposium on Parallel and Distributed Processing, Workshops and Phd Forum, IPDPSW 2010

Goodman, Eric G.; Haglin, David J.; Scherrer, Chad; Chavarría-Miranda, Daniel; Mogill, Jace; Feo, John

Two of the most commonly used hashing strategies, linear probing and hashing with chaining, are adapted for efficient execution on the Cray XMT. These strategies are designed to minimize memory contention. Datasets that follow a power law distribution cause significant performance challenges for shared memory parallel hashing implementations. Experimental results show good scalability up to 128 processors on two power law datasets with different data types: integer and string. These implementations can be used in a wide range of applications. © 2010 IEEE.
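
For reference, the serial skeleton of the first strategy, linear probing, looks like the following. This is a plain single-threaded sketch; the paper's XMT versions add the fine-grained synchronization needed to curb the hot-spot contention that power-law key distributions create:

```python
class LinearProbingTable:
    """Open-addressing hash table with linear probing (no resizing;
    in this sketch the capacity must exceed the number of keys)."""

    def __init__(self, capacity=64):
        self.slots = [None] * capacity

    def _probe(self, key):
        # Walk forward from the home slot until we find the key
        # or an empty slot.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else default

t = LinearProbingTable()
t.put("alpha", 1)
t.put("beta", 2)
print(t.get("alpha"), t.get("gamma"))  # 1 None
```

Hashing with chaining replaces the probe loop with a per-slot linked list; under a skewed key distribution both variants funnel many concurrent updates into a few slots, which is the contention problem the paper addresses.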

Palacios and kitten: New high performance operating systems for scalable virtualized and native supercomputing

Proceedings of the 2010 IEEE International Symposium on Parallel and Distributed Processing, IPDPS 2010

Lange, John; Pedretti, Kevin P.; Hudson, Trammell; Dinda, Peter; Cui, Zheng; Xia, Lei; Bridges, Patrick; Gocke, Andy; Jaconette, Steven; Levenhagen, Michael J.; Brightwell, Ronald B.

Palacios is a new open-source VMM under development at Northwestern University and the University of New Mexico that enables applications executing in a virtualized environment to achieve scalable high performance on large machines. Palacios functions as a modularized extension to Kitten, a high performance operating system being developed at Sandia National Laboratories to support large-scale supercomputing applications. Together, Palacios and Kitten provide a thin layer over the hardware to support full-featured virtualized environments alongside Kitten's lightweight native environment. Palacios supports existing, unmodified applications and operating systems by using the hardware virtualization technologies in recent AMD and Intel processors. Additionally, Palacios leverages Kitten's simple memory management scheme to enable low-overhead pass-through of native devices to a virtualized environment. We describe the design, implementation, and integration of Palacios and Kitten. Our benchmarks show that Palacios provides near native (within 5%), scalable performance for virtualized environments running important parallel applications. This new architecture provides an incremental path for applications to use supercomputers running specialized lightweight host operating systems without significant performance compromise. © 2010 IEEE.

A comparison of two-phase computational fluid dynamics codes applied to the ITER first wall hypervapotron

IEEE Transactions on Plasma Science

Youchison, Dennis L.; Ulrickson, M.A.; Bullock, James H.

Enhanced radial transport in the plasma and the effect of ELMs may increase the ITER first wall heat loads to as much as 4 to 5 MW/m2 over localized areas. One proposed heatsink that can handle these higher loads is a CuCrZr hypervapotron. One concept for a first wall panel consists of 20 hypervapotron channels, each measuring 1400 mm long and 48.5 mm wide. The nominal cooling conditions anticipated for each channel are 400 g/s of water at 3 MPa and 100 °C. This will result in boiling over a portion of the total length, so a two-phase thermal-hydraulic analysis is required to accurately predict the thermal performance. Existing heat transfer correlations used for nucleate boiling are not appropriate here because the flow does not reach fully developed conditions in the multi-segmented channels. Our design-by-analysis approach used two commercial codes, Fluent and Star-CCM+, to perform computational fluid dynamics analyses with conjugate heat transfer. Both codes use the Rensselaer Polytechnic Institute (RPI) model for wall heat flux partitioning to model nucleate boiling, as implemented in user-defined functions. We present a comparison between the two codes for this Eulerian multiphase problem, which relies on temperature-dependent material properties. The analyses optimized the hypervapotron geometry, including teeth height and pitch as well as the depth of the back channel, to permit highly effective boiling heat transfer in the grooves between the teeth while ensuring that no boiling could occur at the back channel exit. The analysis used a representative heat flux profile with the peak heat flux of 5 MW/m2 limited to a 50 mm length. The maximum surface temperature of the heatsink is 415 °C. The baseline design uses 2 mm for the teeth height, a 3 mm width and 6 mm pitch, and a back channel depth of 8 mm. The teeth are detached from the sidewall by a 2-mm-wide slot on both sides that aids in sweep-out and quenching of the vapor bubbles. © 2006 IEEE.

Complementary ultrashort laser pulse characterization using MOSAIC and SHG FROG

Optics Letters

Bender, Daniel A.; Sheik-Bahae, Mansoor

A new (to our knowledge) method for generating the modified spectrum autointerferometric correlation (MOSAIC) trace from the second-harmonic generation frequency-resolved optical gating (SHG FROG) dataset is shown. Examples are presented illustrating enhanced visual sensitivity, applicability, and complementary qualitative pulse characterization using SHG FROG. © 2010 Optical Society of America.

Review of the technical bases of 40 CFR Part 190

McMahon, Kevin A.; Bixler, Nathan E.; Kelly, John E.; Siegel, Malcolm D.; Weiner, Ruth F.

The dose limits for emissions from the nuclear fuel cycle were established by the Environmental Protection Agency in 40 CFR Part 190 in 1977. These limits were based on assumptions regarding the growth of nuclear power and the technical capabilities of decontamination systems as well as the then-current knowledge of atmospheric dispersion and the biological effects of ionizing radiation. In the more than thirty years since the adoption of the limits, much has changed with respect to the scale of nuclear energy deployment in the United States and the scientific knowledge associated with modeling health effects from radioactivity release. Sandia National Laboratories conducted a study to examine and understand the methodologies and technical bases of 40 CFR 190 and also to determine if the conclusions of the earlier work would be different today given the current projected growth of nuclear power and the advances in scientific understanding. This report documents the results of that work.

FY09 recycling opportunity assessment for Sandia National Laboratories/New Mexico

Mccord, Samuel A.

This Recycling Opportunity Assessment (ROA) is a revision and expansion of the FY04 ROA. The original 16 materials are updated through FY08, and then 56 material streams are examined through FY09 with action items for ongoing improvement listed for most. In addition to expanding the list of solid waste materials examined, two new sections have been added to cover hazardous waste materials. Appendices include energy equivalencies of materials recycled, trends and recycle data, and summary tables of high, medium, and low priority action items.

Efficient nearest neighbor searches in N-ABLE

Mackey, Greg

The nearest neighbor search is a significant problem in transportation modeling and simulation. This paper describes how the nearest neighbor search is implemented efficiently with respect to running time in the NISAC Agent-Based Laboratory for Economics. The paper shows two methods to optimize running time of the nearest neighbor search. The first optimization uses a different distance metric that is more computationally efficient. The concept of a magnitude-comparable distance is described, and the paper gives a specific magnitude-comparable distance that is more computationally efficient than the actual distance function. The paper also shows how the given magnitude-comparable distance can be used to speed up the actual distance calculation. The second optimization reduces the number of points the search examines by using a spatial data structure. The paper concludes with testing of the different techniques discussed and the results.
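
The magnitude-comparable-distance idea can be illustrated with its classic instance, squared Euclidean distance, which preserves the ordering of true distances while skipping the square root. The paper's specific metric is not reproduced here; this is the standard example of the concept:

```python
def squared_distance(p, q):
    """Magnitude-comparable surrogate for Euclidean distance: same
    ordering as the true distance, but no sqrt per comparison."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nearest(query, points):
    """Brute-force nearest-neighbor search using the cheap comparison;
    one sqrt at the end recovers the actual distance if needed."""
    best = min(points, key=lambda p: squared_distance(query, p))
    return best, squared_distance(query, best) ** 0.5

point, dist = nearest((0.0, 0.0), [(3.0, 4.0), (1.0, 1.0), (5.0, 0.0)])
print(point)  # (1.0, 1.0)
```

The paper's second optimization, a spatial data structure, would replace the linear scan inside `nearest` so that fewer candidate points are examined at all.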

New topics in coherent anti-Stokes Raman scattering gas-phase diagnostics: femtosecond rotational CARS and electric-field measurements

Serrano, Justin R.; Barnat, Edward V.

We discuss two recent diagnostic-development efforts in our laboratory: femtosecond pure-rotational coherent anti-Stokes Raman scattering (CARS) for thermometry and species detection in nitrogen and air, and nanosecond vibrational CARS measurements of electric fields in air. Transient pure-rotational fs-CARS data show the evolution of the rotational Raman polarization in nitrogen and air over the first 20 ps after impulsive pump/Stokes excitation. The Raman-resonant signal strength at long time delays is large, and we additionally observe a large time separation between the fs-CARS signatures of nitrogen and oxygen, so the pure-rotational approach to fs-CARS has promise for simultaneous species and temperature measurements with suppressed nonresonant background. Nanosecond vibrational CARS of nitrogen for electric-field measurements is also demonstrated. In the presence of an electric field, a dipole is induced in the otherwise nonpolar nitrogen molecule, which can be probed with the introduction of strong collinear pump and Stokes fields, resulting in CARS signal radiation in the infrared. The electric-field diagnostic is demonstrated in air, where the strength of the coherent infrared emission and the sensitivity of our field measurements are quantified, and the scaling of the infrared signal with field strength is verified.

Comparison of high pressure transient PVT measurements and model predictions. Part I

Felver, Todd G.; Paradiso, Nicholas J.; Winters, William S.; Evans, Gregory H.

A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using transient PVT methodology has been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW, that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However, to properly evaluate the mass that has transferred as a function of time, these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work, new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling are also discussed.

The role of science supporting the Waste Isolation Pilot Plant

Swift, Peter N.

The presentation briefly addresses three topics. First, science has played an important role throughout the history of the WIPP project, beginning with site selection in the middle 1970s. Science was a key part of site characterization in the 1980s, providing basic information on geology, hydrology, geochemistry, and the mechanical behavior of the salt, among other topics. Science programs also made significant contributions to facility design, specifically in the area of shaft seal design and testing. By the middle 1990s, emphasis shifted from site characterization to regulatory evaluations, and the science program provided one of the essential bases for certification by the Environmental Protection Agency in 1998. Current science activities support ongoing disposal operations and regulatory recertification evaluations mandated by the EPA. Second, the EPA regulatory standards for long-term performance frame the scientific evaluations that provide the basis for certification. Unlike long-term dose standards applied to Yucca Mountain and proposed repositories in other nations, the WIPP regulations focused on cumulative releases during a fixed time interval of 10,000 years, and placed a high emphasis on the consequences of future inadvertent drilling intrusions into the repository. Close attention to the details of the regulatory requirements facilitated EPA's review of the DOE's 1996 Compliance Certification Application. Third, the scientific understanding developed for WIPP provided the basis for modeling studies that evaluated the long-term performance of the repository in the context of regulatory requirements. These performance assessment analyses formed a critical part of the demonstration that the site met the specific regulatory requirements as well as providing insight into the overall understanding of the long-term performance of the system.
The presentation concludes with observations on the role of science in the process of developing a disposal system, including the importance of establishing the regulatory framework, building confidence in the long-term safety of the system, and the critical role of the regulator in decision making.

More Details

Efficient DSMC collision-partner selection schemes

Gallis, Michail A.; Torczynski, J.R.

The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.
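A near-neighbor collision-partner selection of the kind discussed above can be sketched as follows. This is an illustrative rendering, not the authors' implementation; the function name and parameters are hypothetical. The first particle is chosen at random, and its partner is the nearest neighbor within a random fraction of the cell's population: fraction=1.0 recovers nearest-neighbor selection, while a small fraction approaches the standard random selection.

```python
import random

def select_collision_pair(positions, fraction=0.25, rng=random):
    """Pick a first simulator at random, then choose its collision partner
    as the nearest neighbor among a random subset of the cell population.

    positions: list of (x, y, z) tuples for the simulators in one cell.
    fraction: portion of the remaining population considered as candidates.
    """
    n = len(positions)
    i = rng.randrange(n)
    k = max(1, int(fraction * (n - 1)))
    candidates = rng.sample([j for j in range(n) if j != i], k)

    def dist2(a, b):
        # Squared Euclidean distance; avoids an unneeded sqrt.
        return sum((pa - pb) ** 2 for pa, pb in zip(a, b))

    j = min(candidates, key=lambda c: dist2(positions[i], positions[c]))
    return i, j
```

Tuning `fraction` trades the search cost per collision against the mean collision separation, which is the balance the near-neighbor schemes above aim to strike.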

More Details

Synthesis and characterization of metal oxide materials for thermochemical CO2 splitting using concentrated solar energy

Stechel-Speicher, Ellen B.; Coker, Eric N.; Rodriguez, Marko A.

The Sunshine to Petrol effort at Sandia aims to convert carbon dioxide and water to precursors for liquid hydrocarbon fuels using concentrated solar power. Significant advances have been made in the field of solar thermochemical CO₂-splitting technologies utilizing yttria-stabilized zirconia (YSZ)-supported ferrite composites. Conceptually, such materials work via the basic redox reactions: Fe₃O₄ → 3FeO + ½O₂ (thermal reduction, >1350 °C) and 3FeO + CO₂ → Fe₃O₄ + CO (CO₂-splitting oxidation, <1200 °C). There has been limited fundamental characterization of the ferrite-based materials at the high temperatures and conditions present in these cycles. A systematic study of these composites is underway in an effort to begin to elucidate microstructure, structure-property relationships, and the role of the support on redox behavior under high-temperature reducing and oxidizing environments. In this paper the synthesis, structural characterization (including scanning electron microscopy and room-temperature and in-situ x-ray diffraction), and thermogravimetric analysis of YSZ-supported ferrites will be reported.

More Details

Synthesis and characterization of ferrite materials for thermochemical CO2 splitting using concentrated solar energy

Stechel-Speicher, Ellen B.; Coker, Eric N.; Rodriguez, Marko A.

The Sunshine to Petrol effort at Sandia aims to convert carbon dioxide and water to precursors for liquid hydrocarbon fuels using concentrated solar power. Significant advances have been made in the field of solar thermochemical CO₂-splitting technologies utilizing yttria-stabilized zirconia (YSZ)-supported ferrite composites. Conceptually, such materials work via the basic redox reactions: Fe₃O₄ → 3FeO + ½O₂ (thermal reduction, >1350 °C) and 3FeO + CO₂ → Fe₃O₄ + CO (CO₂-splitting oxidation, <1200 °C). There has been limited fundamental characterization of the ferrite-based materials at the high temperatures and conditions present in these cycles. A systematic study of these composites is underway in an effort to begin to elucidate microstructure, structure-property relationships, and the role of the support on redox behavior under high-temperature reducing and oxidizing environments. In this paper the synthesis, structural characterization (including scanning electron microscopy and room-temperature and in-situ x-ray diffraction), and thermogravimetric analysis of YSZ-supported ferrites will be reported.

More Details

Prescriptive vs. performance based cook-off fire testing

Tieszen, Sheldon R.; Erikson, William W.; Gill, Walt; Blanchat, Tom; Nakos, James T.

In the fire safety community, the trend is toward implementing performance-based standards in place of existing prescriptive ones. Prescriptive standards can be difficult to adapt to changing design methods, materials, and application situations of systems that ultimately must perform well in unwanted fire situations. In general, this trend has produced positive results and is embraced by the fire protection community. The question arises as to whether this approach could be used to advantage in cook-off testing. Prescribed fuel fire cook-off tests were instituted because of historical incidents that led to extensive damage to structures and loss of life. They are designed to evaluate the propensity for a violent response. The prescribed protocol has several advantages: it can be defined in terms of controllable parameters (wind speed, fuel type, pool size, etc.), and it may be conservative for a particular scenario. However, fires are inherently variable, and prescribed tests are not necessarily representative of a particular accident scenario. Moreover, prescribed protocols are not necessarily adaptable and may not be conservative. We also consider performance-based testing, which requires more knowledge and thought regarding not only the fire environment but also the behavior of the munitions themselves. Sandia uses a performance-based approach to assuring the safe behavior of systems of interest that contain energetic materials. Sandia also conducts prescriptive fire testing for the IAEA, NRC, and DOT. Here we comment on the strengths and weaknesses of both approaches and suggest a path forward should it be desirable to pursue a performance-based cook-off standard.

More Details

Measurements of Magneto-Rayleigh-Taylor instability growth in solid liners on the 20 MA Z facility

Sinars, Daniel S.; Edens, Aaron E.; Lopez, Mike R.; Smith, Ian C.; Shores, Jonathon S.; Bennett, Guy R.; Atherton, B.W.; Savage, Mark E.; Stygar, William A.; Leifeste, Gordon T.; Slutz, Stephen A.; Herrmann, Mark H.; Cuneo, M.E.; Peterson, Kyle J.; McBride, Ryan D.; Vesey, Roger A.; Nakhleh, Charles N.; Tomlinson, Kurt T.

The magneto-Rayleigh-Taylor (MRT) instability is the most important instability for determining whether a cylindrical liner can be compressed to its axis in a relatively intact form, a requirement for achieving the high pressures needed for inertial confinement fusion (ICF) and other high-energy-density physics applications. While there are many published RT studies, only a handful of well-characterized MRT experiments exist at time scales >1 µs, and none for 100 ns z-pinch implosions. Experiments used solid Al liners with outer radii of 3.16 mm and thicknesses of 292 µm, dimensions similar to magnetically-driven ICF target designs [1]. In most tests the MRT instability was seeded with sinusoidal perturbations (λ = 200 and 400 µm, with peak-to-valley amplitudes of 10 and 20 µm, respectively), wavelengths similar to those predicted to dominate near stagnation. Radiographs show the evolution of the MRT instability and the effects of current-induced ablation of mass from the liner surface. Additional Al liner tests used 25-200 µm wavelengths and flat surfaces. Codes being used to design magnetized liner ICF loads [1] match the features seen except at the smallest scales (<50 µm). Recent experiments used Be liners to enable penetrating radiography using the same 6.151 keV diagnostics and provide an in-flight measurement of the liner density profile.

More Details

Ultra Wideband (UWB) communication vulnerability for security applications

Cooley, H.T.

RF toxicity and Information Warfare (IW) are becoming omnipresent, posing threats to the protection of nuclear assets and to theatres of hostility or combat, where tactical operation of wireless communication without detection and interception is important and sometimes critical for survival. As a result, a requirement for deployment of many security systems is a highly secure wireless technology offering stealth or covert operation, suitable for either permanent or tactical deployment where operation without detection or interruption is important. The possible use of ultra wideband (UWB) spectrum technology as an alternative physical medium for wireless network communication offers many advantages over conventional narrowband and spread spectrum wireless communication. UWB, also known as fast-frequency chirp, is nonsinusoidal and sends information directly by transmitting sub-nanosecond pulses, without mixing baseband information onto a sinusoidal carrier. Thus UWB sends information using radar-like impulses, spreading its energy thinly over a vast spectrum, and can operate at extremely low transmission power within the noise floor, where other forms of RF find it difficult or impossible to operate. As a result, UWB offers low probability of detection (LPD), low probability of interception (LPI), and anti-jamming (AJ) properties in signal space. This paper analyzes and compares the vulnerability of UWB to that of narrowband and spread spectrum wireless network communication.

More Details

A global 3D P-velocity model of the Earth's crust and mantle for improved event location : SALSA3D

Ballard, Sanford B.; Young, Christopher J.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is ≈50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with ≈400 processors.
Resolution of our model is assessed using a variation of the standard checkerboard method. We compare the travel-time prediction and location capabilities of SALSA3D to standard 1D models via location tests on a global event set with GT of 5 km or better. These events generally possess hundreds of Pn and P picks from which we generate different realizations of station distributions, yielding a range of azimuthal coverage and ratios of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over standard 1D ak135 regardless of Pn to P ratio, with the improvement being most pronounced at higher azimuthal gaps.

More Details

Imaging penetrating radiation through ion photon emission microscopy

Hattar, Khalid M.; Villone, J.; Powell, Cody J.; Doyle, Barney L.

The ion photon emission microscope (IPEM), a new radiation effects microscope for imaging single event effects from penetrating radiation, is being developed at Sandia National Laboratories and implemented on the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory. The microscope is designed to permit the direct correlation between the locations of high-energy heavy-ion strikes and single event effects in microelectronic devices. The development of this microscope has required the production of a robust optical system that is compatible with the ion beam lines, the design and assembly of a fast single-photon-sensitive measurement system to provide the necessary coincidence, and the development and testing of many scintillating films. A wide range of scintillating materials for application to the ion photon emission microscope has been tested, with few meeting the stringent radiation hardness, intensity, and photon lifetime requirements. The initial results of these luminescence studies and the current operation of the ion photon emission microscope will be presented. Finally, the planned development of future microscopes and ion luminescence testing chambers will be discussed.

More Details

Solid oxide fuel cell electrolytes produced by a combination of suspension plasma spray and very low pressure plasma spray

McCloskey, James F.

Plasma spray coating techniques allow unique control of electrolyte microstructures and properties, as well as facilitating deposition on complex surfaces. This can enable significantly improved solid oxide fuel cells (SOFCs), including non-planar designs. SOFCs are promising because they directly convert the oxidation of fuel into electrical energy. However, electrolytes deposited using conventional plasma spray are porous and often greater than 50 microns thick. One solution to form dense, thin electrolytes of ideal composition for SOFCs is to combine suspension plasma spray (SPS) with very low pressure plasma spray (VLPPS). Increased compositional control is achieved because dissolved dopant compounds in the suspension are incorporated into the coating during plasma spraying. Thus, it is possible to change the chemistry of the feedstock during deposition. In the work reported, suspensions of sub-micron-diameter 8 mol.% Y2O3-ZrO2 (YSZ) powders were sprayed on NiO-YSZ anodes at the Sandia National Laboratories (SNL) Thermal Spray Research Laboratory (TSRL). These coatings were compared to the same suspensions doped with scandium nitrate at 3 to 8 mol.%. The pressure in the chamber was 2.4 torr, and the plasma was formed from a combination of argon and hydrogen gases. The resultant electrolytes were well adhered to the anode substrates and were approximately 10 microns thick. The microstructure of the resultant electrolytes will be reported, as well as the electrolyte performance as part of an SOFC system via potentiodynamic testing and impedance spectroscopy.

More Details

Listing triangles in expected linear time on a class of power law graphs

Berry, Jonathan W.

Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact, we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
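The bucket construction described in the abstract can be sketched directly; this is an illustrative Python rendering under the stated rules (edge to lower-degree endpoint, ties broken consistently by node id), not the paper's code.

```python
from collections import defaultdict

def list_triangles(edges):
    """List triangles via Cohen's bucket scheme, as described above.

    edges: iterable of (u, v) pairs for a simple undirected graph.
    Returns a sorted list of triangles as sorted node triples.
    """
    edges = list(edges)
    degree = defaultdict(int)
    adjacency = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacency[u].add(v)
        adjacency[v].add(u)

    # Each edge goes into the bucket of its lower-degree endpoint,
    # breaking ties consistently (here by node id).
    def rank(node):
        return (degree[node], node)

    bucket = defaultdict(list)
    for u, v in edges:
        owner, other = (u, v) if rank(u) < rank(v) else (v, u)
        bucket[owner].append(other)

    # Each node tests every pair of edges in its bucket for the
    # third edge that would close the triangle.
    triangles = set()
    for node, others in bucket.items():
        for a_i in range(len(others)):
            for b_i in range(a_i + 1, len(others)):
                a, b = others[a_i], others[b_i]
                if b in adjacency[a]:
                    triangles.add(tuple(sorted((node, a, b))))
    return sorted(triangles)
```

Each triangle is reported exactly once, by its lowest-ranked node, since that node's bucket holds both of its incident triangle edges; the cost is governed by the number of edge pairs per bucket, the quantity analyzed in the abstract.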

More Details

Nontraditional tensor decompositions and applications

Bader, Brett W.

This presentation will discuss two tensor decompositions that are not as well known as PARAFAC (parallel factors) and Tucker, but have proven useful in informatics applications. Three-way DEDICOM (decomposition into directional components) is an algebraic model for the analysis of 3-way arrays with nonsymmetric slices. PARAFAC2 is a related model that is less constrained than PARAFAC and allows for different objects in one mode. Applications of both models to informatics problems will be shown.

More Details

Quantitative imaging of graphene impedance with the near-field scanning microwave microscope

Gin, Aaron G.; Shaner, Eric A.

Graphene has emerged as a promising material for high speed nano-electronics due to the relatively high carrier mobility that can be achieved. To further investigate electronic transport in graphene and reveal its potential for microwave applications, we employed a near-field scanning microwave microscope with the probe formed by an electrically open end of a 4 GHz half-lambda parallel-strip transmission line resonator. Because of the balanced probe geometry, our microscope allows for truly localized quantitative characterization of various bulk and low-dimensional materials, with the response region defined by the one micron spacing between the two metallic strips at the probe tip. The single- and few-layer graphene flakes were fabricated by a mechanical cleavage method on 300-nm-thick silicon dioxide grown on a low-resistivity Si wafer. The flake thickness was determined using both AFM and Raman microscopy. We observe a clear correlation between the near-field microwave and far-field optical images of graphene, produced by the probe resonant frequency shift and thickness-defined color gradation, respectively. We show that the microwave response of graphene flakes is determined by the local sheet impedance, which is found to be predominantly active. Furthermore, we apply a quantitative electrodynamic model relating the probe resonant frequency shift to the 2D conductivity of single- and few-layer graphene. By fitting the model to the experimental data, we evaluate graphene sheet resistance as a function of thickness. Near-field scanning microwave microscopy can simultaneously image the location, geometry, thickness, and distribution of electrical properties of graphene without a need for device fabrication. The approach may be useful for design of graphene-based microwave transistors, quality control of large area graphene sheets, or investigation of chemical and electrical doping effects on graphene transport properties.
We acknowledge support from the DOE Center for Integrated Nanotechnologies user support program (grant No.U2008A061), from the NASA NM Space Grant Consortium program, and from the LANL-NMT MOU program supported by UCDRD.

More Details

Strategic Petroleum Reserve equation of state model development : current performance against measured data

Lord, David L.

This report documents the progression of crude oil phase behavior modeling within the U.S. Strategic Petroleum Reserve vapor pressure program during the period 2004-2009. Improvements in quality control on phase behavior measurements in 2006, coupled with a growing body of degasification plant operations data, have created a solid measurement baseline that has served to inform and significantly improve project understanding of the phase behavior of SPR oils. Systematic tuning of the model based on proven practices from the technical literature has been shown to reduce model bias and match observed data very well, though this model tuning effort is currently in progress at SPR and based on preliminary data. The current report addresses many of the steps that have helped to build a strong baseline of data coupled with sufficient understanding of model features so that calibration is possible.

More Details

Thin and small form factor cells : simulated behavior

Cruz-Campa, Jose L.; Okandan, Murat O.; Resnick, Paul J.; Grubbs, Robert K.; Clews, Peggy J.; Pluym, Tammy P.; Young, Ralph W.; Gupta, Vipin P.; Nielson, Gregory N.

Thin and small form factor cells have been researched lately by several research groups around the world due to possible lower assembly costs and reduced material consumption with higher efficiencies. Given the popularity of these devices, it is important to have detailed information about their behavior. Simulation of fabrication processes and device performance reveals some of the advantages and behavior of solar cells that are thin and small. Three main effects were studied: the effect of surface recombination on the optimum thickness, efficiency, and current density; the effect of contact distance on the efficiency of thin cells; and the effect of surface recombination on the grams per watt-peak. Results show that high efficiency can be obtained in thin devices if they are well passivated and the distance between contacts is short. Furthermore, the ratio of grams per watt-peak is greatly reduced as the device is thinned.

More Details

Back-contacted and small form factor GaAs solar cell

Cruz-Campa, Jose L.; Nielson, Gregory N.; Okandan, Murat O.; Sanchez, Carlos A.; Resnick, Paul J.; Clews, Peggy J.; Pluym, Tammy P.; Gupta, Vipin P.

We present a newly developed microsystem-enabled, back-contacted, shade-free GaAs solar cell. Using microsystem tools, we created sturdy 3 µm-thick devices with lateral dimensions of 250 µm, 500 µm, 1 mm, and 2 mm. The fabrication procedure and the results of characterization tests are discussed. The highest-efficiency cell had a lateral size of 500 µm, a conversion efficiency of 10%, an open circuit voltage of 0.9 V, and a current density of 14.9 mA/cm² under one-sun illumination.

More Details

Mass accretion and nested array dynamics from Ni-Clad Ti-Al wire array Z pinches

Coverdale, Christine A.; Jones, Brent M.; Cuneo, M.E.; Jennings, Christopher A.

Analysis of 50 mm diameter wire arrays at the Z Accelerator has experimentally shown the accretion of mass in a stagnating z pinch and provided insight into details of the radiating plasma species and plasma conditions. This analysis focused on nested wire arrays with a 2:1 (outer:inner) mass, radius, and wire number ratio, where Al wires were fielded on the outer array and Ni-clad Ti wires were fielded on the inner array. In this presentation, we will present analysis of data from other mixed Al/Ni-clad Ti configurations to further evaluate nested wire array dynamics and mass accretion. These additional configurations include the opposite configuration to that described above (Ni-clad Ti wires on the outer array, with Al wires on the inner array) as well as higher wire number Al configurations fielded to vary the interaction of the two arrays. The same variations were also assessed for a smaller diameter (40 mm) nested array configuration. Variations in the emitted radiation and plasma conditions will be presented, along with a discussion of what the results indicate about the nested array dynamics. Additional evidence for mass accretion will also be presented.

More Details

What can spectroscopy and imaging of multi-planar wire arrays reveal about Z-pinch radiation physics?

Coverdale, Christine A.

The planar wire array research on Zebra at UNR that started in 2005 continues with experiments on new types of planar loads, producing results for consideration and comprehensive analysis [see, for example, Kantsyrev et al., HEDP 5, 115 (2009)]. Detailed studies of the radiative properties of such loads are important, and spectroscopy and imaging constitute a very valuable and informative diagnostic tool. A set of theoretical codes has been implemented that provides non-LTE kinetics, wire ablation dynamics, and MHD modeling. This talk is based on the results of recent experiments with planar wire arrays on Zebra at UNR. We start with results on the radiative properties of a uniform single planar wire array (SPWA) of alloyed Al wires and move to combined triple planar wire arrays (TPWAs) made from two materials, Cu and Al. Such a combined TPWA includes three planar wire rows that are parallel to each other and made of either Cu or alloyed Al wires. Three different configurations (Al/Cu/Al, Cu/Al/Cu, and Cu/Cu/Al) are considered and compared with each other, and with the results from SPWAs of the same materials. X-ray time-gated and time-integrated pinhole images and spectra are analyzed together with bolometer, PCD, and XRD measurements, and optical images. Emphasis is placed on the radiative properties and the temporal and spatial evolution of plasma parameters of such two-component plasmas. Opacity effects are considered, and the important question of what causes K-shell Al lines to be optically thin in combined TPWAs is addressed. In conclusion, the new findings from studying multi-planar wire array implosions are summarized and their input to Z-pinch radiation physics is discussed.

More Details

Bridging the gaps : joining information sources with Splunk

Corwell, Sophia E.

Supercomputers are composed of many diverse components, operated at a variety of scales, and function as a coherent whole. The resulting logs are thus diverse in format, interrelated at multiple scales, and provide evidence of faults across subsystems. When combined with system configuration information, insights into both the downstream effects and upstream causes of events can be determined. However, difficulties in joining the data and expressing complex queries slow the speed at which actionable insights can be obtained. Effectively connecting data experts and data miners faces similar hurdles. This paper describes our experience with applying the Splunk log analysis tool as a vehicle to combine both data and people. Splunk's search language, lookups, macros, and subsearches reduce hours of tedium to seconds of simplicity, and its tags, saved searches, and dashboards offer both operational insights and collaborative vehicles.

More Details

New analytic 1D pn junction diode transient photocurrent solutions following ionizing radiation and including time-dependent carrier lifetime degradation from a non-concurrent neutron pulse

Axness, Carl L.; Keiter, Eric R.

Circuit simulation codes, such as SPICE, are invaluable in the development and design of electronic circuits in radiation environments. These codes are often employed to study the effect of many thousands of devices under transient current conditions. Device-scale simulation codes are commonly used in the design of individual semiconductor components, but computational requirements limit their use to small-scale circuits. Analytic solutions to the ambipolar diffusion equation, an approximation to the carrier transport equations, may be used to characterize the transient currents at nodes within a circuit simulator. We present new analytic transient excess carrier density and photocurrent solutions to the ambipolar diffusion equation for 1-D abrupt-junction pn diodes. These solutions incorporate low-level radiation pulses and take into account a finite device geometry, ohmic fields outside the depleted region, and an arbitrary change in the carrier lifetime due to neutron irradiation or other effects. The solutions are specifically evaluated for the case of an abrupt change in the carrier lifetime during or after a step, square, or piecewise linear radiation pulse. Noting slow convergence of the Fourier series solutions for some parameter sets, we evaluate portions of the solutions using closed-form formulas, resulting in a two-order-of-magnitude increase in computational efficiency.

More Details

Atomistic models for scintillator discovery

Doty, Fred P.; Yang, Pin Y.

A2BLnX6 elpasolites (A, B: alkali; Ln: lanthanide; X: halogen), LaBr3 lanthanum bromide, and AX alkali halides are three classes of ionic compound crystals being explored for γ-ray detection applications. Elpasolites are attractive because they can be optimized from combinations of four different elements. One design goal is to create cubic crystals that have isotropic optical properties and can be grown into large crystals at lower cost. Unfortunately, many elpasolites do not form cubic crystals, and the experimental trial-and-error approach to finding cubic elpasolites has been prolonged and inefficient. LaBr3 is attractive due to its established good scintillation properties. The problem is that this brittle material is not only prone to fracture in service, but also difficult to grow into large crystals, resulting in high production cost. Unfortunately, it is not always clear how to strengthen LaBr3 due to the lack of understanding of its fracture mechanisms. The problem with alkali halides is that their properties decay rapidly over time, especially in harsh environments. Here we describe our recent progress on the development of atomistic models that may begin to enable the prediction of crystal structures and the study of fracture mechanisms of multi-element compounds.

More Details

ParaText : scalable text analysis and visualization

Dunlavy, Daniel D.

Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, and scholarly research. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.

More Details

Scalable methods for representing, characterizing, and generating large graphs

Pinar, Ali P.

Goal: design methods to characterize and identify a low-dimensional representation of graphs. Impact: enabling predictive simulation, monitoring dynamics on graphs, and sampling and recovering network structure from limited observations. Areas to explore are: (1) enabling technologies - develop novel algorithms and tailor existing ones for complex networks; (2) modeling and generation - identify the right parameters for graph representation and develop algorithms to compute these parameters and generate graphs from them; and (3) comparison - given two graphs, how do we tell whether they are similar? Some conclusions are: (1) a bad metric can make anything look good; (2) a metric based on edge-by-edge prediction will suffer from the skewed distribution of present and absent edges; (3) the dominant signal is the sparsity, with edges only adding noise on top of it, so the real signal, the structure of the graph, is often lost behind the dominant signal; and (4) the proposed alternative is comparison based on a carefully chosen set of features: it is more efficient but sensitive to the selection of features, finding an independent set of features is an important open area, and keep an eye on us for some important results.

More Details

Measurement and interpretation of threshold stress intensity factors for steels in high-pressure hydrogen gas

Somerday, Brian P.; San Marchi, Christopher W.; Foulk, James W.

Threshold stress intensity factors were measured in high-pressure hydrogen gas for a variety of low alloy ferritic steels using both constant crack opening displacement and rising crack opening displacement procedures. The sustained load cracking procedures are generally consistent with those in ASME Article KD-10 of Section VIII Division 3 of the Boiler and Pressure Vessel Code, which was recently published to guide design of high-pressure hydrogen vessels. Three definitions of threshold were established for the two test methods: K_THi* is the maximum applied stress intensity factor for which no crack extension was observed under constant displacement; K_THa is the stress intensity factor at the arrest position for a crack that extended under constant displacement; and K_JH is the stress intensity factor at the onset of crack extension under rising displacement. The apparent crack initiation threshold under constant displacement, K_THi*, and the crack arrest threshold, K_THa, were both found to be non-conservative due to the hydrogen exposure and crack-tip deformation histories associated with typical procedures for sustained-load cracking tests under constant displacement. In contrast, K_JH, which is measured under concurrent rising displacement and hydrogen gas exposure, provides a more conservative hydrogen-assisted fracture threshold that is relevant to structural components in which sub-critical crack extension is driven by internal hydrogen gas pressure.

More Details

Detecting insider activity using enhanced directory virtualization

Claycomb, William R.

Insider threats often target authentication and access control systems, which are frequently based on directory services. Detecting these threats is challenging, because malicious users with the technical ability to modify these structures often have sufficient knowledge and expertise to conceal unauthorized activity. The use of directory virtualization to monitor various systems across an enterprise can be a valuable tool for detecting insider activity. The addition of a policy engine to directory virtualization services enhances monitoring capabilities by allowing greater flexibility in analyzing changes for malicious intent. The resulting architecture is a system-based approach, where the relationships and dependencies between data sources and directory services are used to detect an insider threat, rather than simply relying on point solutions. This paper presents such an architecture in detail, including a description of implementation results.
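The policy-engine idea described above can be sketched as a set of rules evaluated against directory change events collected through virtualization: each rule looks at the relationship between the actor, the modified entry, and the reporting data source. The event fields and the two rules below are illustrative assumptions, not the paper's actual implementation.

```python
# Rule-based evaluation of directory change events gathered through
# directory virtualization. Each rule flags a suspicious relationship
# between the actor, the target entry, and the reporting source system.
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    actor: str      # account that made the change
    target: str     # directory entry that was modified
    attribute: str  # e.g. "memberOf", "userPassword"
    source: str     # backing system that reported the change

SENSITIVE_ATTRIBUTES = {"memberOf", "userPassword", "adminCount"}

def self_escalation(event):
    """Flag actors modifying sensitive attributes on their own entry."""
    return event.actor == event.target and event.attribute in SENSITIVE_ATTRIBUTES

def cross_source_mismatch(event, authoritative_source="hr_system"):
    """Flag sensitive changes that did not originate from the authoritative source."""
    return event.attribute in SENSITIVE_ATTRIBUTES and event.source != authoritative_source

def evaluate(event, rules=(self_escalation, cross_source_mismatch)):
    """Return the names of all policy rules the event violates."""
    return [rule.__name__ for rule in rules if rule(event)]

suspicious = ChangeEvent("alice", "alice", "memberOf", "ldap")
print(evaluate(suspicious))
```

The second rule is what makes this a system-based approach rather than a point solution: it uses the relationship between data sources, which only a virtualization layer spanning those sources can observe.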

More Details

Expected result of firing an ICE load on Z without vacuum

Struve, Kenneth W.; Lemke, Raymond W.; Savage, Mark E.

In addressing the hazard categorization of the Z Accelerator for Special Nuclear Material (SNM) experiments, the question arose as to whether the machine could be fired with its central vacuum chamber open, thus providing a path for airborne release of SNM materials. In this report we summarize calculations showing that we could expect a maximum current of only 460 kA into such a load in the long-pulse mode, which will be used for the SNM experiments, and 750 kA in a short-pulse mode, which is not useful for these experiments. We also examined the effect of the current in both cases and found that neither current is high enough to melt or vaporize these loads; the melt threshold is 1.6 MA. Therefore, a necessary condition to melt, vaporize, or otherwise disperse SNM material is that a vacuum exist in the Z vacuum chamber. The vacuum chamber thus serves as a passive feature that prevents any airborne release during the shot, regardless of whatever containment may be in place.
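The safety argument above reduces to a simple comparison of the achievable currents against the melt threshold. The numerical values come from the summary; the comparison logic below is only an illustration of that check.

```python
# Compare the maximum achievable no-vacuum load currents on Z against
# the 1.6 MA melt threshold cited in the report summary.
MELT_THRESHOLD_MA = 1.6

def can_melt(current_ka):
    """True if the load current (in kA) reaches the melt threshold."""
    return current_ka / 1000.0 >= MELT_THRESHOLD_MA

for label, current_ka in [("long pulse, no vacuum", 460),
                          ("short pulse, no vacuum", 750)]:
    print(f"{label}: {current_ka} kA -> melts load: {can_melt(current_ka)}")
```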

More Details