Quarter 2 Fiscal Year 2010 Hydrogen Systems Analysis Report
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The effect of collision-partner selection schemes on the accuracy and the efficiency of the Direct Simulation Monte Carlo (DSMC) method of Bird is investigated. Several schemes to reduce the total discretization error as a function of the mean collision separation and the mean collision time are examined. These include the historically first sub-cell scheme, the more recent nearest-neighbor scheme, and various near-neighbor schemes, which are evaluated for their effect on the thermal conductivity for Fourier flow. Their convergence characteristics as a function of spatial and temporal discretization and the number of simulators per cell are compared to the convergence characteristics of the sophisticated and standard DSMC algorithms. Improved performance is obtained if the population from which possible collision partners are selected is an appropriate fraction of the population of the cell.
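The abstract does not spell out the selection schemes algorithmically, but the essential difference among them is the pool from which a collision partner is drawn. The following minimal Python sketch (our illustration, not the paper's code) expresses the standard cell-wide scheme, nearest-neighbor selection, and the intermediate near-neighbor schemes as a single parameterized routine:

```python
import numpy as np

def select_collision_partner(positions, i, rng, k=None):
    """Pick a collision partner for particle i within one DSMC cell.

    k=None draws uniformly from the whole cell (standard DSMC);
    k=1 gives nearest-neighbor selection; an intermediate k draws
    from the k closest candidates (a near-neighbor scheme).
    """
    candidates = np.delete(np.arange(len(positions)), i)
    if k is None:
        return rng.choice(candidates)            # standard scheme
    d = np.linalg.norm(positions[candidates] - positions[i], axis=1)
    nearest = candidates[np.argsort(d)[:k]]      # k closest particles
    return rng.choice(nearest)                   # near-neighbor scheme
```

Choosing k as an appropriate fraction of the cell population, rather than k=1 or the full cell, corresponds to the improved-performance regime the abstract reports.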
Continued reduction of characteristic dimensions in nanosystems has given rise to increasing importance of material interfaces on the overall system performance. With regard to thermal transport, this increases the need for a better fundamental understanding of the processes affecting interfacial thermal transport, as characterized by the thermal boundary conductance. When thermal boundary conductance is driven by phononic scattering events, accurate predictions of interfacial transport must account for anharmonic phononic coupling as this affects the thermal transmission. In this paper, a new model for phononic thermal boundary conductance is developed that takes into account anharmonic coupling, or inelastic scattering events, at the interface between two materials. Previous models for thermal boundary conductance are first reviewed, including the Diffuse Mismatch Model, which only considers elastic phonon scattering events, and earlier attempts to account for inelastic phonon scattering, namely, the Maximum Transmission Model and the Higher Harmonic Inelastic Model. A new model is derived, the Anharmonic Inelastic Model, which provides a more physical consideration of the effects of inelastic scattering on thermal boundary conductance. This is accomplished by considering specific ranges of phonon frequency interactions and phonon number density conservation. Thus, this model considers the contributions of anharmonic, inelastically scattered phonons to thermal boundary conductance. This new Anharmonic Inelastic Model shows excellent agreement between model predictions and experimental data at the Pb/diamond interface due to its ability to account for the temperature-dependent phonon population in diamond, which can couple anharmonically with multiple phonons in Pb.
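For context, the elastic Diffuse Mismatch Model baseline that the inelastic models extend can be written in a common textbook form (the notation here is ours, not necessarily the paper's):

$$ h_{BD} = \frac{1}{2}\sum_j \int_0^{\omega_{c,j}} \hbar\omega\, v_{1,j}\, D_{1,j}(\omega)\, \frac{\partial f(\omega,T)}{\partial T}\, \zeta_{1\to2}(\omega)\, d\omega, \qquad \zeta_{1\to2}(\omega) = \frac{\sum_j v_{2,j} D_{2,j}(\omega)}{\sum_j v_{1,j} D_{1,j}(\omega) + \sum_j v_{2,j} D_{2,j}(\omega)}, $$

where $f$ is the Bose-Einstein distribution and $v_{i,j}$ and $D_{i,j}$ are the phonon velocity and density of states of polarization $j$ in material $i$. Because the transmission $\zeta_{1\to2}$ is evaluated at a single frequency $\omega$, only elastic scattering is captured; this is precisely the limitation that the inelastic models above, and the new Anharmonic Inelastic Model, are designed to remove.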
Abstract not provided.
Abstract not provided.
The Waste Isolation Pilot Plant (WIPP) disposal operations currently employ two different disposal methods: one for Contact Handled (CH) waste and another for Remote Handled (RH) waste. CH waste is emplaced in a variety of payload container configurations on the floor of each disposal room. In contrast, RH waste is packaged into a single type of canister and emplaced in pre-drilled holes in the walls of disposal rooms. Emplacement of the RH waste in the walls must proceed in advance of CH waste emplacement and therefore poses logistical constraints, in addition to the loss of valuable disposal capacity. To improve operational efficiency and disposal capacity, the Department of Energy (DOE) has proposed a shielded container for certain RH waste streams. RH waste with relatively low gamma-emitting activity would be packaged in lead-lined containers, shipped to WIPP in existing certified transportation packages for CH waste, and emplaced in WIPP among the stacks of CH waste containers on the floor of a disposal room. RH waste with high gamma-emitting activity would continue to be emplaced in the boreholes along the walls. The new RH container is similar to the nominal 208-liter (55-gallon) drum; however, it includes about 2.5 cm (1 in) of lead sandwiched between thick steel sheets. Furthermore, the top and bottom are made of thick plate steel to strengthen the package to meet transportation requirements. This robust configuration provides an overpack for materials that otherwise would be RH waste. This paper describes the container and the regulatory approach used to meet the requirements imposed by regulations that apply to WIPP. This includes a Performance Assessment used to evaluate WIPP's long-term performance and the DOE's approach to gain approval for the transportation of shielded containers. This paper also describes estimates of the DOE's RH transuranic waste inventory that may be packaged and emplaced in shielded containers. Finally, the paper includes a discussion of how the DOE proposes to track the waste packaged into shielded containers against the RH waste inventory and how this will comply with the regulated volume.
Abstract not provided.
Abstract not provided.
Journal of Chemical Physics
Abstract not provided.
The original Trusted Radiation Identification System (TRIS) was developed from 1999-2001, featuring information barrier technology to collect gamma radiation template measurements useful for arms control regime operations. The first TRIS design relied upon a multichannel analyzer (MCA) that was external to the protected volume of the system enclosure, undesirable from a system security perspective. An internal complex programmable logic device (CPLD) contained data which was not subject to software authentication. Physical authentication of the TRIS instrument case was performed by a sensitive but slow eddy-current inspection method. This paper describes progress to date for the Next Generation TRIS (NG-TRIS), which improves the TRIS design. We have incorporated the MCA internal to the trusted system volume, achieved full authentication of CPLD data, and have devised rapid methods to authenticate the system enclosure and weld seals of the NG-TRIS enclosure. For a complete discussion of the TRIS system and components upon which NG-TRIS is based, the reader is directed to the comprehensive user's manual and system reference of Seager, et al.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
In this work, we developed a self-organizing map (SOM) technique for using web-based text analysis to forecast when a group is undergoing a phase change. By 'phase change', we mean that an organization has fundamentally shifted attitudes or behaviors, much as the characteristics of a substance change when ice melts into water: a formerly peaceful group may suddenly adopt violence, or a violent organization may unexpectedly agree to a ceasefire. SOM techniques were used to analyze text obtained from organization postings on the World Wide Web. Results suggest it may be possible to forecast phase changes, and to determine whether an example of writing can be attributed to a group of interest.
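The abstract gives no implementation detail, so the following is a hedged sketch only: a SOM maps high-dimensional document feature vectors (e.g., tf-idf vectors, an assumption on our part) onto a low-dimensional grid, and drift of a group's postings across that grid over time is what would signal a phase change. A minimal Python training loop:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a self-organizing map on row feature vectors (n_docs x dim)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit: grid node whose weight vector is closest to x
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood pulls the BMU and its grid neighbors toward x
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        g = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * g * (x - weights)
    return weights
```

After training, each new posting is assigned to its best-matching unit; a sustained shift in where a group's documents land, or a poor match to the group's historical region, is the kind of evidence the attribution and phase-change questions turn on.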
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The world currently faces tremendous energy challenges stemming from the need to curb potentially catastrophic anthropogenic climate change. In addition, many nations, including the United States, recognize increasing political and economic risks associated with dependence on uncertain and limited energy sources. For these and other reasons the chemical composition of transportation fuels is changing, both through the introduction of nontraditional fossil sources, such as oil sands-derived fuels in the U.S. fuel stream, and through broader exploration of biofuels. At the same time the need for clean and efficient combustion is leading engine research towards advanced low-temperature combustion strategies that are increasingly sensitive to this changing fuel chemistry, particularly in the areas of pollutant formation and autoignition. I will highlight the new demands that advanced engine technologies and evolving fuel composition place on investigations of fundamental reaction chemistry. I will focus on recent progress in measuring product formation in elementary reactions by tunable synchrotron photoionization, on the elucidation of pressure-dependent effects in the reactions of alkyl and substituted alkyl radicals with O₂, and on new combined efforts in fundamental combustion chemistry and engine performance studies of novel potential biofuels.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This paper discusses a new approach to making hybrid power electronic circuits by combining a low-temperature (850 °C to 950 °C) co-fired ceramic (LTCC) substrate, planar LTCC ferrite transformers/inductors and integrated passive components into a multilayer monolithic package using a ferrite-based LTCC material system. A ferrite tape functions as the base material for this LTCC system. The material system includes physically and chemically compatible dielectric paste, dielectric tape and conductor materials which can be co-fired with the base ferrite LTCC tape to create sintered devices with excellent magnetic coupling, high permeability (≈400), high resistivity (> 10¹² Ω·cm) and good saturation (≈0.3 T). The co-fired ferrite and dielectric materials can be used as a substrate for attaching or housing semiconductor components and other discrete devices that are part of the power electronics system. Furthermore, the ability to co-fire the ferrite with dielectric and conductor materials allows for the incorporation of embedded passives in the multilayer structure to create hybrid power electronic circuits. Overall, this thick film material set offers a unique approach to making hybrid power electronics and could potentially allow a size reduction for many commercial dc-dc converter and other power electronic circuits.
Nanotechnology
Abstract not provided.
Nanoporous carbon (NPC) is a purely graphitic material with highly controlled densities ranging from less than 0.1 to 2.0 g/cm³, grown via pulsed-laser deposition. Decreasing the density of NPC increases the interplanar spacing between graphene-sheet fragments. This ability to tune the interplanar spacing makes NPC an ideal model system to study the behavior of carbon electrodes in electrochemical capacitors and batteries. We examine the capacitance of NPC films in alkaline and acidic electrolytes, and measure specific capacitances as high as 242 F/g.
Abstract not provided.
Abstract not provided.
Advances in structural adhesives have permitted engineers to contemplate the use of bonded joints in areas that have long been dominated by mechanical fasteners and welds. Although strength, modulus, and toughness have been improved in modern adhesives, the typical concerns with using these polymers still exist. These include concerns over long-term durability and an inability to quantify bond strength (i.e., identify weak bonds) in adhesive joints. Bond deterioration in aging structures and bond strength in original construction are now critical issues that require more than simple flaw detection. Whether the structure involves metallic or composite materials, it is necessary to extend inspections beyond the detection of disbond flaws to include an assessment of the strength of the bond. Use of advanced nondestructive inspection (NDI) methods to measure the mechanical properties of a bonded joint and associated correlations with post-inspection failure tests have provided some clues regarding the key parameters involved in assessing bond strength. Recent advances in ultrasonic- and thermographic-based inspection methods have shown promise for measuring such properties. Specialized noise reduction and signal enhancement schemes have allowed thermographic interrogations to image the subtle differences between bond lines of various strengths. Similarly, specialized ultrasonic (UT) inspection techniques, including laser UT, guided waves, UT spectroscopy, and resonance methods, can be coupled with unique signal analysis algorithms to accurately characterize the properties of weak interfacial bonds. The generation of sufficient energy input levels to derive bond strength variations, the production of sufficient technique sensitivity to measure such minor response variations, and the difficulty in manufacturing repeatable weak bond specimens are all issues that exacerbate these investigations. The key to evaluating the bond strength lies in the ability to exploit the critical characteristics of weak bonds such as nonlinear responses, poor transmission of shear waves, and changes in response to stiffness-based interrogations. This paper will present several ongoing efforts that have identified promising methods for quantifying bond strength and discuss some completed studies that provide a foundation for further evolution in weak bond assessments.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The authors have developed two versions of a flexible fabrication technique known as membrane projection lithography that can produce nearly arbitrary patterns in '2½-D' and fully three-dimensional (3D) structures. The authors have applied this new technique to the fabrication of split ring resonator-based metamaterials in the midinfrared. The technique utilizes electron beam lithography for resolution, pattern design flexibility, and alignment. The resulting structures are nearly three orders of magnitude smaller than equivalent microwave structures that were first used to demonstrate a negative index material. The fully 3D structures are highly isotropic and exhibit both electrically and magnetically excited resonances for incident transverse electromagnetic waves.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Results show that a time-series-based classification may be possible. For the test cases considered, the correct model can be selected and the number of index cases can be captured within ±σ with 5-10 days of data. The low signal-to-noise ratio makes the classification difficult for small epidemics. The problem statement is: (1) create Bayesian techniques to classify and characterize epidemics from a time series of ICD-9 codes (we will call this time series a 'morbidity stream'); and (2) assuming the morbidity stream has already set off an alarm (through a Kalman filter anomaly detector) and starting with a set of putative diseases, identify which disease or set of diseases fits the data best, and infer associated information, i.e., the number of index cases, the start time of the epidemic, the spread rate, etc.
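As an illustration of step (2) only: the model-selection part can be cast as Bayesian updating over the putative diseases. The Poisson observation model below is our assumption for the sketch, not necessarily the authors' likelihood:

```python
import numpy as np
from scipy.stats import poisson

def classify_epidemic(counts, models, priors):
    """Posterior over putative disease models given a morbidity stream.

    counts : observed daily case counts since the alarm
    models : one array of expected daily counts per candidate disease
    priors : prior probability of each candidate
    """
    log_post = np.log(np.asarray(priors, dtype=float))
    for k, expected in enumerate(models):
        log_post[k] += poisson.logpmf(counts, expected[:len(counts)]).sum()
    log_post -= log_post.max()                # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

With 5-10 days of counts the posterior typically concentrates on one candidate; when the epidemic is small, the Poisson noise dominates the signal and the posterior stays diffuse, consistent with the difficulty the abstract reports.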
Journal of Materials Research
Abstract not provided.
In order to provide large quantities of high-reliability disk-based storage, it has become necessary to aggregate disks into fault-tolerant groups based on the RAID methodology. Most RAID levels do provide some fault tolerance, but there are certain classes of applications that require increased levels of fault tolerance within an array. Some of these applications include embedded systems in harsh environments that have a low level of serviceability, or uninhabited data centers servicing cloud computing. When describing RAID reliability, the Mean Time To Data Loss (MTTDL) calculations will often assume that the time to replace a failed disk is relatively low, or even negligible compared to rebuild time. For platforms that are in remote areas collecting and processing data, it may be impossible to access the system to perform system maintenance for long periods. A disk may fail early in a platform's life, but not be replaceable for much longer than typical for RAID arrays. Service periods may be scheduled at intervals on the order of months, or the platform may not be serviced until the end of a mission in progress. Further, this platform may be subject to extreme conditions that can accelerate wear and tear on a disk, requiring even more protection from failures. We have created a high parity RAID implementation that uses a Graphics Processing Unit (GPU) to compute more than two blocks of parity information per stripe, allowing extra parity to eliminate or reduce the requirement for rebuilding data between service periods. While this type of controller is highly effective for RAID 6 systems, an important benefit is the ability to incorporate more parity into a RAID storage system. Such RAID levels, as yet unnamed, can tolerate the failure of three or more disks (depending on configuration) without data loss. While this RAID system certainly has applications in embedded systems running applications in the field, similar benefits can be obtained for servers that are engineered for storage density, with less regard for serviceability or maintainability. A storage brick can be designed to have a MTTDL that extends well beyond the useful lifetime of the hardware used, allowing the disk subsystem to require less service throughout the lifetime of a compute resource. This approach is similar to the Xiotech ISE. Such a design can be deliberately placed remotely (without frequent support) in order to provide colocation, or meet cost goals. For workloads where reliability is key, but conditions are sub-optimal for routine serviceability, a high-parity RAID can provide extra reliability in extraordinary situations. For example, for installations requiring very high Mean Time To Repair, the extra parity can eliminate certain problems with maintaining hot spares, increasing overall reliability. Furthermore, in situations where disk reliability is reduced because of harsh conditions, extra parity can guard against early data loss due to lowered Mean Time To Failure. If used through an iSCSI interface with a streaming workload, it is possible to gain all of these benefits without impacting performance.
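For illustration only: the parity mathematics behind such high-parity arrays is commonly Reed-Solomon coding over GF(2⁸). The Python sketch below computes m parity blocks from k data blocks on the CPU; the system described offloads exactly this kind of arithmetic to a GPU, and a production code would also need to guarantee matrix invertibility for reconstruction (e.g., via a Cauchy rather than the naive Vandermonde matrix used here):

```python
# GF(2^8) log/antilog tables using the common primitive polynomial 0x11d.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # wrap so gf_mul never indexes out of range

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def encode_parity(data_blocks, m):
    """Compute m parity blocks from k equal-length data blocks.

    m=1 reduces to RAID 5 XOR parity, m=2 to RAID 6; m>=3 gives the
    as-yet-unnamed high-parity levels tolerating three or more failures.
    """
    k, n = len(data_blocks), len(data_blocks[0])
    parity = [bytearray(n) for _ in range(m)]
    for r in range(m):
        for c in range(k):
            coeff = EXP[(r * c) % 255]        # Vandermonde element alpha^(r*c)
            for j in range(n):
                parity[r][j] ^= gf_mul(coeff, data_blocks[c][j])
    return parity
```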
Abstract not provided.
Chemically reacting flow models generally involve inputs and parameters that are determined from empirical measurements, and therefore exhibit a certain degree of uncertainty. Estimating the propagation of this uncertainty into computational model output predictions is crucial for purposes of reacting flow model validation, model exploration, as well as design optimization. Recent years have seen great developments in probabilistic methods and tools for efficient uncertainty quantification (UQ) in computational models. These tools are grounded in the use of Polynomial Chaos (PC) expansions for representation of random variables. The utility and effectiveness of PC methods have been demonstrated in a range of physical models, including structural mechanics, transport in porous media, fluid dynamics, aeronautics, heat transfer, and chemically reacting flow. While high-dimensionality remains nominally an ongoing challenge, great strides have been made in dealing with moderate dimensionality along with non-linearity and oscillatory dynamics. In this talk, I will give an overview of UQ in chemical systems. I will cover both: (1) the estimation of uncertain input parameters from empirical data, and (2) the forward propagation of parametric uncertainty to model outputs. I will cover the basics of forward PC UQ methods with examples of their use. I will also highlight the need for accurate estimation of the joint probability density over the uncertain parameters, in order to arrive at meaningful estimates of model output uncertainties. Finally, I will discuss recent developments on the inference of this density given partial information from legacy experiments, in the absence of raw data.
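Concretely, the PC representation referred to here expands a random model output $u$ in orthogonal polynomials $\Psi_k$ of standard random variables $\xi$:

$$ u(\xi) \approx \sum_{k=0}^{P} u_k\,\Psi_k(\xi), \qquad u_k = \frac{\langle u\,\Psi_k \rangle}{\langle \Psi_k^2 \rangle}, $$

so that output statistics follow directly from the coefficients, e.g., the mean is $u_0$ and the variance is $\sum_{k\ge1} u_k^2 \langle \Psi_k^2 \rangle$. Forward propagation then amounts to computing the $u_k$, whether by Galerkin projection or by non-intrusive sampling of the deterministic model.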
In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCORS project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs₂MoO₄ compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 and that release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels. The most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses.
As part of a Nuclear Regulatory Commission (NRC) research program to evaluate the impact of using mixed-oxide (MOX) fuel in commercial nuclear power plants, a study was undertaken to evaluate the impact of the usage of MOX fuel on the consequences of postulated severe accidents. A series of 23 severe accident calculations was performed using MELCOR 1.8.5 for a four-loop Westinghouse reactor with an ice condenser containment. The calculations covered five basic accident classes that were identified as the risk- and consequence-dominant accident sequences in plant-specific probabilistic risk assessments for the McGuire and Catawba nuclear plants, including station blackouts and loss-of-coolant accidents of various sizes, with both early and late containment failures. Ultimately, the results of these MELCOR simulations will be used to provide a supplement to the NRC's alternative source term described in NUREG-1465. Source term magnitude and timing results are presented consistent with the NUREG-1465 format. For each of the severe accident release phases (coolant release, gap release, in-vessel release, ex-vessel release, and late in-vessel release), source term timing information (onset of release and duration) is presented. For all release phases except for the coolant release phase, magnitudes are presented for each of the NUREG-1465 radionuclide groups. MELCOR results showed variation of noble metal releases between those typical of ruthenium (Ru) and those typical of molybdenum (Mo); therefore, results for the noble metals were presented for Ru and Mo separately. The collection of the source term results can be used as the basis to develop a representative source term (across all accident types) that will be the MOX supplement to NUREG-1465.
The Oak Ridge National Laboratory computer code, ORIGEN2.2 (CCC-371, 2002), was used to obtain the elemental composition of irradiated low-enriched uranium (LEU)/mixed-oxide (MOX) pressurized-water reactor fuel assemblies. Described in this report are the input parameters for the ORIGEN2.2 calculations. The rationale for performing the ORIGEN2.2 calculation was to generate inventories to be used to populate MELCOR radionuclide classes. Therefore the ORIGEN2.2 output was subsequently manipulated. The procedures performed in this data reduction process are also described herein. A listing of the ORIGEN2.2 input deck for two-cycle MOX is provided in the appendix. The final output from this data reduction process was three tables containing the radionuclide inventories for LEU/MOX in elemental form. Masses, thermal powers, and activities were reported for each category.
Abstract not provided.
International Journal of Distributed Systems and Technologies
In a recent acquisition by DOE/NNSA, several large-capacity computing clusters, called TLCC, have been installed at the DOE labs: SNL, LANL, and LLNL. The TLCC architecture, with ccNUMA, multi-socket, multi-core nodes and an InfiniBand interconnect, is representative of the trend in HPC architectures. This paper examines application performance on TLCC, contrasting it with Red Storm/Cray XT4. TLCC and Red Storm share similar AMD processors and memory DIMMs; Red Storm, however, has single-socket nodes and a custom interconnect. Micro-benchmarks and performance analysis tools help explain the causes of the observed performance differences. Control of processor and memory affinity on TLCC with the numactl utility is shown to result in significant performance gains and is essential to attenuate the detrimental impact of OS interference and cache-coherency overhead. While previous studies have investigated the impact of affinity control mostly in the context of small SMP systems, the focus of this paper is on highly parallel MPI applications.
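The paper controls affinity externally with the numactl utility (binding a rank's CPUs and memory to one node); purely as a rough in-process analogue, and with a hypothetical core-to-NUMA-node layout that is not TLCC's actual topology, a rank on Linux could pin itself like this:

```python
import os

def pin_rank_to_node(rank, cores_per_node=4, numa_nodes=4):
    """Pin the calling process to one (hypothetical) NUMA node's cores.

    Mimics the effect of `numactl --cpunodebind` for the CPU side only;
    assumes cores are numbered contiguously within each node (Linux only).
    """
    node = rank % numa_nodes
    cores = set(range(node * cores_per_node, (node + 1) * cores_per_node))
    os.sched_setaffinity(0, cores)   # 0 means "this process"
    return cores
```

Keeping a rank's threads and its memory on the same node is what attenuates the cache-coherency traffic and OS-interference jitter described above; numactl additionally binds memory allocation, which this CPU-only sketch does not.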
Geomechanical analyses have been performed to investigate potential mine interactions with wellbores that could occur in the Potash Enclave of Southeastern New Mexico. Two basic models were used in the study: (1) a global model that simulates the mechanics associated with mining and subsidence, and (2) a wellbore model that examines the resulting interaction impacts on the wellbore casing. The first model is a 2D approximation of a potash mine using a plane strain idealization for mine depths of 304.8 m (1000 ft) and 609.6 m (2000 ft). A 3D wellbore model then considers the impact of bedding plane slippage across single- and double-cased wells cemented through the Salado formation. The wellbore model establishes the allowable slippage to prevent casing yield.
IEEE Transactions on Plasma Science
An electromagnetic analysis is performed on the ITER shield modules under different plasma-disruption scenarios using the OPERA-3d software. The models considered include the baseline design as provided by the International Organization and an enhanced design that includes the more realistic geometrical features of a shield module. The modeling procedure is explained, electromagnetic torques are presented, and results of the modeling are discussed. © 2010 IEEE.
IEEE Transactions on Plasma Science
Two-dimensional (r, z) magnetohydrodynamic simulations with nonlocal thermodynamic equilibrium ionization and radiation transport are used to investigate the K-shell radiation output from doubly nested large-diameter (> 60 mm) stainless-steel arrays fielded on the refurbished Z pulsed-power generator. The effects of the initial density perturbations, wire ablation rate, and current loss near the load on the total power, K-shell power, and K-shell yield are examined. The broad mass distribution produced by wire ablation largely overcomes the deleterious impact on the K-shell power and yield of 2-D instability growth. On the other hand, the possible current losses in the final feed section lead to substantial reductions in K-shell yield. Following a survey of runs, the parameters for the perturbation level, ablation rate, and current loss are chosen to benchmark the simulations against existing 65-mm-diameter radiation data. The model is then used to predict the K-shell properties of larger diameter (70 mm) arrays to be imploded on the Z generator. © 2010 IEEE.
There has been a concerted effort since 2007 to establish a dashboard of metrics for the Science, Technology, and Engineering (ST&E) work at Sandia National Laboratories. These metrics are to provide a self-assessment mechanism for the ST&E Strategic Management Unit (SMU) to complement external expert review and advice and various internal self-assessment processes. The data and analysis will help ST&E Managers plan, implement, and track strategies and work in order to support the critical success factors of nurturing core science and enabling laboratory missions. The purpose of this SAND report is to provide a guide for those who want to understand the ST&E SMU metrics process. This report provides an overview of why the ST&E SMU wants a dashboard of metrics, some background on metrics for ST&E programs from existing literature and past Sandia metrics efforts, a summary of work completed to date, specifics on the portfolio of metrics that have been chosen and the implementation process that has been followed, and plans for the coming year to improve the ST&E SMU metrics process.
International Journal of Distributed Systems and Technologies
Efficient design of hardware and software for large-scale parallel execution requires detailed understanding of the interactions between the application, computer, and network. The authors have developed a macroscale simulator (SST/macro) that permits the coarse-grained study of distributed-memory applications. In the presented work, applications using the Message Passing Interface (MPI) are simulated; however, the simulator is designed to allow inclusion of other programming models. The simulator is driven from either a trace file or a skeleton application. Trace files can be either a standard format (Open Trace Format) or a more detailed custom format (DUMPI). The simulator architecture is modular, allowing it to easily be extended with additional network models, trace file formats, and more detailed processor models. This paper describes the design of the simulator, provides performance results, and presents studies showing how application performance is affected by machine characteristics.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The Phebus and VERCORS data have played an important role in contemporary understanding and modeling of fission product release and transport from damaged LWR fuel. The data from these test programs have allowed improvement of MELCOR modeling of release and transport processes for both low enrichment uranium fuel as well as high burnup and MOX fuels. The following paper describes the derivation, testing and incorporation of improved radionuclide release models into the MELCOR severe accident code.
The Phebus and VERCORS data have played an important role in contemporary understanding and modeling of fission product release and transport from damaged light water reactor fuel. The data from these test programs have allowed improvement of MELCOR modeling of release and transport processes for both low enrichment uranium fuel as well as high burnup and mixed oxide (MOX) fuels. This paper discusses the synthesis of these findings in the MELCOR severe accident code. Based on recent assessments of MELCOR 1.8.5 fission product release modeling against the Phebus FPT-1 test and on observations from the ISP-46 exercise, modifications to the default MELCOR 1.8.5 release models are recommended. The assessments identified an alternative set of Booth diffusion parameters recommended by ORNL (ORNL-Booth), which produced significantly improved release predictions for cesium and other fission product groups. Some adjustments to the scaling factors in the ORNL-Booth model were made for selected fission product groups, including UO₂, Mo and Ru, in order to obtain better comparisons with the FPT-1 data. The adjusted model, referred to as 'Modified ORNL-Booth,' was subsequently compared to the original ORNL VI fission product release experiments and to the more recently performed French VERCORS tests, and the comparisons were as favorable as or better than those for the original CORSOR-M MELCOR default release model. These modified ORNL-Booth parameters, input to MELCOR 1.8.5 as 'sensitivity coefficients' (i.e., user input that overrides the code defaults), are recommended for the interim period until improved release models can be implemented into MELCOR. For the case of ruthenium release in air-oxidizing conditions, some additional modifications to the Ru class vapor pressure are recommended based on estimates of the RuO₂ vapor pressure over mildly hyperstoichiometric UO₂. The increased vapor pressure for this class significantly increases the net transport of Ru from the fuel to the gas stream. A formal model is needed. Deposition patterns in the Phebus FPT-1 circuit were also significantly improved by using the modified ORNL-Booth parameters, where retention of the less volatile Cs₂MoO₄ is now predicted in the heated exit regions of the FPT-1 test, bringing depositions in the FPT-1 steam generator tube into closer alignment with the experimental data. This improvement in 'RCS' deposition behavior preserves the overall correct release of cesium to the containment that was observed even with the default CORSOR-M model. Not correctly treated, however, is the release and transport of Ag to the FPT-1 containment. A model for Ag release from control rods is presently not available in MELCOR. Lack of this model is thought to be responsible for the underprediction by a factor of two of the total aerosol mass to the FPT-1 containment. It is suggested that this underprediction of airborne mass led to an underprediction of the aerosol agglomeration rate. Underprediction of the agglomeration rate leads to low predictions of the aerosol particle size in comparison to experimentally measured ones. Small particle size leads to low predictions of the gravitational settling rate relative to the experimental data. This error, however, is a conservative one in that a too-low settling rate would result in a larger source term to the environment. Implementation of an interim Ag release model is currently under study.
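For reference, the Booth formulation referred to above treats release as diffusion from an equivalent fuel-grain sphere of radius $a$; in its common low-released-fraction form,

$$ f(t) \approx 6\sqrt{\frac{D' t}{\pi}} - 3 D' t, \qquad D' = \frac{D_0\, e^{-Q/RT}}{a^2}, $$

where $f$ is the fractional release and $D'$ the reduced diffusion coefficient. This is our summary of the standard form, not the exact MELCOR implementation; the ORNL-Booth recommendation amounts to an alternative choice of the diffusion parameters together with the per-class scaling factors adjusted above.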
In the course of this assessment, a review of MELCOR release models was performed and led to the identification of several areas for future improvements to MELCOR. These include upgrading the Booth release model to account for changes in local oxidizing/reducing conditions and including a fuel oxidation model to accommodate effects of fuel stoichiometry. Models such as those implemented in the French ELSA code and described by Lewis are considered appropriate for MELCOR. A model for ruthenium release under air oxidizing conditions is also needed and should be included as part of a fuel oxidation model, since fuel stoichiometry is a fundamental parameter in determining the vapor pressure of ruthenium oxides over the fuel. There is also a need to expand the MELCOR architecture for tracking fission product classes to allow for more speciation of fission products. An example is the formation of CsI and Cs₂MoO₄, and potentially CsOH if all Mo is combined with Cs such that excess Cs exists in the fuel. Presently, MELCOR can track only one class combination (CsI) accurately, where excess Cs is assumed to be CsOH. Our recommended interim modifications map the CsOH (MELCOR Radionuclide Class 2) and Mo (Class 7) vapor pressure properties to Cs₂MoO₄, which approximates the desired formal class combination of Cs and Mo. Other extensions to properly handle iodine speciation from pool/gas chemistry are also needed.
We present the Sandia Advanced Personnel Locator Engine (SAPLE) web application, a directory search application for use by Sandia National Laboratories personnel. SAPLE's purpose is to return Sandia personnel 'results' as a function of user search queries, with its mission to make it easier and faster to find people at Sandia. To accomplish this, SAPLE breaks from more traditional directory application approaches by aiming to return the correct set of results while placing minimal constraints on the user's query. Two key features form the core of SAPLE: advanced search query interpretation and inexact string matching. SAPLE's query interpretation permits the user to perform compound queries when typing into a single search field; where able, SAPLE infers the type of field that the user intends to search on based on the value of the search term. SAPLE's inexact string matching feature yields a high-quality ranking of personnel search results even when there are no exact matches to the user's query. This paper explores these two key features, describing in detail the architecture and operation of SAPLE. Finally, an extensive analysis on logged search query data taken from an 11-week sample period is presented.
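The abstract does not specify SAPLE's matching algorithm, so the following is only a sketch of one standard inexact-matching approach (normalized edit distance), not SAPLE's actual implementation:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def rank_personnel(query, names, n=10):
    """Rank directory entries by normalized edit distance to the query,
    so near-misses still surface when there is no exact match."""
    scored = [(levenshtein(query.lower(), nm.lower()) / max(len(query), len(nm)), nm)
              for nm in names]
    return [nm for _, nm in sorted(scored)[:n]]
```

A ranking of this shape, combined with the field inference described above, is what lets a single search box return a useful ordering even for misspelled or partial names.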
Sandia National Laboratories (SNL) is a multi-program national laboratory in the business of national security, whose primary mission is nuclear weapons (NW). It is a prime contractor to the USDOE, operating under the NNSA, and is one of the three NW national laboratories. It has a long history of involvement in the area of geomechanics, starting with some of the earliest weapons tests in Nevada. Projects in which geomechanics support (in general) and computational geomechanics support (in particular) are at the forefront at Sandia range from those associated with civilian programs to those in the defense programs. SNL has had significant involvement and participation in the Waste Isolation Pilot Plant (low-level defense nuclear waste), the Yucca Mountain Project (formerly proposed for commercial spent fuel and high-level nuclear waste), and the Strategic Petroleum Reserve (the nation's emergency petroleum store). In addition, numerous industrial partners seek out our computational/geomechanics expertise, and there are efforts in compressed air and natural gas storage, as well as in CO₂ sequestration. Likewise, there have also been collaborative past efforts in the areas of compactable reservoir response, the response of salt structures associated with reservoirs, and basin modeling for the Oil & Gas industry. There are also efforts on the defense front, ranging from assessment of the vulnerability of infrastructure to the defeat of hardened targets, which require an understanding and application of computational geomechanics. Several examples from some of these areas will be described and discussed to give the audience a flavor of the type of work currently being performed at Sandia in the general area of geomechanics.
This report describes the Sandia National Laboratories Medical Isotope Reactor and hot cell facility concepts. The reactor proposed is designed to be capable of producing 100% of the U.S. demand for the medical isotope ⁹⁹Mo. The concept is novel in that the fuel for the reactor and the targets for the ⁹⁹Mo production are the same. There is no driver core required. The fuel pins that are in the reactor core are processed on a 7 to 21 day irradiation cycle. The fuel is low enriched uranium oxide enriched to less than 20% ²³⁵U. The fuel pins are approximately 1 cm in diameter and 30 to 40 cm in height, clad with Zircaloy (zirconium alloy). Approximately 90 to 150 fuel pins are arranged in the core in a water pool ≈30 ft deep. The reactor power level is 1 to 2 MW. The reactor concept is a simple design that is passively safe and maintains negative reactivity coefficients. The total radionuclide inventory in the reactor core is minimized since the fuel/target pins are removed and processed after 7 to 21 days. The fuel fabrication, reactor design and operation, and ⁹⁹Mo production processing use well-developed technologies that minimize the technological and licensing risks. There are no impediments that prevent this type of reactor, along with its collocated hot cell facility, from being designed, fabricated, and licensed today.
The Department of Homeland Security (DHS), National Cyber Security Division (NCSD), Control Systems Security Program (CSSP), contracted Sandia National Laboratories to develop a generic methodology for prioritizing cyber-vulnerable, critical infrastructure assets and the development of mitigation strategies for their loss or compromise. The initial project has been divided into three discrete deliverables: (1) a generic methodology report suitable to all Critical Infrastructure and Key Resource (CIKR) Sectors (this report); (2) a sector-specific report for Electrical Power Distribution; and (3) a sector-specific report for the water sector, including generation, water treatment, and wastewater systems. Specific reports for the water and electric sectors are available from Sandia National Laboratories.
Abstract not provided.
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
Abstract not provided.
SIAM Journal on Applied Mathematics
Abstract not provided.
Abstract not provided.
Mechanical Systems and Signal Processing
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Acta Materialia
Abstract not provided.
We use the modified iSAFT density functional theory (DFT) to calculate interactions among nanoparticles immersed in a polymer melt. Because a polymer can simultaneously interact with more than two nanoparticles, three-body interactions are important in this system. We treat the nanoparticles as spherical surfaces, and solve for the polymer densities around the nanoparticles in three dimensions. The polymer is modeled as a freely-jointed chain of spherical sites, and all interactions are repulsive. The potential of mean force (PMF) between two nanoparticles displays a minimum at contact due to the depletion effect. The PMF calculated from the DFT agrees nearly quantitatively with that calculated from self-consistent PRISM theory. From the DFT we find that the three-body free energy is significantly different in magnitude than the effective three-body free energy derived from the two-particle PMF.
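In this setting the three-body effect the abstract reports can be stated precisely (notation ours): with $W_3$ the free energy of three nanoparticles at separations $r_{12}, r_{13}, r_{23}$ and $w_2$ the pair potential of mean force,

$$ \Delta W_3(r_{12}, r_{13}, r_{23}) = W_3(r_{12}, r_{13}, r_{23}) - \left[\, w_2(r_{12}) + w_2(r_{13}) + w_2(r_{23}) \,\right], $$

so a $\Delta W_3$ that differs significantly from zero is exactly the failure of pairwise additivity found in the DFT calculations.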
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Electron Device Letters
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Experimental Mechanics
Abstract not provided.