10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
Boring, Ronald L.; Forester, John A.; Bye, Andreas; Dang, Vinh N.; Lois, Erasmia
The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of HRA method predictions against the performance of nuclear power plant crews in a control room simulator. A number of unique aspects distinguish the present study from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to "translate" the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.
10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
Dang, Vinh N.; Massaiu, Salvatore; Bye, Andreas; Forester, John A.
In the International HRA Empirical Study, diverse Human Reliability Analysis (HRA) methods are assessed based on data from a dedicated simulator study, which examined the performance of licensed crews in nuclear power plant emergency scenarios. The HRA method assessments involve comparing the predictions obtained with each method against empirical reference data, in quantitative as well as qualitative terms. This paper discusses the assessment approach and criteria, the quantitative reference data, and the comparisons that use these data. Consistent with the expectations at the outset of the study, the statistical limitations of the data are a key issue. These limitations preclude concentrating solely on the failure counts defined by the Human Failure Event (HFE) success criteria and the failure probabilities based on these counts. In assessing quantitative predictive power, this study additionally uses a reference HFE difficulty (qualitative failure likelihood) ranking that accounts for qualitative observations in addition to the failure counts. Overall, the method assessment prioritizes qualitative comparisons, using the rich set of data collected on performance issues. Here, the quantitative predictions and data are used to determine the essential qualitative comparisons, demonstrating how quantitative and qualitative comparisons and criteria can be usefully combined in HRA method assessment.
10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
Dang, Vinh N.; Forester, John A.; Mosleh, Ali
The Office of Nuclear Regulatory Research (RES) of the U.S. Nuclear Regulatory Commission is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. One motivation is the variability in Human Failure Event (HFE) probabilities estimated by different analysts and methods. This work considers that a reduction of the variability in the HRA quantification outputs must address three sources: differences in the scope and implementation of qualitative analysis, the qualitative output-quantitative input interface, and the diversity of algorithms for estimating failure probabilities from these inputs. Two companion papers (Mosleh et al. and Hendrickson et al.) describe a proposed qualitative analysis approach. The development of the corresponding quantification approach considers a number of alternatives, including a module-based hybrid method and a data-driven quantification scheme. This paper presents ongoing work and the views of the contributors.
Cyber security analysis tools are necessary to evaluate the security, reliability, and resilience of networked information systems against cyber attack. It is common practice in modern cyber security analysis to separately utilize real systems (computers, routers, switches, firewalls), computer emulations (e.g., virtual machines), and simulation models to analyze the interplay between cyber threats and safeguards. In contrast, Sandia National Laboratories has developed new methods to combine these evaluation platforms into a cyber Live, Virtual, and Constructive (LVC) testbed. The combination of real, emulated, and simulated components enables the analysis of security features and components of a networked information system. When performing cyber security analysis on a target system, it is critical to represent the subject security components realistically and in high fidelity. In some experiments, the security component may be the actual hardware and software, with all the surrounding components represented in simulation or with surrogate devices. Sandia National Laboratories has developed a cyber LVC testbed that combines modeling and simulation capabilities with virtual machines and real devices to represent, in varying fidelity, secure networked information system architectures and devices. Using this capability, secure networked information system architectures can be represented in our testbed on a single computing platform. This provides an "experiment-in-a-box" capability. The result is rapidly produced, large-scale, relatively low-cost, multi-fidelity representations of networked information systems. These representations enable analysts to quickly investigate cyber threats and test protection approaches and configurations.
The launch of nuclear materials requires special care to minimize the risk of adverse effects to human health and the environment. This paper describes the special sources of risk that are inherent to the launch of radioactive materials and provides insights into the analysis and control of these risks that have been gained through the experience of previous US launches. Historically, launch safety has been achieved by eliminating, to the greatest degree possible, the potential for energetic insults to affect the radioactive material. For those insults that cannot be precluded, designers minimize the likelihood, magnitude and duration of their interaction with the material. Finally, when a radioactive release cannot be precluded, designers limit the magnitude and spatial extent of its dispersal.
Phillips, Stan D.; Moen, Kurt A.; Najafizadeh, Laleh; Diestelhorst, Ryan M.; Sutton, Akil K.; Cressler, John D.; Vizkelethy, Gyorgy; Dodd, Paul E.; Marshall, Paul W.
Human Reliability Analysis (HRA) methods have been developed primarily to provide information for use in probabilistic risk assessments analyzing nuclear power plant (NPP) operations. Despite this historical focus on the control room, there has been growing interest in applying HRA methods to other NPP activities such as dry cask storage operations (DCSOs) in which spent fuel is transferred into dry cask storage systems. This paper describes a successful application of aspects of the "A Technique for Human Event Analysis" (ATHEANA) HRA approach [1, 2] in performing qualitative HRA activities that generated insights on the potential for dropping a spent fuel cask during DCSOs. This paper provides a description of the process followed during the analysis, a description of the human failure event (HFE) scenario groupings, a discussion of inferred human performance vulnerabilities, a detailed examination of one HFE scenario, and illustrative approaches for avoiding or mitigating human performance vulnerabilities that may contribute to dropping a spent fuel cask.
39th ASES National Solar Conference 2010, SOLAR 2010
Gupta, Vipin P.; Boudra, Will; Kuszmaul, Scott S.; Rosenthal, Andrew; Cisneros, Gaby; Merrigan, Tim; Miller, Ryan; Dominick, Jeff
In May 2007, Forest City Military Communities won a US Department of Energy Solar America Showcase Award. As part of this award, executives and staff from Forest City Military Communities worked side-by-side with a DOE technical assistance team to overcome technical obstacles encountered by this large-scale real estate developer and manager. This paper describes the solar technical assistance that was provided and the key solar experiences acquired by Forest City Military Communities over an 18-month period.
The Department of Energy's 2008 Yucca Mountain Performance Assessment represents the culmination of more than two decades of analyses of post-closure repository performance in support of programmatic decision making for the proposed Yucca Mountain repository. The 2008 performance assessment summarizes the estimated long-term risks to the health and safety of the public resulting from disposal of spent nuclear fuel and high-level radioactive waste in the proposed Yucca Mountain repository. The standards at 10 CFR Part 63 require several numerical estimates quantifying performance of the repository over time. This paper summarizes the key quantitative results from the performance assessment and presents uncertainty and sensitivity analyses for these results.
We present an RDFS closure algorithm, specifically designed and implemented on the Cray XMT supercomputer, that obtains inference rates of 13 million inferences per second on the largest system configuration we used. The Cray XMT, with its large global memory (4 TB for our experiments), permits the construction of a conceptually straightforward algorithm, fundamentally a series of operations on a shared hash table. Each thread is given a partition of triple data to process, a dedicated copy of the ontology to apply to the data, and a reference to the hash table into which it inserts inferred triples. The global nature of the hash table allows the algorithm to avoid a common obstacle for distributed memory machines: the creation of duplicate triples. On LUBM data sets ranging between 1.3 billion and 5.3 billion triples, we obtain nearly linear speedup except for two portions: file I/O, which can be ameliorated with additional service nodes, and data structure initialization, which requires nearly constant time for runs involving 32 processors or more.
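As an illustration of the shared-hash-table pattern this abstract describes, the following is a minimal single-machine Python sketch: it applies only one RDFS entailment rule (type propagation along rdfs:subClassOf), gives each worker a copy of the "ontology", and uses a lock-protected shared set in place of the XMT's global hash table to avoid duplicate triples. All names and data are illustrative assumptions; the actual implementation is multithreaded code for the Cray XMT and is not reproduced here.

```python
# Minimal, single-machine sketch of the shared-hash-table RDFS closure pattern.
# Illustrative only: one entailment rule (rdf:type propagation along rdfs:subClassOf),
# Python threads, and a lock-protected shared set standing in for the global hash table.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def closure(triples, n_workers=4):
    table = set(triples)          # shared "hash table" of asserted + inferred triples
    lock = Lock()
    # Each worker uses its own view of the ontology (here, a subclass map).
    subclass = {}
    for s, p, o in triples:
        if p == SUBCLASS:
            subclass.setdefault(s, set()).add(o)

    def work(partition):
        new = []
        for s, p, o in partition:
            if p == RDF_TYPE:
                for sup in subclass.get(o, ()):
                    t = (s, RDF_TYPE, sup)
                    with lock:    # duplicate check against the shared table
                        if t not in table:
                            table.add(t)
                            new.append(t)
        return new

    frontier = list(table)
    while frontier:               # iterate to a fixed point
        chunks = [frontier[i::n_workers] for i in range(n_workers)]
        with ThreadPoolExecutor(n_workers) as pool:
            results = pool.map(work, chunks)
        frontier = [t for new in results for t in new]
    return table

# Example: infers (":alice", "rdf:type", ":Agent") via two subclass steps.
data = [(":alice", RDF_TYPE, ":Student"),
        (":Student", SUBCLASS, ":Person"),
        (":Person", SUBCLASS, ":Agent")]
print(sorted(closure(data)))
```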
Innovative energy system optimization models are deployed to evaluate novel fuel cell system (FCS) operating strategies, not typically pursued by commercial industry. Most FCSs today are installed according to a 'business-as-usual' approach: (1) stand-alone (unconnected to district heating networks and low-voltage electricity distribution lines), (2) not load following (not producing output equivalent to the instantaneous electrical or thermal demand of surrounding buildings), (3) employing a fairly fixed heat-to-power ratio (producing heat and electricity in a relatively constant ratio to each other), and (4) producing only electricity and no recoverable heat. By contrast, models discussed here consider novel approaches as well. Novel approaches include (1) networking (connecting FCSs to electrical and/or thermal networks), (2) load following (having FCSs produce only the instantaneous electricity or heat demanded by surrounding buildings), (3) employing a variable heat-to-power ratio (such that FCSs can vary the ratio of heat and electricity they produce), (4) co-generation (combining the production of electricity and recoverable heat), (5) permutations of these together, and (6) permutations of these combined with more 'business-as-usual' approaches. The detailed assumptions and methods behind these models are described in Part I of this article pair.
The U.S. Strategic Petroleum Reserve stores crude oil in 62 solution-mined caverns in salt domes located in Texas and Louisiana. Historically, three-dimensional geomechanical simulations of the behavior of the caverns have been performed using a power law creep model. This method, with the creep coefficient calibrated to field data such as cavern closure and surface subsidence, has produced varying degrees of agreement with observed phenomena. However, as new salt dome locations are considered for oil storage facilities, pre-construction geomechanical analyses are required, and these need site-specific parameters developed from laboratory data obtained from core samples. The multi-mechanism deformation (M-D) model is a rigorous mathematical description of both transient and steady-state creep phenomena. Recent enhancements to the numerical integration algorithm within the model have created a more numerically stable implementation of the M-D model. This report presents computational analyses to compare the results of predictions of the geomechanical behavior at the West Hackberry SPR site using both models. The recently published results using the power law creep model produced excellent agreement with an extensive set of field data. The M-D model results show similar agreement using parameters developed directly from laboratory data. The M-D model is also used to predict the behavior for the construction and operation of oil storage caverns at a new site, to identify potential problems before a final cavern layout is designed. Copyright 2010 ARMA, American Rock Mechanics Association.
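For orientation, the steady-state power-law creep relation referenced above is commonly written in the following standard form; this is a textbook parameterization, not necessarily the exact one calibrated in this work, and the M-D expression is shown only as the general structure of that model:

\[
\dot{\varepsilon}_{ss} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
\dot{\varepsilon} = F\,\dot{\varepsilon}_{ss},
\]

where A (the creep coefficient calibrated to field data) and n are material constants, \(\sigma\) is the equivalent stress, Q is the activation energy, R is the gas constant, and T is absolute temperature. In the M-D formulation, F is a transient function that scales the steady-state rate during work-hardening or recovery and approaches unity at steady state.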
Here we demonstrate the suitability of robust nucleic acid affinity reagents in an integrated point-of-care diagnostic platform for monitoring proteomic biomarkers indicative of astronaut health in spaceflight applications. A model thioaptamer [1] targeting nuclear factor-kappa B (NF-κB) is evaluated in an on-chip electrophoretic gel-shift assay for human serum. Key steps of (i) mixing sample with the aptamer, (ii) buffer exchange, and (iii) preconcentration of sample were successfully integrated upstream of fluorescence-based detection. Challenges due to (i) nonspecific interactions with serum and (ii) preconcentration at a nanoporous membrane are discussed and successfully resolved to yield a robust, rapid, and fully integrated diagnostic system.
We present a method for counting white blood cells that is uniquely compatible with centrifugation based microfluidics. Blood is deposited on top of one or more layers of density media within a microfluidic disk. Spinning the disk causes the cell populations within whole blood to settle through the media, reaching an equilibrium based on the density of each cell type. Separation and fluorescence measurement of cell types stained with a DNA dye is demonstrated using this technique. The integrated signal from bands of fluorescent microspheres is shown to be proportional to their initial concentration in suspension.
We present a platform that combines patterned photopolymerized polymer monoliths with living radical polymerization (LRP) to develop a low cost microfluidic based immunoassay capable of sensitive (low to sub pM) and rapid (<30 minute) detection of protein in 100 μL sample. The introduction of LRP functionality to the porous monolith allows one step grafting of functionalized affinity probes from the monolith surface while the composition of the hydrophilic graft chain reduces non-specific interactions and helps to significantly improve the limit of detection.
We have designed, fabricated, and characterized a digital microfluidic (DMF) platform to function as a central hub for interfacing multiple lab-on-a-chip sample processing modules towards automating the preparation of clinically-derived DNA samples for ultrahigh throughput sequencing (UHTS). The platform enables plug-and-play installation of a two-plate DMF device with consistent spacing, offers flexible connectivity for transferring samples between modules, and uses an intuitive programmable interface to control droplet/electrode actuations. Additionally, the hub platform uses transparent indium-tin oxide (ITO) electrodes to allow complete top and bottom optical access to the droplets on the DMF array, providing additional flexibility for various detection schemes.
We report on advancements of our microscale isoelectric fractionation (μIEFr) methodology for fast on-chip separation and concentration of proteins based on their isoelectric points (pI). We establish that proteins can be fractionated depending on posttranslational modifications into different pH specific bins, from where they can be efficiently transferred to downstream membranes for additional processing and analysis. This technology can enable on-chip multidimensional glycoproteomics analysis, as a new approach to expedite biomarker identification and verification.
We report on a scalable electrostatic process to transfer epitaxial graphene to arbitrary glass substrates, including Pyrex and Zerodur. This transfer process could enable wafer-level integration of graphene with structured and electronically-active substrates such as MEMS and CMOS. We will describe the electrostatic transfer method and will compare the properties of the transferred graphene with nominally-equivalent 'as-grown' epitaxial graphene on SiC. The electronic properties of the graphene will be measured using magnetoresistive, four-probe, and graphene field effect transistor geometries [1]. To begin, high-quality epitaxial graphene (mobility 14,000 cm2/Vs and domains >100 μm2) is grown on SiC in an argon-mediated environment [2,3]. The electrostatic transfer then takes place through the application of a large electric field between the donor graphene sample (anode) and the heated acceptor glass substrate (cathode). Using this electrostatic technique, both patterned few-layer graphene from SiC(000-1) and chip-scale monolayer graphene from SiC(0001) are transferred to Pyrex and Zerodur substrates. Subsequent examination of the transferred graphene by Raman spectroscopy confirms that the graphene can be transferred without inducing defects. Furthermore, the strain inherent in epitaxial graphene on SiC(0001) is found to be partially relaxed after the transfer to the glass substrates.
We report on the novel room temperature method of synthesizing advanced nuclear fuels; a method that virtually eliminates any volatility of components. This process uses radiolysis to form stable nanoparticle (NP) nuclear transuranic (TRU) fuel surrogates and in-situ heated stage TEM to sinter the NPs. The radiolysis is performed at Sandia's Gamma Irradiation Facility (GIF) 60Co source (3 x 10^6 rad/hr). Using this method, sufficient quantities of fuels for research purposes can be produced for accelerated advanced nuclear fuel development. We are focused on both metallic and oxide alloy nanoparticles of varying compositions, in particular d-U, d-U/La alloys and d-UO2 NPs. We present detailed descriptions of the synthesis procedures, the characterization of the NPs, the sintering of the NPs, and their stability with temperature. We have employed UV-vis, HRTEM, HAADF-STEM imaging, single particle EDX and EFTEM mapping characterization techniques to confirm the composition and alloying of these NPs.
This presentation on wind energy discusses: (1) current industry status; (2) turbine technologies; (3) assessment and siting; and (4) grid integration. There are no fundamental technical barriers to the integration of 20% wind energy into the nation's electrical system, but there needs to be a continuing evolution of transmission planning and system operation policy and market development for this to be most economically achieved.
Magnetized Liner Inertial Fusion (MagLIF) [S. A. Slutz, et al., Phys. Plasmas 17, 056303 (2010)] is a promising new concept for achieving >100 kJ of fusion yield on Z. The greatest threat to this concept is the Magneto-Rayleigh-Taylor (MRT) instability. Thus an experimental campaign has been initiated to study MRT growth in fast-imploding (<100 ns) cylindrical liners. The first sets of experiments studied aluminum liner implosions with prescribed sinusoidal perturbations (see talk by D. Sinars). By contrast, this poster presents results from the latest sets of experiments, which used unperturbed beryllium (Be) liners. The purpose of using Be is that we are able to radiograph 'through' the liner using the 6-keV photons produced by the Z-Beamlet backlighting system. This has enabled us to obtain time-resolved measurements of the imploding liner's density as a function of both axial and radial location throughout the field of view. These data allow us to evaluate the integrity of the inside (fuel-confining) surface of the imploding liner as it approaches stagnation.
We present the preliminary design of a Z experiment intended to observe the growth of several hydrodynamic instabilities (RT, RM, and KH) in a high-energy-density plasma. These experiments rely on the Z-machine's unique ability to launch cm-sized slabs of cold material (known as flyer plates) to velocities of several times 10 km/s. During the proposed experiment, the flyer plate will impact a cm-sized target with an embedded interface that has a prescribed sinusoidal perturbation. The flyer plate will generate a strong shock that propagates into the target and later initiates unstable growth of the perturbation. The goal of the experiment is to observe the perturbation at various stages of its evolution as it transitions from linear to non-linear growth, and finally to a fully turbulent state.
Laser-accelerated proton beams can be used in a variety of applications, e.g., ultrafast radiography of dense objects or strong electromagnetic fields. For these applications, high energies of tens of MeV are required. We report on proton-acceleration experiments with a 150 TW laser system using mm-sized thin foils and mass-reduced targets of various thicknesses. Thin-foil targets yielded maximum energies of 50 MeV. A further reduction of the target dimensions from mm-size to 250 x 250 x 25 microns increased the maximum proton energy to >65 MeV, which is comparable to proton energies measured only at higher-energy, Petawatt-class laser systems. The dependence of the maximum energy on target dimensions was investigated, and differences between mm-sized thin foils and mass-reduced targets will be reported.
The objectives of this presentation are: (1) To determine if healthcare settings serve as intensive transmission environments for influenza epidemics, increasing effects on communities; (2) To determine which mitigation strategies are best for use in healthcare settings and in communities to limit influenza epidemic effects; and (3) To determine which mitigation strategies are best to prevent illness in healthcare workers.
Cigarette smoking presented the most significant public health challenge in the United States in the 20th Century and remains the single most preventable cause of morbidity and mortality in this country. A number of System Dynamics models exist that inform tobacco control policies. We review these models and discuss their contributions. We developed a theory of the societal lifecycle of smoking, using a parsimonious set of feedback loops to capture historical trends and explore future scenarios. Previous work did not explain the long-term historical patterns of smoking behaviors. Much of it used stock-and-flow structures to represent the decline in prevalence in the recent past. With noted exceptions, information feedbacks were not embedded in these models. We present and discuss our feedback-rich conceptual model and illustrate the results of a series of simulations. A formal analysis shows phenomena composed of different phases of behavior, with specific dominant feedbacks associated with each phase. We discuss the implications of our society's current phase and conclude with simulations of what-if scenarios. Because System Dynamics models must contain information feedback to be able to anticipate tipping points and to help identify policies that exploit leverage in a complex system, we expanded this body of work to provide an endogenous representation of the century-long societal lifecycle of smoking.
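To make the stock-and-flow vocabulary above concrete, here is a deliberately minimal sketch (not the authors' model; the structure and every parameter value are assumptions chosen only to show the mechanics) of a prevalence stock with an initiation inflow, a cessation outflow, and a single reinforcing social-exposure feedback:

```python
# Minimal, illustrative stock-and-flow sketch of smoking prevalence with one
# reinforcing social-exposure feedback. Not the authors' model: structure and
# all parameter values are assumptions for illustration only.
def simulate(years=100, dt=0.25):
    never, smokers, former = 0.7, 0.25, 0.05   # assumed initial population fractions
    history = []
    for step in range(int(years / dt)):
        prevalence = smokers
        # Feedback: initiation rises with social exposure to current smokers.
        initiation = 0.08 * never * prevalence
        cessation = 0.03 * smokers
        never   += (-initiation) * dt
        smokers += (initiation - cessation) * dt
        former  += cessation * dt
        history.append((step * dt, prevalence))
    return history

for t, p in simulate()[::40]:                  # print prevalence every 10 years
    print(f"year {t:5.1f}  prevalence {p:.3f}")
```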
Negative bias temperature instability (NBTI) is an issue of critical importance as the space electronics industry evolves because it may dominate the reliability lifetime. Understanding its physical origin is therefore essential in determining how best to search for methods of mitigation. It has been suggested that the magnitude of the effect is strongly dependent on circuit operating conditions (static or dynamic modes). In the present work, we examine the time constants related to the charging and recovery of trapped charge induced by NBTI in HfSiON gate dielectric devices. In previous work, we avoided the issue of charge relaxation during acquisition of the Ids(Vgs) curve by invoking a continuous stressing technique whereby ΔVth was extracted from a series of single-point Ids measurements. This method relied heavily on determination of the initial value of the source-drain current (Ids0) prior to application of gate-source stress. In the present work we have used a new pulsed measurement system (Keithley SCS 4200-PIV) which not only removes this uncertainty but also permits dynamic measurements in which devices are AC stressed (Fig. 1a) or subjected to cycles of continued DC stress followed by relaxation (Fig. 1b). We can now examine the charging and recovery characteristics of NBTI with higher precision than previously possible. We have performed NBTI stress experiments at room temperature on p-channel MOSFETs made with HfSiON gate dielectrics. In all cases the devices were stressed in the linear regime with Vds = -0.1 V. We defined two separate waveforms/pulse trains, illustrated in Fig. 1, which were applied to the gate of the MOSFET. First, we examined the charging characteristics by applying an AC stress at 2.5 MHz or 10 Hz for different times. For a 50% duty cycle this corresponded to Vgs = -2 V pulses for 200 ns or 500 ms followed by Vgs = 0 V pulses for 200 ns or 500 ms recovery, respectively. In between 'bursts' of AC stress cycles, the Ids(Vgs) characteristic in the range (-0.6 V, -1.3 V) was measured in 10.2 µs. Vth was extracted directly from this curve, or from a single Ids point normalized to the initial Ids0 using our previous method. The resulting Ids/Ids0 curves are compared in Fig. 2, where the continuous stress results are also included. In the second method, we examined the recovery dynamics by holding Vgs = 0 V for a finite amount of time (range 100 ns to 100 ms) following stress at Vgs = -2 V for various times. In Fig. 3 we compare |ΔVth(t)| results for recovery times of 100 ms, 1 ms, 100 µs, 50 µs, 25 µs, 10 µs, 100 ns, and DC (i.e., no recovery). The data in Fig. 2 show that with a high-frequency stress (2.5 MHz) devices undergo significantly less (but finite) current degradation than devices stressed at 10 Hz. This appears to be limited by charging and not by recovery. Fig. 3 supports this hypothesis since for 100 ns recovery periods, only a small percentage of the trapped charge relaxes. A detailed explanation of these experiments will be presented at the conference.
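For readers unfamiliar with the single-point extraction mentioned above, the following is a hedged illustration of the general idea only, not the paper's exact procedure: in the linear regime a shift in Ids at a fixed sense Vgs can be mapped to an approximate threshold-voltage shift through the device transconductance. All values below are placeholders, not measured data.

```python
# Hedged sketch of single-point threshold-voltage-shift estimation in the linear
# regime: assumes a small Ids change at fixed Vgs maps to |dVth| via the
# transconductance gm. Numbers are illustrative placeholders only.
def delta_vth(ids_stressed, ids_initial, gm):
    """Estimate |dVth| (V) from a single-point Ids measurement normalized to Ids0."""
    return abs(ids_stressed - ids_initial) / gm

ids0 = 100e-6      # pre-stress drain current at the sense Vgs, in A (assumed)
gm   = 250e-6      # transconductance at the sense point, in A/V (assumed)
for ids in (98e-6, 95e-6, 90e-6):              # example post-stress currents
    print(f"Ids/Ids0 = {ids/ids0:.3f}  ->  |dVth| ~ {delta_vth(ids, ids0, gm)*1e3:.1f} mV")
```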
Loki-Infect 3 is a desktop application intended for use by community-level decision makers. It allows rapid construction of small-scale studies of emerging or hypothetical infectious diseases in their communities and evaluation of the potential effectiveness of various containment strategies. It was designed with an emphasis on modularity, portability, and ease of use. Our goal is to make this program freely available to community workers across the world.
Injection of CO2 into formations containing brine is proposed as a long-term sequestration solution. A significant obstacle to sequestration performance is the presence of existing wells providing a transport pathway out of the sequestration formation. To understand how heterogeneity impacts the leakage rate, we employ two-dimensional models of the CO2 injection process into a sandstone aquifer with shale inclusions to examine the parameters controlling release through an existing well. This scenario is modeled as a constant-rate injection of super-critical CO2 into the existing formation where buoyancy effects, relative permeabilities, and capillary pressures are employed. Three geologic controls are considered: stratigraphic dip angle, shale inclusion size and shale fraction. In this study, we examine the impact of heterogeneity on the amount and timing of CO2 released through a leaky well. Sensitivity analysis is performed to classify how various geologic controls influence CO2 loss. A 'Design of Experiments' approach is used to identify the most important parameters and combinations of parameters to control CO2 migration while making efficient use of a limited number of computations. Results are used to construct a low-dimensional description of the transport scenario. The goal of this exploration is to develop a small set of parametric descriptors that can be generalized to similar scenarios. Results of this work will allow for estimation of the amount of CO2 that will be lost for a given scenario prior to commencing injection. Additionally, two-dimensional and three-dimensional simulations are compared to quantify the influence that surrounding geologic media has on the CO2 leakage rate.
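As a generic illustration of the kind of two-level screening design referred to above (not the study's actual design, parameter ranges, simulator, or results), a full factorial over the three named geologic controls with main-effect estimates might be set up as follows:

```python
# Generic two-level full-factorial screening sketch over the three geologic controls
# named above. The response function and coded levels are placeholders standing in
# for the CO2 transport simulator; this is not the study's actual design or data.
from itertools import product

factors = ["dip_angle", "inclusion_size", "shale_fraction"]

def leakage(x):
    # Placeholder response: a simulator run would go here.
    dip, size, frac = x
    return 1.0 + 0.4 * dip - 0.2 * size - 0.5 * frac + 0.1 * dip * frac

runs = list(product([-1, +1], repeat=len(factors)))     # 2^3 = 8 coded runs
responses = [leakage(r) for r in runs]

# Main effect of each factor: mean response at +1 minus mean response at -1.
for i, name in enumerate(factors):
    hi = [y for r, y in zip(runs, responses) if r[i] == +1]
    lo = [y for r, y in zip(runs, responses) if r[i] == -1]
    print(f"{name:15s} main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.3f}")
```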