Publications

Sandia National Laboratories results for the 2010 criticality accident dosimetry exercise at the CALIBAN reactor, CEA Valduc, France

Ward, Dann C.

This document describes the personal nuclear accident dosimeter (PNAD) used by Sandia National Laboratories (SNL) and presents PNAD dosimetry results obtained during the Nuclear Accident Dosimeter Intercomparison Study held 20-23 September 2010 at CEA Valduc, France. SNL PNADs were exposed in two separate irradiations from the CALIBAN reactor. Biases for reported neutron doses ranged from -15% to +0.4%, with an average bias of -7.7%. PNADs were also exposed on the back side of phantoms to assess orientation effects.

Detection of embedded radiation sources using temporal variation of gamma spectral data

Shokair, Isaac R.

Conventional full-spectrum gamma spectroscopic analysis has the objective of quantitatively identifying all the isotopes present in a measurement. For low energy resolution detectors, when photopeaks alone are not sufficient for complete isotopic identification, such analysis requires template spectra for all the isotopes present in the measurement. When many isotopes are present it is difficult to make the correct identification, and this process often requires many trial solutions by highly skilled spectroscopists. This report investigates the potential of a new analysis method that uses spatial/temporal information from multiple low energy resolution measurements to test the hypothesis that a target spectrum of interest is present in these measurements, without the need to identify all the other isotopes present. This method is referred to as targeted principal component analysis (TPCA). For radiation portal monitor (RPM) applications, multiple measurements of gamma spectra are taken at equally spaced time increments as a vehicle passes through the portal, and the TPCA method is directly applicable to this type of measurement. In this report we describe the method and investigate its application to the problem of detecting a localized radioactive source embedded in a distributed source in the presence of an ambient background. Examples using simulated spectral measurements indicate that this method works very well and has the potential for automated analysis in RPM applications. This method is also expected to work well for isotopic detection in the presence of spectrally and spatially varying backgrounds resulting from vehicle-induced background suppression. Further work is needed to include the effects of shielding, to understand detection limits and threshold settings, and to estimate false-positive probabilities.
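
As a rough, hedged illustration of the TPCA idea (a sketch only, not the implementation in this report), the hypothesis test can be framed as checking how much of a target template spectrum is captured by the principal components of the temporal variation across a stack of measurements. The function name, the number of retained components, and the synthetic data below are assumptions made for the example.

import numpy as np

def tpca_score(spectra, target, n_components=3):
    """Toy targeted-PCA score: alignment of a target template with the dominant
    temporal variation in a stack of gamma spectra (illustrative sketch only).

    spectra : (n_times, n_channels) array of spectra from an RPM pass
    target  : (n_channels,) template spectrum of the isotope of interest
    """
    X = spectra - spectra.mean(axis=0)           # remove the time-averaged component
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:n_components]                    # principal spectral-variation directions
    t = target / np.linalg.norm(target)
    return np.linalg.norm(basis @ t)             # near 1: target explains the variation

# Synthetic example: 20 time samples, 128 channels, target strength varying as a vehicle passes
rng = np.random.default_rng(0)
channels = np.arange(128)
target = np.exp(-0.5 * ((channels - 60) / 4.0) ** 2)      # hypothetical photopeak template
background = rng.poisson(50, size=(20, 128)).astype(float)
signal = np.outer(30.0 * np.hanning(20), target)
print(tpca_score(background + signal, target))            # should score high
print(tpca_score(background, target))                     # should score low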

Statistical surrogate models for prediction of high-consequence climate change

Field, Richard V.; Constantine, Paul C.; Boslough, Mark B.

In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.
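
A minimal sketch of how tail probabilities could be sampled from such a surrogate, assuming a one-dimensional Gaussian random field with an exponential covariance in time; the trend, covariance parameters, and exceedance threshold are illustrative placeholders, not the calibrated SSM of this report.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate: a climate-variable anomaly time series modeled as a
# Gaussian random field with a linear mean trend and exponential covariance.
t = np.linspace(0.0, 100.0, 200)                            # years
mean_trend = 0.02 * t                                       # assumed mean anomaly (deg C)
sigma, corr_length = 0.3, 10.0                              # assumed field parameters
cov = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_length)

# Because the surrogate is cheap, many independent realizations can be drawn,
# making the low-probability, high-consequence tail accessible.
realizations = rng.multivariate_normal(mean_trend, cov, size=20000)
threshold = 2.5                                             # hypothetical high-consequence anomaly
p_exceed = np.mean(realizations.max(axis=1) > threshold)
print(f"estimated probability of ever exceeding {threshold} deg C: {p_exceed:.4f}")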

Experimental investigation of the Richtmyer-Meshkov instability

Weber, Christopher R.

The Richtmyer-Meshkov instability (RMI) is experimentally investigated using several different initial conditions and with a range of diagnostics. First, a broadband initial condition is created using a shear layer between helium+acetone and argon. The post-shocked turbulent mixing is investigated using planar laser induced fluorescence (PLIF). The signature of turbulent mixing is present in the appearance of an inertial range in the mole fraction energy spectrum and the isotropy of the late-time dissipation structures. The distribution of the mole fraction values does not appear to transition to a homogeneous mixture, and it is possible that this effect may be slow to develop for the RMI. Second, the influence of the RMI on the kinetic energy spectrum is investigated using particle image velocimetry (PIV). The influence of the perturbation is visible relatively far from the interface when compared to the energy spectrum of an initially flat interface. Closer to the perturbation, an increase in the energy spectrum with time is observed and is possibly due to a cascade of energy from the large length scales of the perturbation. Finally, the single mode perturbation growth rate is measured after reshock using a new high speed imaging technique. This technique produced highly time-resolved interface position measurements. Simultaneous measurements at the spike and bubble location are used to compute a perturbation growth rate history. The growth rates from several experiments are compared to a new reshock growth rate model.
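
For the growth-rate measurement described above, a minimal sketch of the arithmetic is shown below: the single-mode amplitude is taken as half the spike-bubble separation and differentiated numerically. The trajectories are synthetic placeholders standing in for the high-speed-imaging data.

import numpy as np

# Synthetic stand-in for the time-resolved interface-position measurements:
# spike and bubble front positions (mm) versus time (microseconds) after reshock.
time_us = np.linspace(0.0, 400.0, 41)
x_spike = 2.0 + 0.015 * time_us + 1.0e-5 * time_us**2     # hypothetical spike trajectory
x_bubble = 1.0 + 0.006 * time_us                          # hypothetical bubble trajectory

amplitude = 0.5 * (x_spike - x_bubble)                    # single-mode perturbation amplitude
growth_rate = np.gradient(amplitude, time_us)             # da/dt (mm/us) by central differences

print(f"initial growth rate: {growth_rate[0]:.4f} mm/us")
print(f"late-time growth rate: {growth_rate[-1]:.4f} mm/us")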

DOE/SNL-TTU scaled wind farm technology facility

Barone, Matthew F.

The proposed DOE/Sandia Scaled Wind Farm Technology Facility (SWiFT), hosted by Texas Tech University at Reese Technology Center in Lubbock, TX, will provide a facility for experimental study of turbine-turbine interaction and complex wind farm aerodynamics. This document surveys the current status of wind turbine wake and turbine-turbine interaction research, identifying knowledge and data gaps that the proposed test site can potentially fill. A number of turbine layouts are proposed, allowing for up to ten turbines at the site.

Sodium fast reactor fuels and materials: research needs

Denman, Matthew R.; Porter, Douglas; Wright, Art; Lambert, John; Hayes, Steven; Natesan, Ken; Ott, Larry J.; Garner, Frank; Walters, Leon; Yacout, Abdellatif

An expert panel was assembled to identify gaps in fuels and materials research prior to licensing a sodium-cooled fast reactor (SFR) design. The expert panel considered both metal and oxide fuels, various cladding and duct materials, structural materials, fuel performance codes, fabrication capability and records, and the transient behavior of fuel types. A methodology was developed to rate phenomena and properties both by their importance to a regulatory body and by the maturity of the underlying technology base. The technology base for fuels and cladding was divided into three regimes: information of high maturity under conservative operating conditions, information of low maturity under more aggressive operating conditions, and future design expectations for which meager data exist.

Transformative monitoring approaches for reprocessing

Cipiti, Benjamin B.

The future of reprocessing in the United States is strongly driven by plant economics. With increasing safeguards, security, and safety requirements, future plant monitoring systems must be able to demonstrate more efficient operations while improving the current state of the art. The goal of this work was to design and examine the incorporation of advanced plant monitoring technologies into safeguards systems with attention to the burden on the operator. The technologies examined include micro-fluidic sampling for more rapid analytical measurements and spectroscopy-based techniques for on-line process monitoring. The Separations and Safeguards Performance Model was used to design the layout and test the effect of adding these technologies to reprocessing. The results show that both technologies fill key gaps in existing materials accountability, providing detection of diversion events that might not be detected in a timely manner in existing plants. The plant architecture and results under diversion scenarios are described. As a tangent to this work, both the AMUSE and SEPHIS solvent extraction codes were examined for integration into the model to improve the realism of diversion scenarios. The AMUSE integration was found to be the most successful and provided useful results. The SEPHIS integration is still a work in progress and may provide an alternative option.

Stainless steel corrosion by molten nitrates: analysis and lessons learned

Kruizenga, Alan M.

A secondary containment vessel, made of 316 stainless steel, failed due to severe nitrate salt corrosion. Corrosion, in the form of pitting, was observed during high-temperature chemical stability experiments. Optical microscopy, scanning electron microscopy, and energy dispersive spectroscopy were all used to diagnose the cause of the failure. Failure was caused by potassium oxide that crept into the gap between the primary vessel (alumina) and the stainless steel vessel. Molten nitrate solar salt (89% KNO3, 11% NaNO3 by weight) was used during chemical stability experiments, with an oxygen cover gas, at a salt temperature of 350-700 C. Nitrate salt was primarily contained in an alumina vessel; however, salt crept into the gap between the alumina and the 316 stainless steel. Corrosion occurred over a period of approximately 2000 hours, with the end result of full wall penetration through the stainless steel vessel; see Figures 1 and 2 for images of the corrosion damage to the vessel. Wall thickness was 0.0625 inches, which, based on previous data, should have been adequate to avoid corrosion-induced failure while in direct contact with salt at 677 C (0.081 inch/year). Salt temperatures exceeding 650 C lasted for approximately 14 days. However, the previous corrosion data were obtained with air as the cover gas; the high temperature combined with an oxygen cover gas evidently drove corrosion rates to much higher values. Corrosion took the form of uniform pitting. Based on SEM and EDS data, pits contained primarily potassium oxide and potassium chromate, reinforcing the link between oxides and severe corrosion. In addition to the pitting corrosion, a large blister formed on the side wall, which was mainly composed of potassium, chromium, and oxygen. All data indicated that corrosion initiated internally and moved outward. There was no evidence of intergranular corrosion, nor was there any indication of fast pathways along grain boundaries. Much of the pitting occurred near welds; however, this was the hottest region in the chamber, and pitting was observed up to two inches above the welds, indicating independence from weld effects.
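
A back-of-the-envelope check of that expectation, using only the numbers quoted above (uniform-corrosion arithmetic; pitting is localized, so this understates the local attack):

# Expected uniform wall loss at the literature corrosion rate versus the actual wall.
rate_in_per_yr = 0.081      # previous data: air cover gas, salt contact at 677 C
exposure_hr = 2000.0        # approximate exposure time in this experiment
wall_in = 0.0625            # vessel wall thickness, inches

expected_loss = rate_in_per_yr * exposure_hr / 8760.0     # about 0.018 inch
print(f"expected uniform loss: {expected_loss:.3f} in of a {wall_in} in wall")

# Full wall penetration in roughly 2000 h therefore implies an effective local rate
# several times the literature value, consistent with the oxygen-cover-gas argument.
print(f"implied local rate: >= {wall_in / (exposure_hr / 8760.0):.2f} in/yr")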

LDRD final report: chromophore-functionalized aligned carbon nanotube arrays

Krafcik, Karen L.; Yang, Chu-Yeu P.

The goal of this project was to expand upon previously demonstrated single carbon nanotube devices by preparing a more practical, multi-single-walled carbon nanotube (SWNT) device. As a late-start, proof-of-concept project, the work focused on the fabrication and testing of chromophore-functionalized aligned SWNT field effect transistors (SWNT-FETs). Such devices have not yet been demonstrated. The advantages of fabricating aligned SWNT devices include increased device cross-section to improve sensitivity to light, elimination of the increased electrical resistance at nanotube junctions in random-mat devices, and the ability to model device responses. Although the ultimate goal of fabricating and testing chromophore-modified SWNT arrays was not achieved, the work did lead to a new carbon nanotube growth capability at Sandia/CA that will benefit future projects. The synthesis of dense arrays of horizontally aligned SWNTs is a developing area of research with significant potential for new discoveries. In particular, the ability to prepare arrays of carbon nanotubes of specific electronic types (metallic or semiconducting) could yield new classes of nanoscale devices.

Solving the software protection problem with intrinsic personal physical unclonable functions

Nithyanand, Rishab; Sion, Radu

Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. The unclonability property comes from the accepted hardness of replicating the multitude of characteristics introduced during the manufacturing process. This makes PUFs useful for solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection in offline settings. We first argue that traditional (black-box) PUFs are not useful for protecting software in settings where communication with a vendor's server or third-party network device is infeasible or impossible. Instead, we argue that Intrinsic PUFs are needed to solve the above-mentioned problems because they are intrinsically involved in processing the information that is to be protected. Finally, we describe how sources of randomness in any computing device can be used to create intrinsic personal PUFs (IP-PUFs) and present experimental results in using standard off-the-shelf computers as IP-PUFs.
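
As a toy illustration of the "sources of randomness" idea, and not the authors' construction, the sketch below fingerprints a machine by timing a challenge-dependent workload; the function name, workload, and trial count are assumptions made for the example.

import time
import statistics

def ip_puf_response(challenge: int, trials: int = 200) -> list:
    """Toy intrinsic-personal-PUF response: timing jitter of a challenge-dependent
    workload on this particular machine (illustrative only, not the paper's scheme)."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter_ns()
        acc = 0
        for i in range(1000 + (challenge % 7) * 100):   # workload depends on the challenge
            acc = (acc * 31 + i) % 1_000_003
        timings.append(time.perf_counter_ns() - start)
    return timings

response = ip_puf_response(challenge=42)
print(f"median: {statistics.median(response)} ns, spread: {statistics.pstdev(response):.1f} ns")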

Deriving a model for influenza epidemics from historical data

Ray, Jaideep R.

In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military societies. We derive the model when the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The model form fitted is chosen for its similarity to published models for HIV and plague, enabling application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral, and predicts the number of people who exhibit symptoms and seek care over a given time period. The start and end of the time period form the limits of integration. The disease progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation period model is obtained from the literature, and the parameters of the infection rate are fitted from historical data including both military and civilian populations. The calibrated infection-rate models display a marked difference between the 1918 Spanish influenza pandemic and both the influenza seasons in the US between 2001 and 2008 and the progression of H1N1 in Catalunya, Spain. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection in all cases was about the same; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much steeper. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed. Each outbreak to which the influenza model was fitted yields a separate set of parameter values. We suggest 'consensus' parameter values for military and civilian populations in the form of normal distributions so that they may be further used in other applications. Representing the parameter values as distributions, instead of point values, allows us to capture the uncertainty and scatter in the parameters. Quantifying the uncertainty allows us to use these models further in inverse problems, predictions under uncertainty, and various other studies involving risk.
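
A minimal numerical sketch of that integral framing: the expected number of new symptomatic cases in an interval [t1, t2] is the infection rate, integrated over infection times, weighted by the probability that the incubation period ends inside the interval. The lognormal incubation parameters and the pulse-shaped infection rate below are placeholders, not the values fitted in this report.

import numpy as np
from math import erf

def lognorm_cdf(x, mu, sigma):
    """CDF of a lognormal incubation period (x in days)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    z = (np.log(x[pos]) - mu) / (sigma * np.sqrt(2.0))
    out[pos] = 0.5 * (1.0 + np.array([erf(v) for v in z]))
    return out

def expected_cases(t1, t2, infection_rate, mu=0.6, sigma=0.5, n=2000):
    """Expected symptomatic cases in [t1, t2]: integral over infection time tau of
    rate(tau) * P(incubation period falls in [t1 - tau, t2 - tau])."""
    tau = np.linspace(0.0, t2, n)
    prob_onset = lognorm_cdf(t2 - tau, mu, sigma) - lognorm_cdf(t1 - tau, mu, sigma)
    dtau = tau[1] - tau[0]
    return float(np.sum(infection_rate(tau) * prob_onset) * dtau)

# Placeholder infection rate: a pulse that rises and decays over a few weeks (infections/day)
rate = lambda t: 50.0 * np.exp(-((t - 10.0) / 5.0) ** 2)
print(expected_cases(12.0, 13.0, rate))   # cases presenting between day 12 and day 13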

Computational thermal, chemical, fluid, and solid mechanics for geosystems management

Martinez, Mario J.; Red-Horse, John R.; Carnes, Brian C.; Mesh, Mikhail M.; Field, Richard V.; Davison, Scott M.; Yoon, Hongkyu Y.; Bishop, Joseph E.; Newell, Pania N.; Notz, Patrick N.; Turner, Daniel Z.; Subia, Samuel R.; Hopkins, Polly L.; Moffat, Harry K.; Jove Colon, Carlos F.; Dewers, Thomas D.; Klise, Katherine A.

This document summarizes research performed under the SNL LDRD entitled 'Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission'. The main accomplishment was development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems. The code was developed within the SNL Sierra software system. This report summarizes the capabilities of the simulation code and the supporting research and development conducted under this LDRD. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To solve these complex issues, this project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed together with the supporting research underlying the models. Key accomplishments are: (1) a general capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) a general capability to model multiphase reactive transport of species in heterogeneous porous media; (3) constitutive models for describing real, general geomaterials under multiphase conditions utilizing laboratory data; (4) a general capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) phase behavior thermodynamics for the CO2-H2O-NaCl system, with a general implementation that enables modeling of other fluid mixtures and adaptive look-up tables that extend this thermodynamic capability to other simulators; (6) a capability for statistical modeling of heterogeneity in geologic materials; and (7) a simulator that utilizes unstructured grids on parallel processing computers.

EMPHASIS/Nevada UTDEM user guide. Version 2.0

Turner, C.D.; Pasik, Michael F.; Seidel, David B.

The Unstructured Time-Domain ElectroMagnetics (UTDEM) portion of the EMPHASIS suite solves Maxwell's equations using finite-element techniques on unstructured meshes. This document provides user-specific information to facilitate the use of the code for applications of interest. UTDEM is a general-purpose code for solving Maxwell's equations on arbitrary, unstructured tetrahedral meshes. The geometries and the meshes thereof are limited only by the patience of the user in meshing and by the available computing resources for the solution. UTDEM solves Maxwell's equations with finite-element method (FEM) techniques on tetrahedral elements, using vector, edge-conforming basis functions. EMPHASIS/Nevada Unstructured Time-Domain ElectroMagnetic Particle-In-Cell (UTDEM PIC) is a superset of the capabilities found in UTDEM. It adds the capability to simulate systems in which the effects of free charge are important and need to be treated in a self-consistent manner. This is done by integrating the equations of motion for macroparticles (a macroparticle is an object that represents a large number of real physical particles, all with the same position and momentum) being accelerated by the electromagnetic (Lorentz) force upon the particle. The motion of these particles results in a current, which is a source for the fields in Maxwell's equations.
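
For the macroparticle equations of motion described above, a standard non-relativistic Boris push is sketched below purely as an illustration of integrating the Lorentz force F = q(E + v x B); it is a generic particle-in-cell textbook scheme, not the EMPHASIS/Nevada implementation, and the field values are arbitrary.

import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One non-relativistic Boris step for a macroparticle under the Lorentz force."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric-field kick
    t = qmdt2 * B                            # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
    v_new = v_plus + qmdt2 * E               # second half electric-field kick
    return x + v_new * dt, v_new

# Example: an electron-like macroparticle in uniform fields (SI units, illustrative values)
q, m, dt = -1.602e-19, 9.109e-31, 1.0e-12
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
E, B = np.array([0.0, 1.0e3, 0.0]), np.array([0.0, 0.0, 0.01])
for _ in range(10):
    x, v = boris_push(x, v, E, B, q, m, dt)
print(x, v)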

EMPHASIS/Nevada CABANA user guide. Version 2.0

Turner, C.D.; Bohnhoff, William J.; Troup, Jennifer L.

The CABle ANAlysis (CABANA) portion of the EMPHASIS™ suite is designed specifically for the simulation of cable system-generated electromagnetic pulse (SGEMP). The code can be used to evaluate the response of a specific cable design to a threat, or to compare and minimize the relative responses of different designs. This document provides user-specific information to facilitate the application of the code to cables of interest. CABANA solves the electrical portion of a cable SGEMP simulation: it takes specific results from the deterministic radiation-transport code CEPTRE as sources and computes the resulting electrical response to an arbitrary cable load. The cable geometry itself is also arbitrary and is limited only by the patience of the user in meshing and by the available computing resources for the solution. The CABANA simulation involves solution of the quasi-static Maxwell equations using finite-element method (FEM) techniques.

Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids

Wagner, Gregory J.; Deng, Jie D.; Erickson, Lindsay C.; Plimpton, Steven J.; Thompson, Aidan P.; Zhou, Xiaowang Z.; Zimmerman, Jonathan A.

Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.
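
A schematic of the 'equation-free' coarse projective-integration idea mentioned above, with a trivial stand-in for the expensive fine-scale simulator; the relaxation model, burst length, and projection step are placeholders chosen only to show the structure of the method.

import numpy as np

rng = np.random.default_rng(2)

def fine_step(c, dt_fine):
    """Stand-in for an expensive fine-scale (e.g., atomistic) update of a coarse
    variable c: a noisy relaxation toward equilibrium (illustrative only)."""
    return c + dt_fine * (-0.5 * c) + 0.01 * np.sqrt(dt_fine) * rng.standard_normal()

def coarse_projective_step(c, dt_fine=1.0e-3, n_burst=50, dt_project=0.5):
    """Equation-free step: run a short fine-scale burst, estimate the coarse time
    derivative from it, then project the coarse variable forward over a long time."""
    burst = [c]
    for _ in range(n_burst):
        burst.append(fine_step(burst[-1], dt_fine))
    slope = (burst[-1] - burst[0]) / (n_burst * dt_fine)   # estimated coarse dc/dt
    return burst[-1] + dt_project * slope

c, t = 1.0, 0.0
for _ in range(5):
    c = coarse_projective_step(c)
    t += 50 * 1.0e-3 + 0.5        # burst duration plus projected interval
    print(f"t = {t:.2f}, coarse variable = {c:.4f}")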

A Theoretical Analysis: Physical Unclonable Functions and the Software Protection Problem

Nithyanand, Rishab; Solis, John H.

Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics, and it makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real-world adversaries in offline settings. Our contributions are the following: we provide two real-world adversary models (weak and strong variants) and present definitions for security against these adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.

A design for a V&V and UQ discovery process

Knupp, Patrick K.; Urbina, Angel U.

There is currently sparse literature on how to implement systematic and comprehensive processes for modern V&V/UQ (VU) within large computational simulation projects. Important design requirements have been identified in order to construct a viable 'system' of processes. Significant processes that are needed include discovery, accumulation, and assessment. A preliminary design is presented for a VU Discovery process that accounts for an important subset of the requirements. The design uses a hierarchical approach to set context and a series of place-holders that identify the evidence and artifacts that need to be created in order to tell the VU story and to perform assessments. The hierarchy incorporates VU elements from a Predictive Capability Maturity Model and uses questionnaires to define critical issues in VU. The place-holders organize VU data within a central repository that serves as the official VU record of the project. A review process ensures that those who will contribute to the record have agreed to provide the evidence identified by the Discovery process. VU expertise is an essential part of this process and ensures that the roadmap provided by the Discovery process is adequate. Both the requirements and the design were developed to support the Nuclear Energy Advanced Modeling and Simulation Waste project, which is developing a set of advanced codes for simulating the performance of nuclear waste storage sites. The Waste project served as an example to keep the design of the VU Discovery process grounded in practicalities. However, the system is represented abstractly so that it can be applied to other M&S projects.

A toolbox for a class of discontinuous Petrov-Galerkin methods using Trilinos

Ridzal, Denis R.; Bochev, Pavel B.

The class of discontinuous Petrov-Galerkin finite element methods (DPG) proposed by L. Demkowicz and J. Gopalakrishnan guarantees the optimality of the solution in an energy norm and produces a symmetric positive definite stiffness matrix, among other desirable properties. In this paper, we describe a toolbox, implemented atop Sandia's Trilinos library, for rapid development of solvers for DPG methods. We use this toolbox to develop solvers for the Poisson and Stokes problems.

Analysis of sheltering and evacuation strategies for a Chicago nuclear detonation scenario

Brandt, Larry D.; Yoshimura, Ann S.

Development of an effective strategy for shelter and evacuation is among the most important planning tasks in preparation for response to a low-yield nuclear detonation in an urban area. Extensive studies have been performed and guidance published that highlight the key principles for saving lives following such an event. However, region-specific data are important in the planning process as well. This study examines some of the unique regional factors that impact planning for a 10 kt detonation in Chicago. The work utilizes a single scenario to examine regional impacts as well as the shelter-evacuate decision alternatives at selected exemplary points. For many Chicago neighborhoods, the excellent assessed shelter quality available makes shelter-in-place or selective transit to a nearby shelter a compelling post-detonation strategy.

Earthquake warning system for infrastructures: a scoping analysis

Kelic, Andjelka; Stamber, Kevin L.; Brodsky, Nancy S.; Vugrin, Eric D.; Corbet, Thomas F.; O'Connor, Sharon L.

This report provides the results of a scoping study evaluating the potential risk-reduction value of a hypothetical earthquake early-warning system. The study was based on an analysis of the actions that could be taken to reduce risks to populations and infrastructure, how much time would be required to take each action, and the potential consequences of false alarms given the nature of the action. The results of the scoping analysis indicate that risks could be reduced by improving existing event notification systems and individual responses to those notifications, and by producing and using more detailed risk maps for local planning. Detailed maps and training programs, based on existing knowledge of geologic conditions and processes, would reduce uncertainty in the consequence portion of the risk analysis. Uncertainties in the timing, magnitude, and location of earthquakes and the potential impacts of false alarms will present major challenges to the value of an early-warning system.

Surveillance metrics sensitivity study

Bierbaum, Rene L.

In September 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose was to develop quantitative and/or qualitative metrics describing the effect of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but rather the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
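
As a simple illustration of the flavor of such calculations (binomial arithmetic only, not the Tri-Lab metrics themselves), the power of a zero-failure sampling plan and the sample size needed for a stated confidence can be sketched as follows:

from math import ceil, log

def detection_power(n: int, defect_rate: float) -> float:
    """Probability that a random sample of n units contains at least one defect,
    given a true population defect rate (simple binomial model, illustrative only)."""
    return 1.0 - (1.0 - defect_rate) ** n

def samples_for_confidence(defect_rate: float, confidence: float) -> int:
    """Smallest sample size giving the stated confidence of observing at least one
    defect if defects occur at defect_rate."""
    return ceil(log(1.0 - confidence) / log(1.0 - defect_rate))

print(detection_power(n=11, defect_rate=0.10))    # about 0.69
print(samples_for_confidence(0.10, 0.90))         # 22 samples for 90% confidence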

Real-time characterization of partially observed epidemics using surrogate models

Safta, Cosmin S.; Ray, Jaideep R.; Sargsyan, Khachik S.; Lefantzi, Sophia L.

We present a statistical method, predicated on the use of surrogate models, for the 'real-time' characterization of partially observed epidemics. Observations consist of counts of symptomatic patients, diagnosed with the disease, that may be available in the early epoch of an ongoing outbreak. Characterization, in this context, refers to estimation of epidemiological parameters that can be used to provide short-term forecasts of the ongoing epidemic, as well as to provide gross information on the dynamics of the etiologic agent in the affected population, e.g., the time-dependent infection rate. The characterization problem is formulated as a Bayesian inverse problem, and epidemiological parameters are estimated as distributions using a Markov chain Monte Carlo (MCMC) method, thus quantifying the uncertainty in the estimates. In some cases, the inverse problem can be computationally expensive, primarily due to the epidemic simulator used inside the inversion algorithm. We present a method, based on replacing the epidemiological model with computationally inexpensive surrogates, that can reduce the computational time to minutes without a significant loss of accuracy. The surrogates are created by projecting the output of an epidemiological model onto a set of polynomial chaos bases; thereafter, computations involving the surrogate model reduce to evaluations of a polynomial. We find that the epidemic characterizations obtained with the surrogate models are very close to those obtained with the original model. We also find that the number of projections required to construct a surrogate model is O(10)-O(10^2) less than the number of samples required by the MCMC to construct a stationary posterior distribution; thus, depending upon the epidemiological models in question, it may be possible to omit the offline creation and caching of surrogate models prior to their use in an inverse problem. The technique is demonstrated on synthetic data as well as observations from the 1918 influenza pandemic collected at Camp Custer, Michigan.
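
A one-parameter sketch of the projection step described above, assuming a single uncertain parameter distributed uniformly on [-1, 1], a Legendre polynomial chaos basis, and Gauss-Legendre quadrature for the projections; the 'epidemic model' here is an arbitrary stand-in, not the simulator used in this work.

import numpy as np
from numpy.polynomial import legendre

def model(xi):
    """Stand-in for an expensive epidemic-simulator output (e.g., a peak case count)
    as a function of one uncertain parameter xi in [-1, 1]."""
    return np.exp(0.8 * xi) + 0.3 * xi**2

order = 6
nodes, weights = legendre.leggauss(order + 1)

# Project the model output onto the Legendre polynomials P_k (orthogonal on [-1, 1]):
# c_k = (2k + 1)/2 * sum_j w_j * model(x_j) * P_k(x_j)
coeffs = np.array([
    (2 * k + 1) / 2.0 * np.sum(weights * model(nodes) * legendre.Legendre.basis(k)(nodes))
    for k in range(order + 1)
])

surrogate = legendre.Legendre(coeffs)       # cheap polynomial surrogate of the model
xi_test = 0.37
print(model(xi_test), surrogate(xi_test))   # the two should agree closely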

Ductile failure X-prize

Boyce, Brad B.; Foulk, James W.; Littlewood, David J.; Mota, Alejandro M.; Ostien, Jakob O.; Silling, Stewart A.; Spencer, Benjamin S.; Wellman, Gerald W.; Bishop, Joseph E.; Brown, Arthur B.; Córdova, Theresa E.; Cox, James C.; Crenshaw, Thomas B.; Dion, Kristin D.; Emery, John M.

Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities including tearing parameters embedded in a conventional finite element code, localization elements, extended finite elements (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. After predictions had been reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.
