Publications

Results 176–194 of 194

Long-Term Pumping Test at MIU Site, Toki, Japan: Hydrogeological Modeling and Groundwater Flow Simulation

Mckenna, Sean A.; Eliassi, Mehdi E.

A conceptual model of the MIU site in central Japan was developed to predict the response of the groundwater system to pumping. The study area is a relatively large three-dimensional domain, 4.24 km x 6 km x 3 km, containing three geological units: upper and lower fractured zones and a single fault unit. The resulting computational model comprises 702,204 finite difference cells with variable grid spacing. Both steady-state and transient simulations were completed to evaluate the influence of two different surface boundary conditions: fixed head and no flow. Steady-state results were used for particle tracking and also served as the initial conditions (i.e., starting heads) for the transient simulations. Results of the steady-state simulations indicate the significance of the choice of surface (i.e., upper) boundary condition and its effect on groundwater flow patterns along the base of the upper fractured zone. Steady-state particle tracking results show that all particles exit the top of the model in areas where groundwater discharges to the Hiyoshi and Toki rivers. Particle travel times range from 3.6 x 10^7 seconds (approximately 1.1 years) to 4.4 x 10^10 seconds (approximately 1,394 years). For the transient simulations, two pumping zones are considered, one above and one below the fault. In both cases, the pumping period extends for 14 days and is followed by an additional 36 days of recovery. For the pumping rates used, the maximum drawdown is quite small (ranging from a few centimeters to a few meters), and thus pumping does not severely impact the groundwater flow system. The range of drawdown values produced by pumping below the fault is generally much less sensitive to the choice of boundary condition than are the drawdowns resulting from the pumping zone above the fault.
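
As a hedged illustration (not the report's actual particle-tracking code), the sketch below shows how an advective travel time follows from Darcy's law for assumed, hypothetical values of hydraulic conductivity, gradient, effective porosity, and path length, and how times in seconds convert to the years quoted above.

```python
# Minimal sketch: advective travel time from Darcy's law.
# All parameter values below are hypothetical illustrations, not values from the MIU model.

SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

def advective_travel_time(K, gradient, porosity, path_length):
    """Travel time (s) along a path of given length (m) for seepage velocity v = K*i/n_e."""
    seepage_velocity = K * gradient / porosity  # m/s
    return path_length / seepage_velocity       # s

t = advective_travel_time(K=1e-8, gradient=0.01, porosity=0.01, path_length=1000.0)
print(f"travel time: {t:.2e} s = {t / SECONDS_PER_YEAR:.1f} years")

# The quoted range 3.6e7 s to 4.4e10 s corresponds to roughly 1.1 to 1,394 years:
for seconds in (3.6e7, 4.4e10):
    print(f"{seconds:.1e} s ~= {seconds / SECONDS_PER_YEAR:.1f} years")
```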

More Details

Parameter adjustment of indicator variograms for groundwater flow modeling using genetic algorithms

Mckenna, Sean A.

Current algorithms for the inverse calibration of hydraulic conductivity (K) fields to observed head data update the K values to achieve calibration but treat the parameters defining the spatial correlation of the K values as fixed. Here we examine the ability of a genetic algorithm (GA) to update the indicator variogram parameters defining the spatial correlation of the K field, with the objectives of minimizing differences between modeled and observed head values and minimizing the advective travel time across the model. The technique is demonstrated on a test problem consisting of 83 K values randomly selected from 8649 gas-permeameter measurements made on a block of heterogeneous sandstone. Indicator variograms at the 10th, 40th, 60th and 90th percentiles of the cumulative log10 K distribution are used to describe the spatial variability of the log10 hydraulic conductivity data. For each threshold percentile, the variogram models are parameterized by the nugget, sill, anisotropic range values and the direction of principal correlation. The 83 conditioning data and the variogram models are used as input to a geostatistical indicator simulation algorithm.
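
A minimal sketch of the kind of genetic algorithm loop described above, assuming a hypothetical forward model `run_flow_model` that returns simulated heads and an advective travel time for a candidate set of variogram parameters; the parameter bounds, fitness weighting, and operators are illustrative, not the paper's implementation.

```python
import random

# Hypothetical bounds for one indicator variogram:
# (nugget, sill, range_major, range_minor, principal_direction_deg)
BOUNDS = [(0.0, 0.3), (0.3, 1.0), (5.0, 50.0), (1.0, 20.0), (0.0, 180.0)]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def fitness(params, run_flow_model, observed_heads):
    """Lower is better: squared head misfit plus a weighted advective travel time term."""
    heads, travel_time = run_flow_model(params)   # hypothetical forward model
    head_misfit = sum((h - o) ** 2 for h, o in zip(heads, observed_heads))
    return head_misfit + 1.0e-3 * travel_time     # illustrative weighting

def evolve(run_flow_model, observed_heads, pop_size=40, generations=50):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda p: fitness(p, run_flow_model, observed_heads))
        parents = scored[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            i = random.randrange(len(child))                     # single-gene mutation
            lo, hi = BOUNDS[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        population = parents + children
    return min(population, key=lambda p: fitness(p, run_flow_model, observed_heads))
```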

More Details

Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

Bilisoly, Roger L.; Mckenna, Sean A.

Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that a transect can be considered a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented: the first minimizes the sum of the eigenvalues of the prediction error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here answer the questions "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
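
A hedged sketch of the design-comparison idea described above: for a candidate set of sample locations (points along a straight or meandering transect), a simple-kriging-style prediction error covariance is evaluated on a reference grid and reduced to the sum or product of its eigenvalues. The covariance model, grid, and candidate designs are hypothetical placeholders, not the paper's site data.

```python
import numpy as np

def exp_cov(a, b, sill=1.0, length=20.0):
    """Isotropic exponential covariance between two sets of 2-D points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-d / length)

def prediction_error_covariance(grid_pts, sample_pts, nugget=1e-6):
    """Simple-kriging posterior covariance on the reference grid for a candidate design."""
    k_gg = exp_cov(grid_pts, grid_pts)
    k_gs = exp_cov(grid_pts, sample_pts)
    k_ss = exp_cov(sample_pts, sample_pts) + nugget * np.eye(len(sample_pts))
    return k_gg - k_gs @ np.linalg.solve(k_ss, k_gs.T)

def design_scores(grid_pts, sample_pts):
    """Return (sum, product) of eigenvalues of the prediction error covariance."""
    eigvals = np.linalg.eigvalsh(prediction_error_covariance(grid_pts, sample_pts))
    eigvals = np.clip(eigvals, 0.0, None)            # guard tiny negative round-off
    return eigvals.sum(), np.prod(eigvals[eigvals > 1e-12])

# Hypothetical reference grid and two candidate transect designs (points along lines).
grid = np.array([[x, y] for x in range(0, 100, 10) for y in range(0, 100, 10)], dtype=float)
straight = np.column_stack([np.full(10, 50.0), np.linspace(0, 90, 10)])
meander = np.column_stack([50 + 20 * np.sin(np.linspace(0, 3 * np.pi, 10)), np.linspace(0, 90, 10)])
for name, design in [("straight", straight), ("meandering", meander)]:
    s, p = design_scores(grid, design)
    print(name, "eigenvalue sum:", round(s, 2), "product:", f"{p:.3e}")
```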

More Details

Examining the effects of variability in short time scale demands on solute transport

Mckenna, Sean A.; Tidwell, Vincent C.

Variations in water use at short time scales, seconds to minutes, produce variation in the transport of solutes through a water supply network. However, the degree to which short-term variations in demand influence the solute concentrations at different locations in the network is poorly understood. Here we examine the effect of variability in demand on advective transport of a conservative solute (e.g., chloride) through a water supply network by defining the demand at each node in the model as a stochastic process. The stochastic demands are generated using a Poisson rectangular pulse (PRP) model for the case of a dead-end water line serving 20 homes represented as a single node. The simple dead-end network model is used to examine the variation in Reynolds number, the proportion of time that there is no flow (i.e., stagnant conditions in the pipe), and the travel time, defined as the time for cumulative demand to equal the volume of water in 1000 feet of pipe. Changes in these performance measures are examined as the fine-scale demand functions are aggregated over larger and larger time scales. Results are compared to previously developed analytical expressions for the first and second moments of these three performance measures. A new approach to predicting the reduction in variance of the performance measures, based on perturbation theory, is presented and compared to the results of the numerical simulations. The distribution of travel time is relatively consistent across time scales until the time step approaches that of the travel time. However, the proportion of stagnant flow periods decreases rapidly as the simulation time step increases. Both sets of analytical expressions are capable of providing adequate, first-order predictions of the simulation results.
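
A minimal sketch of a Poisson rectangular pulse demand generator of the kind described above, with hypothetical arrival rate, pulse duration, and pulse intensity distributions; it reports the fraction of time with zero demand (stagnation) and the time for cumulative demand to displace an assumed pipe volume.

```python
import random

def prp_demand_series(n_seconds, rate_per_s, mean_duration_s, mean_intensity_gpm, n_homes=20):
    """Superpose Poisson rectangular pulses from n_homes into a 1-second demand series (gal/min)."""
    demand = [0.0] * n_seconds
    for _ in range(n_homes):
        t = 0.0
        while True:
            t += random.expovariate(rate_per_s)              # Poisson pulse arrivals
            if t >= n_seconds:
                break
            duration = random.expovariate(1.0 / mean_duration_s)
            intensity = random.expovariate(1.0 / mean_intensity_gpm)
            for s in range(int(t), min(n_seconds, int(t + duration))):
                demand[s] += intensity
    return demand

random.seed(1)
demand = prp_demand_series(n_seconds=6 * 3600, rate_per_s=1 / 600.0,
                           mean_duration_s=60.0, mean_intensity_gpm=2.0)
stagnant_fraction = sum(d == 0.0 for d in demand) / len(demand)

# Travel time: time for cumulative demand to equal the volume of 1000 ft of pipe
# (hypothetical 2-inch pipe, about 0.163 gal per foot).
pipe_volume_gal = 1000.0 * 0.163
cumulative, travel_time_s = 0.0, None
for s, d in enumerate(demand):
    cumulative += d / 60.0                                   # gal/min -> gallons per second
    if cumulative >= pipe_volume_gal:
        travel_time_s = s
        break
print(f"stagnant fraction: {stagnant_fraction:.2f}, travel time: {travel_time_s} s")
```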

More Details

Syndrome Surveillance Using Parametric Space-Time Clustering

Koch, Mark W.; Mckenna, Sean A.; Bilisoly, Roger L.

As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a "health index" of the people living in this area. As a surrogate to bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
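
A hedged sketch of a sequential probability ratio test on aggregated daily counts, as a generic illustration of the SPRT framework mentioned above (Poisson baseline versus elevated-rate alternative); the rates, error levels, and counts are hypothetical, not the paper's model.

```python
import math

def sprt_poisson(counts, baseline_rate, elevated_rate, alpha=0.01, beta=0.05):
    """Wald SPRT on a stream of daily counts: H0 rate = baseline, H1 rate = elevated.
    Returns ('H0', 'H1', or 'continue'), the day index, and the final log-likelihood ratio."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 below this
    llr = 0.0
    for day, n in enumerate(counts):
        # Poisson log-likelihood ratio contribution for one day's count n.
        llr += n * math.log(elevated_rate / baseline_rate) - (elevated_rate - baseline_rate)
        if llr >= upper:
            return "H1", day, llr
        if llr <= lower:
            return "H0", day, llr
    return "continue", len(counts) - 1, llr

# Hypothetical daily respiratory-illness counts in one cell of the surveillance area.
counts = [6, 5, 7, 14, 16, 19]
print(sprt_poisson(counts, baseline_rate=5.0, elevated_rate=10.0))
```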

More Details

Predictive Modeling of MIU3-MIU2 Interference Tests

Mckenna, Sean A.; Roberts, Randall M.

The goal of this project is to predict the drawdown that will be observed in specific piezometers placed in the MIU-2 borehole due to pumping at a single location in the MIU-3 borehole. These predictions take the form of distributions obtained through multiple forward runs of a well-test model. Specifically, two distributions are created for each pair of pumping and piezometer locations: (1) the distribution of times to 1.0 meter of drawdown and (2) the distribution of drawdown predicted after 12 days of pumping at discharge rates of 25, 50, 75 and 100 l/hr. Each step in the pumping rate lasts for 3 days (259,200 seconds). This report is based on results that were presented at the Tono Geoscience Center on January 27th, 2000, approximately one week prior to the beginning of the interference tests. Hydraulic conductivity (K), specific storage (S_s) and the length of the pathway (L_p) are the input parameters to the well-test analysis model. Specific values of these input parameters are uncertain. This parameter uncertainty is accounted for in the modeling by drawing individual parameter values from distributions defined for each input parameter. For the initial set of runs, the fracture system is assumed to behave as an infinite, homogeneous, isotropic aquifer. These assumptions correspond to conceptualizing the aquifer as having Theis behavior and producing radial flow to the pumping well. A second conceptual model is also used in the drawdown calculations. This conceptual model considers that the fracture system may cause groundwater to move to the pumping well in a more linear (non-radial) manner. The effects of this conceptual model on the drawdown values are examined by casting the flow dimension (F_d) of the fracture pathways as an uncertain variable between 1.0 (purely linear flow) and 2.0 (completely radial flow).
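
Under the first (Theis, radial flow) conceptual model described above, a minimal Monte Carlo sketch might look like the following; the parameter distributions, aquifer thickness, and observation radius are hypothetical placeholders, and scipy's exponential integral is used for the Theis well function.

```python
import numpy as np
from scipy.special import exp1  # Theis well function W(u) = E1(u)

def theis_drawdown(t, r, T, S, steps):
    """Drawdown (m) at time t (s) and radius r (m) for stepped pumping via superposition.
    steps: list of (start_time_s, rate_m3_per_s); each step's rate replaces the previous one."""
    s, prev_q = 0.0, 0.0
    for t0, q in steps:
        if t > t0:
            u = r * r * S / (4.0 * T * (t - t0))
            s += (q - prev_q) / (4.0 * np.pi * T) * exp1(u)
        prev_q = q
    return s

rng = np.random.default_rng(0)
r, thickness = 100.0, 50.0                        # hypothetical radius (m) and tested thickness (m)
day = 86400.0
# 25, 50, 75, 100 l/hr steps, each lasting 3 days, as in the abstract (converted to m^3/s).
steps = [(i * 3 * day, q / 1000.0 / 3600.0) for i, q in enumerate((25.0, 50.0, 75.0, 100.0))]

drawdowns = []
for _ in range(1000):
    K = 10 ** rng.normal(-8.0, 1.0)               # hypothetical lognormal K (m/s)
    Ss = 10 ** rng.normal(-6.0, 0.5)              # hypothetical lognormal S_s (1/m)
    T, S = K * thickness, Ss * thickness
    drawdowns.append(theis_drawdown(12 * day, r, T, S, steps))
print("median drawdown after 12 days:", round(float(np.median(drawdowns)), 3), "m")
```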

More Details

Probabilistic Approach to Site Characterization: MIU site, Tono Region, Japan

Mckenna, Sean A.

Geostatistical simulation is used to extrapolate data derived from site characterization activities at the MIU site into information describing the three-dimensional distribution of hydraulic conductivity at the site and the uncertainty in the estimates of hydraulic conductivity. This process is demonstrated for six different data sets representing incrementally increasing amounts of characterization data. Short horizontal ranges characterize the spatial variability of both the rock types (facies) and the hydraulic conductivity measurements. For each of the six data sets, 50 geostatistical realizations of the facies and 50 realizations of the hydraulic conductivity are combined to produce 50 final realizations of the hydraulic conductivity distribution. Analysis of these final realizations indicates that the mean hydraulic conductivity value increases with the addition of site characterization data. The average hydraulic conductivity as a function of elevation changes from a uniform profile to a profile showing relatively high hydraulic conductivity values near the top and bottom of the simulation domain. Three-dimensional uncertainty maps show the highest amount of uncertainty in the hydraulic conductivity distribution near the top and bottom of the model. These upper and lower areas of high uncertainty are interpreted to be due to the unconformity at the top of the granitic rocks and the Tsukyoshi fault, respectively.
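
A small sketch of the kind of ensemble post-processing described above: given a stack of equally likely log10 hydraulic conductivity realizations on a 3-D grid (here a hypothetical random stand-in rather than actual realizations), compute the average profile with elevation and a cell-wise standard deviation as the uncertainty map.

```python
import numpy as np

# Hypothetical stand-in for 50 geostatistical realizations on an (nz, ny, nx) grid of log10 K.
rng = np.random.default_rng(42)
n_real, nz, ny, nx = 50, 30, 20, 20
log10_k = rng.normal(-8.0, 1.0, size=(n_real, nz, ny, nx))

# Mean log10 K as a function of elevation (layer index), averaged over realizations and the plane.
profile = log10_k.mean(axis=(0, 2, 3))            # shape (nz,)

# Three-dimensional uncertainty map: cell-wise standard deviation across the 50 realizations.
uncertainty = log10_k.std(axis=0)                 # shape (nz, ny, nx)

print("layer-averaged log10 K, top layer:", round(float(profile[0]), 2))
print("max cell-wise std of log10 K:", round(float(uncertainty.max()), 2))
```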

More Details

On the late-time behavior of tracer test breakthrough curves

Water Resources Research

Mckenna, Sean A.; Meigs, Lucy C.

We investigated the late-time (asymptotic) behavior of tracer test breakthrough curves (BTCs) with rate-limited mass transfer (e.g., in dual-porosity or multiporosity systems) and found that the late-time concentration c is given by the simple expression c = t_ad{c_0 g - [m_0(∂g/∂t)]}, for t ≫ t_ad and t_α ≫ t_ad, where t_ad is the advection time, c_0 is the initial concentration in the medium, m_0 is the zeroth moment of the injection pulse, and t_α is the mean residence time in the immobile domain (i.e., the characteristic mass transfer time). The function g is proportional to the residence time distribution in the immobile domain; we tabulate g for many geometries, including several distributed (multirate) models of mass transfer. Using this expression, we examine the behavior of late-time concentration for a number of mass transfer models. One key result is that if rate-limited mass transfer causes the BTC to behave as a power law at late time (i.e., c ~ t^(-k)), then the underlying density function of rate coefficients must also be a power law, with the form α^(k-3) as α → 0. This is true for both density functions of first-order and diffusion rate coefficients. BTCs with k < 3 persisting to the end of the experiment indicate a mean residence time longer than the experiment, and possibly an infinite residence time, and also suggest an effective rate coefficient that is either undefined or changes as a function of observation time. We apply our analysis to breakthrough curves from single-well injection-withdrawal tests at the Waste Isolation Pilot Plant, New Mexico.
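
Restated in display form, as a transcription of the expressions in the abstract with subscripts and exponents made explicit (the symbol b(α) for the density function of rate coefficients is introduced here only for readability):

```latex
% Late-time breakthrough concentration under rate-limited mass transfer
c = t_{ad}\left[c_0\, g - m_0\,\frac{\partial g}{\partial t}\right],
\qquad t \gg t_{ad}, \quad t_\alpha \gg t_{ad}.

% If the BTC follows a late-time power law, the density of rate coefficients
% must itself be a power law near the origin:
c \sim t^{-k} \;\Longrightarrow\; b(\alpha) \sim \alpha^{\,k-3} \quad \text{as } \alpha \to 0.
```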

More Details

Development of a Discrete Spatial-Temporal SEIR Simulator for Modeling Infectious Diseases

Mckenna, Sean A.

Multiple techniques have been developed to model the temporal evolution of infectious diseases. Some of these techniques have also been adapted to model the spatial evolution of the disease. This report examines the application of one such technique, the SEIR model, to the spatial and temporal evolution of disease. Applications of the SEIR model are reviewed briefly and an adaptation to the traditional SEIR model is presented. This adaptation allows for modeling the spatial evolution of the disease stages at the individual level. The transmission of the disease between individuals is modeled explicitly through the use of exposure likelihood functions rather than the global transmission rate applied to populations in the traditional implementation of the SEIR model. These adaptations allow for the consideration of spatially variable (heterogeneous) susceptibility and immunity within the population. The adaptations also allow for modeling both contagious and non-contagious diseases. The results of a number of numerical experiments to explore the effect of model parameters on the spread of an example disease are presented.
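
A hedged sketch of an individual-level, discrete-time SEIR update of the general kind described above: each individual carries a state and a location, and exposure is driven by a distance-based likelihood function rather than a global transmission rate. The kernel, rates, and population below are hypothetical illustrations, not the report's parameterization.

```python
import math
import random

S, E, I, R = "S", "E", "I", "R"

def exposure_probability(dist, p0=0.3, scale=2.0):
    """Hypothetical distance-decaying likelihood that an infectious contact exposes a susceptible."""
    return p0 * math.exp(-dist / scale)

def step(people, incubation_rate=0.25, recovery_rate=0.1):
    """One discrete time step; people is a list of dicts with 'state' and 'xy'."""
    infectious = [p for p in people if p["state"] == I]
    for p in people:
        if p["state"] == S:
            # Probability of escaping exposure from every currently infectious individual.
            escape = 1.0
            for q in infectious:
                escape *= 1.0 - exposure_probability(math.dist(p["xy"], q["xy"]))
            if random.random() < 1.0 - escape:
                p["state"] = E
        elif p["state"] == E and random.random() < incubation_rate:
            p["state"] = I
        elif p["state"] == I and random.random() < recovery_rate:
            p["state"] = R

random.seed(0)
people = [{"state": S, "xy": (random.uniform(0, 10), random.uniform(0, 10))} for _ in range(200)]
people[0]["state"] = I                      # seed one infectious individual
for day in range(60):
    step(people)
print({s: sum(p["state"] == s for p in people) for s in (S, E, I, R)})
```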

More Details

Threshold Assessment: Definition of Acceptable Sites as Part of Site Selection for the Japanese HLW Program

Mckenna, Sean A.; Webb, Erik K.

For the last ten years, the Japanese High-Level Nuclear Waste (HLW) repository program has focused on assessing the feasibility of a basic repository concept, which resulted in the recently published H12 Report. As Japan enters the implementation phase, a new organization must identify, screen and choose potential repository sites. Thus, a rapid mechanism for determining the likelihood of site suitability is critical. The threshold approach, described here, is a simple mechanism for defining the likelihood that a site is suitable given estimates of several critical parameters. We rely on the results of a companion paper, which described a probabilistic performance assessment simulation of the HLW reference case in the H12 report. The most critical two or three input parameters are plotted against each other and treated as spatial variables. Geostatistics is used to interpret the spatial correlation, which in turn is used to simulate multiple realizations of the parameter value maps. By combining an array of realizations, we can estimate the probability that a given site, as represented by estimates of this combination of parameters, would be a good host for a repository.
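
A small sketch of the combination step described above: given realizations of two critical parameter maps, the probability that a location is suitable can be estimated as the fraction of realizations in which both parameters fall on the acceptable side of their thresholds. The parameters, thresholds, and random realizations here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, ny, nx = 100, 50, 50

# Hypothetical realizations of two critical parameters over the candidate area,
# e.g., log10 hydraulic conductivity and flow-path length.
log10_k = rng.normal(-9.0, 1.0, size=(n_real, ny, nx))
path_length = rng.lognormal(mean=4.0, sigma=0.5, size=(n_real, ny, nx))

# Hypothetical acceptance thresholds: low conductivity and a long flow path are favorable.
suitable = (log10_k < -8.0) & (path_length > 50.0)

# Probability map: fraction of realizations in which both criteria are met at each location.
p_suitable = suitable.mean(axis=0)
print("fraction of area with P(suitable) > 0.8:", float((p_suitable > 0.8).mean()))
```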

More Details

Addressing uncertainty in rock properties through geostatistical simulation

Mckenna, Sean A.

Fracture and matrix properties in a sequence of unsaturated, welded tuffs at Yucca Mountain, Nevada, are modeled in two-dimensional cross-sections through geostatistical simulation. In the absence of large amounts of sample data, an interpretive, deterministic stratigraphic model is coupled with a Gaussian simulation algorithm to constrain realizations of both matrix porosity and fracture frequency. Use of the deterministic stratigraphic model imposes scientific judgment, in the form of a conceptual geologic model, onto the property realizations. Linear coregionalization and a regression relationship between matrix porosity and matrix hydraulic conductivity are used to generate realizations of matrix hydraulic conductivity. Fracture-frequency simulations conditioned on the stratigraphic model represent one class of fractures (cooling fractures) in the conceptual model of the geology. A second class of fractures (tectonic fractures) is conceptualized as fractures that cut across strata vertically and includes discrete features such as fault zones. Indicator geostatistical simulation provides locations of this second class of fractures. The indicator realizations are combined with the realizations of fracture spacing to create realizations of fracture frequency that are a combination of both classes of fractures. Evaluations of the resulting realizations include comparing vertical profiles of rock properties within the model to those observed in boreholes and checking intra-unit property distributions against collected data. Geostatistical simulation provides an efficient means of addressing spatial uncertainty in dual continuum rock properties.
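
A brief sketch of the porosity-to-conductivity step described above, assuming a hypothetical linear regression between matrix porosity and log10 matrix hydraulic conductivity applied to a porosity realization; the coefficients, residual scatter, and stand-in porosity field are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for one geostatistical realization of matrix porosity on a 2-D cross-section.
porosity = np.clip(rng.normal(0.12, 0.04, size=(100, 200)), 0.01, 0.40)

# Hypothetical regression: log10 K (m/s) increases with porosity, plus uncorrelated residual scatter.
intercept, slope, resid_std = -12.0, 15.0, 0.5
log10_k = intercept + slope * porosity + rng.normal(0.0, resid_std, size=porosity.shape)
k = 10.0 ** log10_k

print("median matrix K:", f"{np.median(k):.2e}", "m/s")
```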

More Details

Scaling of material properties for Yucca Mountain: literature review and numerical experiments on saturated hydraulic conductivity

Mckenna, Sean A.

A review of pertinent literature reveals techniques that may be practical for upscaling saturated hydraulic conductivity at Yucca Mountain: geometric mean, spatial averaging, inverse numerical modeling, renormalization, and a perturbation technique. Isotropic realizations of log hydraulic conductivity exhibiting various spatial correlation lengths are scaled from point values to five discrete scales using these techniques. For the variances in log10 saturated hydraulic conductivity examined here, the geometric mean, numerical inverse, and renormalization techniques adequately reproduce point-scale fluxes across the modeled domains. The fastest particle velocities and the dispersion measured at the point scale are not reproduced by the upscaled fields. Additional numerical experiments examine the utility of power-law averaging on a geostatistical realization of a cross-section similar to the cross-sections that will be used in the 1995 groundwater travel time calculations. A literature review on scaling techniques for thermal and mechanical properties is included. 153 refs., 29 figs., 6 tabs.
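
A minimal sketch of power-law averaging (with the geometric mean as its limiting case) for upscaling saturated hydraulic conductivity from fine cells to a coarse block, as mentioned in the abstract; the fine-scale field and exponents are hypothetical.

```python
import numpy as np

def power_law_average(k_values, p):
    """Power-law average K_up = (mean(K^p))^(1/p); p -> 0 recovers the geometric mean,
    p = 1 the arithmetic mean, and p = -1 the harmonic mean."""
    k_values = np.asarray(k_values, dtype=float)
    if abs(p) < 1e-12:
        return float(np.exp(np.mean(np.log(k_values))))   # geometric mean limit
    return float(np.mean(k_values ** p) ** (1.0 / p))

rng = np.random.default_rng(11)
fine_block = 10 ** rng.normal(-6.0, 1.0, size=(10, 10, 10))   # hypothetical lognormal point-scale K

for p in (1.0, 0.33, 0.0, -1.0):
    print(f"p = {p:+.2f}: upscaled K = {power_law_average(fine_block, p):.3e}")
```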

More Details

Summary evaluation of Yucca Mountain surface transects with implications for downhole sampling. Yucca Mountain Site Characterization Project

Mckenna, Sean A.

The results of previously completed vertical outcrop sampling transects are summarized with respect to planning downhole sampling. The summary includes statistical descriptions and descriptions of the spatial variability of the sampled parameters. Descriptions are made for each individual transect, each thermal/mechanical unit and each previously defined geohydrologic unit. Correlations between parameters indicate that saturated hydraulic conductivity is not globally correlated with porosity. The correlation between porosity and saturated hydraulic conductivity is both spatially and lithologically dependent. Currently, there are not enough saturated hydraulic conductivity and sorptivity data to define relationships between these properties and porosity on a unit-by-unit basis. Also, the Prow Pass member of the Crater Flat Tuff and stratigraphically lower units have gone essentially unsampled in these outcrop transects. The vertical correlation length for hydrologic properties is not constant across the area of the transects. The average sample spacing within the transects ranges from 1.25 to 2.1 meters. It appears that, with the exception of the Topopah Spring member units, a comparable sample spacing will give adequate results in the downhole sampling campaign even with the nonstationarity of the vertical correlation. The properties within the thermal/mechanical units and geohydrologic units of the Topopah Spring member appear to have a spatial correlation range less than or equal to the current sample spacing within these units. For the downhole sampling, a sample spacing of less than 1.0 meter may be necessary within these units.

More Details