Publications

Results 88001–88100 of 96,771

Installation of adhesively bonded composites to repair carbon steel structure

Roach, D.; Rackow, Kirk; Dunn, Dennis

In the past decade, an advanced composite repair technology has made great strides in commercial aviation use. Extensive testing and analysis, through joint programs between the Sandia Labs FAA Airworthiness Assurance Center and the aviation industry, have proven that composite materials can be used to repair damaged aluminum structure. Successful pilot programs have produced flight performance history to establish the viability and durability of bonded composite patches as a permanent repair on commercial aircraft structures. With this foundation in place, efforts are underway to adapt bonded composite repair technology to civil structures. This paper presents a study of the application of composite patches on large trucks and hydraulic shovels typically used in mining operations. Extreme fatigue, temperature, erosive, and corrosive environments induce an array of equipment damage. The current weld repair techniques for these structures provide a fatigue life inferior to that of the original plate, so subsequent cracking must be repaired on a regular basis. It is believed that the use of composite doublers, which do not have brittle fracture problems such as those inherent in welds, will help extend the structure's fatigue life and reduce equipment downtime. Two of the main issues in adapting aircraft composite repairs to civil applications are developing an installation technique for carbon steel structure and accommodating large repairs on extremely thick structures. This paper focuses on the first phase of this study, which evaluated the performance of different mechanical and chemical surface preparation techniques. The factors influencing the durability of composite patches in severe field environments are discussed along with related laminate design and installation issues.

Alternative separation of exchange and correlation in density-functional theory

Proposed for publication in Physical Review Letters.

Wills, Ann E.

It has recently been shown that local values of the conventional exchange energy per particle cannot be described by an analytic expansion in the density variation. Yet, it is known that the total exchange-correlation (XC) energy per particle does not show any corresponding nonanalyticity. Indeed, the nonanalyticity is here shown to be an effect of the separation into conventional exchange and correlation. We construct an alternative separation in which the exchange part is made well behaved by screening its long-ranged contributions, and the correlation part is adjusted accordingly. This alternative separation is as valid as the conventional one, and introduces no new approximations to the total XC energy. We demonstrate functional development based on this approach by creating and deploying a local-density-approximation-type XC functional. Hence, this work includes both the theory and the practical calculations needed to provide a starting point for an alternative approach towards improved approximations of the total XC energy.
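The abstract does not spell out the screening scheme used; as an illustrative sketch only, a common way to make exchange short-ranged in screened-exchange functionals is the error-function partition of the Coulomb interaction:

```latex
\frac{1}{r} \;=\; \underbrace{\frac{\operatorname{erf}(\mu r)}{r}}_{\text{long range}}
\;+\; \underbrace{\frac{\operatorname{erfc}(\mu r)}{r}}_{\text{short range}}
```

where the screening parameter μ sets the crossover length. The erfc term decays rapidly and can define a well-behaved short-range exchange part, with the long-range remainder absorbed into a redefined correlation part so that the total XC energy is unchanged.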

Functional and structural failure mode overpressurization tests of 1:4-scale prestressed concrete containment vessel model

Hessheimer, Michael F.

A 1:4-scale model of a prestressed concrete containment vessel (PCCV), representative of a pressurized water reactor (PWR) plant in Japan, was constructed by NUPEC at Sandia National Laboratories from January 1997 through June 2000. Concurrently, Sandia instrumented the model with nearly 1500 transducers to measure strain, displacement, and forces in the model from prestressing through the pressure testing. The limit state test of the PCCV model, culminating in functional failure (i.e., leakage by cracking and liner tearing), was conducted in September 2000 at Sandia National Laboratories. Inspection of the model and the data after the limit state test made clear that, other than liner tearing and leakage, structural damage was limited to concrete cracking, and the overall structural response (displacements, rebar and tendon strains, etc.) was only slightly beyond yield. (Global hoop strains at the mid-height of the cylinder reached only 0.4%, approximately twice the yield strain of the steel.) In order to provide additional structural response data for comparison with inelastic response conditions, the PCCV model was filled nearly full with water and pressurized to 3.6 times the design pressure, at which point a catastrophic rupture occurred, preceded only briefly by the successive tensile failure of several hoop tendons. This paper summarizes the results of these tests.

Carbon sequestration in Synechococcus Sp.: from molecular machines to hierarchical modeling

Proposed for publication in OMICS: A Journal of Integrative Biology, Vol. 6, No.4, 2002.

Heffelfinger, Grant S.; Faulon, Jean-Loup M.; Frink, Laura J.; Haaland, David M.; Hart, William E.; Lane, Todd L.; Plimpton, Steven J.; Roe, Diana C.; Timlin, Jerilyn A.; Martino, Anthony M.; Rintoul, Mark D.; Davidson, George S.

The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to "achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life." While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling." This project is a combined experimental and computational effort emphasizing developing, prototyping, and applying new computational tools and methods to elucidate the biochemical mechanisms of the carbon sequestration of Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth that addresses computational infrastructure challenges relevant to this project and to the Genomes to Life program as a whole. Our experimental effort is designed to provide biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Our computational efforts include coupling molecular simulation methods with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes, and developing a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. Furthermore, given that the ultimate goal of this effort is to develop a systems-level understanding of how the Synechococcus genome affects carbon fixation at the global scale, we will develop and apply a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, because the explosion of data produced by high-throughput experiments requires data analysis and models that are more computationally complex and more heterogeneous, and that must be coupled to ever-increasing amounts of experimentally obtained data in varying formats, we have also established a companion computational infrastructure to support this effort as well as the Genomes to Life program as a whole.

Conversion of the Big Hill geological site characterization report to a three-dimensional model

Stein, Joshua S.; Rautman, Christopher A.

The Big Hill salt dome, located in southeastern Texas, is home to one of four underground oil-storage facilities managed by the U.S. Department of Energy Strategic Petroleum Reserve (SPR) Program. Sandia National Laboratories, as the geotechnical advisor to the SPR, conducts site-characterization investigations and other longer-term geotechnical and engineering studies in support of the program. This report describes the conversion of two-dimensional geologic interpretations of the Big Hill site into three-dimensional geologic models. The new models include the geometry of the salt dome, the surrounding sedimentary units, mapped faults, and the 14 oil storage caverns at the site. This work provides a realistic and internally consistent geologic model of the Big Hill site that can be used in support of future work.

Verification, validation, and predictive capability in computational engineering and physics

Bunge, Scott D.; Boyle, Timothy J.; Headley, Thomas J.; Kotula, Paul G.; Rodriguez, M.A.

Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

Technical Safety Requirements for the Gamma Irradiation Facility (GIF)

Mahn, Jeffrey A.

This document provides the Technical Safety Requirements (TSR) for the Sandia National Laboratories Gamma Irradiation Facility (GIF). The TSR is a compilation of requirements that define the conditions, the safe boundaries, and the administrative controls necessary to ensure the safe operation of a nuclear facility and to reduce the potential risk to the public and facility workers from uncontrolled releases of radioactive or other hazardous materials. These requirements constitute an agreement between DOE and Sandia National Laboratories management regarding the safe operation of the Gamma Irradiation Facility.

Final Report of LDRD Project Number 34693: Building Conscious Machines Based Upon the Architecture of Visual Cortex in the Primate Brain

Buttram, Malcolm T.

Our research plan is two-fold: first, we have extended our biological model of bottom-up visual attention with several recently characterized cortical interactions that are known to be responsible for human performance in certain visual tasks, and second, we have used an eyetracking system to collect human eye movement data, from which we can calibrate the new additions to the model. We acquired an infrared video eyetracking system, which we are using to record observers' eye position with high temporal (120 Hz) and spatial (±0.25 deg visual angle) accuracy. We collected eye movement scan paths from observers as they viewed computer-generated fractals, rural and urban outdoor scenes, and overhead satellite imagery. We found that, with very high statistical significance (10 to 12 z-scores), the saliency model accurately predicts locations that human observers will find interesting. We adapted our model of short-range interactions among overlapping spatial orientation channels to better predict bottom-up stimulus-driven attention in humans. This enhanced model is even more accurate in its predictions of human observers' eye movements. We are currently incorporating biologically plausible long-range interactions among orientation channels, which will aid in the detection of elongated contours such as rivers, roads, airstrips, and other man-made structures.

Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

Bilisoly, Roger L.; Mckenna, Sean A.

Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
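The two design-quality measures described above — the sum and the product of the eigenvalues of the prediction-error variance matrix — can be sketched with a simple-kriging error covariance evaluated on a fixed reference grid. This is an illustration only, not code from the report: the Gaussian covariance model, the transect geometry, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def gauss_cov(a, b, sill=1.0, corr_len=2.0):
    """Gaussian covariance model between point sets a (n,2) and b (m,2).
    The model and its parameters are illustrative assumptions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-(d / corr_len) ** 2)

def design_criteria(grid, samples):
    """Simple-kriging prediction-error covariance on the reference grid,
    summarized by the sum (A-type criterion) and the log-product
    (D-type criterion) of its eigenvalues."""
    c_gs = gauss_cov(grid, samples)
    c_ss = gauss_cov(samples, samples) + 1e-8 * np.eye(len(samples))
    cov = gauss_cov(grid, grid) - c_gs @ np.linalg.solve(c_ss, c_gs.T)
    eig = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    return eig.sum(), np.log(eig).sum()

# Fixed reference grid over a square site
grid = np.stack(np.meshgrid(np.linspace(0, 10, 11),
                            np.linspace(0, 10, 11)), -1).reshape(-1, 2)
# One straight transect, treated as a sequential set of point samples
t1 = np.column_stack([np.linspace(0, 10, 21), np.full(21, 5.0)])
# The same design plus a second, crossing transect
t2 = np.vstack([t1, np.column_stack([np.full(21, 4.75),
                                     np.linspace(0, 10, 21)])])

a1, d1 = design_criteria(grid, t1)
a2, d2 = design_criteria(grid, t2)
assert a2 < a1 and d2 < d1  # the added transect reduces both criteria
```

Because both criteria can be evaluated before any samples are taken, the drop in the A-type criterion (the MPEV reduction) can serve as the "stopping rule" described in the abstract.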

Solidification Diagnostics for Joining and Microstructural Simulations

Robino, Charles V.; Hall, Aaron C.; Brooks, John A.; Headley, Thomas J.; Roach, R.A.

Solidification is an important aspect of welding, brazing, soldering, LENS fabrication, and casting. The current trend toward utilizing large-scale process simulations and materials response models for simulation-based engineering is driving the development of new modeling techniques. However, the effective utilization of these models is, in many cases, limited by a lack of fundamental understanding of the physical processes and interactions involved. In addition, experimental validation of model predictions is required. We have developed new and expanded experimental techniques, particularly those needed for in-situ measurement of the morphological and kinetic features of the solidification process. The new high-speed, high-resolution video techniques and data extraction methods developed in this work have been used to identify several unexpected features of the solidification process, including the observation that the solidification front is often far more dynamic than previously thought. In order to demonstrate the utility of the video techniques, correlations have been made between the in-situ observations and the final solidification microstructure. Experimental methods for determination of the solidification velocity in highly dynamic pulsed laser welds have been developed, implemented, and used to validate and refine laser welding models. Using post-solidification metallographic techniques, we have discovered a previously unreported orientation relationship between ferrite and austenite in the Fe-Cr-Ni alloy system, and have characterized the conditions under which this new relationship develops. Taken together, the work has expanded both our understanding of, and our ability to characterize, solidification phenomena in complex alloy systems and processes.

Constructing Probability Boxes and Dempster-Shafer Structures

Oberkampf, William L.

This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.

A Novel Microcombustor for Sensor and Thermal Energy Management Applications in Microsystems

Manginell, Ronald P.; Moorman, Matthew W.; Colburn, Christopher C.; Anderson, Lawrence F.; Gardner, Timothy J.; Mowery-Evans, Deborah L.; Clem, Paul G.; Margolis, Stephen B.

The microcombustor described in this report was developed primarily for thermal management in microsystems and as a platform for micro-scale flame ionization detectors (microFIDs). The microcombustor consists of a thin-film heater/thermal sensor patterned on a thin insulating membrane that is suspended from its edges over a silicon frame. This micromachined design has very low heat capacity and thermal conductivity and is an ideal platform for heating catalytic materials placed on its surface. Catalysts play an important role in this design since they provide a convenient surface-based method for flame ignition and stabilization. The free-standing platform used in the microcombustor mitigates the large heat losses arising from the large surface-to-volume ratios typical of the microdomain and, together with the insulating membrane, permits combustion on the microscale. Surface oxidation, flame ignition, and flame stabilization have been demonstrated with this design for hydrogen and hydrocarbon fuels premixed with air. Unoptimized heat densities of 38 mW/mm{sup 2} have been achieved for the purpose of heating microsystems. Importantly, the microcombustor design expands the limits of flammability (LoF) as compared with conventional diffusion flames; an unoptimized LoF of 1-32% for natural gas in air was demonstrated with the microcombustor, whereas 4-16% is conventionally observed. The LoFs for hydrogen, methane, propane, and ethane are likewise expanded. This feature will permit the use of this technology in many portable applications where reduced temperatures, lean fuel/air mixes, or low gas flows are required. By coupling miniature electrodes and an electrometer circuit with the microcombustor, the first ever demonstration of a microFID utilizing premixed fuel and a catalytically stabilized flame has been performed; the detection of ~1-3% ethane in hydrogen/air is shown.
This report describes work done to develop the microcombustor for microsystem heating and flame ionization detection and includes a description of modeling and simulation performed to understand the basic operation of this device. Ancillary research on the use of the microcombustor in calorimetric gas sensing is also described where appropriate.

Making the Connection Between Microstructure and Mechanics

Holm, Elizabeth A.; Battaile, Corbett C.; Fang, H.E.; Buchheit, Thomas E.; Wellman, Gerald W.

The purpose of microstructural control is to optimize materials properties. To that end, we have developed sophisticated and successful computational models of both microstructural evolution and mechanical response. However, coupling these models to quantitatively predict the properties of a given microstructure poses a challenge. This problem arises because most continuum response models, such as finite element, finite volume, or material point methods, do not incorporate a real length scale. Thus, two self-similar polycrystals have identical mechanical properties regardless of grain size, in conflict with theory and observations. In this project, we took a tiered-risk approach to incorporate microstructure and its resultant length scales in mechanical response simulations. Techniques considered include low-risk, low-benefit methods, as well as higher-payoff, higher-risk methods. Methods studied include a constitutive response model with a local length-scale parameter, a power-law hardening rate gradient near grain boundaries, a local Voce hardening law, and strain-gradient polycrystal plasticity. These techniques were validated on a variety of systems for which theoretical analyses and/or experimental data exist. The results may be used to generate improved constitutive models that explicitly depend upon microstructure and to provide insight into microstructural deformation and failure processes. Furthermore, because mechanical state drives microstructural evolution, a strain-enhanced grain growth model was coupled with the mechanical response simulations. The coupled model predicts both properties as a function of microstructure and microstructural development as a function of processing conditions.

Laser Safety and Hazard Analysis for the ARES (Big Sky) Laser System

Augustoni, Arnold L.

A laser safety and hazard analysis was performed for the ARES laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for Safe Use of Lasers, and the 2000 version of the ANSI Standard Z136.6, for Safe Use of Lasers Outdoors. The ARES laser system is a van/truck-based mobile platform that is used to perform laser interaction experiments and tests at various national test sites.

Laser Safety and Hazard Analysis for the Trailer (B70) Based AURA Laser System

Augustoni, Arnold L.

A laser safety and hazard analysis was performed for the AURA laser system based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, for "Safe Use of Lasers", and the 2000 version of the ANSI Standard Z136.6, for "Safe Use of Lasers Outdoors". The trailer (B70) based AURA laser system is a mobile platform that is used to perform laser interaction experiments and tests at various national test sites, and is generally operated on the United States Air Force Starfire Optical Range (SOR) at Kirtland Air Force Base (KAFB), New Mexico. The laser is used to perform laser interaction testing inside the laser trailer as well as outside the trailer at target sites located at various distances from the exit telescope. In order to protect personnel who work inside the Nominal Hazard Zone (NHZ) from hazardous laser emission exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength (wavelength band), to calculate the appropriate minimum Optical Density (OD{sub min}) of the laser safety eyewear used by authorized personnel, and to calculate the Nominal Ocular Hazard Distance (NOHD) to protect unauthorized personnel who may violate the boundaries of the control area and enter the laser's NHZ.

Opacity measurements of tamped NaBr samples heated by z-pinch X-rays

Journal of Quantitative Spectroscopy and Radiative Transfer

Bailey, James E.; Rochau, G.A.; Cuneo, M.E.

Laboratory measurements provide benchmark data for wavelength-dependent plasma opacities to assist inertial confinement fusion, astrophysics, and atomic physics research. There are several potential benefits to using z-pinch radiation for opacity measurements, including relatively large cm-scale lateral sample sizes and relatively long 3-5 ns experiment durations. These features enhance sample uniformity. The spectrally resolved transmission through a CH-tamped NaBr foil was measured. The z-pinch produced the X-rays for both the heating source and the backlight source. The (50±4) eV foil electron temperature and (3±1) × 10{sup 21} cm{sup -3} foil electron density were determined by analysis of the Na absorption features. LTE and NLTE opacity model calculations of the n=2 to 3, 4 transitions in bromine ionized into the M-shell are in reasonably good agreement with the data.

Computational Algorithms for Device-Circuit Coupling

Gardner, Timothy J.; Mclaughlin, Linda I.; Mowery-Evans, Deborah L.

Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.
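The two-level Newton idea described above can be sketched with a scalar stand-in for the device equations: an outer Newton iteration on the circuit unknown, with an inner Newton solve of the "device" at every outer step. Everything here is an illustrative assumption — the toy device equation, parameter values, and function names are not from the report or from any real simulator.

```python
def inner_device_solve(v, tol=1e-12):
    """Inner Newton: solve the nonlinear 'device' equation
    g(u, v) = u + 0.1*u**3 - v = 0 for the internal state u given the
    terminal voltage v (a scalar stand-in for the PDE device solve).
    Returns the terminal current i = u / R_d and di/dv from the
    implicit function theorem (du/dv = 1 / dg_du)."""
    R_d = 50.0
    u = v  # initial guess
    for _ in range(50):
        g = u + 0.1 * u**3 - v
        dg_du = 1.0 + 0.3 * u**2
        du = -g / dg_du
        u += du
        if abs(du) < tol:
            break
    return u / R_d, 1.0 / (dg_du * R_d)

def two_level_newton(v_src=5.0, r=100.0, tol=1e-10):
    """Outer Newton on the circuit KCL residual (V_src - v)/R - i(v) = 0,
    using the current and sensitivity returned by the inner solve."""
    v = 0.0
    for _ in range(50):
        i, di_dv = inner_device_solve(v)
        f = (v_src - v) / r - i
        df = -1.0 / r - di_dv
        v -= f / df
        if abs(f) < tol:
            break
    return v

v = two_level_newton()
i, _ = inner_device_solve(v)
assert abs((5.0 - v) / 100.0 - i) < 1e-8  # KCL satisfied at convergence
```

The tight coupling comes from feeding the device's terminal sensitivity di/dv into the outer Jacobian, rather than lagging the device as a fixed current source between circuit iterations.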

Radiation-Induced Prompt Photocurrents in Microelectronics: Physics

Dodd, Paul E.; Vizkelethy, Gyorgy V.; Walsh, David S.; Buller, Daniel L.; Doyle, Barney L.

The effects of photocurrents in nuclear weapons induced by proximal nuclear detonations are well known and remain a serious hostile environment threat for the US stockpile. This report describes the final results of an LDRD study of the physical phenomena underlying prompt photocurrents in microelectronic devices and circuits. The goals of this project were to obtain an improved understanding of these phenomena, and to incorporate improved models of photocurrent effects into simulation codes to assist designers in meeting hostile radiation requirements with minimum build and test cycles. We have also developed a new capability on the ion microbeam accelerator in Sandia's Ion Beam Materials Research Laboratory (the Transient Radiation Microscope, or TRM) to supply ionizing radiation in selected micro-regions of a device. The dose rates achieved in this new facility approach those possible with conventional large-scale dose-rate sources at Sandia such as HERMES III and Saturn. It is now possible to test the physics and models in device physics simulators such as Davinci in ways not previously possible. We found that the physical models in Davinci are well suited to calculating prompt photocurrents in microelectronic devices, and that the TRM can reproduce results from conventional large-scale dose-rate sources in devices where the charge-collection depth is less than the range of the ions used in the TRM.

Self Organization of Software LDRD Final Report

Osbourn, Gordon C.

We are currently exploring and developing a new statistical mechanics approach to designing self-organizing and self-assembling systems that is unique to SNL. The primary application target for this ongoing research is the development of new kinds of nanoscale components and hardware systems. However, a surprising out-of-the-box connection to software development is emerging from this effort. With some amount of modification, the collective behavior physics ideas for enabling simple hardware components to self-organize may also provide design methods for a new class of software modules. Large numbers of these relatively small software components, if designed correctly, would be able to self-assemble into a variety of much larger and more complex software systems. This self-organization process would be steered to yield desired sets of system properties. If successful, this would provide a radical (disruptive technology) path to developing complex, high-reliability software unlike any known today. The special work needed to evaluate this high-risk, high-payoff opportunity does not fit well into existing SNL funding categories, as it is well outside of the mainstreams of both conventional software development practices and the nanoscience research area that spawned it. We proposed a small LDRD effort aimed at appropriately generalizing these collective behavior physics concepts and testing their feasibility for achieving the self-organization of large software systems. Our favorable results motivate an expanded effort to fully develop self-organizing software as a new technology.

Cold War Context Statement: Sandia National Laboratories, California Site

Ullrich, Rebecca A.

This document was prepared to support the Department of Energy's compliance with Sections 106 and 110 of the National Historic Preservation Act. It provides an overview of the historic context in which Sandia National Laboratories/California was created and developed. Establishing such a context allows for a reasonable and reasoned historical assessment of Sandia National Laboratories/California properties. The Cold War arms race provides the primary historical context for the SNL/CA built environment.

Measurement and Modeling of Energetic Material Mass Transfer to Soil Pore Water - Project CP-1227 Annual Technical Report

Phelan, James M.; Webb, Stephen W.; Romero, Joseph V.; Barnett, James B.; Bohlken, Fawn A.

Military test and training ranges operate with live fire engagements to provide realism important to the maintenance of key tactical skills. Ordnance detonations during these operations typically produce minute residues of parent explosive chemical compounds. Occasional low order detonations also disperse solid phase energetic material onto the surface soil. These detonation remnants are implicated in chemical contamination impacts to groundwater on a limited set of ranges where environmental characterization projects have occurred. Key questions arise regarding how these residues and the environmental conditions (e.g. weather and geostratigraphy) contribute to groundwater pollution impacts. This report documents interim results of experimental work evaluating mass transfer processes from solid phase energetics to soil pore water. The experimental work is used as a basis to formulate a mass transfer numerical model, which has been incorporated into the porous media simulation code T2TNT. Experimental work to date with Composition B explosive has shown that column tests typically produce effluents near the temperature dependent solubility limits for RDX and TNT. The influence of water flow rate, temperature, porous media saturation and mass loading is documented. The mass transfer model formulation uses a mass transfer coefficient and surface area function and shows good agreement with the experimental data. Continued experimental work is necessary to evaluate solid phase particle size and 2-dimensional effects, and actual low order detonation debris. Simulation model improvements will continue leading to a capability to complete screening assessments of the impacts of military range operations on groundwater quality.
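The mass transfer formulation described above — a mass transfer coefficient together with a surface area function — can be sketched as a small flow-through dissolution model. This is not the T2TNT formulation: the shrinking-particle area function, the well-mixed cell, and every parameter value are assumptions chosen only to illustrate the structure of such a model.

```python
def simulate(m0=1.0, k=1.0e-4, c_sat=1.0, q=1.0e-4, v_w=1.0,
             dt=1.0, steps=1000):
    """Explicit-Euler sketch of a flow-through dissolution cell:
       dm/dt       = -k * A(m) * (c_sat - c)          # solid dissolving
       v_w * dc/dt =  k * A(m) * (c_sat - c) - q * c  # pore-water balance
    with A(m) = (m/m0)**(2/3), a normalized shrinking-particle surface
    area, q the water flow rate, and c_sat the solubility limit.
    All parameters are hypothetical and nondimensional."""
    m, c, out = m0, 0.0, 0.0
    for _ in range(steps):
        a = (max(m, 0.0) / m0) ** (2.0 / 3.0)
        flux = k * a * (c_sat - c)   # mass transfer rate into solution
        out += q * c * dt            # mass carried away by the flow
        c += (flux - q * c) / v_w * dt
        m -= flux * dt
    return m, c, out

m, c, out = simulate()
assert 0.0 < c < 1.0                       # effluent below solubility
assert abs(m + 1.0 * c + out - 1.0) < 1e-9 # total mass is conserved
```

With a large enough transfer coefficient relative to the flow rate, the effluent concentration rides just below the solubility limit, which is the column-test behavior the abstract reports for RDX and TNT.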

More Details

Modification of TOUGH2 for Enhanced Coal Bed Methane Simulations

Webb, Stephen W.

The GEO-SEQ Project is investigating methods for geological sequestration of CO{sub 2}. This project, which is directed by LBNL and includes a number of other industrial, university, and National Laboratory partners, is evaluating computer simulation models including TOUGH2. One of the problems to be considered is Enhanced Coal Bed Methane (ECBM) recovery. In this scenario, CO{sub 2} is pumped into methane-rich coal beds. Due to adsorption processes, the CO{sub 2} is sorbed onto the coal, which displaces the previously sorbed methane (CH{sub 4}). The released methane can then be recovered, at least partially offsetting the cost of CO{sub 2} sequestration. Modifications have been made to the EOS7R equation of state in TOUGH2 to include the extended Langmuir isotherm for sorbing gases, including the change in porosity associated with the sorbed gas mass. Comparison to hand calculations for pure gas and binary mixtures shows very good agreement. Application to a CO{sub 2} well injection problem given by Law et al. (2002) shows good agreement considering the differences in the equations of state.
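The extended Langmuir isotherm added to EOS7R has a standard closed form and can be sketched in a few lines. The constants in the example below are illustrative placeholders, not values from TOUGH2 or from any coal data set.

```python
def extended_langmuir(p, L, b):
    """Sorbed loading of each gas in a mixture via the extended Langmuir
    isotherm:  n_i = L_i * b_i * p_i / (1 + sum_j b_j * p_j).

    p: partial pressures of each gas, L: Langmuir capacity constants,
    b: Langmuir pressure constants (reciprocal-pressure units).
    With only one gas present this reduces to the single-gas Langmuir
    isotherm L*b*p / (1 + b*p).
    """
    denom = 1.0 + sum(bj * pj for bj, pj in zip(b, p))
    return [Li * bi * pi / denom for Li, bi, pi in zip(L, b, p)]
```

Because all components share the denominator, injecting CO{sub 2} lowers the equilibrium CH{sub 4} loading at fixed CH{sub 4} partial pressure, which is the displacement mechanism ECBM recovery exploits.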

More Details

Miniature Sensors for Biological Warfare Agents using Fatty Acid Profiles: LDRD 10775 Final Report

Mowry, Curtis D.; Morgan, Christine A.; Theisen, Lisa A.; Trudell, Daniel E.; Martinez, Jesus I.

Rapid detection and identification of bacteria and other pathogens is important for many civilian and military applications. The taxonomic significance, or the ability to differentiate one microorganism from another, using fatty acid content and distribution is well known. For analysis fatty acids are usually converted to fatty acid methyl esters (FAMEs). Bench-top methods are commercially available and recent publications have demonstrated that FAMEs can be obtained from whole bacterial cells in an in situ single-step pyrolysis/methylation analysis. This report documents the progress made during a three-year Laboratory Directed Research and Development (LDRD) program funded to investigate the use of microfabricated components (developed for other sensing applications) for the rapid identification of bioorganisms based upon pyrolysis and FAME analysis. Components investigated include a micropyrolyzer, a microGC, and a surface acoustic wave (SAW) array detector. Results demonstrate that the micropyrolyzer can pyrolyze whole cell bacteria samples using only milliwatts of power to produce FAMEs from bacterial samples. The microGC is shown to separate FAMEs of biological interest, and the SAW array is shown to detect volatile FAMEs. Results for each component and their capabilities and limitations are presented and discussed. This project has produced the first published work showing successful pyrolysis/methylation of fatty acids and related analytes using a microfabricated pyrolysis device.

More Details

Tracking Honey Bees Using LIDAR (Light Detection and Ranging) Technology

Bender, Susan F.; Rodacy, Philip J.; Schmitt, Randal L.; Hargis, Philip J.; Johnson, Mark S.; Klarkowski, James R.; Magee, Glen I.; Bender, Gary L.

The Defense Advanced Research Projects Agency (DARPA) has recognized that biological and chemical toxins are a real and growing threat to troops, civilians, and the ecosystem. The Explosives Components Facility at Sandia National Laboratories (SNL) has been working with the University of Montana, the Southwest Research Institute, and other agencies to evaluate the feasibility of directing honeybees to specific targets and of using them for environmental sampling of biological and chemical ''agents of harm''. Recent work has focused on finding and locating buried landmines and unexploded ordnance (UXO). Tests have demonstrated that honeybees can be trained to efficiently and accurately locate explosive signatures in the environment. However, it is difficult to visually track the bees and determine precisely where the targets are located. Video equipment is not practical due to its limited resolution and range. In addition, it is often unsafe to install such equipment in a field. A technology is needed to provide investigators with the standoff capability to track bees and accurately map the location of the suspected targets. This report documents Light Detection and Ranging (LIDAR) tests that were performed by SNL. These tests have shown that a LIDAR system can be used to track honeybees. The LIDAR system can provide both the range and coordinates of the target so that the location of buried munitions can be accurately mapped for subsequent removal.

More Details

SIERRA Framework Version 3: Transfer Services Design and Use

Stewart, James R.

This paper presents a description of the SIERRA Framework Version 3 parallel transfer operators. The high-level design including object interrelationships, as well as requirements for their use, is discussed. Transfer operators are used for moving field data from one computational mesh to another. The need for this service spans many different applications. The most common application is to enable loose coupling of multiple physics modules, such as for the coupling of a quasi-statics analysis with a thermal analysis. The SIERRA transfer operators support the transfer of nodal and element fields between meshes of different, arbitrary parallel decompositions. Also supplied are ''copy'' transfer operators for efficient transfer of fields between identical meshes. A ''copy'' transfer operator is also implemented for constraint objects. Each of these transfer operators is described. Also, two different parallel algorithms are presented for handling the geometric misalignment between different parallel-distributed meshes.
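The basic operation a mesh-to-mesh transfer operator performs can be illustrated with a serial, one-dimensional sketch: locate each destination node in a source element and interpolate the nodal field. This is only the interpolation kernel; the SIERRA operators additionally handle three-dimensional search, constraint objects, and the parallel rendezvous between arbitrary mesh decompositions, none of which is shown here.

```python
import bisect

def transfer_nodal(src_x, src_f, dst_x):
    """Transfer a nodal field between non-matching 1D meshes by linear
    interpolation within the source elements.

    src_x: sorted source node coordinates, src_f: nodal field values at
    those nodes, dst_x: destination node coordinates.  Destination nodes
    outside the source mesh are clamped to the nearest end element.
    """
    out = []
    for x in dst_x:
        k = bisect.bisect_right(src_x, x)       # first source node to the right
        k = min(max(k, 1), len(src_x) - 1)      # clamp to a valid element [k-1, k]
        x0, x1 = src_x[k - 1], src_x[k]
        w = (x - x0) / (x1 - x0)                # local element coordinate
        out.append((1.0 - w) * src_f[k - 1] + w * src_f[k])
    return out
```

On a linear field the transfer is exact, which is the usual sanity check ("patch test") for a nodal interpolation operator.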

More Details

Implementation of a High Throughput Variable Decimation Pane Filter Using the Xilinx System Generator

Dubbert, Dale F.

In a Synthetic Aperture Radar (SAR) system, the purpose of the receiver is to process incoming radar signals in order to obtain target information and ultimately construct an image of the target area. Incoming raw signals are usually in the microwave frequency range and are typically processed with analog circuitry, requiring hardware designed specifically for the desired signal processing operations. A more flexible approach is to process the signals in the digital domain. Recent advances in analog-to-digital converter (ADC) and Field Programmable Gate Array (FPGA) technology allow direct digital processing of wideband intermediate frequency (IF) signals. Modern ADCs can achieve sampling rates in excess of 1 GS/s, and modern FPGAs can contain millions of logic gates operating at frequencies over 100 MHz. The combination of these technologies is necessary to implement a digital radar receiver capable of high-speed, sophisticated, and scalable DSP designs that are not possible with analog systems. Additionally, FPGA technology allows designs to be modified as the design parameters change without the need for redesigning circuit boards, potentially saving both time and money. For typical radar receivers, there is a need for operation at multiple ranges, which requires filters with multiple decimation rates, i.e., multiple bandwidths. In previous radar receivers, variable decimation was implemented by switching between SAW filters to achieve an acceptable filter configuration. While this method works, it is rather ''brute force'' because it duplicates a large amount of hardware and requires a new filter to be added for each IF bandwidth. By implementing the filter digitally in FPGAs, a larger number of decimation values (and consequently a larger number of bandwidths) can be implemented with no need for extra components. High performance, wide bandwidth radar systems also place high demands on the DSP throughput of a given digital receiver.
In such applications, the maximum clock frequency of a given FPGA is not adequate to support the required data throughput. This problem can be overcome by employing a parallel implementation of the pane filter. The parallel pane filter uses a polyphase parallelization technique to achieve an aggregate data rate which is twice that of the FPGA clock frequency. This is achieved at the expense of roughly doubling the FPGA resource usage.
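The two-parallel polyphase idea can be sketched behaviorally: split the taps and the input into even/odd phases so each loop pass (one "clock") produces two output samples. This is a Python sketch of the arithmetic only, not the Xilinx System Generator pane-filter design; the reference `fir` function is included so the equivalence can be checked.

```python
def fir(x, h):
    """Reference direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def fir_2parallel(x, h):
    """Two-parallel polyphase FIR: taps and input are each split into
    even/odd phases, so every loop pass yields two output samples and the
    per-branch logic runs at half the aggregate sample rate (at roughly
    double the resource cost, as in the report)."""
    he, ho = h[0::2], h[1::2]   # even/odd tap phases
    xe, xo = x[0::2], x[1::2]   # even/odd input sample streams
    y = []
    for m in range((len(x) + 1) // 2):
        # y[2m]   = sum_j he[j]*xe[m-j] + sum_j ho[j]*xo[m-1-j]
        y.append(sum(he[j] * xe[m - j] for j in range(len(he)) if 0 <= m - j < len(xe))
                 + sum(ho[j] * xo[m - 1 - j] for j in range(len(ho)) if 0 <= m - 1 - j < len(xo)))
        # y[2m+1] = sum_j he[j]*xo[m-j] + sum_j ho[j]*xe[m-j]
        y.append(sum(he[j] * xo[m - j] for j in range(len(he)) if 0 <= m - j < len(xo))
                 + sum(ho[j] * xe[m - j] for j in range(len(ho)) if 0 <= m - j < len(xe)))
    return y[:len(x)]
```

The polyphase output is sample-for-sample identical to the direct filter; only the schedule changes, which is why the hardware version doubles the resource usage rather than the clock frequency.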

More Details

Accident Conditions versus Regulatory Test for NRC-Approved UF6 Packages

Mills, G.S.; Ammerman, Douglas J.; Lopez Mestre, Carlos L.

The Nuclear Regulatory Commission (NRC) approves new package designs for shipping fissile quantities of UF{sub 6}. Currently there are three packages approved by the NRC for domestic shipments of fissile quantities of UF{sub 6}: NCI-21PF-1; UX-30; and ESP30X. For approval by the NRC, packages must be subjected to a sequence of physical tests to simulate transportation accident conditions as described in 10 CFR Part 71. The primary objective of this project was to relate the conditions experienced by these packages in the tests described in 10 CFR Part 71 to conditions potentially encountered in actual accidents and to estimate the probabilities of such accidents. Comparison of the effects of actual accident conditions to 10 CFR Part 71 tests was achieved by means of computer modeling of structural effects on the packages due to impacts with actual surfaces, and thermal effects resulting from test and other fire scenarios. In addition, the likelihood of encountering bodies of water or sufficient rainfall to cause complete or partial immersion during transport over representative truck routes was assessed. Modeled effects, and their associated probabilities, were combined with existing event-tree data, plus accident rates and other characteristics gathered from representative routes, to derive generalized probabilities of encountering accident conditions comparable to the 10 CFR Part 71 conditions. This analysis suggests that the regulatory conditions are unlikely to be exceeded in real accidents, i.e. the likelihood of UF{sub 6} being dispersed as a result of accident impact or fire is small. Moreover, given that an accident has occurred, exposure to water by fire-fighting, heavy rain or submersion in a body of water is even less probable by factors ranging from 0.5 to 8E-6.

More Details

Nonlinear programming strategies for source detection of municipal water networks

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.

Increasing concerns for the security of the national infrastructure have led to a growing need for improved management and control of municipal water networks. To deal with this issue, optimization offers a general and extremely effective method to identify (possibly harmful) disturbances, assess the current state of the network, and determine operating decisions that meet network requirements and lead to optimal performance. This paper details an optimization strategy for the identification of source disturbances in the network. Here we consider the source inversion problem modeled as a nonlinear programming problem. Dynamic behavior of municipal water networks is simulated using EPANET. This approach allows for a widely accepted, general purpose user interface. For the source inversion problem, flows and concentrations of the network will be reconciled and unknown sources will be determined at network nodes. Moreover, intrusive optimization and sensitivity analysis techniques are identified to assess the influence of various parameters and models in the network in a computationally efficient manner. A number of numerical comparisons are made to demonstrate the effectiveness of various optimization approaches.
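The shape of the source inversion problem can be shown with a toy: given sensor concentrations produced by unknown nonnegative nodal sources under a linear response model, minimize the least-squares misfit. The linear model `c = A s`, the projected-gradient solver, and all numbers below are stand-ins for illustration; the paper's formulation couples the NLP to EPANET network simulations, which is not reproduced here.

```python
def invert_source(A, c_obs, iters=5000, lr=0.01):
    """Recover unknown nodal source strengths s >= 0 from observed sensor
    concentrations c_obs, assuming a toy linear response model c = A @ s.

    Projected gradient descent on the misfit 0.5 * ||A s - c_obs||^2,
    with a max(0, .) projection enforcing nonnegative source strengths.
    A: m x n response matrix (list of rows), c_obs: m observations.
    """
    m, n = len(A), len(A[0])
    s = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * s[j] for j in range(n)) - c_obs[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        s = [max(0.0, s[j] - lr * g[j]) for j in range(n)]             # project
    return s
```

In a well-posed case (as many independent sensors as candidate sources) the iteration recovers the true source vector; ill-posed cases are where the sensitivity analysis techniques mentioned in the abstract earn their keep.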

More Details

Dynamic self-assembly of hierarchical software structures/systems

Osbourn, Gordon C.; Bouchard, Ann M.

We present initial results on achieving synthesis of complex software systems via a biophysics-emulating, dynamic self-assembly scheme. This approach offers novel constructs for constructing large hierarchical software systems and reusing parts of them. Sets of software building blocks actively participate in the construction and subsequent modification of the larger-scale programs of which they are a part. The building blocks interact through a software analog of selective protein-protein bonding. Self-assembly generates hierarchical modules (including both data and executables); creates software execution pathways; and concurrently executes code via the formation and release of activity triggering bonds. Hierarchical structuring is enabled through encapsulants that isolate populations of building block binding sites. The encapsulated populations act as larger-scale building blocks for the next hierarchy level. Encapsulant populations are dynamic, as their contents can move in and out. Such movement changes the populations of interacting sites and also modifies the software execution. ''External overrides'', analogous to protein phosphorylation, temporarily switch off undesired subsets of behaviors (code execution, data access/modification) of other structures. This provides a novel abstraction mechanism for code reuse. We present an implemented example of dynamic self-assembly and present several alternative strategies for specifying goals and guiding the self-assembly process.

More Details

Dynamic probe of dust wakefield interactions using constrained collisions

Proposed for publication in Physical Review E.

Hebner, Gregory A.; Riley, Merle E.

The magnitude and the structure of the ion-wakefield potential below a negatively charged dust particle levitated in the plasma-sheath region have been determined. Attractive and repulsive components of the interaction force were extracted from a trajectory analysis of low-energy dust collisions in a well-defined electrostatic potential, which constrained the dynamics of the collisions to be one dimensional. The peak attraction was on the order of 100 fN. The structure of the ion-wakefield-induced attractive potential was significantly different from a screened-Coulomb repulsive potential.

More Details

Materials for freeform fabrication of GHz tunable dielectric photonic crystals

Proposed for publication in the Materials Research Society Conference Proceedings held June 3, 2003.

Clem, Paul G.; Niehaus, Michael K.; Cesarano, Joseph C.; Lin, Shawn-Yu L.

Photonic crystals are of interest for GHz transmission applications, including rapid switching, GHz filters, and phased-array technology. 3D fabrication by Robocasting enables moldless printing of high solid loading slurries into structures such as the ''woodpile'' structures used to fabricate dielectric photonic band gap crystals. In this work, tunable dielectric materials were developed and printed into woodpile structures via solid freeform fabrication (SFF) toward demonstration of tunable photonic crystals. Barium strontium titanate ceramics possess interesting electrical properties including high permittivity, low loss, and high tunability. This paper discusses the processing route and dielectric characterization of (Ba{sub x}Sr{sub 1-x}TiO{sub 3}):MgO ceramic composites, toward fabrication of tunable dielectric photonic band gap crystals.

More Details

Solid-state lighting: lamp targets and implications for the semiconductor chip

Tsao, Jeffrey Y.

Once again GaAs MANTECH (with III-Vs Review acting as media sponsor) promises to deliver high quality papers covering all aspects of compound semiconductor manufacturing, with speakers from leading-edge equipment, epiwafer, and device suppliers. Since its launch in 1986, GaAs MANTECH has consistently been one of the highlight events of the conference calendar. Coverage includes all compound-based semiconductors, not just GaAs. With an excellent technical program comprising almost 80 papers and expanded workshop sessions, the 2003 event should prove the best ever. As in previous years, an Interactive Forum and Ugly Picture Contest will be included. A major attraction will be the associated exhibition, with more than 70 suppliers expected to participate.

More Details

Examining the effects of variability in short time scale demands on solute transport

Mckenna, Sean A.; Tidwell, Vincent C.

Variations in water use at short time scales, seconds to minutes, produce variation in transport of solutes through a water supply network. However, the degree to which short term variations in demand influence the solute concentrations at different locations in the network is poorly understood. Here we examine the effect of variability in demand on advective transport of a conservative solute (e.g. chloride) through a water supply network by defining the demand at each node in the model as a stochastic process. The stochastic demands are generated using a Poisson rectangular pulse (PRP) model for the case of a dead-end water line serving 20 homes represented as a single node. The simple dead-end network model is used to examine the variation in Reynolds number, the proportion of time that there is no flow (i.e., stagnant conditions in the pipe) and the travel time defined as the time for cumulative demand to equal the volume of water in 1000 feet of pipe. Changes in these performance measures are examined as the fine scale demand functions are aggregated over larger and larger time scales. Results are compared to previously developed analytical expressions for the first and second moments of these three performance measures. A new approach to predict the reduction in variance of the performance measures based on perturbation theory is presented and compared to the results of the numerical simulations. The distribution of travel time is relatively consistent across time scales until the time step approaches that of the travel time. However, the proportion of stagnant flow periods decreases rapidly as the simulation time step increases. Both sets of analytical expressions are capable of providing adequate, first-order predictions of the simulation results.
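A Poisson rectangular pulse demand series is easy to generate: pulses arrive as a Poisson process, each with a random duration and intensity, and the nodal demand is the sum of the pulses active in each interval. The exponential duration/intensity distributions and all parameter values below are assumptions for illustration, not the calibrated residential-demand statistics of the study.

```python
import random

def prp_demand(t_end, rate, mean_dur, mean_intensity, dt=1.0, seed=1):
    """Poisson rectangular pulse (PRP) demand series for one node.

    Pulse arrivals follow a Poisson process with the given rate
    (exponential inter-arrival times); each pulse gets an exponentially
    distributed duration and intensity (assumed forms).  Returns total
    demand in each dt-wide interval; a pulse is counted in every interval
    it overlaps, a deliberate simplification at the interval edges.
    """
    rng = random.Random(seed)
    n = int(t_end / dt)
    q = [0.0] * n
    t = rng.expovariate(rate)
    while t < t_end:
        dur = rng.expovariate(1.0 / mean_dur)
        inten = rng.expovariate(1.0 / mean_intensity)
        i0, i1 = int(t / dt), min(n, int((t + dur) / dt) + 1)
        for i in range(i0, i1):
            q[i] += inten
        t += rng.expovariate(rate)
    return q
```

From such a series the abstract's performance measures follow directly: the stagnant-flow proportion is the fraction of zero entries, and the travel time is the first time at which cumulative demand equals the volume of 1000 feet of pipe. Aggregating `q` over coarser time steps smooths out the zero intervals, which is the effect the study quantifies.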

More Details

Review of low-flow bladder pump and high-volume air piston pump groundwater sampling systems at Sandia National Laboratories, New Mexico

Collins, Sue S.; Bailey, Gary A.; Jackson, Timmie O.

Since 1996, Sandia National Laboratories, New Mexico (SNL/NM) has run both a portable high-volume air-piston pump system and a dedicated, low-flow bladder pump system to collect groundwater samples. The groundwater contaminants of concern at SNL/NM are nitrate and the volatile organic compounds trichloroethylene (TCE) and tetrachloroethene (PCE). Regulatory acceptance is more common for the high-volume air piston pump system, especially for programs like SNL/NM's, which are regulated under the Resource Conservation and Recovery Act (RCRA). This paper describes logistical and analytical results of the groundwater sampling systems used at SNL/NM. With two modifications to the off-the-shelf low-flow bladder pump, SNL/NM consistently operates the dedicated low-flow system at depths greater than 450 feet below ground surface. As such, the low-flow sampling system requires fewer personnel, less time and materials, and generates less purge and decontamination water than does the high-volume system. However, the bladder pump cannot work in wells with less than 4 feet of water. A review of turbidity and laboratory analytical results for TCE, PCE, and chromium (Cr) from six wells highlights the effects, or lack of effects, the sampling systems have on groundwater samples. In the PVC wells, turbidity typically remained < 5 nephelometric turbidity units (NTU) regardless of the sampling system. In the wells with a stainless steel screen, turbidity typically remained < 5 NTU only with the low-flow system. When the high-volume system was used, the turbidity and Cr concentration typically increased an order of magnitude. TCE concentrations at two wells did not appear to be sensitive to the sampling method used. However, PCE and TCE concentrations dropped an order of magnitude when the high-volume system was used at two other wells. 
This paper recommends that SNL/NM collaborate with other facilities with similar groundwater depths, continue to pursue regulatory approval for using the dedicated low-flow system, and review data for sampling-system effects on nitrate concentrations.

More Details

Dynamics of a complex quantum magnet

Proposed for publication in Physical Review Letters.

Landry, James W.

We have computed the low energy quantum states and low frequency dynamical susceptibility of complex quantum spin systems in the limit of strong interactions, obtaining exact results for system sizes enormously larger than accessible previously. The ground state is a complex superposition of a substantial fraction of all the classical ground states, and yet the dynamical susceptibility exhibits sharp resonances reminiscent of the behavior of single spins. These results show that strongly interacting quantum systems can organize to generate coherent excitations and shed light on recent experiments demonstrating that coherent excitations are present in a disordered spin liquid. The dependence of the energy spectra on system size differs qualitatively from that of the energy spectra of random undirected bipartite graphs with similar statistics, implying that strong interactions are giving rise to these unusual spectral properties.

More Details

Effect of non-exponential and multi-exponential decay behavior on the performance of the direct exponential curve resolution algorithm (DECRA) in NMR investigations

Journal of Chemometrics

Alam, Todd M.; Alam, Mary K.

The effect of non-exponential and multi-exponential decay or relaxation behavior on the performance of the direct exponential curve resolution algorithm (DECRA) is investigated through a series of numerical simulations. Three different combinations of decay or relaxation behavior were investigated through DECRA analysis of simulated pulse gradient spin echo (PGSE) NMR diffusion spectra that contained the combination of two individual components. The diffusion decay behavior of one component was described by a single-exponential decay, while the second component was described by either (1) a multi-exponential decay, (2) a decay behavior described by the empirical Kohlrausch-Williams-Watts (KWW) relation or (3) a multi-exponential decay behavior correlated with variations in the NMR spectral line shape. The magnitudes and types of errors produced during the DECRA analysis of spectral data with deviations from a pure single-exponential decay behavior are presented. It is demonstrated that the deviation from single-exponential decay impacts the resulting calculated line shapes, the calculated relative concentrations and the quantitative estimation of the decay or relaxation time constants of both components present in the NMR spectra. Copyright © 2004 John Wiley & Sons, Ltd.
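The three decay forms the simulations combine have simple closed expressions, which can be written down directly; the parameter values in the example are arbitrary, chosen only to show the qualitative differences.

```python
import math

def single_exp(t, tau):
    """Single-exponential decay: S(t) = exp(-t / tau)."""
    return math.exp(-t / tau)

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts stretched exponential:
    S(t) = exp(-(t / tau) ** beta).  beta = 1 recovers single_exp;
    beta < 1 gives the slower-than-exponential tail that perturbs
    DECRA's single-exponential assumption."""
    return math.exp(-((t / tau) ** beta))

def biexp(t, a, tau1, tau2):
    """Two-component (multi-exponential) decay with fraction a in the
    fast component: S(t) = a*exp(-t/tau1) + (1-a)*exp(-t/tau2)."""
    return a * math.exp(-t / tau1) + (1.0 - a) * math.exp(-t / tau2)
```

Both the KWW form (for beta < 1) and the biexponential form lie above the matched single exponential at long times; it is exactly this excess tail that DECRA misattributes, biasing the resolved line shapes, relative concentrations, and time constants as the abstract describes.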

More Details

Foam structure: from soap froth to solid foams

Proposed for publication in (MRS) Materials Research Society.

Kraynik, Andrew M.

The properties of solid foams depend on their structure, which usually evolves in the fluid state as gas bubbles expand to form polyhedral cells. The characteristic feature of foam structure, randomly packed cells of different sizes and shapes, is examined in this article by considering soap froth. This material can be modeled as a network of minimal surfaces that divide space into polyhedral cells. The cell-level geometry of random soap froth is calculated with Brakke's Surface Evolver software. The distribution of cell volumes ranges from monodisperse to highly polydisperse. Topological and geometric properties, such as surface area and edge length, of the entire foam and individual cells, are discussed. The shape of struts in solid foams is related to Plateau borders in liquid foams and calculated for different volume fractions of material. The models of soap froth are used as templates to produce finite element models of open-cell foams. Three-dimensional images of open-cell foams obtained with x-ray microtomography allow virtual reconstruction of skeletal structures that compare well with the Surface Evolver simulations of soap-froth geometry.

More Details

Changing the diffusion mechanism of Ge-Si dimers on Si(001) using an electric field

Physical Review Letters

Swartzentruber, Brian S.; Sanders, Lani M.; Stumpf, Roland R.; Mattsson, Thomas M.

We change the diffusion mechanism of adsorbed Ge-Si dimers on Si(001) using the electric field of a scanning tunneling microscope tip. By comparing the measured field dependence with first-principles calculations we conclude that, in negative field, i.e., when electrons are attracted towards the vacuum, the dimer diffuses as a unit, rotating as it translates, whereas, in positive field the dimer bond is substantially stretched at the transition state as it slides along the substrate. Furthermore, the active mechanism in positive fields facilitates intermixing of Ge in the Si lattice, whereas intermixing is suppressed in negative fields. © 2003 The American Physical Society.

More Details

Fair share on high performance computing systems: What does fair really mean?

Proceedings - CCGrid 2003: 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid

Kleban, S.D.; Clearwater, Scott H.

We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed. © 2003 IEEE.
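Two quantities in the abstract are easy to make concrete: the expansion factor of a job, and the decayed-usage bookkeeping at the heart of classic Fair Share scheduling. The half-life decay form and all numbers below are a generic sketch of the traditional uni-processor scheme, not the ASCI Blue Mountain implementation.

```python
import math

def expansion_factor(wait, run):
    """Expansion (slowdown) factor: turnaround time relative to pure
    run time, (wait + run) / run.  1.0 means the job never waited."""
    return (wait + run) / run

def decayed_usage(samples, half_life):
    """Exponentially decayed CPU usage: older usage counts less.
    samples: iterable of (age, usage) pairs, age in the same time unit
    as half_life."""
    lam = math.log(2.0) / half_life
    return sum(u * math.exp(-lam * age) for age, u in samples)

def fair_share_priority(shares, samples, half_life):
    """Generic Fair Share priority: higher allocated share and lower
    recent (decayed) usage give higher priority."""
    return shares / (1.0 + decayed_usage(samples, half_life))
```

On a uni-processor, deprioritizing heavy recent users this way redistributes CPU cycles smoothly; the paper's point is that on a cluster, where a job's nodes are not fungible in space or time, adjusting priorities this way barely moves metrics like the expansion factor.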

More Details

A level set approach to 3D mold filling of Newtonian fluids

Proceedings of the ASME/JSME Joint Fluids Engineering Conference

Baer, Thomas A.; Noble, David R.; Rao, Rekha R.; Grillet, Anne M.

Filling operations, in which a viscous fluid displaces a gas in a complex geometry, occur with surprising frequency in many manufacturing processes. Difficulties in generating accurate models of these processes involve accurately capturing the interfacial boundary as it undergoes large motions and deformations, preventing dispersion and mass-loss during the computation, and robustly accounting for the effects of surface tension and wetting phenomena. This paper presents a numerical interface-capturing algorithm using level set theory and finite element approximation. Important aspects of this work are addressing issues of mass-conservation and the presence of wetting effects. We have applied our methodology to a three-dimensional model of a complicated filling problem. The simulated results are compared to experimental flow visualization data taken for filling of UCON oil in the identical geometry. Comparison of simulation and experiment indicates that the simulation conserved mass adequately and the simulated interface shape was in approximate agreement with experiment. Differences seen were largely attributed to inaccuracies in the wetting line model.

More Details

In Situ Monitoring of Vapor Phase TCE Using a Chemiresistor Microchemical Sensor

Ground Water Monitoring and Remediation

Ho, Clifford K.; Lohrstorfer, Charles F.

A chemiresistor microchemical sensor has been developed to detect and monitor volatile organic compounds in unsaturated and saturated subsurface environments. A controlled study was conducted at the HAZMAT Spill Center at the Nevada Test Site, where the sensor was tested under a range of temperature, moisture, and trichloroethylene (TCE) concentrations. The sensor responded rapidly when exposed to TCE placed in sand, and it also responded to decreases in TCE vapor concentration when clean air was vented through the system. Variations in temperature and water vapor concentration impacted baseline chemiresistor signals, but at high TCE concentrations the sensor response was dominated by the TCE exposure. Test results showed that the detection limit of the chemiresistor to TCE vapor in the presence of fluctuating environmental variables (i.e., temperature and water vapor concentration) was on the order of 1000 parts per million by volume, which is about an order of magnitude higher than values obtained in controlled laboratory environments. Automated temperature control and preconcentration are recommended to improve the stability and sensitivity of the chemiresistor sensor.

More Details

Mechanistic modeling of fingering, nonmonotonicity, fragmentation, and pulsation within gravity/buoyant destabilized two-phase/unsaturated flow

Water Resources Research

Glass, Robert J.; Yarrington, Lane Y.

Fingering, nonmonotonicity, fragmentation, and pulsation within gravity/buoyant destabilized two-phase/unsaturated flow systems have been widely observed, with examples in homogeneous to heterogeneous porous media, in single fractures to fracture networks, and for both wetting and nonwetting invasion. To model these phenomena, we consider a mechanistic approach based on forms of modified invasion percolation (MIP) that include gravity, the influence of the local interfacial curvature along the phase-phase interface, and the simultaneous invasion and reinvasion of both wetting and nonwetting fluids. We present example simulations and compare them to experimental data for three very different situations: (1) downward gravity-driven fingering of water into a dry, homogeneous, water-wettable, porous medium; (2) upward buoyancy-driven migration of gas within a water saturated, heterogeneous, water-wettable, porous medium; and (3) downward gravity-driven fingering of water into a dry, water-wettable, rough-walled fracture.
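The mechanistic MIP idea can be shown with a toy lattice model: at each step, invade the perimeter site with the lowest invasion threshold, where the threshold combines a random capillary entry value with a destabilizing gravity term. Everything below is an illustrative assumption; the sketch omits the interfacial-curvature and wetting/nonwetting reinvasion physics of the full model.

```python
import heapq
import random

def mip_invade(rows, cols, grav, n_steps, seed=0):
    """Toy modified invasion percolation with gravity on a 2D lattice.

    Each site gets a random capillary entry value in [0, 1); its invasion
    threshold is cap - grav * depth, so gravity destabilizes downward
    invasion.  Invasion starts from the whole top row and always takes
    the lowest-threshold perimeter site.  Returns the set of invaded
    (row, col) sites; with large grav the cluster forms a narrow,
    deep finger rather than a compact front.
    """
    rng = random.Random(seed)
    cap = [[rng.random() for _ in range(cols)] for _ in range(rows)]
    invaded, frontier = set(), []
    for j in range(cols):
        heapq.heappush(frontier, (cap[0][j], (0, j)))   # depth 0: no gravity term
    for _ in range(n_steps):
        while frontier and frontier[0][1] in invaded:   # drop stale heap entries
            heapq.heappop(frontier)
        if not frontier:
            break
        _, (i, j) = heapq.heappop(frontier)
        invaded.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in invaded:
                heapq.heappush(frontier, (cap[ni][nj] - grav * ni, (ni, nj)))
    return invaded
```

Setting `grav = 0` recovers ordinary invasion percolation (a compact, capillary-controlled cluster), so the single gravity parameter is what switches the toy between stable and fingered invasion.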

More Details

On the porous continuum-scale modeling of gravity-driven fingers in unsaturated materials: Numerical solution of a hypodiffusive governing equation that incorporates a hold-back-pile-up effect

Water Resources Research

Eliassi, Mehdi E.; Glass, Robert J.

We consider the use of a hypodiffusive governing equation (HDE) for the porous-continuum modeling of gravity-driven fingers (GDF) as occur in initially dry, highly nonlinear, and hysteretic porous media. In addition to the capillary and gravity terms within the traditional Richards equation, the HDE contains a hypodiffusive term that models an experimentally observed hold-back-pile-up (HBPU) effect and thus imparts nonmonotonicity at the wetting front. In its dimensionless form the HDE contains the dimensionless hypodiffusion number, NHD. As NHD increases, one-dimensional (1D) numerical solutions transition from monotonic to nonmonotonic. Considering the experimentally observed controls on GDF occurrence, as either the initial moisture content and applied flux increase or the material nonlinearity decreases, solutions undergo the required transition back to monotonic. Additional tests for horizontal imbibition and capillary rise show the HDE to yield the required monotonic response but display sharper fronts for NHD > 0. Finally, two-dimensional (2D) numerical solutions illustrate that in parameter space where the 1D HDE yields nonmonotonicity, in 2D it forms nonmonotonic GDF.

More Details
Results 88001–88100 of 96,771