This report summarizes experimental and test results from a two-year LDRD project entitled Real Time Error Correction Using Electromagnetic Bearing Spindles. This project was designed to explore various control schemes for levitating magnetic bearings, with the goals of obtaining high-precision positioning of the spindle and exceptionally high rotational speeds. As part of this work, several adaptive control schemes were devised, analyzed, and implemented on an experimental magnetic bearing system. Measured results, which indicated that precise positional control of the spindle was possible, agreed reasonably well with simulations. Testing also indicated that the magnetic bearing systems were capable of very high rotational speeds but were still not immune to traditional structural dynamic limitations caused by spindle flexibility effects.
This report describes the results of a study on stationary energy storage technologies for a range of applications that were categorized according to storage duration (discharge time): long or short. The study was funded by the U.S. Department of Energy through the Energy Storage Systems Program. A wide variety of storage technologies were analyzed according to performance capabilities, cost projections, and readiness to serve these many applications, and the advantages and disadvantages of each are presented.
The structural dynamics modeling of engineering structures must accommodate the energy dissipation due to microslip in mechanical joints. Given the nature of current hardware and software environments, this will require the development of constitutive models for joints that both adequately reproduce the important physics and lend themselves to efficient computational processes. The exploration of the properties of mechanical joints, whether through fine-resolution finite element modeling or through experiment, is itself an area of research, but some qualitative behavior appears to be established. The work presented here formulates a model built from idealized elements due to Iwan that appears capable of reproducing the important joint properties as they are now understood. Further, methods are developed for selecting parameters for that model by combining the results of experiments in the small- and large-load regimes. The significance of this work is that it presents a reduced-order model capable of reproducing the important qualitative properties of mechanical joints using only a small number of parameters.
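As a concrete illustration of the kind of reduced-order model involved, a discrete parallel-series Iwan model can be written as a population of Jenkins elements (a linear spring in series with a Coulomb slider) acting in parallel. The sketch below is a minimal stand-in, not a reproduction of the report's formulation; the power-law population of slider strengths and the normalization are assumptions for illustration.

```python
import numpy as np

class IwanJoint:
    """Minimal discrete parallel-series Iwan joint: n Jenkins elements."""

    def __init__(self, n=100, k=1.0, phi_max=1.0, chi=-0.5):
        self.k = k
        # Assumed power-law population of slider strengths: weak elements
        # slip first, producing microslip at small load.
        u = (np.arange(n) + 0.5) / n
        self.phi = phi_max * u ** (1.0 / (1.0 + chi))
        self.slider = np.zeros(n)  # current slider (slip) positions

    def force(self, x):
        """Joint force for imposed displacement x; updates slider states."""
        f = self.k * (x - self.slider)
        slipping = np.abs(f) > self.phi
        f[slipping] = np.sign(f[slipping]) * self.phi[slipping]
        # A slipping slider is dragged along so its element force stays
        # at the Coulomb limit.
        self.slider[slipping] = x - f[slipping] / self.k
        return f.mean()
```

Under small cyclic loads nearly all sliders remain stuck and the response is essentially linear; as load amplitude grows, progressively stronger sliders slip, producing the gradual softening and hysteretic energy dissipation characteristic of joint microslip, all governed by a handful of parameters (n, k, phi_max, chi).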
Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenario Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result--scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology and the demonstration software that implements it, and it provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies.
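The recursive solution idea can be illustrated with a toy object model; the events and probabilities below are invented for illustration and are unrelated to the OBEST demonstration software.

```python
# Depth-first enumeration of all scenarios with exact probabilities.
# branches() maps a partial scenario to its probabilistic next events;
# an empty list terminates the scenario.

def branches(scenario):
    # Hypothetical two-stage model: a pump starts or fails; if it
    # starts, a valve then opens or sticks.
    if not scenario:
        return [("pump_starts", 0.99), ("pump_fails", 0.01)]
    if scenario == ("pump_starts",):
        return [("valve_opens", 0.95), ("valve_sticks", 0.05)]
    return []  # terminal state

def enumerate_scenarios(scenario=(), p=1.0):
    """Yield every complete scenario together with its probability."""
    nxt = branches(scenario)
    if not nxt:
        yield scenario, p
        return
    for event, q in nxt:
        yield from enumerate_scenarios(scenario + (event,), p * q)

for s, p in enumerate_scenarios():
    print(f"{p:.4f}  {' -> '.join(s)}")
```

Because every branch is visited exactly once and its probability is carried along the recursion, a scenario with probability 10^-9 is reported just as exactly as one with probability 0.9, which is the source of the efficiency and rare-scenario advantages noted above.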
Two-phase flow and transport of reactants and products in the air cathode of proton exchange membrane (PEM) fuel cells are studied analytically and numerically. Single- and two-phase regimes of water distribution and transport are classified by a threshold current density corresponding to the first appearance of liquid water at the membrane/cathode interface. When the cell operates above the threshold current density, liquid water appears and a two-phase zone forms within the porous cathode. A two-phase, multicomponent mixture model in conjunction with a finite-volume-based computational fluid dynamics (CFD) technique is applied to simulate the cathode operation in this regime. The model is able to handle the situation in which a single-phase region co-exists with a two-phase zone in the air cathode. For the first time, the polarization curve as well as the water and oxygen concentration distributions encompassing both single- and two-phase regimes of the air cathode are presented. Capillary action is found to be the dominant mechanism for water transport inside the two-phase zone of the hydrophilic structure. The liquid water saturation within the cathode is predicted to reach 6.3% at 1.4 A cm^-2 for dry inlet air.
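For reference, capillary-dominated liquid transport in a hydrophilic porous medium is commonly written in the following generic form (standard textbook notation and constitutive choices, not necessarily those of the model above):

$$\mathbf{j}_l = -\rho_l D_c(s)\,\nabla s, \qquad D_c(s) = -\frac{K k_{rl}(s)}{\mu_l}\,\frac{dp_c}{ds}, \qquad p_c = \sigma \cos\theta_c \left(\frac{\varepsilon}{K}\right)^{1/2} J(s),$$

where s is the liquid saturation, K the permeability, k_rl the relative permeability, and J(s) the Leverett function. Because J(s) decreases with s in a hydrophilic medium, D_c > 0 and liquid water is driven from regions of high saturation toward drier regions, consistent with the capillary mechanism identified above.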
The effect of polymer architecture on macroscopic properties was investigated using self-consistent integral equation theory. For several types of polyolefin polymers, the results obtained using the self-consistent polymer reference interaction site model (PRISM) were compared with molecular dynamics (MD) simulations. The results from the two methods were then compared with experimental X-ray scattering data.
A wave-optical model that is coupled to a microscopic gain theory is used to investigate lateral mode behavior in group-III nitride quantum-well lasers. Beam filamentation due to self-focusing in the gain medium is found to limit fundamental-mode output to narrow stripe lasers or to operation close to lasing threshold. Differences between nitride and conventional near-infrared semiconductor lasers arise because of band structure differences, in particular, the presence of a strong quantum-confined Stark effect in the former. Increasing mirror reflectivities in plane-plane resonators to reduce lasing threshold current tends to exacerbate the filamentation problem. On the other hand, a negative-branch unstable resonator is found to mitigate filament effects, enabling fundamental-mode operation far above threshold in broad-area lasers.
Under ultrahigh vacuum conditions at 300 K, the applied electric field and/or resulting current from an STM tip creates nanoscale voids at the interface between an epitaxial, 7.0-angstrom-thick Al2O3 film and a Ni3Al(1 1 1) substrate. This phenomenon is independent of tip polarity. Constant-current (1 nA) images obtained at +0.1 V and +2.0 V bias voltage (sample positive) reveal that the voids are within the metal at the interface and, when small, are capped by the oxide film. Void size increases with time of exposure. The rate of void growth increases with applied bias/field and tunneling current, and increases significantly for field strengths >5 MV/cm, well below the dielectric breakdown threshold of 12±1 MV/cm. Slower rates of void growth are, however, observed at lower applied field strengths. Continued growth of voids, to approximately 30 angstroms deep and approximately 500 angstroms wide, leads to the eventual failure of the oxide overlayer. Density functional theory calculations suggest a reduction-oxidation mechanism: interfacial metal atoms are oxidized via transport into the oxide, while oxide-surface Al cations are reduced to admetal species that rapidly diffuse away. This process is found to be exothermic in model calculations, regardless of the details of the oxide film structure; thus, the barriers to void formation are kinetic rather than thermodynamic. We discuss our results in terms of mechanisms for the localized pitting corrosion of aluminum, as our results suggest that nanovoid formation requires only an electric field and current, which are ubiquitous in environmental conditions.
Parameters in the heat conduction equation are frequently modeled as temperature dependent. Thermal conductivity, volumetric heat capacity, convection coefficients, emissivity, and volumetric source terms are parameters that may depend on temperature. Many applications, such as parameter estimation, optimal experimental design, optimization, and uncertainty analysis, require the sensitivity of the temperature field to the parameters describing temperature-dependent properties. A general procedure to compute the sensitivity of the temperature field to model parameters for nonlinear heat conduction is studied. Parameters are modeled as arbitrary functions of temperature. Sensitivity equations are implemented in an unstructured-grid, element-based numerical solver. The objectives of this study are to describe the methodology for deriving sensitivity equations for temperature-dependent parameters and to present demonstration calculations. In addition to a verification problem, the design of an experiment to estimate temperature-dependent thermal properties is discussed.
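To indicate the structure of the methodology, consider the conduction equation with a parameter b appearing in the temperature-dependent conductivity (a representative form, with notation chosen here):

$$\rho c_p(T)\,\frac{\partial T}{\partial t} = \nabla \cdot \big(k(T;b)\,\nabla T\big) + q .$$

Defining the sensitivity $s \equiv \partial T/\partial b$ and differentiating the governing equation with respect to b gives the continuous sensitivity equation

$$\rho c_p\,\frac{\partial s}{\partial t} + \frac{\partial(\rho c_p)}{\partial T}\,s\,\frac{\partial T}{\partial t} = \nabla \cdot \left( k\,\nabla s + \left( \frac{\partial k}{\partial T}\,s + \frac{\partial k}{\partial b} \right) \nabla T \right).$$

The sensitivity equation is linear in s, with coefficients evaluated at the current temperature field, so it can be solved alongside the nonlinear temperature equation at modest additional cost.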
The goal of this project is to predict the drawdown that will be observed in specific piezometers placed in the MIU-2 borehole due to pumping at a single location in the MIU-3 borehole. These predictions will be in the form of distributions obtained through multiple forward runs of a well-test model. Specifically, two distributions will be created for each pumping location--piezometer location pair: (1) the distribution of the times to 1.0 meter of drawdown and (2) the distribution of the drawdown predicted after 12 days of pumping at discharge rates of 25, 50, 75 and 100 l/hr. Each of the steps in the pumping rate lasts for 3 days (259,200 seconds). This report is based on results that were presented at the Tono Geoscience Center on January 27th, 2000, approximately one week prior to the beginning of the interference tests. Hydraulic conductivity (K), specific storage (S{sub s}), and the length of the pathway (L{sub p}) are the input parameters to the well-test analysis model. Specific values of these input parameters are uncertain. This parameter uncertainty is accounted for in the modeling by drawing individual parameter values from distributions defined for each input parameter. For the initial set of runs, the fracture system is assumed to behave as an infinite, homogeneous, isotropic aquifer. These assumptions correspond to conceptualizing the aquifer as having Theis behavior and producing radial flow to the pumping well. A second conceptual model is also used in the drawdown calculations. This conceptual model considers that the fracture system may cause groundwater to move to the pumping well in a more linear (non-radial) manner. The effects of this conceptual model on the drawdown values are examined by casting the flow dimension (F{sub d}) of the fracture pathways as an uncertain variable between 1.0 (purely linear flow) and 2.0 (completely radial flow).
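A minimal sketch of the forward Monte Carlo procedure for the Theis (radial-flow) conceptual model with stepped pumping rates is shown below; the borehole spacing, pathway thickness, and parameter distributions are placeholders, not the values used in the study.

```python
import numpy as np
from scipy.special import exp1  # Theis well function W(u) = E1(u)

rng = np.random.default_rng(0)
r = 20.0   # pumping-to-piezometer distance, m (hypothetical)
b = 1.0    # pathway thickness used to form T and S, m (hypothetical)
# Rate steps: (start time in s, rate in l/hr); each step lasts 3 days.
steps = [(0.0, 25.0), (3 * 86400, 50.0), (6 * 86400, 75.0), (9 * 86400, 100.0)]

def drawdown(t, K, Ss):
    """Superposed Theis drawdown (m) at time t for the stepped-rate schedule."""
    T, S = K * b, Ss * b       # transmissivity, storativity
    s, q_prev = 0.0, 0.0
    for t0, q_lph in steps:
        if t <= t0:
            break
        dQ = (q_lph - q_prev) / 1000.0 / 3600.0   # incremental rate, m^3/s
        u = r**2 * S / (4.0 * T * (t - t0))
        s += dQ / (4.0 * np.pi * T) * exp1(u)
        q_prev = q_lph
    return s

# Assumed lognormal parameter distributions (placeholders).
K  = rng.lognormal(np.log(1e-7), 1.0, 5000)   # hydraulic conductivity, m/s
Ss = rng.lognormal(np.log(1e-6), 0.5, 5000)   # specific storage, 1/m
s12 = np.array([drawdown(12 * 86400, k, ss) for k, ss in zip(K, Ss)])
print(np.percentile(s12, [5, 50, 95]))  # distribution of 12-day drawdown
```

The distribution of times to 1.0 m of drawdown follows from evaluating the same function on a time grid for each parameter draw; the flow-dimension generalization replaces the Theis well function with a generalized radial flow solution.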
Geostatistical simulation is used to extrapolate data derived from site characterization activities at the MIU site into information describing the three-dimensional distribution of hydraulic conductivity at the site and the uncertainty in the estimates of hydraulic conductivity. This process is demonstrated for six different data sets representing incrementally increasing amounts of characterization data. Short horizontal ranges characterize the spatial variability of both the rock types (facies) and the hydraulic conductivity measurements. For each of the six data sets, 50 geostatistical realizations of the facies and 50 realizations of the hydraulic conductivity are combined to produce 50 final realizations of the hydraulic conductivity distribution. Analysis of these final realizations indicates that the mean hydraulic conductivity value increases with the addition of site characterization data. The average hydraulic conductivity as a function of elevation changes from a uniform profile to a profile showing relatively high hydraulic conductivity values near the top and bottom of the simulation domain. Three-dimensional uncertainty maps show the highest amount of uncertainty in the hydraulic conductivity distribution near the top and bottom of the model. These upper and lower areas of high uncertainty are interpreted to be due to the unconformity at the top of the granitic rocks and the Tsukiyoshi fault, respectively.
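The combination step can be sketched as follows; the grid dimensions, facies proportion, and log-conductivity populations below are invented stand-ins for the indicator (facies) and conductivity simulations used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
nreal, nz, ny, nx = 50, 40, 20, 20
final = np.empty((nreal, nz, ny, nx))          # final log10(K) realizations

for i in range(nreal):
    # Stand-ins for one facies realization and two conditioned K fields.
    facies   = rng.random((nz, ny, nx)) < 0.3        # fractured cells
    logk_bg  = rng.normal(-8.0, 0.5, (nz, ny, nx))   # background rock
    logk_fr  = rng.normal(-6.0, 0.8, (nz, ny, nx))   # fractured facies
    final[i] = np.where(facies, logk_fr, logk_bg)    # facies selects K population

mean_map   = final.mean(axis=0)        # expected log10(K), cell by cell
uncert_map = final.std(axis=0)         # 3-D uncertainty map across realizations
profile    = final.mean(axis=(0, 2, 3))  # average log10(K) versus elevation (layer)
```

The cell-wise spread across the 50 final realizations is what the three-dimensional uncertainty maps described above display, and the layer averages give the hydraulic conductivity-versus-elevation profile.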
In this report, computable global bounds on errors due to the use of various mathematical models of physical phenomena are derived. The procedure involves identifying a so-called fine model among a class of models of certain events and then using that model as a datum with respect to which coarser models can be compared. The error inherent in a coarse model, compared to the fine datum, can be bounded by residual functionals unambiguously defined by solutions of the coarse model. Whenever there exist hierarchical classes of models in which levels of sophistication of various coarse models can be defined, an adaptive modeling strategy can be implemented to control modeling error. In the present work, the class of models considered is that embodied in nonlinear continuum mechanics.
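Schematically (with notation introduced here for illustration), if the fine model is characterized by $\mathcal{B}(u; v) = F(v)$ for all admissible test functions $v$, and $u_0$ solves a coarse model, then the residual functional

$$\mathcal{R}(u_0; v) = F(v) - \mathcal{B}(u_0; v)$$

is computable from the coarse solution alone, and the modeling error in a quantity of interest $Q$ satisfies, to leading order,

$$Q(u) - Q(u_0) = \mathcal{R}(u_0;\, p) + \text{higher-order terms},$$

where $p$ is an adjoint (dual) solution associated with $Q$. Bounding such residual terms across a model hierarchy is what enables the adaptive modeling strategy described above.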
Ground mobile robots are much on the minds of defense planners at this time, being considered for missions ranging from logistics supply to reconnaissance and surveillance. While a very large amount of basic research devoted to mobile robots and their supporting component technologies has been funded over the last quarter century, little of this science base has been fully developed and deployed; notable exceptions are NASA's Mars rover and several terrestrial derivatives. The material in this paper was developed as a first exemplary step toward a more systematic approach to the R and D of ground mobile robots.
The overall objectives of this program are to (1) develop rapid, low-cost manufacturing processes that can improve the yield, throughput, and performance of silicon photovoltaic devices, (2) design and fabricate high-efficiency solar cells on promising low-cost materials, and (3) improve the fundamental understanding of advanced photovoltaic devices. This report describes several rapid and potentially low-cost technologies that were developed and applied toward the fabrication of high-efficiency silicon solar cells.
It is critically important, for the sake of credible computational predictions, that model-validation experiments be designed, conducted, and analyzed in ways that provide for measuring predictive capability. I first develop a conceptual framework for designing and conducting a suite of physical experiments and calculations (ranging from phenomenological to integral levels) and then analyzing the results, first to (statistically) measure predictive capability in the experimental situations and then to provide a basis for inferring the uncertainty of a computational-model prediction of system or component performance in an application environment or configuration that cannot or will not be tested. Several attendant issues are discussed in general, then illustrated via a simple linear model and a shock physics example. The primary messages I wish to convey are: (1) The only way to measure predictive capability is via suites of experiments and corresponding computations in testable environments and configurations; (2) Any measurement of predictive capability is a function of experimental data and hence is statistical in nature; (3) A critical inferential link is required to connect observed prediction errors in experimental contexts to bounds on prediction errors in untested applications. Such a connection may require extrapolating both the computational model and the observed extra-model variability (the prediction errors: nature minus model); (4) Model validation is not binary. Passing a validation test does not mean that the model can be used as a surrogate for nature; (5) Model-validation experiments should be designed and conducted in ways that permit a realistic estimate of prediction errors, or extra-model variability, in application environments; (6) Code uncertainty-propagation analyses do not (and cannot) characterize prediction error (nature vs. computational prediction); (7) There are trade-offs between model complexity and the ability to measure a computer model's predictive capability that need to be addressed in any particular application; and (8) Adequate quantification of predictive capability, even in greatly simplified situations, can require a substantial number of model-validation experiments.
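Point (2) can be made concrete with a trivial sketch: given a suite of validation experiments with paired computations, the observed prediction errors (nature minus model) are themselves data, and any summary of predictive capability inherits their sampling uncertainty. The numbers below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented data: paired (experiment, computation) results from a suite
# of five validation experiments.
measured  = np.array([10.2, 11.8,  9.7, 12.4, 10.9])
predicted = np.array([10.0, 11.5, 10.1, 12.0, 10.5])

err = measured - predicted        # extra-model variability: nature minus model
n, bias, s = len(err), err.mean(), err.std(ddof=1)

# A 95% confidence interval on the mean prediction error; per point (2),
# any such statement is statistical in nature.
half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"bias = {bias:.2f} +/- {half:.2f}, error scatter s = {s:.2f}")
```

Extending such an interval beyond the tested envelope is the inferential link of point (3); it requires extrapolation assumptions that the validation data alone cannot supply.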
A model is developed for the forces acting on a micrometer-size particle (dust) suspended within a plasma sheath. The significant forces acting on a single particle are gravity, neutral gas drag, the electric field, and the ion wind due to ion flow to the electrode. It is shown that an instability in the small-amplitude dust oscillation might exist if the conditions are appropriate. In such a case, the forcing term due to the ion wind exceeds the damping of the gas drag. The basic physical cause of the instability is that the ion wind force can be a decreasing function of the relative ion-particle velocity. However, it seems very unlikely that the appropriate conditions for instability are present in typical dusty plasmas.
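The instability condition can be stated schematically (notation chosen here). Linearizing the vertical equation of motion about equilibrium, with gas-drag coefficient $\beta$, effective restoring stiffness $k$, and ion-wind force $F_i$ depending on the relative ion-particle velocity $u = v_i - \dot z$,

$$m\,\delta\ddot{z} + \left( m\beta + \left.\frac{dF_i}{du}\right|_{u_0} \right) \delta\dot{z} + k\,\delta z = 0 .$$

Oscillations grow when the net damping coefficient is negative, i.e., when $dF_i/du|_{u_0} < -m\beta$: the ion-wind force must fall off with relative velocity steeply enough to overcome the gas drag, which is precisely the circumstance argued above to be unlikely in typical dusty plasmas.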
In this paper the development of a gridless method to solve compressible flow problems is discussed. The governing evolution equations for velocity divergence δ, vorticity ω, density ρ, and temperature T are obtained from the primitive-variable Navier-Stokes equations. Simplifications to the equations resulting from assumptions of ideal gas behavior, adiabatic flow, and/or constant viscosity coefficients are given. A general solution technique is outlined with some discussion of alternative approaches. Two radial-flow model problems are considered and solved using both a finite difference method and a compressible particle method. The first is an isentropic, inviscid, 1D spherical flow that initially has a Gaussian temperature distribution with zero velocity everywhere. The second is an isentropic, inviscid, 2D radial flow that has an initial vorticity distribution with constant temperature everywhere. Results from the finite difference and compressible particle calculations are compared in each case. A summary of the results obtained herein is given along with recommendations for continuing the work.
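For example, under the ideal-gas, adiabatic, inviscid simplifications mentioned, the density and temperature evolution equations take the compact forms (with D/Dt the material derivative)

$$\frac{D\rho}{Dt} = -\rho\,\delta, \qquad \frac{DT}{Dt} = -(\gamma - 1)\,T\,\delta, \qquad \delta \equiv \nabla \cdot \mathbf{u},$$

while δ and ω obey lengthier evolution equations derived from the momentum equation. In a gridless particle method these equations are integrated along particle paths rather than on a mesh.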
A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided with the bulk water conditions. This computer code, titled FILM-30, calculates heat transfer properties using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset of nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, an important advantage over current heat transfer codes. Consequently, FILM-30 is well suited to predicting heat transfer properties for applications that feature the high heat fluxes produced by one-sided heating.
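As an indication of the kind of correlation FILM-30 evaluates, the Sieder-Tate forced-convection correlation in its common textbook form is sketched below; the exact constants and property evaluations used in FILM-30 are not reproduced here.

```python
def sieder_tate_h(re, pr, k_fluid, d_h, mu_bulk, mu_wall):
    """Forced-convection heat transfer coefficient, W/(m^2 K), from the
    Sieder-Tate correlation Nu = 0.027 Re^0.8 Pr^(1/3) (mu_b/mu_w)^0.14,
    applicable to fully developed turbulent flow in channels."""
    nu = 0.027 * re**0.8 * pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14
    return nu * k_fluid / d_h
```

FILM-30 moves through its ladder of correlations as wall superheat increases, from forced convection through nucleate and transition boiling up to CHF, which is how it builds the heat flux versus temperature output described above.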
Smoke can cause interruptions and upsets in active electronics. Because nuclear power plants are replacing analog instrumentation and control systems with digital ones, qualification guidelines for the new systems are being reviewed for severe environments such as smoke and electromagnetic interference. Active digital systems, individual components, and active circuits have been exposed to smoke in a program sponsored by the U.S. Nuclear Regulatory Commission. The circuits and systems were all monitored during the smoke exposure to detect any immediate effects of the smoke. The major effects of smoke have been to increase leakage currents (through circuit bridging across contacts and leads) and to cause momentary upsets and failures in digital systems. This report summarizes two previous reports and presents new results from conformal coating, memory chip, and hard drive tests. The report describes practices for mitigation of smoke damage through digital system design, fire barriers, ventilation, fire suppressants, and post-fire procedures.
Thin films of polymethylmethacrylate (PMMA) doped with perylene provide selective, robust, and easily prepared optical sensor films for NO2 gas, with response times suitable for materials aging applications. The materials are readily formed as 200 nm thin spin-cast films on glass from chlorobenzene solution. The fluorescence emission of the films (λmax = 442 nm) is quenched upon exposure to NO2 gas through an irreversible reaction forming non-fluorescent nitroperylene. Infrared, UV-VIS, and fluorescence spectroscopies confirmed the presence of the nitro adduct in the films. In the other atmospheres examined, such as air and 1000 ppm concentrations of SO2, CO, Cl2, and NH3, the films exhibited no loss of fluorescence intensity over a period of days to weeks. Response curves were obtained for 1000, 100, and 10 ppm NO2 at room temperature, with equilibration times varying from hours to weeks. The response curves were fit using a numerical solution to the coupled diffusion and nonlinear chemical reaction problem, assuming the process is reaction-limited. The forward rate constant fitted to the experimental data was kf ≈ 0.06 (ppm min)^-1.
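In the reaction-limited regime referred to above, the coupled problem reduces to bimolecular kinetics for the in-film NO2 concentration C and the unreacted perylene concentration P (a schematic form, with notation chosen here):

$$\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - k_f\,C P, \qquad \frac{\partial P}{\partial t} = -k_f\,C P .$$

When diffusion is fast compared with reaction, C is nearly uniform at an equilibrium value $C_{eq}$ set by the gas-phase exposure, so the fluorescent perylene decays approximately as $P(t) \approx P_0\, e^{-k_f C_{eq} t}$ and the measured quenching curves constrain $k_f$ directly.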
Fluid mechanics research related to fire is reviewed with a focus on canonical flows, multiphysics coupling aspects, and experimental and numerical techniques. Fire is a low-speed, chemically reacting flow in which buoyancy plays an important role. Fire research has focused on two canonical flows, the reacting boundary layer and the reacting free plume. There is rich, multilateral, bidirectional coupling among fluid mechanics and scalar transport, combustion, and radiation. There is only a limited experimental fluid mechanics database for fire owing to measurement difficulties in the harsh environment and to the focus within the fire community on thermal/chemical consequences. Increasingly, computational fluid dynamics techniques are being used to provide engineering guidance on thermal/chemical consequences and to study fire phenomenology.