When embarking on an experimental program for discovery and understanding, it is prudent to use appropriate analysis tools to aid the discovery process. Because experimental measurements are limited in scope, analytical results can significantly complement the data once a reasonable validation process has occurred. In this manner the analytical results can help explain certain measurements, suggest other measurements to take, and point to possible modifications of the experimental apparatus. For these reasons it was decided to create a detailed nonlinear finite element model of the Sandia Microslip Experiment. This experiment was designed to investigate energy dissipation due to microslip in bolted joints and to identify the critical parameters involved. In an attempt to limit the microslip to a single interface, a complicated system of rollers and cables was devised to clamp the two slipping members together with a prescribed normal load without using a bolt. An oscillatory tangential load is supplied via a shaker. The finite element model includes the clamping device as well as the sequence of steps taken in setting up the experiment. The interface is modeled using Coulomb friction, requiring a modest validation procedure to estimate the coefficient of friction. Analysis results have indicated misalignment problems in the experimental procedure; identified transducer locations for more accurate measurements; predicted complex interface motions, including the potential for galling; and identified the regions where microslip occurs and the parts of the loading cycle during which it occurs, in addition to the energy dissipated per cycle. A number of these predictions have been experimentally corroborated to varying degrees and are presented in the paper along with the details of the finite element model.
In this paper the authors investigate the relationship between glassy and ferromagnetic phases in disordered Ising ferromagnets in the presence of transverse magnetic fields, Λ. Iterative mean-field simulations probe the free energy landscape and suggest the existence of a glass transition line in the (Λ, T) plane, where T is temperature, well within the ferromagnetic phase. New experimental field-cooled and zero-field-cooled data on LiHoₓY₁₋ₓF₄ provide support for our theoretical picture.
As a joint is loaded, the tangent stiffness of the joint reduces due to slip at interfaces. This stiffness reduction continues until the direction of the applied load is reversed or the total interface slips. Total interface slippage in joints is called macro-slip. For joints not undergoing macro-slip, when load reversal occurs the tangent stiffness immediately rebounds to its maximum value. This occurs due to stiction effects at the interface. Thus, for periodic loads, a softening and rebound-hardening cycle is produced that defines a hysteretic, energy-absorbing trajectory. For many jointed sub-structures, this hysteretic trajectory can be approximated using simple polynomial representations. This allows complex joint substructures to be represented using simple non-linear models. In this paper a simple one-dimensional model is discussed.
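The softening and rebound-hardening cycle described above can be sketched with a simple one-dimensional tangent-stiffness function. The power-law softening form and the parameter names below are illustrative assumptions, not the specific polynomial representation discussed in the paper:

```python
def tangent_stiffness(u, u_rev, k_max, c, n):
    """Tangent stiffness of a 1-D joint sketch: softens with slip accumulated
    since the last load reversal, and rebounds to k_max at reversal (stiction).

    u      - current displacement
    u_rev  - displacement at the most recent load reversal
    k_max  - stiffness immediately after reversal (maximum value)
    c, n   - assumed power-law softening coefficient and exponent
    """
    slip = abs(u - u_rev)
    # Stiffness decreases monotonically with slip; clamped at zero, which
    # corresponds to the onset of macro-slip in this toy model.
    return max(k_max * (1.0 - c * slip**n), 0.0)
```

At a load reversal `u_rev` is reset to the current displacement, so the stiffness jumps back to `k_max`; integrating force over a periodic cycle then traces out a hysteresis loop whose enclosed area is the energy dissipated per cycle.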
Unattended monitoring systems are being studied as a means of reducing both the cost and intrusiveness of present nuclear safeguards approaches. Such systems present the classic information-overload problem to anyone trying to interpret the resulting data, not only because of the sheer quantity of data but also because of the problems inherent in trying to correlate information from more than one source. As a consequence, analysis efforts to date have mostly concentrated on checking thresholds or diagnosing failures. Clearly, more sophisticated analysis techniques are required to enable automated verification of expected activity-level concepts, so that automated judgments can be made about safety, sensor system integrity, sensor data quality, diversion, and accountancy.
Multiple techniques have been developed to model the temporal evolution of infectious diseases. Some of these techniques have also been adapted to model the spatial evolution of the disease. This report examines the application of one such technique, the SEIR model, to the spatial and temporal evolution of disease. Applications of the SEIR model are reviewed briefly and an adaptation to the traditional SEIR model is presented. This adaptation allows for modeling the spatial evolution of the disease stages at the individual level. The transmission of the disease between individuals is modeled explicitly through the use of exposure likelihood functions rather than the global transmission rate applied to populations in the traditional implementation of the SEIR model. These adaptations allow for the consideration of spatially variable (heterogeneous) susceptibility and immunity within the population. The adaptations also allow for modeling both contagious and non-contagious diseases. The results of a number of numerical experiments to explore the effect of model parameters on the spread of an example disease are presented.
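The traditional SEIR model reviewed above can be sketched as a set of coupled rate equations for the Susceptible, Exposed, Infectious, and Recovered compartments. This minimal Python implementation uses forward Euler integration and illustrative parameter values; it shows the global-transmission-rate formulation applied to populations, not the individual-level, spatially explicit adaptation the report develops:

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Integrate the classic SEIR compartment model with forward Euler.

    beta  - transmission rate (global, applied to the whole population)
    sigma - 1 / latent period (E -> I rate)
    gamma - 1 / infectious period (I -> R rate)
    State variables are fractions of a closed population, so they sum to 1.
    """
    s, e, i, r = s0, e0, i0, r0
    history = [(s, e, i, r)]
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i * dt      # S -> E (mass-action mixing)
        new_infectious = sigma * e * dt      # E -> I
        new_recovered = gamma * i * dt       # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history
```

The report's adaptation replaces the `beta * s * i` mass-action term with per-individual exposure likelihood functions, which is what permits heterogeneous susceptibility and immunity and the modeling of non-contagious diseases.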
This Safety Analysis Report (SAR) is prepared in compliance with the requirements of DOE Order 5480.23, Nuclear Safety Analysis Reports, and has been written to the format and content guide of DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Safety Analysis Reports. The Hot Cell Facility (HCF) is a Hazard Category 2 nonreactor nuclear facility and is operated by Sandia National Laboratories for the Department of Energy. This SAR provides a description of the HCF and its operations and an assessment of the hazards and potential accidents which may occur in the facility. The potential consequences and likelihood of these accidents are analyzed and described. Using the process and criteria described in DOE-STD-3009-94, safety-related structures, systems, and components (SSCs) are identified, and the important safety functions of each SSC are described. Additionally, information describing the safety management programs at SNL is provided in ancillary chapters of the SAR.
The magnetic implosion of a high-Z quasi-spherical shell filled with DT fuel by the 20-MA Z accelerator can heat the fuel to near-ignition temperature. The attainable implosion velocity on Z, 13 cm/μs, is fast enough that thermal losses from the fuel to the shell are small. The high-Z shell traps radiation losses from the fuel, and the fuel reaches a high enough density to reabsorb the trapped radiation. The implosion is then nearly adiabatic. In this case the temperature of the fuel increases as the square of the convergence. The initial temperature of the fuel is set by the heating of an ion acoustic wave to be about 200 eV after a convergence of 4. To reach the ignition temperature of 5 keV an additional convergence of 5 is required. The implosion dynamics of the quasi-spherical implosion is modeled with the 2-D radiation hydrodynamic code LASNEX. LASNEX shows an 8-mm diameter quasi-spherical tungsten shell on Z driving 6 atmospheres of DT fuel nearly to ignition at 3.5 keV with a convergence of 20. The convergence is limited by mass flow along the surface of the quasi-spherical shell. With a convergence of 20 the final spot size is 400 μm in diameter.
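The near-adiabatic scaling stated above, with fuel temperature increasing as the square of the convergence, reproduces the quoted numbers:

```latex
T_{\mathrm{final}} = T_{\mathrm{initial}}\, C^{2}
\quad\Rightarrow\quad
T_{\mathrm{final}} = 200\ \mathrm{eV} \times 5^{2} = 5\ \mathrm{keV}
```

That is, starting from the 200 eV set by ion-acoustic heating, the additional convergence of 5 supplies the factor of 25 needed to reach the 5 keV ignition temperature.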
A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths.
As will be discussed in a companion report, the CKP model is capable of closely matching plate impact measurements for porous materials.
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation. It is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
This report presents research on public key, digital signature algorithms for cryptographic authentication in low-powered, low-computation environments. We assessed algorithms for suitability based on their signature size, and computation and storage requirements. We evaluated a variety of general purpose and special purpose computing platforms to address issues such as memory, voltage requirements, and special functionality for low-powered applications. In addition, we examined custom design platforms. We found that a custom design offers the most flexibility and can be optimized for specific algorithms. Furthermore, the entire platform can exist on a single Application Specific Integrated Circuit (ASIC) or can be integrated with commercially available components to produce the desired computing platform.
Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures, and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling.
The singular stress field is characterized by a single stress intensity factor Kₐ, and the applicable Kₐ calibration relationship has been determined for both fully bonded and unbonded inclusions. A lack of interfacial bonding has a profound effect on inclusion-tip stress fields. A large radial compressive stress is generated in front of the inclusion-tip when the inclusion is well bonded, whereas a large tensile hoop stress is generated when the inclusion is unbonded and frictionless sliding is allowed. Consequently, an epoxy disk containing an unbonded inclusion appears more likely to crack when cooled than a disk containing a fully bonded inclusion. A limited number of tests have been carried out to determine if encapsulant cracking can be induced by cooling a specimen fabricated by molding a square, steel insert within a thin, epoxy disk. Test results are in qualitative agreement with analysis. Cracks developed only in disks with mold-released inserts, and the tendency for cracking increased with inclusion size.
This paper looks at emerging technologies for converging voice and data networks and telephony transport over a data network using Internet Protocols. Considered are the benefits and drivers for this convergence. The paper describes these new technologies, how they are being used, and their application to Sandia.
The authors have successfully demonstrated an optical data interconnection from the output of a focal plane array to the downstream data acquisition electronics. The demonstrated approach included a continuous-wave laser beam directed at a multiple quantum well reflectance modulator connected to the focal plane array analog output. The output waveform from the optical interconnect was observed on an oscilloscope to be a replica of the input signal. They fed the output of the optical data link to the same data acquisition system used to characterize focal plane array performance. Measurements of the signal-to-noise ratio at the input and output of the optical interconnection showed that the signal-to-noise ratio was reduced by a factor of 10 or more. Analysis of the noise and link gain showed that the primary contributors to the additional noise were laser intensity noise and photodetector receiver noise. Subsequent efforts should be able to reduce these noise sources considerably and should result in substantially improved signal-to-noise performance. They also observed significant photocurrent generation in the reflectance modulator that imposes a current load on the focal plane array output amplifier. This current loading is an issue with the demonstrated approach because it tends to negate the power-saving feature of the reflectance modulator interconnection concept.
The HMX β-δ solid-solid phase transition, which occurs as HMX is heated near 170 °C, is linked to increased reactivity and sensitivity to initiation. Thermally damaged energetic materials (EMs) containing HMX therefore may present a safety concern. Information about the phase transition is vital to predictive safety models for HMX and HMX-containing EMs. We report work on monitoring the phase transition with real-time Raman spectroscopy aimed towards obtaining a better understanding of the physical properties of HMX through the phase transition. HMX samples were confined in a cell of minimal free volume in a displacement-controlled or load-controlled arrangement. The cell was heated and then cooled at controlled rates while real-time Raman spectroscopic measurements were performed. Raman spectroscopy provides a clear distinction between the phases of HMX because the vibrational transitions of the molecule change with the conformational changes associated with the phase transition. Temperature of phase transition versus load data are presented for both the heating and cooling cycles in the load-controlled apparatus, and general trends are discussed. A weak dependence of the phase-transition temperature on load was discovered during the heating cycle, with higher loads causing the phase transition to occur at a higher temperature. This was especially true in the temperature-of-completion data as opposed to the temperature-of-onset data. A stronger dependence on load was observed in the cooling cycle, with higher loads causing the reverse phase transition to occur at a higher temperature on cooling. Also, higher loads tended to cause the phase transition to occur over a longer period of time in the heating cycle and over a shorter period of time in the cooling cycle. All three of the pure HMX phases (α, β, and δ) were detected on cooling of the heated samples, either in pure form or as a mixture.
The goal of this LDRD research project was to provide a preliminary examination of the use of infrared spectroscopy as a tool to detect the changes in cell cultures upon activation by an infectious agent. Due to a late arrival of funding, only 5 months were available to transfer and set up equipment at UTTM, develop cell culture lines, test methods of in-situ activation, and collect kinetic data from activated cells. Using attenuated total reflectance (ATR) as a sampling method, live cell cultures were examined prior to and after activation. Spectroscopic data were collected from cells immediately after activation in situ and, in many cases, for five successive hours. Additional data were collected from cells activated within a test tube (pre-activated), in both transmission mode and ATR mode. Changes in the infrared data were apparent in the transmission data collected from the pre-activated cells as well as in some of the pre-activated ATR data. Changes in the in-situ activated spectral data were only occasionally present due to (1) the limited time cells were studied and (2) incomplete activation. Comparison of preliminary data to infrared bands reported in the literature suggests the primary changes seen are due to an increase in ribonucleic acid (RNA) production. This work will be continued as part of a three-year DARPA grant.
The Rapid Terrain Visualization Advanced Concept Technology Demonstration (RTV-ACTD) is designed to demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies. The primary sensor for this mission is an interferometric synthetic aperture radar (IFSAR) designed at Sandia National Laboratories. This paper will outline the design of the system and its performance, and show some recent flight test results. The RTV IFSAR will meet DTED level III and IV specifications by using a multiple-baseline design and high-accuracy differential and carrier-phase GPS navigation. It includes innovative near-real-time DEM production on-board the aircraft. The system is being flown on a deHavilland DHC-7 Army aircraft.
The Factsheets web application was conceived out of the requirement to create, update, publish, and maintain a web site with dynamic research and development (R and D) content. Before creating the site, a requirements discovery process was carried out to accurately capture the purpose and functionality of the site. One high-priority requirement was that no specialized training in web page authoring should be necessary: all uploading, creation, and editing of factsheets needed to be accomplished by entering data directly into web form screens generated by the application. Another important requirement was to allow access to the factsheet web pages and data via the internal Sandia Restricted Network and the Sandia Open Network based on the status of the input data. Important to the owners of the web site was that published factsheets be accessible to all personnel within the department whether or not the sheets had completed the formal Review and Approval (R and A) process. Once the factsheets had gone through the formal review and approval process, they could then be published both internally and externally based on their individual publication status. An extended requirement and feature of the site was a keyword search capability for searching through the factsheets. Also, since the site currently resides on both the internal and external networks, it needed to be registered with the Sandia search engines to allow those engines access to the content of the site. To date, all of the above requirements and features have been created and implemented in the Factsheets web application. These have been accomplished by the use of flat text databases, which are discussed in greater detail later in this paper.
In a total-system performance assessment (TSPA), uncertainty in the performance measure (e.g., radiation dose) is estimated by first estimating the uncertainty in the input variables and then propagating that uncertainty through the model system by means of Monte Carlo simulation. This paper discusses uncertainty in surface infiltration, which is one of the input variables needed for performance assessments of the Yucca Mountain site. Infiltration has been represented in recent TSPA simulations by using three discrete infiltration maps (i.e., spatial distributions of infiltration) for each climate state in the calculation of unsaturated-zone flow and transport. A detailed uncertainty analysis of infiltration was carried out for two purposes: to better quantify the possible range of infiltration, and to determine what probability weights should be assigned to the three infiltration cases in a TSPA simulation. The remainder of this paper presents the approach and methodology for the uncertainty analysis, along with a discussion of the results.
In this paper we have shown how the direction cosine method of stripmap-mode IFSAR may be modified for use in the spotlight-mode case. Spotlight-mode IFSAR geometry dictates a common aperture phase center, velocity vector, and baseline vector for every pixel in an image. The angle with respect to the velocity vector is the same for every pixel in a given column and can be computed from the column index, the Doppler of the motion compensation point, and the Doppler column sample spacing used in image formation. With these modifications, the direction cosines and length of the line-of-sight vector to every scatterer in the scene may be computed directly from the raw radar measurements of range, Doppler, and interferometric phase.
One of the key elements of the Stochastic Finite Element Method, namely the polynomial chaos expansion, has been utilized in a nonlinear shock and vibration application. As a result, the computed response was expressed as a random process, which is an approximation to the true solution process and can be thought of as a generalization of solutions given as statistics only. This approximation to the response process was then used to derive an analytically based design specification for component shock response that guarantees a balanced level of marginal reliability. Hence, this analytically based reference SRS might lead to an improvement over the somewhat ad hoc test-based reference in the sense that it will neither exhibit regions of conservativeness nor lead to overtesting of the design.
Experienced experimentalists have gone through the process of attempting to identify a final set of modal parameters from several different sets of extracted parameters. Usually, this is done by visually examining the mode shapes. With the advent of automated modal parameter extraction algorithms such as SMAC (Synthesize Modes and Correlate), very accurate extractions can be made to high frequencies. However, this process may generate several hundred modes that then must be consolidated into a final set of modal information. This has motivated the authors to generate a set of tools to speed the process of consolidating modal parameters by mathematical (instead of visual) means. These tools help quickly identify the best modal parameter extraction associated with several extractions of the same mode. The tools also indicate how many different modes have been extracted in a nominal frequency range and from which references. The mathematics are presented to achieve the best modal extraction of multiple modes at the same nominal frequency. Improvements in the SMAC graphical user interface and database are discussed that speed and improve the entire extraction process.
The solution-mediated synthesis and single crystal structure of (CN₃H₆)₂·Zn(HPO₃)₂ are reported. This phase is built up from a three-dimensional framework of vertex-linked ZnO₄ and HPO₃ building units encapsulating the extra-framework guanidinium cations. The structure is stabilized by template-to-framework hydrogen bonding. The inorganic framework shows a surprising similarity to those of some known zinc phosphates. Crystal data: (CN₃H₆)₂·Zn(HPO₃)₂, Mr = 345.50, orthorhombic, space group Fdd2 (No. 43), a = 15.2109(6) Å, b = 11.7281(5) Å, c = 14.1821(6) Å, V = 2530.0(4) Å³, Z = 8, T = 298(2) K, R(F) = 0.020, wR(F) = 0.025.
The thermal-hydrologic (TH) and coupled process models describe the evolution of a potential geologic repository as heat is released from emplaced waste. The evolution (thermal, hydrologic, chemical, and mechanical) of the engineered barrier and geologic systems is heavily dependent on the heat released by the waste packages and how the heat is transferred from the emplaced wastes through the drifts and through the repository host rock. The essential elements of this process are extracted (or abstracted) from the process-level models that incorporate the basic energy and mass conservation principles and applied to the total system models used to describe the overall performance of the potential repository. The process of total system performance assessment (TSPA) abstraction is the following. First is a description of the parameter inputs used in the process-level models. A brief description is given here of past inputs for the viability assessment (e.g., for TSPA-VA) and current inputs for the site recommendation (TSPA-SR). This is followed by a highlight of the process-level models from which the abstractions are made. These include descriptions of TH, thermal-hydrologic-chemical (THC), and thermal-mechanical (TM) processes used to describe the performance of individual waste packages and waste emplacement drifts as well as the repository as a whole. Next is a description of what (and how) information is abstracted from the process-level models. This also includes an accounting of the features, events, and processes (FEPs) that are important to both the regulators and the international repository community in general. Finally, an identification of the TSPA model components that utilize the abstracted information to characterize the overall performance of a potential geologic repository is given.
In this paper an optimization-based method of drift prevention is presented for learning control of underdetermined linear and weakly nonlinear time-varying dynamic systems. By defining a fictitious cost function and the associated model-based sub-optimality conditions, a new set of equations results, whose solution is unique, thus preventing large drifts from the initial input. Moreover, in the limiting case where the modeling error approaches zero, the input that the proposed method converges to is the unique feasible (zero error) input that minimizes the fictitious cost function, in the linear case, and locally minimizes it in the (weakly) nonlinear case. Otherwise, under mild restrictions on the modeling error, the method converges to a feasible sub-optimal input.
The abstraction model used for seepage into emplacement drifts in recent TSPA simulations has been presented. This model contributes to the calculation of the quantity of water that might contact waste if it is emplaced at Yucca Mountain. Other important components of that calculation not discussed here include models for climate, infiltration, unsaturated-zone flow, and thermohydrology; drip-shield and waste-package degradation; and flow around and through the drip shield and waste package. The seepage abstraction model is stochastic because predictions of seepage are necessarily quite uncertain. The model provides uncertainty distributions for the seepage fraction (the fraction of waste-package locations with seepage) and the seep flow rate as functions of percolation flux. In addition, effects of intermediate-scale flow with seepage and seep channeling are included by means of a flow-focusing factor, which is also represented by an uncertainty distribution.
A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
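The "calculation of velocity from the vorticity field" mentioned above can be sketched for 2-D point vortices with a direct Biot-Savart sum; the fast multipole technique in the paper accelerates exactly this kind of pairwise evaluation. The function name and parameters here are illustrative, not the paper's implementation:

```python
import math

def induced_velocity(x, y, vortices):
    """Velocity at (x, y) induced by 2-D point vortices (Biot-Savart law).

    Each vortex is a tuple (xv, yv, gamma), where gamma is its circulation.
    This is a direct O(N) sum per evaluation point; a fast multipole method
    reduces the total cost of evaluating all N vortices at all N points
    from O(N^2) to roughly O(N).
    """
    u, v = 0.0, 0.0
    for xv, yv, gamma in vortices:
        dx, dy = x - xv, y - yv
        r2 = dx * dx + dy * dy
        if r2 < 1e-12:              # skip self-induction / coincident points
            continue
        coef = gamma / (2.0 * math.pi * r2)
        u += -coef * dy             # induced velocity is tangential,
        v += coef * dx              # perpendicular to the radius vector
    return u, v
```

A single vortex of circulation Γ induces a purely tangential speed Γ/(2πr) at distance r, which is the sanity check for any such kernel; real vortex methods also regularize the 1/r² core (vortex blobs) to keep nearby interactions bounded.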
Societal needs related to demographics, resources, and human behavior will drive technological advances over the next 20 years. Nanotechnology is anticipated to be an important enabler of these advances, and thus may be anticipated to have significant influence on new systems approaches to solving societal problems as well as on extending current science- and technology-based applications. To examine the potential implications of nanotechnology, a societal-needs-driven approach is taken. Thus the methodology is to present the definition of the problem and then examine system concepts, technology issues, and promising future directions. We approach the problem definition from a national and global security perspective and identify three key areas involving the condition of the planet, the human condition, and global security. In anticipating societal issues in the context of revolutionary technologies, such as may be enabled by nanoscience, the importance of working on the entire life cycle of any technological solution is stressed.
The moduli used in RSA (see [5]) can be generated by many different sources. The generator of a modulus (assuming a single entity generates it) knows its factorization and would therefore have the ability to forge signatures or break any system based on that modulus. If a modulus and the RSA parameters associated with it were generated by a reputable source, the system would have higher value than if the parameters were generated by an unknown entity. So for tracking, security, confidence, and financial reasons it would be beneficial to know who the generator of the RSA modulus was. This is where digital marking comes in. An RSA modulus is digitally marked, or digitally trademarked, if the generator and other identifying features of the modulus (such as its intended user, the version number, etc.) can be identified and possibly verified from the modulus itself. The basic concept of digitally marking an RSA modulus is to fix the upper bits of the modulus to this tag. Thus anyone who sees the public modulus can tell who generated it and who the generator believes the intended user/owner of the modulus is.
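Fixing the upper bits of an RSA modulus can be sketched as follows: choose a random prime p, then search upward from target/p for a prime q, so that N = p·q begins with the desired tag. This is a minimal illustration of the general "predetermined leading bits" idea, not necessarily the construction in the paper; the function names, parameter sizes, and the probabilistic Miller-Rabin test are assumptions of the sketch:

```python
import random

def is_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

def marked_modulus(tag, tag_bits, n_bits=512):
    """Generate N = p*q whose top `tag_bits` bits equal `tag`."""
    target = tag << (n_bits - tag_bits)   # desired value of the upper bits
    p = random_prime(n_bits // 2)
    q = target // p + 1                   # smallest q with p*q >= target
    while not is_prime(q):
        q += 1                            # walk up to the next prime
    return p, q, p * q
```

The search works because the excess p·q − target is at most about p times the prime gap near q, which is far smaller than 2^(n_bits − tag_bits) for tags shorter than half the modulus, so the leading bits survive intact.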
The free energy barrier for formation of a vapor tube in a metastable liquid confined between hydrophobic walls is investigated. Monte Carlo simulations, transition state theory, and constrained umbrella sampling techniques are used to estimate the free energy barrier for vapor tube formation. Transmission coefficients are calculated for the liquid layer, and capillary evaporation is also characterized in terms of the size of the vapor pocket formed between the walls.
The capillary evaporation (cavitation) of water confined between two hydrophobic surfaces in close proximity is analyzed. The water is replaced by vapor as a result of the competition between bulk and surface energetics. Monte Carlo simulations are performed to determine the effect of water confinement on the dynamics of surface-induced phase transitions. To relate the simulation rates to the experimental data, simulations using the mass-conserving Kawasaki algorithm are also performed.
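The mass-conserving Kawasaki dynamics mentioned above exchange a particle and a hole on neighboring sites instead of creating or destroying particles, so the total density is fixed. A minimal lattice-gas sketch of one such Monte Carlo sweep is given below; the nearest-neighbor coupling J, lattice size, and function names are assumptions of the illustration, not the model details of the paper:

```python
import math
import random

def neighbors(i, j, L):
    """Four nearest neighbors on an L x L periodic lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def kawasaki_sweep(occ, L, beta, J=1.0):
    """One Monte Carlo sweep (L*L attempted exchanges) of mass-conserving
    Kawasaki dynamics for a lattice gas with energy -J per occupied
    nearest-neighbor pair.  `occ` is an L x L list of 0/1 occupancies."""
    accepted = 0
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        k, l = random.choice(neighbors(i, j, L))
        if occ[i][j] == occ[k][l]:
            continue                      # identical sites: nothing to swap
        # occupied neighbors of each site, excluding the exchange partner
        na = sum(occ[x][y] for x, y in neighbors(i, j, L) if (x, y) != (k, l))
        nb = sum(occ[x][y] for x, y in neighbors(k, l, L) if (x, y) != (i, j))
        # energy change when the particle hops between the two sites
        dE = J * (na - nb) if occ[i][j] else J * (nb - na)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            occ[i][j], occ[k][l] = occ[k][l], occ[i][j]
            accepted += 1
    return accepted
```

Because every accepted move is a particle-hole exchange, the particle number is conserved exactly, which is what makes Kawasaki dynamics suitable for relating simulated evaporation rates to experiment.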
Laser technology has advanced dramatically and is an integral part of today's healthcare delivery system. Lasers are used in the laboratory analysis of human blood samples and serve as surgical tools that kill, burn, or cut tissue. Recent semiconductor microtechnology has reduced the size of a laser to the size of a biological cell or even a virus particle. By integrating these ultra-small lasers with biological systems, it is possible to create micro-electrical mechanical systems that may revolutionize health care delivery.
The rating and modeling of photovoltaic (PV) module performance has been of concern to manufacturers and system designers for over 20 years. Both the National Renewable Energy Laboratory (NREL) and Sandia National Laboratories (SNL) have developed methodologies to predict module and array performance under actual operating conditions. This paper compares the two methods of determining the performance of PV modules. The methods translate module performance to actual or reference conditions using slightly different approaches. The accuracy of both methods is compared for hourly, daily, and annual energy production over a year of data recorded at NREL in Golden, CO. The comparison of the two methods is presented for five different PV module technologies.
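A performance translation of the kind compared here maps the rated power at standard test conditions (1000 W/m², 25 °C cell temperature) to field conditions. The sketch below uses a generic linear irradiance/temperature translation with the common NOCT cell-temperature estimate; it is not the NREL or SNL model, and the default coefficient values (`gamma`, `noct`) are illustrative assumptions:

```python
def cell_temperature(g_poa, t_amb, noct=45.0):
    """Estimate cell temperature (deg C) from plane-of-array irradiance
    g_poa (W/m^2) and ambient temperature using the standard NOCT relation
    (NOCT is measured at 800 W/m^2 and 20 deg C ambient)."""
    return t_amb + (noct - 20.0) / 800.0 * g_poa

def module_power(g_poa, t_amb, p_stc, gamma=-0.004, noct=45.0):
    """Translate rated power p_stc (W at 1000 W/m^2, 25 C cell temp) to
    operating conditions: linear in irradiance, with a linear temperature
    derate `gamma` (fractional power change per deg C above 25 C)."""
    t_cell = cell_temperature(g_poa, t_amb, noct)
    return p_stc * (g_poa / 1000.0) * (1.0 + gamma * (t_cell - 25.0))
```

Summing `module_power` over hourly irradiance and temperature records gives the hourly, daily, and annual energy estimates that such comparisons evaluate.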
The Million Solar Roofs Initiative has motivated a renewed interest in the development of utility-interconnected photovoltaic (UIPV) inverters. Government-sponsored programs (PVMaT, PVBONUS) and competition among utility-interconnected inverter manufacturers have stimulated innovations and improved the performance of existing technologies. With this resurgence, Sandia National Laboratories (SNL) has developed a program to assist industry initiatives to overcome barriers to UIPV inverters. In accordance with the newly adopted IEEE 929-2000, utility-interconnected PV inverters are required to cease energizing the utility grid when either a significant disturbance occurs or the utility experiences an interruption in service. Compliance with IEEE 929-2000 is being widely adopted by utilities as a minimum requirement for utility interconnection. This report summarizes work done at the SNL balance-of-systems laboratory to support the development of IEEE 929-2000 and to assist manufacturers in meeting its requirements.
For the last ten years, the Japanese High-Level Nuclear Waste (HLW) repository program has focused on assessing the feasibility of a basic repository concept, which resulted in the recently published H12 Report. As Japan enters the implementation phase, a new organization must identify, screen, and choose potential repository sites. Thus, a rapid mechanism for determining the likelihood of site suitability is critical. The threshold approach, described here, is a simple mechanism for defining the likelihood that a site is suitable given estimates of several critical parameters. We rely on the results of a companion paper, which described a probabilistic performance assessment simulation of the HLW reference case in the H12 report. The two or three most critical input parameters are plotted against each other and treated as spatial variables. Geostatistics is used to interpret the spatial correlation, which in turn is used to simulate multiple realizations of the parameter value maps. By combining an array of realizations, we can look at the probability that a given site, as represented by estimates of this combination of parameters, would be a good host for a repository.
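The final combination step described above can be sketched as a per-cell indicator average: for each map cell, count the fraction of simulated realizations in which every critical parameter falls inside its acceptable threshold range. This is a generic illustration of the threshold idea, not the paper's workflow; the parameter names and threshold values in the example are hypothetical:

```python
import numpy as np

def suitability_probability(realizations, thresholds):
    """Per-cell probability that a site meets all parameter thresholds.

    realizations : dict mapping parameter name -> (R, ny, nx) array of R
                   geostatistically simulated maps of that parameter
    thresholds   : dict mapping parameter name -> (lo, hi) acceptable range
    Returns an (ny, nx) map of the fraction of realizations in which every
    parameter lies inside its range at that cell.
    """
    ok = None
    for name, maps in realizations.items():
        lo, hi = thresholds[name]
        inside = (maps >= lo) & (maps <= hi)   # (R, ny, nx) indicator
        ok = inside if ok is None else (ok & inside)
    return ok.mean(axis=0)                     # average over realizations
```

The resulting probability map is exactly the kind of rapid screening product the threshold approach aims at: cells with high values are likely to satisfy all critical criteria simultaneously.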