This report summarizes the results of a computer model that describes the behavior of pulsating heat pipes (PHPs). The purpose of the project was to develop a thermal groundplane (TGP) that is highly efficient (as compared with the heat transfer capability of solid copper), using silicon carbide (SiC) as the substrate material and water as the working fluid. The objective was to develop a multi-physics model of this complex phenomenon to assist in understanding how PHPs operate and how various parameters (geometry, fill ratio, materials, working fluid, etc.) affect their performance. The physical processes governing a PHP are highly coupled. Understanding its operation is further complicated by the non-equilibrium interplay between evaporation/condensation, bubble growth and collapse or coalescence, and the coupled response of the multiphase fluid dynamics among the different channels. A comprehensive theory of operation and a set of design tools for PHPs remain unrealized. In the following we first analyze, in some detail, a simple model that has been proposed to describe PHP behavior. Although it includes fundamental features of a PHP, it also makes simplifying assumptions to keep the model tractable. In an effort to improve on current modeling practice, we constructed a model of a PHP using some unique features available in FLOW-3D, version 9.2-3 (Flow Science, 2007). We believe that this flow modeling software retains more of the salient features of a PHP and thus provides a closer representation of its behavior.
Global monitoring systems that have high spatial and temporal resolution, with long observational baselines, are needed to provide situational awareness of the Earth's climate system. Continuous monitoring is required for early warning of high-consequence climate change and to help anticipate and minimize the threat. Global climate has changed abruptly in the past and will almost certainly do so again, even in the absence of anthropogenic interference. It is possible that the Earth's climate could change dramatically and suddenly within a few years. An unexpected loss of climate stability would be equivalent to the failure of an engineered system on a grand scale, and would affect billions of people by causing agricultural, economic, and environmental collapses that would cascade throughout the world. The probability of such an abrupt change happening in the near future may be small, but it is nonzero. Because the consequences would be catastrophic, we argue that the problem should be treated with science-informed engineering conservatism, which focuses on various ways a system can fail and emphasizes inspection and early detection. Such an approach will require high-fidelity continuous global monitoring, informed by scientific modeling.
Gigabit Passive Optical Network (GPON) technology offers the potential to provide significant cost savings to Sandia National Laboratories in the area of network operations. However, a large-scale GPON deployment requires a significant investment in equipment and infrastructure, so before such a system was acquired and built, a small GPON system manufactured by Motorola was acquired and tested to determine the suitability of GPON for use at SNL. This report documents that testing. It presents test results for a GPON system consisting of Motorola and Juniper equipment, tested in the areas of data throughput, video conferencing, VoIP, security, and operations and management. The GPON system performed well in almost all areas. GPON will not meet the needs of the small percentage of users requiring a true 1-10 Gbps network connection, and it will most likely not meet the needs of some servers requiring dedicated throughput of 1-10 Gbps. Because of that, some legacy network connections must remain. If these legacy network connections cannot be reduced to a bare minimum, and possibly consolidated to a few locations, any cost savings gained by switching to GPON will be negated by maintaining two networks. A contract has recently been awarded for new GPON equipment with larger buffers; this equipment should improve performance and further reduce the need for legacy network connections. Because GPON has fewer components than a typical hierarchical network, it should be easier to manage. For the system tested, management was performed using the AXSVision client. Access to the client must be tightly controlled, because a compromise of client/server communications would create a security issue. As with any network, the reliability of individual components determines overall system reliability. There were no failures of the routers, OLT, or Sun workstation management platform. However, four ONTs failed. Because of the small sample size (64 units), and the fact that some of the ONTs were used units, no firm conclusions can be drawn, but ONT reliability is an area of concern. Access to the fiber plant that GPON requires must be tightly controlled and all changes documented; the undocumented changes made in the GPON test lab demonstrated the need for such control and documentation. In summary, GPON should be able to meet the needs of most network users at Sandia National Laboratories. Because it supports voice, video, and data, it positions the laboratory to deploy these services to the desktop. For the majority of corporate network users at Sandia National Laboratories, GPON should be a suitable replacement for the legacy network.
The potential for liquid aluminum to dissolve an iridium solid is examined. Substantial uncertainties exist in the material properties, and the available data for iridium solubility and iridium diffusivity are discussed. The dissolution rate is expressed in terms of the regression velocity of the solid iridium when exposed to the solvent (aluminum). Temperature has the strongest influence on the dissolution rate; this dependence comes primarily from the solubility of iridium in aluminum and secondarily from the temperature dependence of the diffusion coefficient. The dissolution mass flux is geometry-dependent, and results are provided for simplified geometries at constant temperatures. For situations where there is negligible convective flow, simple time-dependent diffusion solutions are provided. Correlations for mass transfer are also given for natural convection and forced convection. These estimates suggest that dissolution of iridium can be significant at temperatures well below the melting temperature of iridium, but the uncertainties in the actual rates are large because of uncertainties in the physical parameters and in the details of the relevant geometries.
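As a rough illustration of how such a dissolution estimate fits together, the sketch below computes a diffusion-limited regression velocity v = (D/delta)(C_s - C_inf)/rho_Ir, with assumed Arrhenius forms for the diffusivity and the saturation solubility. Every parameter value here (D0, activation energies, boundary layer thickness delta) is a placeholder for illustration, not data from the report.

```python
import math

def regression_velocity(T, D0=1e-7, Ea_D=3e4, x0=2.0, Ea_x=5e4,
                        delta=1e-4, rho_Ir=22560.0, rho_Al=2375.0):
    """Diffusion-limited regression velocity of solid Ir in liquid Al:
    v = (D / delta) * (C_s - C_inf) / rho_Ir, taking C_inf = 0 for
    fresh solvent. D and the saturation mass fraction x_s follow assumed
    Arrhenius forms; every parameter value is a placeholder."""
    R = 8.314                                        # J/(mol K)
    D = D0 * math.exp(-Ea_D / (R * T))               # Ir diffusivity in Al, m^2/s
    x_s = min(1.0, x0 * math.exp(-Ea_x / (R * T)))   # saturation mass fraction
    C_s = x_s * rho_Al                               # saturated Ir concentration, kg/m^3
    return (D / delta) * C_s / rho_Ir                # regression velocity, m/s

for T in (1000.0, 1200.0, 1400.0):
    print(f"T = {T:6.0f} K  v = {regression_velocity(T):.3e} m/s")
```

The strong temperature sensitivity noted above shows up directly: both the exponential solubility term and the exponential diffusivity term grow with T.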
The radiological transportation risk and consequence program RADTRAN has recently added an updated loss-of-lead-shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates for first responders during a spent nuclear fuel transportation accident. Results varied according to the type of accident scenario, the percentage of lead slump, the distance to the shipment, and the time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high-speed accidents or fires involving cask punctures.
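The toy calculation below is not the RADTRAN LOS model; it merely illustrates how the factors listed above (shielding loss, distance, exposure time) enter a first-order dose estimate through inverse-square scaling. All numbers are hypothetical.

```python
def responder_dose(dose_rate_1m, slump_fraction, distance_m, hours):
    """Toy first-responder dose estimate (NOT the RADTRAN LOS model).

    Scales an unshielded 1 m dose rate by the fraction of shielding lost,
    applies inverse-square attenuation with distance, and multiplies by
    exposure time. Real LOS calculations account for cask geometry,
    gamma buildup, and source spectra."""
    return dose_rate_1m * slump_fraction / distance_m**2 * hours

# e.g. 10 rem/h at 1 m if fully unshielded, 20% lead slump, 5 m standoff, 30 min
print(responder_dose(10.0, 0.2, 5.0, 0.5), "rem")
```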
As the core count of HPC machines continue to grow in size, issues such as fault tolerance and reliability are becoming limiting factors for application scalability. Current techniques to ensure progress across faults, for example coordinated checkpoint-restart, are unsuitable for machines of this scale due to their predicted high overheads. In this study, we present the design and implementation of a novel system for ensuring reliability which uses transparent, rank-level, redundant computation. Using this system, we show the overheads involved in redundant computation for a number of real-world HPC applications. Additionally, we relate the communication characteristics of an application to the overheads observed.
This report summarizes the work completed during FY2009 for the LDRD project 09-1332 'Molecule-Based Approach for Computing Chemical-Reaction Rates in Upper-Atmosphere Hypersonic Flows'. The goal of this project was to apply a recently proposed approach for the Direct Simulation Monte Carlo (DSMC) method to calculate chemical-reaction rates for high-temperature atmospheric species. The new DSMC model reproduces measured equilibrium reaction rates without using any macroscopic reaction-rate information. Since it uses only molecular properties, the new model is inherently able to predict reaction rates for arbitrary nonequilibrium conditions. DSMC non-equilibrium reaction rates are compared to Park's phenomenological non-equilibrium reaction-rate model, the predominant model for hypersonic-flow-field calculations. For near-equilibrium conditions, Park's model is in good agreement with the DSMC-calculated reaction rates. For far-from-equilibrium conditions, corresponding to a typical shock layer, the difference between the two models can exceed 10 orders of magnitude. The DSMC predictions are also found to be in very good agreement with measured and calculated non-equilibrium reaction rates. Extensions of the model to reactions typically found in combustion flows and ionizing reactions are also found to be in very good agreement with available measurements, offering strong evidence that this is a viable and reliable technique to predict chemical reaction rates.
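For reference, Park's phenomenological model evaluates an Arrhenius-form rate at a controlling temperature that blends the translational and vibrational temperatures. A minimal sketch, with placeholder rate constants rather than any fitted values from the study:

```python
import math

def park_rate(T, Tv, A, n, theta_d, q=0.5):
    """Park two-temperature rate: an Arrhenius form evaluated at a
    controlling temperature Ta = T^q * Tv^(1-q). Park's original choice
    was q = 0.5 (geometric mean). A, n, theta_d are reaction-specific
    constants; the values below are placeholders, loosely resembling an
    O2-dissociation-like reaction, not the report's parameters."""
    Ta = T**q * Tv**(1.0 - q)
    return A * Ta**n * math.exp(-theta_d / Ta)

A, n, theta_d = 2.0e21, -1.5, 59500.0
# Near equilibrium (Tv ~ T) the rate reduces to the one-temperature Arrhenius
# form; far from equilibrium (Tv << T), as behind a shock, it can differ by
# orders of magnitude, which is where the DSMC comparison is most revealing.
print(park_rate(10000.0, 10000.0, A, n, theta_d))   # equilibrium-like
print(park_rate(10000.0, 2000.0,  A, n, theta_d))   # strong nonequilibrium
```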
Four conventional damage plasticity models for concrete, the Karagozian and Case model (K&C), the Riedel-Hiermaier-Thoma model (RHT), the Brannon-Fossum model (BF1), and the Continuous Surface Cap Model (CSCM), are compared. The K&C and RHT models have been used in commercial finite element programs for many years, whereas the BF1 and CSCM models are relatively new. All four models are essentially isotropic plasticity models for which 'plasticity' is regarded as any form of inelasticity. All of the models support nonlinear elasticity, but with different formulations. All four models employ three shear strength surfaces. The 'yield surface' bounds an evolving set of elastically obtainable stress states. The 'limit surface' bounds stress states that can be reached by any means (elastic or plastic). To model softening, it is recognized that some stress states might be reached once, but, because of irreversible damage, might not be achievable again. In other words, softening is the process of collapse of the limit surface, ultimately down to a final 'residual surface' for fully failed material. The four models being compared differ in their softening evolution equations, as well as in their equations used to degrade the elastic stiffness. For all four models, the strength surfaces are cast in stress space. For all four models, it is recognized that scale effects are important for softening, but the models differ significantly in their approaches. The K&C documentation, for example, mentions that a particular material parameter affecting the damage evolution rate must be set by the user according to the mesh size to preserve energy to failure. Similarly, the BF1 model presumes that all material parameters are set to values appropriate to the scale of the element, and automated assignment of scale-appropriate values is available only through an enhanced implementation of BF1 (called BFS) that regards scale effects to be coupled to statistical variability of material properties. The RHT model appears to similarly support optional uncertainty and automated settings for scale-dependent material parameters. The K&C, RHT, and CSCM models support rate dependence by allowing the strength to be a function of strain rate, whereas the BF1 model uses Duvaut-Lion viscoplasticity theory to give a smoother prediction of transient effects. During softening, all four models require a certain amount of strain to develop before allowing significant damage accumulation. For the K&C, RHT, and CSCM models, the strain-to-failure is tied to fracture energy release, whereas a similar effect is achieved indirectly in the BF1 model by a time-based criterion that is tied to crack propagation speed.
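The shared structure described above, a limit surface that collapses toward a residual surface as damage accumulates, can be caricatured in a few lines. This is a schematic of the common form only, not the actual evolution equations of any of the four models:

```python
def strength(pressure, D, limit, residual):
    """Schematic softening: the current strength surface interpolates
    between the intact limit surface and the fully-failed residual surface,
    controlled by a scalar damage variable D in [0, 1]. A caricature of the
    structure shared by the four models, not any one model's equations."""
    return (1.0 - D) * limit(pressure) + D * residual(pressure)

limit    = lambda p: 30.0 + 1.5 * p    # MPa, illustrative intact strength
residual = lambda p: 0.8 * p           # frictional residual, no cohesion
for D in (0.0, 0.5, 1.0):              # intact, partially damaged, fully failed
    print(D, strength(10.0, D, limit, residual))
```

Where the models genuinely differ is in how D evolves (strain-based vs. time-based, mesh-regularized vs. statistically scaled), which is precisely the comparison made in the report.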
A new method is introduced for real-time detection of transient change in scenes observed by staring sensors that are subject to platform jitter, pixel defects, variable focus, and other real-world challenges. The approach uses flexible statistical models for the scene background and its variability, which are continually updated to track gradual drift in the sensor's performance and the scene under observation. Two separate models represent temporal and spatial variations in pixel intensity. For the temporal model, each new frame is projected into a low-dimensional subspace designed to capture the behavior of the frame data over a recent observation window. Per-pixel temporal standard deviation estimates are based on projection residuals. The spatial model employs a simple representation of jitter to generate pixelwise moment estimates from a single frame. These estimates rely on spatial characteristics of the scene and are used to gauge each pixel's susceptibility to jitter. The temporal model handles pixels that are naturally variable due to sensor noise or moving scene elements, along with jitter displacements comparable to those observed in the recent past. The spatial model captures jitter-induced changes that may not have been seen previously. Change is declared in pixels whose current values are inconsistent with both models.
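A minimal sketch of the temporal model as described above: project each frame onto a low-dimensional subspace fit to a recent window, estimate per-pixel standard deviations from the projection residuals, and flag inconsistent pixels. The subspace dimension, window length, and threshold below are illustrative choices; jitter handling, drift tracking, and the spatial model are omitted.

```python
import numpy as np

def temporal_change_mask(window, frame, k=5, nsig=4.0):
    """Temporal-model sketch: project the new frame onto the k-dimensional
    subspace spanned by the top principal components of the recent window
    (shape: n_frames x n_pixels), and flag pixels whose projection residual
    exceeds nsig per-pixel residual standard deviations."""
    mean = window.mean(axis=0)
    X = window - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:k].T                              # n_pixels x k subspace basis
    resid_win = X - (X @ V) @ V.T             # residuals of the window itself
    sigma = resid_win.std(axis=0) + 1e-9      # per-pixel std estimates
    r = (frame - mean) - V @ (V.T @ (frame - mean))
    return np.abs(r) > nsig * sigma           # boolean change mask

rng = np.random.default_rng(0)
win = rng.normal(size=(30, 64 * 64))          # 30 recent frames of a 64x64 scene
new = win[-1] + 0.0                           # copy of the latest frame...
new[2000] += 10.0                             # ...with an injected transient
print(temporal_change_mask(win, new).sum(), "pixels flagged")
```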
High resolution radar systems generally require combining fast analog-to-digital converters and digital-to-analog converters with very high performance digital signal processing logic. These mixed analog and digital printed circuit boards present special challenges with respect to electromagnetic interference. This document first describes the mechanisms of interference on such boards, then discusses prevention techniques, and finally provides a checklist to help designers avoid common mistakes.
Results from an experimental study of the aerodynamic and aeroacoustic properties of a flatback version of the TU Delft DU97-W-300 airfoil are presented. Measurements were made for both the original DU97-W-300 and the flatback version. The chord Reynolds number varied from 1.6 x 10^6 to 3.2 x 10^6. The data were gathered in the Virginia Tech Stability Wind Tunnel, which includes a special aeroacoustic test section to enable measurements of airfoil self-noise. Corrected wind tunnel aerodynamic measurements for the DU97-W-300 are compared to previous solid-wall wind tunnel data and are shown to give good agreement. Force coefficients and surface pressure distributions are compared for the flatback and the original airfoil for both free-transition and tripped boundary layer configurations. Aeroacoustic data are presented for the flatback airfoil, with a focus on the amplitude and frequency of the vortex-shedding tone associated with the blunt-trailing-edge wake. The effect of a splitter-plate trailing-edge attachment on both the drag and noise of the flatback airfoil is also investigated.
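For orientation, the frequency of a blunt-trailing-edge shedding tone is commonly estimated from a Strouhal relation based on the trailing-edge thickness. The sketch below uses a generic Strouhal number and made-up flow conditions, not the tunnel measurements:

```python
def shedding_frequency(U, h, St=0.24):
    """Estimate the vortex-shedding tone frequency of a blunt trailing edge
    from f = St * U / h, where h is the trailing-edge thickness. St values
    of roughly 0.2-0.25 are typical for blunt-body shedding; the value for
    this airfoil must come from the measurements themselves."""
    return St * U / h

# e.g. 30 m/s flow speed, 0.09 m trailing-edge thickness (both hypothetical)
print(f"{shedding_frequency(30.0, 0.09):.0f} Hz")
```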
Methods are developed for finding an optimal model for a non-Gaussian stationary stochastic process or homogeneous random field under limited information. The available information consists of: (i) one or more finite length samples of the process or field; and (ii) knowledge that the process or field takes values in a bounded interval of the real line whose ends may or may not be known. The methods are developed and applied to the special case of non-Gaussian processes or fields belonging to the class of beta translation processes. Beta translation processes provide a flexible model for representing physical phenomena taking values in a bounded range, and are therefore useful for many applications. Numerical examples are presented to illustrate the utility of beta translation processes and the proposed methods for model selection.
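A beta translation process is obtained by mapping a stationary Gaussian process through the standard normal CDF and then the inverse beta CDF, which guarantees a bounded, beta-distributed marginal. Below is a minimal sampler assuming an AR(1) Gaussian core; in practice the shape parameters, interval bounds, and correlation would be estimated from the available samples, which is the model-selection problem the report addresses.

```python
import numpy as np
from scipy.stats import norm, beta

def beta_translation_sample(n, a, b, rho=0.9, lo=0.0, hi=1.0, seed=0):
    """Sample a beta translation process
        X(t) = lo + (hi - lo) * F_beta^{-1}(Phi(G(t))),
    where G is a stationary standard Gaussian process (here a simple AR(1)
    with lag-one correlation rho). The marginal of X is Beta(a, b) rescaled
    to [lo, hi], so X is guaranteed to stay in the bounded interval."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    for t in range(1, n):
        g[t] = rho * g[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return lo + (hi - lo) * beta.ppf(norm.cdf(g), a, b)

x = beta_translation_sample(1000, a=2.0, b=5.0)
print(x.min(), x.max())   # always inside the bounded interval [0, 1]
```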
Solid particle receivers have the potential to provide high-temperature heat for advanced power cycles, thermochemical processes, and thermal storage via direct particle absorption of concentrated solar energy. This paper presents two different models to evaluate the performance of these systems. One model is a detailed computational fluid dynamics model using FLUENT that includes irradiation from the concentrated solar flux, two-band re-radiation and emission within the cavity, discrete-phase particle transport and heat transfer, gas-phase convection, wall conduction, and radiative and convective heat losses. The second model is an easy-to-use and fast simulation code using Matlab that includes solar and thermal radiation exchange between the particle curtain, cavity walls, and aperture, but neglects convection. Both models were compared to unheated particle flow tests and to on-sun heating tests. Comparisons between measured and simulated particle velocities, opacity, particle volume fractions, particle temperatures, and thermal efficiencies were found to be in good agreement. Sensitivity studies were also performed with the models to identify parameters and modifications to improve the performance of the solid particle receiver.
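As a back-of-the-envelope companion to the two models, the receiver thermal efficiency follows from an overall energy balance on the particle curtain. All values below are illustrative, not the paper's test data:

```python
def receiver_efficiency(m_dot, T_in, T_out, Q_solar, cp=1100.0):
    """Thermal efficiency of a solid particle receiver from an overall
    energy balance: useful enthalpy gain of the particle curtain divided by
    the solar power entering the aperture. cp is a representative particle
    specific heat (J/kg-K); all numbers are placeholders."""
    return m_dot * cp * (T_out - T_in) / Q_solar

# e.g. 5 kg/s of particles heated 250 K by 2.5 MW of concentrated flux
print(f"eta = {receiver_efficiency(5.0, 600.0, 850.0, 2.5e6):.2f}")
```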
This article is the second of two that consider the treatment of fluid-solid interaction problems where the solid experiences wave loading and large bulk Lagrangian displacements. In part I, we presented the formulation for the edge-based unstructured-grid Euler solver in the context of a discontinuous-Galerkin framework with the extensions used to treat internal fluid-solid interfaces. A super-sampled L2 projection was used to construct level-set data from the Lagrangian interface, and a narrow-band approach was used to identify and construct appropriate ghost data and boundary conditions at the fluid-solid interface. A series of benchmark problems were used to verify the treatment of the fluid-solid interface conditions with a static interface position. In this paper, we consider the treatment of dynamic interfaces and the associated large bulk Lagrangian displacements of the solid. We present the coupled dynamic fluid-solid system and develop an explicit, monolithic treatment of the fully coupled system. The conditions associated with moving interfaces and their implementation are discussed. A comparison of moving vs. fixed reference frames is used to verify the dynamic interface treatment. Lastly, a series of two- and three-dimensional projectile and shock-body interaction calculations are presented. Ultimately, the use of the Lagrangian interface position and a super-sampled projection for fast level-set construction, the narrow-band extraction of ghost data, and a monolithic explicit solution algorithm has proved to be a computationally efficient means of treating shock-induced fluid-solid interaction problems.
This paper is the first of two that consider the treatment of fluid-solid interaction problems under shock wave loading, where the solid experiences large bulk Lagrangian displacements. This work addresses the issues associated with using a level set as a generalized interface for fluid-solid coupling where unstructured overlapping grids are used for the fluid and solid domains. In part I of this work, we outline the formulation used for the edge-based unstructured-grid Euler solver in the context of the discontinuous-Galerkin method. The identification of the fluid-solid interface on the unstructured fluid mesh uses a super-sampled L2 projection technique that, in conjunction with a Lagrangian interface position, permits fast identification of the interface and the concomitant imposition of boundary conditions. The use of a narrow-band approach for the identification of the wetted interface is presented, with the details of the construction of interface conditions. A series of computations are presented to demonstrate the validity of the current approach on problems with static interfaces. In part II, we present the coupled dynamic fluid-solid system and an explicit monolithic algorithm for the treatment of the fully coupled system. The interface conditions associated with moving interfaces are considered, and a comparison of moving vs. static reference frames is used to evaluate the dynamic interface treatment. Finally, a series of two- and three-dimensional projectile and shock-body calculations are presented.
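To make the level-set idea concrete, the sketch below builds a narrow-band signed-distance field around a closed polygonal (Lagrangian) interface. It is a simplified stand-in for the super-sampled L2 projection used in these papers, not the papers' algorithm:

```python
import numpy as np

def narrow_band_level_set(points, poly, band):
    """Narrow-band level-set sketch: for fluid-grid points within `band` of
    a closed Lagrangian interface polygon, return the signed distance
    (negative inside); points outside the band get +/-inf sentinels."""
    from matplotlib.path import Path           # point-in-polygon test
    inside = Path(poly).contains_points(points)
    d = np.full(len(points), np.inf)
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):   # each interface edge
        ab = b - a
        t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
        d = np.minimum(d, np.linalg.norm(points - (a + t[:, None] * ab), axis=1))
    phi = np.where(inside, -d, d)
    return np.where(np.abs(phi) <= band, phi, np.sign(phi) * np.inf)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
pts = np.array([[0.5, 0.5], [1.05, 0.5], [2.0, 2.0]])
print(narrow_band_level_set(pts, square, band=0.2))
```

Restricting the signed-distance construction to a narrow band around the interface is what keeps the cost proportional to the interface size rather than the full fluid grid.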
The problem of understanding and modeling the complicated physics underlying the action and response of the interfaces in typical structures under dynamic loading conditions has occupied researchers for many decades. This handbook presents an integrated approach to the goal of dynamic modeling of typical jointed structures, beginning with a mathematical assessment of experimental or simulation data, proceeding through the development of constitutive models relating load histories to deformation and the establishment of kinematic models coupling to the continuum models, and culminating in the application of finite element analysis for dynamic structural simulation. In addition, formulations are discussed to mitigate the very short simulation time steps that appear to be required in numerical simulation for problems such as this. This handbook satisfies the commitment to DOE that Sandia will develop the technical content and write a Joints Handbook. The content includes: (1) methods for characterizing the nonlinear stiffness and energy dissipation of typical joints used in mechanical systems and components; (2) practical guidance on experiments and on reduced-order models that can be used to characterize joint behavior; and (3) examples for typical bolted and screw joints.
In many applications, the thermal response of structures exposed to solar heat loads is of interest. Solar mechanics governing equations were developed and integrated with the Calore thermal response code via user subroutines to provide this computational simulation capability. Solar heat loads are estimated based on the latitude and day of the year. Vector algebra is used to determine the solar loading on each face of a finite element model based on its orientation relative to the sun as the earth rotates. Atmospheric attenuation is accounted for as the optical path length varies from sunrise to sunset. Both direct and diffuse components of solar flux are calculated. In addition, shadowing of structures by other structures can be accounted for. User subroutines were also developed to provide convective and radiative boundary conditions for the diurnal variations in air temperature and effective sky temperature. These temperature boundary conditions are based on available local weather data and depend on latitude and day of the year, consistent with the solar mechanics formulation. These user subroutines, coupled with the Calore three-dimensional thermal response code, provide a complete package for addressing complex thermal problems involving solar heating. The governing equations are documented in sufficient detail to facilitate implementation into other heat transfer codes. Suggestions for improvements to the approach are offered.
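A minimal sketch of the solar-geometry step described above: the sun vector is built from standard declination and hour-angle relations, and the direct flux on a face follows from a dot product with the face normal. Atmospheric attenuation, diffuse flux, and shadowing, all part of the actual implementation, are omitted here, and the DNI value is a placeholder.

```python
import math

def sun_unit_vector(latitude_deg, day_of_year, solar_hour):
    """Unit vector toward the sun in a local east-north-up frame, from
    standard declination and hour-angle relations. Azimuth is measured
    from due south, positive toward west."""
    lat = math.radians(latitude_deg)
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365.0)
    h = math.radians(15.0 * (solar_hour - 12.0))        # hour angle
    elev = math.asin(math.sin(lat) * math.sin(decl) +
                     math.cos(lat) * math.cos(decl) * math.cos(h))
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(lat) - math.tan(decl) * math.cos(lat))
    return (-math.cos(elev) * math.sin(az),             # east component
            -math.cos(elev) * math.cos(az),             # north component
            math.sin(elev))                             # up component

def face_flux(normal, sun, dni=900.0):
    """Direct solar flux (W/m^2) on a face: DNI times the cosine of the
    incidence angle, zero when the face points away from the sun."""
    dot = sum(n * s for n, s in zip(normal, sun))
    return dni * max(0.0, dot)

s = sun_unit_vector(35.0, 172, 12.0)    # ~35 deg N latitude, solstice, solar noon
print(face_flux((0.0, 0.0, 1.0), s))    # horizontal, upward-facing surface
```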
The International Data Centre of the Comprehensive Nuclear-Test-Ban Treaty Organization relies on automatic data processing as the first step in identifying seismic events from seismic waveform data. However, more than half of the automatically identified seismic events are eliminated by IDC analysts. Here, an IDC dataset is analyzed to determine whether the number of automatically generated false positives could be reduced. Data that could be used to distinguish false positives from analyst-accepted seismic events include the number of stations, the number of phases, the signal-to-noise ratio, and the pick error. An empirical method is devised to determine whether an automatically identified seismic event is acceptable, and the method is found to identify a significant number of the false positives in IDC data. This work could help reduce seismic analyst workload and improve the calibration of seismic monitoring stations. It could also be extended to address the identification of seismic events missed by automatic processing.
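The sketch below is a toy stand-in for such an empirical screen: it uses the same four features named above, but the thresholds and the two-indicator rule are invented for illustration and would in practice be tuned against analyst-reviewed IDC data.

```python
def likely_false_positive(n_stations, n_phases, snr, pick_error_s,
                          min_stations=3, min_phases=4,
                          min_snr=5.0, max_pick_error=1.0):
    """Toy screen in the spirit of the empirical method described above:
    flag an automatically built event as a probable false positive if it
    is weakly supported. All thresholds are placeholders, not values
    derived from the IDC dataset."""
    weak = [n_stations < min_stations,
            n_phases < min_phases,
            snr < min_snr,
            pick_error_s > max_pick_error]
    return sum(weak) >= 2      # require two weak indicators to reject

print(likely_false_positive(2, 3, 4.0, 1.5))    # True: poorly supported event
print(likely_false_positive(8, 12, 20.0, 0.2))  # False: well-recorded event
```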
There are as many unique and disparate manifestations of border systems as there are borders to protect. Border security is a highly complex system analysis problem with global, regional, national, sector, and border-element dimensions for land, water, and air domains. The complexity increases with the multiple, and sometimes conflicting, missions of regulating the flow of people and goods across borders while securing those borders for national security. These systems include frontier border surveillance, immigration management, and customs functions that must operate under a variety of weather, terrain, and operational conditions, cultural constraints, and geopolitical contexts. As part of Laboratory Directed Research and Development Project 08-684 (Year 1), the team developed a reference framework to decompose this complex system into international/regional, national, and border-element levels covering customs, immigration, and border policing functions. This generalized architecture is relevant to both domestic and international borders. As part of year two of this project (09-1204), the team determined relevant relative measures to better understand border management performance. This paper describes those relative metrics and how they can be used to improve border management systems.
Two Multijunction Thermal Voltage Converters (MJTCs) were provided to the Sandia National Laboratories Primary Standards Laboratory (Sandia PSL) as part of an interlaboratory comparison (ILC). This report summarizes the results of measurements of the devices (S 127D1 and S 127C2) performed at Sandia PSL from March 4 to March 15, 2009. The SNL/NM portion of the interlaboratory comparison of multijunction thermal converters was successfully completed with a demonstrated measurement uncertainty of 60 ppm (k = 2).
Preliminary evaluation of deep borehole disposal of high-level radioactive waste and spent nuclear fuel indicates the potential for excellent long-term safety performance at costs competitive with mined repositories. Significant fluid flow through basement rock is prevented, in part, by low permeabilities, poorly connected transport pathways, and overburden self-sealing. Deep fluids also resist vertical movement because they are density stratified. Thermal-hydrologic calculations estimate the thermal pulse from emplaced waste to be small (less than 20 °C at 10 meters from the borehole, lasting less than a few hundred years), and to result in a maximum total vertical fluid movement of ~100 m. Reducing conditions will sharply limit the solubilities of most dose-critical radionuclides at depth, and the high ionic strengths of deep fluids will prevent colloidal transport. For the bounding analysis of this report, waste is envisioned to be emplaced as fuel assemblies stacked inside drill casing that is lowered, using off-the-shelf oilfield and geothermal drilling techniques, into the lower 1-2 km portion of a vertical borehole ~45 cm in diameter and 3-5 km deep, followed by borehole sealing. Deep borehole disposal of radioactive waste in the United States would require modifications to the Nuclear Waste Policy Act and to applicable regulatory standards for long-term performance set by the US Environmental Protection Agency (40 CFR part 191) and US Nuclear Regulatory Commission (10 CFR part 60). The performance analysis described here is based on the assumption that long-term standards for deep borehole disposal would be identical in key regards to those prescribed for existing repositories (40 CFR part 197 and 10 CFR part 63).
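For scale, the classical infinite-line-source conduction solution gives the kind of thermal-pulse bound quoted above. The heat load, conductivity, and diffusivity below are generic granite-like placeholders, not the report's values:

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1

def line_source_dT(r, t, q_per_m, k=3.0, alpha=1.3e-6):
    """Temperature rise at radius r around an infinite line heat source:
        dT = q' / (4 pi k) * E1(r^2 / (4 alpha t)).
    A classical conduction bound on the borehole thermal pulse; q' (W/m),
    conductivity k (W/m-K), and diffusivity alpha (m^2/s) are generic
    granite-like placeholders, and decay of the heat source is ignored."""
    return q_per_m / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

year = 3.15e7   # seconds per year
for yrs in (1, 10, 100):
    print(yrs, "yr:", round(line_source_dT(10.0, yrs * year, q_per_m=150.0), 1), "K")
```

With these placeholder values the rise at 10 m stays below roughly 20 K over a century, consistent in magnitude with the thermal-hydrologic estimate quoted above.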
The chemistries of the reactants, plasticizers, solvents, and additives in an epoxy paint are discussed. Polyamide additives may play an important role in the absorption of molecular iodine by epoxy paints, and it is recommended that the degree of unsaturation of the polyamide additive in the epoxy cure be determined. Experimental studies of water absorption by epoxy resins are discussed. These studies show that absorption can disrupt hydrogen bonds among polymer segments and cause swelling of the polymer. Water absorption also increases the diffusion coefficient of water within the polymer. Permanent damage to the polymer can result if water causes hydrolysis of ether linkages. Water desorption studies are recommended to ascertain how water absorption affects epoxy paint.
The most energy-efficient solid-state white light source will likely be a combination of individually efficient red, green, and blue LEDs. For any multi-color approach to be successful, the efficiency of deep green LEDs must be significantly improved. While traditional approaches to improving InGaN materials have yielded incremental success, we proposed a novel approach using group IIIA and IIIB nitride semiconductors to produce efficient green and longer-wavelength LEDs. To obtain longer-wavelength LEDs in the nitrides, we attempted to combine scandium (Sc) and yttrium (Y) with gallium (Ga) to produce ScGaN and YGaN for the quantum well (QW) active regions. Based on linear extrapolation of the proposed bandgaps of ScN (2.15 eV), YN (0.8 eV), and GaN (3.4 eV), we expected that LEDs could be fabricated from the UV (410 nm) to the IR (1600 nm), and therefore cover all visible wavelengths. The growth of these novel alloys potentially offered several advantages over the more traditional InGaN QW regions, including higher growth temperatures more compatible with GaN growth, closer lattice matching to GaN, and reduced phase separation compared with that commonly observed in InGaN growth. One drawback to using ScGaN and YGaN films as the active regions in LEDs is that little research has been conducted on their growth; specifically, it was unknown whether suitable metalorganic precursors exist, whether the bandgaps are direct or indirect, and whether the materials can be grown directly on GaN with minimal defect formation, among other growth-related issues. The major impediment to the growth of ScGaN and YGaN alloys was the low volatility of the metalorganic precursors. Despite this impediment, some progress was made in incorporating Sc and Y into GaN, as detailed in this report. Primarily, we were able to incorporate up to 5 x 10^18 cm^-3 Y atoms into a GaN film, which is far below the alloy concentrations needed to evaluate the YGaN optical properties. After a no-cost extension was granted on this program, an additional, more 'liquid-like' Sc precursor was evaluated and the nitridation of Sc metal on GaN was investigated. Using the Sc precursor, dopant-level quantities of Sc were incorporated into GaN, thereby concluding our efforts to grow ScGaN and YGaN films. Our remaining time during the no-cost extension was focused on pulsed laser deposition of Sc metal films on GaN, followed by nitridation in the MOCVD reactor to form ScN. Finally, GaN films were deposited on the ScN thin films in order to study possible GaN dislocation reduction.
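The wavelength-coverage estimate quoted above follows from a linear (Vegard-like) interpolation of the alloy bandgap and the conversion lambda (nm) ~ 1240 / Eg (eV), neglecting bowing. A minimal sketch:

```python
def alloy_wavelength_nm(x, Eg_end, Eg_GaN=3.4):
    """Linear (Vegard-like) extrapolation of the alloy bandgap between GaN
    and an end-point nitride, converted to an emission wavelength via
    lambda (nm) ~ 1240 / Eg (eV). Bowing is neglected, as in the simple
    estimate quoted in the abstract."""
    Eg = x * Eg_end + (1.0 - x) * Eg_GaN
    return 1240.0 / Eg

# End-point bandgaps from the abstract: ScN 2.15 eV, YN 0.8 eV
print(alloy_wavelength_nm(0.0, 2.15))   # pure GaN: ~365 nm
print(alloy_wavelength_nm(0.3, 0.8))    # hypothetical Y0.3Ga0.7N: ~473 nm (blue-green)
```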
With no lattice-matched substrate available, sapphire continues as the substrate of choice for GaN growth because of its reasonable cost and the extensive prior experience using it as a substrate for GaN. Surprisingly, the high dislocation density does not appear to limit UV and blue LED light intensity. However, dislocations may limit green LED light intensity and LED lifetime, especially as LEDs are pushed to higher current density for high-end solid-state lighting sources. To improve the performance of these higher-current-density LEDs, simple growth-enabled reductions in dislocation density would be highly prized. GaN nucleation layers (NLs) are not commonly thought of as an application of nano-structural engineering; yet these layers evolve during the growth process to produce self-assembled, nanometer-scale structures. Continued growth on these nuclei ultimately leads to a fully coalesced film, and we show in this research program that their initial density is correlated to the GaN dislocation density. In this 18-month program, we developed MOCVD growth methods to reduce GaN dislocation densities on sapphire from 5 x 10^8 cm^-2, using our standard delay-recovery growth technique, to 1 x 10^8 cm^-2 using an ultra-low nucleation density technique. For this research, we firmly established a correlation between the GaN nucleation layer thickness, the resulting nucleation density after annealing, and the dislocation density of full GaN films grown on these nucleation layers. We developed methods to reduce the nuclei density while still maintaining the ability to fully coalesce the GaN films. Ways were sought to improve the GaN nuclei orientation by annealing the sapphire prior to NL growth to improve its surface smoothness. Methods were also developed, using a silicon nitride treatment prior to the deposition of the nucleation layer, to eliminate the formation of additional nuclei once the majority of GaN nuclei had formed. Nucleation layer thickness was determined using optical reflectance, and the nucleation density was determined using atomic force microscopy (AFM) and Nomarski microscopy. Dislocation density was measured using X-ray diffraction and AFM after coating the surface with silicon nitride to delineate all dislocation types. The program milestone of producing GaN films with dislocation densities of 1 x 10^8 cm^-2 was met by silicon nitride treatment of annealed sapphire, followed by multiple depositions of a low density of GaN nuclei and then high-temperature GaN growth. Details of this growth process and the underlying science are presented in this final report, along with problems encountered in this research and recommendations for future work.