The worst-case bias during total-dose irradiation of partially depleted SOI transistors (from SNL and from CEA/LETI) correlates with the device architecture. Experiments and simulations are used to analyze the SOI back-transistor threshold voltage shift and charge trapping in the buried oxide.
The optimization of concentrated AlliedSignal GS-44 silicon nitride aqueous slurries for robocasting was investigated. The dispersion mechanisms of GS-44 Si₃N₄ aqueous suspensions with and without polyacrylate were analyzed. The zero point of charge (ZPC) was at about pH 6. Well-dispersed GS-44 suspensions were obtained in the pH range from 7 to 11 by the addition of Darvan 821A. The influence of pH, amount of Darvan 821A, and solids loading on the rheological behavior of GS-44 aqueous suspensions was determined. A coagulant, aluminum nitrate, was used to control the yield stress and shear-thinning behavior of highly loaded Si₃N₄ slurries. Homogeneous and stable suspensions of 52 vol% GS-44 Si₃N₄ were robocast successfully at pH 7.8 to 8.5. The sintering behavior, mechanical properties, and microstructural characteristics of robocast GS-44 bars were determined.
The authors use force-probe microscopy to study the friction force and the adhesive interaction for molecular monolayers self-assembled on both Au probe tips and substrate surfaces. By systematically varying the chemical nature of the end groups on these monolayers the authors have, for the first time, delineated the mechanical and chemical origins of molecular-level friction. They use chemically inert –CH₃ groups on both interfacial surfaces to establish the purely mechanical component of the friction and contrast the results with the findings for chemically active –COOH end groups. In addition, by using odd or even numbers of methylene groups in the alkyl backbones of the molecules they are able to determine the levels of inter-film and intra-film hydrogen bonding.
The authors use scanning probe microscopy to actuate and characterize the nanoscale mechanochromism of polydiacetylene monolayers on atomically flat silicon oxide substrates. They find explicit evidence that the irreversible blue-to-red transformation is caused by shear forces exerted normal to the polydiacetylene polymer backbone. The anisotropic probe-induced transformation is characterized by a significant change in the tilt orientation of the side chains with respect to the surface normal. They also describe a new technique, based on shear-force microscopy, that allows them to image the friction anisotropy of polydiacetylene monolayers independently of scan direction. Finally, they discuss preliminary molecular mechanics modeling and electronic structure calculations that allow them to understand the correlation of mechanochromism with bond-angle changes in the conjugated polymer backbone.
Solar thermal-to-electric power plants have been tested and investigated at Sandia National Laboratories (SNL) since the late 1970s, and thermal storage has always been a key area of study because it affords an economical method of delivering solar electricity during non-daylight hours. This paper describes the design considerations of a new, single-tank thermal storage system and details the benefits of employing this technology in large-scale (10 MW to 100 MW) solar thermal power plants. Since December 1999, solar engineers at Sandia National Laboratories' National Solar Thermal Test Facility (NSTTF) have designed and are constructing a thermal storage test system called the thermocline system. This technology, which employs a single thermocline tank, has the potential to replace the traditional and more expensive two-tank storage systems. The thermocline approach uses a mixture of silica sand and quartzite rock to displace a significant portion of the volume in the tank, which is then filled with the heat transfer fluid, a molten nitrate salt. A thermal gradient separates the hot and cold salt. Loading the tank with the combination of sand, rock, and molten salt instead of molten salt alone dramatically reduces the system cost: the typical cost of the molten nitrate salt is $800 per ton, versus $70 per ton for the sand and rock. Construction of the thermocline system will be completed in August 2000, and testing will run for two to three months. The testing results will be used to determine the economic viability of the single-tank (thermocline) storage technology for large-scale solar thermal power plants. Also discussed in this paper are the safety issues involving molten nitrate salts and other heat transfer fluids, such as synthetic heat transfer oils, and the impact of these issues on the system design.
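As a rough illustration of the economics, the blended cost of the storage media falls steeply as low-cost filler displaces salt. In the sketch below, the $/ton figures are those quoted above, while the filler mass fractions are hypothetical values chosen only to show the trend:

```python
# Illustrative storage-media cost arithmetic for a thermocline tank.
# The $/ton figures come from the text; the fill fractions are
# hypothetical assumptions for illustration only.

SALT_COST = 800.0   # $/ton, molten nitrate salt (from text)
FILLER_COST = 70.0  # $/ton, silica sand and quartzite rock (from text)

def media_cost_per_ton(filler_mass_fraction):
    """Blended cost of the storage media, $/ton."""
    return (1.0 - filler_mass_fraction) * SALT_COST \
        + filler_mass_fraction * FILLER_COST

for f in (0.0, 0.5, 0.7):
    print(f"filler mass fraction {f:.0%}: ${media_cost_per_ton(f):,.0f}/ton")
```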
A figure of merit for optimization of a complete Stokes polarimeter based on its measurement matrix is described from the standpoint of singular value decomposition and analysis of variance. It is applied to optimize a system featuring a rotatable retarder and fixed polarizer, and to study the effects of non-ideal retarder properties. A retardance of 132° (approximately three-eighths wave) and retarder orientation angles of ±51.7° and ±15.1° are favorable when four measurements are used. An achromatic, form-birefringent retarder for the 3–5 μm spectral region has been fabricated and characterized. The effects of non-idealities in the form-birefringent retarder are moderate, and performance superior to that of a quarter-wave plate is expected.
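The SVD-based evaluation described above can be sketched directly from Mueller calculus. The snippet below builds the 4x4 measurement matrix for a fixed horizontal polarizer and a 132° retarder at the quoted orientation angles, then reports singular values, condition number, and equally weighted variance as representative SVD figures of merit; this is a minimal illustration, not the paper's exact metric:

```python
import numpy as np

def analyzer_row(theta_deg, delta_deg):
    """First row of (polarizer @ retarder) Mueller product: the intensity
    measurement vector for retarder fast axis theta, retardance delta,
    followed by a horizontal linear polarizer."""
    t = np.deg2rad(2 * theta_deg)
    d = np.deg2rad(delta_deg)
    C, S = np.cos(t), np.sin(t)
    return 0.5 * np.array([1.0,
                           C**2 + S**2 * np.cos(d),
                           S * C * (1 - np.cos(d)),
                           -S * np.sin(d)])

angles = [-51.7, -15.1, 15.1, 51.7]   # retarder orientations from the text
W = np.array([analyzer_row(a, 132.0) for a in angles])  # 4x4 matrix

s = np.linalg.svd(W, compute_uv=False)
print("singular values:", s)
print("condition number:", s[0] / s[-1])
print("equally weighted variance:", np.sum(1.0 / s**2))
```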
The construction of inverse states in a finite field F_Pα enables the organization of the mass scale with fundamental octets in an eight-dimensional index space that identifies particle states with residue class designations. Conformance with both CPT invariance and the concept of supersymmetry follows as a direct consequence of this formulation. Based on two parameters (P_α and g_α) that are anchored on a concordance of physical data, this treatment leads to (1) a prospective mass for the muon neutrino of ~27.68 meV, (2) a value of the unified strong-electroweak coupling constant α* = (34.26)⁻¹ that is physically defined by the ratio of the electron neutrino and muon neutrino masses, and (3) a see-saw congruence connecting the Higgs, the electron neutrino, and the muon neutrino masses. Specific evaluation of the masses of the corresponding supersymmetric Higgs pair reveals that both particles are superheavy (> 10¹⁸ GeV). No renormalization of the Higgs masses is introduced, since the calculational procedure yielding their magnitudes is intrinsically divergence-free. Further, the Higgs fulfills its conjectured role through the see-saw relation as the particle defining the origin of all particle masses, since the electron and muon neutrino systems, together with their supersymmetric partners, are the generators of the mass scale and establish the corresponding index space. Finally, since the computation of the Higgs masses is entirely determined by the modulus of the field P_α, which is fully defined by the large-scale parameters of the universe through the value of the universal gravitational constant G and the requirement for perfect flatness (Ω = 1.0), the see-saw congruence fuses the concepts of mass and space and creates a new unified archetype.
Control objectives open an additional front in the survivability battle. A given set of control objectives is valuable if it represents good practices, it is complete (it covers all the necessary areas), and it is auditable. CobiT and BS 7799 are two examples of control objective sets.
The authors' understanding of multiphase physics and the associated predictive capability for multiphase systems are severely limited by current continuum modeling methods and experimental approaches. This research will deliver an unprecedented modeling capability to directly simulate three-dimensional multiphase systems at the particle scale. The model solves the fully coupled equations of motion governing the fluid phase and the individual particles comprising the solid phase using a newly developed, highly efficient coupled numerical method based on the discrete-element method and the lattice-Boltzmann method. A massively parallel implementation will enable the solution of large, physically realistic systems.
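The structure of such a coupled solver can be suggested with a deliberately simplified sketch. Below, Stokes drag stands in for the lattice-Boltzmann momentum-exchange coupling and a linear contact spring stands in for the full discrete-element contact model; all names and parameter values are illustrative, not those of the code described above:

```python
import numpy as np

DT = 1e-6          # time step [s], small enough for the contact stiffness
MU = 1e-3          # fluid dynamic viscosity [Pa s]
RADIUS = 1e-3      # particle radius [m]
MASS = 1e-6        # particle mass [kg]
K_CONTACT = 1e3    # linear contact spring stiffness [N/m]

def fluid_velocity_at(x):
    """Stand-in for interpolating the LBM velocity field at position x."""
    return np.array([0.01, 0.0])  # uniform flow, illustrative only

def dem_lbm_step(pos, vel):
    """Advance particle positions/velocities one coupled step."""
    force = np.zeros_like(pos)
    # Fluid -> particle: Stokes drag (momentum exchange in the real method)
    for i in range(len(pos)):
        force[i] += 6 * np.pi * MU * RADIUS * (fluid_velocity_at(pos[i]) - vel[i])
    # Particle -> particle: linear-spring contacts (discrete-element method)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * RADIUS - dist
            if overlap > 0:
                fn = K_CONTACT * overlap * d / dist
                force[i] -= fn
                force[j] += fn
    # Explicit integration; in a full implementation the particle -> fluid
    # reaction force would be redistributed onto the lattice here.
    vel += force / MASS * DT
    pos += vel * DT
    return pos, vel

pos = np.array([[0.0, 0.0], [0.0019, 0.0]])  # two slightly overlapping particles
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = dem_lbm_step(pos, vel)
print(pos)
```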
This summer, the author was tasked with developing a design and prototype for a Programming Adapter (PA). This device must interface to a specialized cluster of computers at a US Air Force programming station. The PA is a command/response system capable of recognizing commands from a host Programming Computer (PC) and generating a response to those commands according to the design requirements. The PA must also route classified serial data between the programming station and any target devices on the PA without compromising the data. In this manner, classified data can pass through the adapter, yet when the data transfer is complete, the PA can be handled as an unclassified piece of hardware.
A series of inertial confinement fusion (ICF) capsule experiments was run on the Z machine in Sandia's Pulsed Power directorate. These experiments were designed specifically to implode a 2 mm diameter hollow plastic capsule filled with deuterium gas. The implosion of the capsule should raise the temperature (kinetic energy) of the deuterium ions, which will interact with each other and produce 2.45 MeV fusion neutrons. The author reports on one diagnostic technique used to measure the yield of these fusion neutrons: lead (Pb) probe detectors. The assignment was to calibrate two detectors for the 2.45-MeV neutrons produced by the deuterium-deuterium (DD) fusion reactions on Z. The author introduces ICF, and then describes the theory, the design, and the calibration of the lead probe. Finally, she presents the results of the ICF experiments and explains the difficulties inherent in analyzing the data.
Saturn is a dual-purpose accelerator. It can be operated as a large-area flash x-ray source for simulation testing or as a Z-pinch driver, especially for K-line x-ray production. In the first mode, the accelerator is fitted with three concentric-ring 2-MV electron diodes, while in the Z-pinch mode the current of all the modules is combined via a post-hole convolute arrangement and driven through a cylindrical array of very fine wires. We present here a point design for a new Saturn-class driver based on a number of linear inductive voltage adders connected in parallel. A technology recently implemented at the Institute of High Current Electronics in Tomsk (Russia) is utilized. In the present design we eliminate Marx generators and pulse-forming networks. Each inductive voltage adder cavity is directly fed by a number of fast 100-kV small-size capacitors arranged in a circular array around each accelerating gap. The number of capacitors connected in parallel to each cavity defines the total maximum current. By selecting low-inductance switches, voltage pulses as short as 30–50 ns FWHM can be achieved directly. The voltage of each stage is low (100–200 kV), so many stages are required to achieve multi-megavolt accelerator output. However, since each stage is very short (4–10 cm), accelerating gradients higher than 1 MV/m can easily be obtained. The proposed new driver will be capable of delivering pulses of 15 MA, 36 TW, and 1.2 MJ to the diode load, with a peak voltage of −2.2 MV and a FWHM of 40 ns. Although its performance will exceed that of the presently utilized driver, its size and cost could be much smaller (~1/3). In addition, no liquid dielectrics like oil or deionized water will be required. Even elimination of ferromagnetic material (by using air-core cavities) is a possibility.
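A quick back-of-envelope check shows how the quoted stage parameters combine, assuming the upper ends of the quoted ranges:

```python
# Consistency check of the adder parameters quoted above.
stage_voltage = 200e3      # V, upper end of the 100-200 kV stage range
total_voltage = 2.2e6      # V, quoted peak diode voltage (magnitude)
stage_length = 0.10        # m, upper end of the 4-10 cm stage length

stages = total_voltage / stage_voltage
gradient = stage_voltage / stage_length
print(f"stages needed: {stages:.0f}")        # ~11 stages at 200 kV each
print(f"gradient: {gradient/1e6:.1f} MV/m")  # 2.0 MV/m, consistent with >1 MV/m
```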
This report describes the procedure and properties of the software upgrade for the Vibration Performance Recorder. The upgrade will check the 20 memory cards for proper read/write operation. The upgrade was successfully installed and uploaded into the Viper and the field laptop. The memory checking routine must run overnight to complete the test, although the laptop need only be connected to the Viper unit until the downloading routine is finished. The routine has limited ability to recognize incomplete or corrupt header and footer files. The routine requires 400 Megabytes of free hard disk space. There is one minor technical flaw detailed in the conclusion.
In order to exploit the information on surface wave propagation that is stored in large seismic event datasets, Sandia and Lawrence Livermore National Laboratories have developed a MatSeis interface for performing phase-matched filtering of Rayleigh arrivals. MatSeis is a Matlab-based seismic processing toolkit which provides graphical tools for analyzing seismic data from a network of stations. Tools are available for spectral and polarization measurements, as well as beam forming and f-k analysis with array data, to name just a few. Additionally, one has full access to the Matlab environment and any functions available there. Previously the authors reported the development of new MatSeis tools for calculating regional discrimination measurements. The first of these performs Lg coda analysis as developed by Mayeda and coworkers at Lawrence Livermore National Laboratory. A second tool measures regional phase amplitude ratios for an event and compares the results to ratios from known earthquakes and explosions. Release 1.5 of MatSeis includes the new interface for the analysis of surface wave arrivals. This effort involves the use of regionalized dispersion models from a repository of surface wave data and the construction of phase-matched filters to improve surface wave identification, detection, and magnitude calculation. The tool works as follows. First, a ray is traced from source to receiver through a user-defined grid containing different group velocity versus period values to determine the composite group velocity curve for the path. This curve is shown along with the upper and lower group velocity bounds for reference. Next, the curve is used to create a phase-matched filter, apply the filter, and show the resultant waveform. The application of the filter allows obscured Rayleigh arrivals to be more easily identified. Finally, after screening information outside the range of the phase-matched filter, an inverse version of the filter is applied to obtain a cleaned raw waveform which can be used for amplitude measurements. Because all the MatSeis tools have been written as Matlab functions, they can be easily modified to experiment with different processing details. The performance of the propagation models can be evaluated using any event available in the repository of surface wave events.
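A minimal numpy sketch of the phase-matched filtering step might look like the following; the actual tool is Matlab-based, and the dispersion curve, distance, and waveform here are placeholder values, not real model output:

```python
import numpy as np

def phase_matched_filter(trace, dt, freqs_disp, group_vel, distance_m):
    """Compress a dispersed Rayleigh wavetrain.

    freqs_disp, group_vel : composite group-velocity dispersion curve
    distance_m            : source-receiver distance along the traced ray
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    # Group delay along the path at each frequency
    tg = distance_m / np.interp(freqs, freqs_disp, group_vel)
    # Propagation phase is the integral of the group delay over frequency
    phi = 2 * np.pi * np.cumsum(tg) * (freqs[1] - freqs[0])
    spectrum = np.fft.rfft(trace)
    # Removing the propagation phase collapses the arrival near zero lag
    return np.fft.irfft(spectrum * np.exp(1j * phi), n)

# Placeholder dispersion curve: 3.9 km/s at 0.02 Hz down to 2.8 km/s at 0.1 Hz
f_disp = np.linspace(0.02, 0.1, 20)
u_disp = np.linspace(3900.0, 2800.0, 20)
trace = np.random.randn(4096)          # stand-in for a recorded waveform
out = phase_matched_filter(trace, 1.0, f_disp, u_disp, 2_000_000.0)
```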
This is the author's third summer working at Sandia National Laboratories in organization 5712. He is a physics major at Reed College in Portland, Oregon. His work at Sandia began during his senior year at Eldorado High School, when he worked part time and received school credit for participating in the internship program. During that time and two ensuing summers he worked on a variety of projects. These experiences included testing a number of optical-electronic systems, performing such tasks as determining the spectral responsivity of photodiodes and placing optical/electronic systems in front of a variety of light sources in order to generate calibration curves. He also contributed to the computer generation of data to model a hypothetical satellite-mounted detection system using SSGM (Synthetic Scene Generation Model) and the Khoros visual programming software Cantata on a UNIX operating system. Other experiences included pre-flight satellite testing, and work in the field deploying a suite of sensors and data collection equipment in Nevada. This summer he is involved in image analysis using the software development tools of the Khoros programming environment. He is working on a project whose goal is to identify superimposed spectra obtained from remote-sensing equipment. The spectra to be identified are those of chemical warfare agents and precursor chemicals from the industrial processes used to manufacture them. Identifying these spectra is a challenge when they are mixed with each other and with incident light from the ground and atmosphere: photons that are both reflected from the sun and emitted as blackbody radiation. In order to model this process, he is working on a Khoros program that will add noise to laboratory-obtained spectra from a variety of chemicals. This altered data will mimic what a remote-sensing device is likely to record in the field. Given this example of likely field results, work on developing an ideal sensor and a method to identify spectra from such data will continue for a number of years.
Computers transfer data in a number of different ways, and whether the transfer is through a serial port, a parallel port, a modem, an ethernet cable, or internal (from a hard disk to memory), some data can be lost or corrupted. To compensate for such losses, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so the data can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data are encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
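One classic optimization of the kind alluded to is replacing bitwise GF(2^8) multiplication with log/antilog table lookups inside the encoder's polynomial-division loop. The sketch below is a generic textbook Reed-Solomon encoder offered for illustration; it is not the AURA flight implementation:

```python
# Table-driven GF(2^8) arithmetic plus a systematic RS encoder.
PRIM_POLY = 0x11D          # x^8+x^4+x^3+x^2+1, a common RS field polynomial
EXP = [0] * 512            # antilog table, doubled to skip a modulo
LOG = [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= PRIM_POLY
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]

def gf_mul(a, b):
    """One table lookup instead of an 8-iteration shift-and-xor loop."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def generator_poly(nsym):
    """(x - a^0)(x - a^1)...(x - a^(nsym-1)); '-' is '+' in GF(2^8)."""
    g = [1]
    for i in range(nsym):
        new = [0] * (len(g) + 1)
        for j, c in enumerate(g):
            new[j] ^= c                      # coefficient times x
            new[j + 1] ^= gf_mul(c, EXP[i])  # coefficient times a^i
        g = new
    return g

def rs_encode(msg, nsym):
    """Systematic encode: append nsym parity bytes (polynomial division)."""
    gen = generator_poly(nsym)
    out = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = out[i]
        if coef:
            for j in range(1, len(gen)):
                out[i + j] ^= gf_mul(gen[j], coef)
    out[:len(msg)] = list(msg)               # restore the message bytes
    return out

codeword = rs_encode(b"hello world", 8)      # 11 data + 8 parity bytes
print(codeword)
```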
Fast Z-pinch technology developed on the Z machine at Sandia National Laboratories can produce up to 230 TW of thermal x-ray power for applications in inertial confinement fusion (ICF) and weapons physics experiments. During implosion, these Z-pinches develop Rayleigh-Taylor (R-T) instabilities, which are very difficult to diagnose and which degrade the overall pinch quality. The Power-Space-Time (PST) instrument is a newly configured diagnostic for measuring the pinch power as a function of both space and time in a Z-pinch. Viewing the pinch at 90 degrees from the Z-pinch axis, the PST provides a new capability for collecting experimental data on R-T characteristics and for making meaningful comparisons to magneto-hydrodynamic computer models. This paper is a summary of the PST diagnostic design. By slit-imaging the Z-pinch x-ray emissions onto a linear scintillator/fiber-optic array coupled to a streak camera system, the PST can achieve ~100 μm spatial resolution and ~1.3 ns time resolution. Calculations indicate that a 20 μm thick scintillating detection element filtered by 1000 Å of Al is theoretically linear in response to Planckian x-ray distributions corresponding to plasma temperatures from 40 eV to 150 eV. By calibrating this detection element to x-ray energies up to 5,000 eV, the PST can provide pinch power as a function of height and time in a Z-pinch for temperatures ranging from ~40 eV to ~400 eV. With these system parameters, the PST can provide data for an experimental determination of the R-T mode number, amplitude, and growth rate during the late-time pinch implosion.
A performance evaluation of several computers was necessary, so an evaluation program, or benchmark, was run on each computer to determine its maximum possible performance. The program tested the Computer Aided Drafting (CAD) ability of each computer by monitoring the speed with which several functions were executed. The main objective of the benchmarking program was to record assembly loading times and image regeneration times and then compile a composite score that could be compared with the same tests on other computers. The three computers tested were the Compaq AP550, the SGI 230, and the Hewlett-Packard P750C. The Compaq and SGI computers each had a Pentium III 733 MHz processor, while the Hewlett-Packard had a Pentium III 750 MHz processor. The size and speed of the Random Access Memory (RAM) in each computer varied, as did the type of graphics card. Each computer tested was running Windows NT 4.0 and the Pro/ENGINEER™ 2000i CAD benchmark software provided by the Standard Performance Evaluation Corporation (SPEC). The benchmarking program came with its own assembly, automatically loaded and ran tests on the assembly, then compiled the time each test took to complete. Due to the automation of the tests, user error affecting test scores was virtually eliminated. After all the tests were completed, scores were compiled and compared. The SGI 230 was by far the overall winner with a composite score of 8.57. The Compaq AP550 was next with a score of 5.19, while the Hewlett-Packard P750C performed dismally, achieving a score of 3.34. Several factors, including motherboard chipset, graphics card, and the size and speed of RAM, were involved in the differing scores of the three machines. Surprisingly, the Hewlett-Packard, which had the fastest processor, returned the lowest score; the factors above most likely contributed to its poor performance. Based on the results of the benchmark test, the SGI 230 appears to be the best CAD workstation solution. The Hewlett-Packard most likely performed poorly because it was running only a 100 MHz Front Side Bus (FSB), while the SGI machine was running a 133 MHz FSB. The Compaq was using a new type of RAM called RDRAM. While this RAM was at first perceived to be a great performer, various benchmarks, including this one, have found that computers using RDRAM achieve only average performance.
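Composite benchmark scores of this kind are typically computed as geometric means of normalized sub-test scores, so that no single test dominates the result. A minimal illustration with hypothetical sub-scores (the exact SPEC weighting is not reproduced here):

```python
from math import prod

def composite(scores):
    """Geometric mean of normalized sub-test scores (SPEC-style)."""
    return prod(scores) ** (1.0 / len(scores))

# Hypothetical normalized sub-scores for one machine
print(round(composite([9.1, 8.4, 8.2]), 2))
```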
Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g., single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research into how to winnow the reference events in these large reconciled event sets, additional database query approaches have been developed to provide windows into these datasets. These custom-built content analysis tools help identify dataset characteristics that can potentially aid in providing a basis for comparing similar reference events in these large reconciled event sets. Once these characteristics can be identified, algorithms can be developed to create and add to the reduced set of events used by the Event Search Engine. These content analysis tools have already been useful in providing information on station coverage of the referenced events and basic statistical information on events in the research datasets. The tools can also provide researchers with a quick way to find interesting and useful events within the research datasets. The tools could also be used as a means to review reference event datasets as part of a dataset delivery verification process. There has also been an effort to explore the usefulness of commercially available web-based software to help with this problem. The advantages of using off-the-shelf software applications, such as Oracle's WebDB, to manipulate, customize, and manage research data are being investigated. These types of applications are being examined to provide access to large integrated data sets for regional seismic research in Asia.
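In the same spirit, a correlation-based dendrogram can be sketched in a few lines. The real tool is a MatSeis/Matlab GUI with phase windowing, filtering, and multi-station support; the zero-lag correlation and synthetic waveforms below are simplifying stand-ins:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
base = rng.standard_normal(500)
# Five similar events (same source region) plus three unrelated ones
waveforms = np.array([base + 0.2 * rng.standard_normal(500) for _ in range(5)]
                     + [rng.standard_normal(500) for _ in range(3)])

# Pairwise correlation -> distance = 1 - |corr| (zero-lag, for simplicity)
corr = np.corrcoef(waveforms)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)

# 'single', 'complete', or a flexible method, as in the dendrogram tool
tree = linkage(squareform(dist, checks=False), method='complete')
dn = dendrogram(tree, no_plot=True)
print(dn['ivl'])  # leaf ordering: the five similar events should group
```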
All of these software tools would provide the researcher with unprecedented power without having to learn the intricacies and complexities of relational database systems.
The Federal Aviation Administration Airworthiness Assurance NDI Validation Center currently assesses the capability of various non-destructive inspection (NDI) methods used for analyzing aircraft components. The focus of one such exercise is to evaluate the sensitivity of fluorescent liquid penetrant inspection. A baseline procedure using the water-washable fluorescent penetrant method provides a foundation for comparing the brightness of low-cycle fatigue cracks in titanium test panels. The analysis of deviations from the baseline procedure will determine an acceptable range of operation for the steps in the inspection process. The data also give insight into the depth of each crack and which step(s) of the inspection process most affect penetrant sensitivity. A set of six low-cycle fatigue cracks produced in 6.35-mm thick Ti-6Al-4V specimens was used to conduct the experiments to produce sensitivity data. The results will document the consistency of the crack readings and will be compared with previous experiments to find the best parameters for water-washable penetrant.
This report provides (1) an overview of all tracer testing conducted in the Culebra Dolomite Member of the Rustler Formation at the Waste Isolation Pilot Plant (WIPP) site, (2) a detailed description of the important information about the 1995-96 tracer tests and the current interpretations of the data, and (3) a summary of the knowledge gained to date through tracer testing in the Culebra. Tracer tests have been used to identify transport processes occurring within the Culebra and quantify relevant parameters for use in performance assessment of the WIPP. The data, especially those from the tests performed in 1995-96, provide valuable insight into transport processes within the Culebra. Interpretations of the tracer tests in combination with geologic information, hydraulic-test information, and laboratory studies have resulted in a greatly improved conceptual model of transport processes within the Culebra. At locations where the transmissivity of the Culebra is low (< 4 × 10⁻⁶ m²/s), we conceptualize the Culebra as a single-porosity medium in which advection occurs largely through the primary porosity of the dolomite matrix. At locations where the transmissivity of the Culebra is high (> 4 × 10⁻⁶ m²/s), we conceptualize the Culebra as a heterogeneous, layered, fractured medium in which advection occurs largely through fractures and solutes diffuse between fractures and matrix at multiple rates. The variations in diffusion rate can be attributed to both variations in fracture spacing (or the spacing of advective pathways) and matrix heterogeneity. Flow and transport appear to be concentrated in the lower Culebra. At all locations, diffusion is the dominant transport process in the portions of the matrix that tracer does not access by flow.
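The multirate-diffusion picture can be made concrete with the standard scaling t ≈ a²/D for the characteristic time a solute takes to diffuse a half-spacing a from a fracture into the matrix; the parameter values below are hypothetical illustrations, not WIPP results:

```python
# Characteristic matrix-diffusion times for a range of fracture spacings.
# D_matrix and the half-spacings are assumed values for illustration only.

D_MATRIX = 1e-10   # m^2/s, assumed effective matrix diffusion coefficient
SECONDS_PER_YEAR = 86400 * 365.25

for half_spacing in (0.01, 0.05, 0.25):   # m, assumed fracture half-spacings
    t = half_spacing**2 / D_MATRIX
    print(f"a = {half_spacing} m  ->  t ~ {t / SECONDS_PER_YEAR:6.2f} years")
```

The quadratic dependence on spacing is why variations in fracture spacing alone produce the wide spread of diffusion rates invoked above.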
The Transportation Surety Center, 6300, has been conducting continuing research into and development of information systems for the Configurable Transportation Security and Information Management System (CTSS) project. CTSS takes an object-oriented framework approach that uses component-based software development to facilitate rapid deployment of new systems while improving software cost containment, development reliability, compatibility, and extensibility. The direction has been to develop a Fleet Management System (FMS) framework using object-oriented technology. The goal for the current development is to provide a software and hardware environment that will demonstrate and support object-oriented development in the FMS Central Command Center and Vehicle domains.
High-quality ultrathin poly(diacetylene) (PDA) films were produced using a horizontal Langmuir deposition technique. The resultant films exhibit strong friction anisotropy that is correlated with the direction of the polymer backbone. Shear forces applied by atomic force microscopy (AFM) or near-field scanning optical microscopy (NSOM) tips locally induced the blue-to-red chromatic transition in the PDA films.
Interfacial Force Microscopy (IFM) is a scanning probe technique that employs a force-feedback sensor concept. This article discusses a few examples of IFM applications to polymer surfaces. Through these examples, the ability of IFM to obtain quantitative information on interfacial forces in a controllable manner is demonstrated.
This report focuses on Sandia National Laboratories' effort to create high-temperature logging tools for geothermal applications without the need for heat shielding. Temperature is one of the principal failure mechanisms for conventional downhole tools, which can survive only a limited number of hours in high-temperature environments. For the first time since the evolution of integrated circuits, components are now commercially available that are qualified to 225 °C, with many continuing to work up to 300 °C. These components are primarily based on Silicon-On-Insulator (SOI) technology. Sandia has developed and tested a simple data logger based on this technology that operates up to 300 °C without thermal protection, with a few limiting components operating only to 250 °C. An actual well log to 240 °C without shielding is discussed. The first prototype high-temperature tool measures pressure and temperature using a wire-line for power and communication. The tool is based around the HT83C51 microcontroller. A brief discussion of the background and status of the High Temperature Instrumentation program at Sandia, its objectives, data logger development, and future project plans is given.
Combinatorial chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development, and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial-time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in this literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
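A simple member of the shortest-path index family is the Wiener index: the sum of shortest-path distances over all atom pairs in the hydrogen-suppressed molecular graph. The toy sketch below illustrates the idea only; it is not the OCOTILLO code:

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def wiener_index(adj):
    """Sum of shortest-path distances over all vertex pairs."""
    total = sum(sum(bfs_distances(adj, u).values()) for u in adj)
    return total // 2  # each pair counted twice

butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # C-C-C-C chain (H-suppressed)
print(wiener_index(butane))  # 10 for the 4-carbon chain
```

An inverse problem of the kind studied in the paper asks the reverse question: given a target index value, reconstruct a (chemical) graph that achieves it.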
The design of field emission displays is severely constrained by the universally poor cathodoluminescence (CL) efficiency of most phosphors at low excitation energies. As part of the effort to understand this phenomenon, the authors have measured the time decay of spectrally resolved, pulsed CL and photoluminescence (PL) in several phosphors activated by rare earth and transition metal impurities, including Y₂O₃:Eu, Y₂SiO₅:Tb, and Zn₂SiO₄:Mn. Activator concentrations ranged from ~0.25 to 10%. The CL decay curves are always non-linear on a log(CL) versus linear(time) plot, i.e., they deviate from first-order decay kinetics. These deviations are always more pronounced at short times and larger activator concentrations and are largest at low beam energies, where the decay rates are noticeably faster. PL decay is always slower than that seen for CL, but these differences disappear after most of the excited species have decayed. The authors have also measured the dependence of steady-state CL efficiency on beam energy. They find that larger activator concentrations accelerate the drop in CL efficiency seen at low beam energies. These effects are largest for the activators that interact more strongly with the host lattice. While activator-activator interactions are known to limit PL and CL efficiency in most phosphors, the present data suggest that a more insidious version of this mechanism is partly responsible for poor CL efficiency at low beam energies. This enhanced concentration quenching is due to the interaction of nearby excited activators. These interactions can lead to non-radiative activator decay and hence lower steady-state CL efficiency. Excited-state clustering, which may be caused by the large energy-loss rate of low-energy primary electrons, appears to enhance these interactions. In support of this idea, they find that PL decays obtained at high laser pulse energies resemble the non-linear decays seen in the CL data.
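The qualitative signature described, curvature in log(CL) versus time that grows with activator concentration, is what generic quenching models produce. The sketch below contrasts first-order decay with a Förster-form quenched decay, I(t) = I0·exp(-t/tau - gamma·sqrt(t)); the model and parameter values are textbook illustrations, not fits to the data reported here:

```python
import numpy as np

TAU = 1.0e-3        # s, assumed radiative lifetime
GAMMA = 40.0        # s^-0.5, quenching strength; grows with concentration

t = np.linspace(1e-6, 3e-3, 7)
first_order = np.exp(-t / TAU)                      # straight on log-linear
quenched = np.exp(-t / TAU - GAMMA * np.sqrt(t))    # bends at short times

for ti, a, b in zip(t, first_order, quenched):
    print(f"t={ti:.1e}s  log I: first-order {np.log(a):7.3f}"
          f"  quenched {np.log(b):7.3f}")
```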
Studies of the influences of temperature, hydrostatic pressure, dc biasing field, and frequency on the dielectric constant (ε′) and loss (tan δ) of single crystal 0.905Pb(Zn₁/₃Nb₂/₃)O₃-0.095PbTiO₃, or PZN-9.5PT for short, have provided a detailed view of the ferroelectric (FE) response and phase transitions of this technologically important material. While at 1 bar the crystal exhibits on cooling a cubic-to-tetragonal FE transition followed by a second transition to a rhombohedral phase, pressure induces an FE-to-relaxor crossover, the relaxor phase becoming the ground state at pressures ≥5 kbar. Analogy with earlier results suggests that this crossover is a common feature of compositionally disordered soft-mode ferroelectrics and can be understood in terms of a decrease in the correlation length among polar domains with increasing pressure. Application of a dc biasing electric field at 1 bar strengthens FE correlations and can, at high pressure, re-stabilize the FE response. The pressure-temperature-electric field phase diagram was established. In the absence of dc bias the tetragonal phase vanishes at high pressure, the crystal exhibiting classic relaxor behavior. The dynamics of dipolar motion and the strong deviation from Curie-Weiss behavior of the susceptibility in the high-temperature cubic phase are discussed.
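Deviations from Curie-Weiss behavior of the kind noted are conventionally quantified with the modified law 1/ε′ - 1/ε′_max = (T - T_max)^γ/C, where γ = 1 recovers Curie-Weiss behavior and γ ≈ 2 marks classic relaxor response. The sketch below recovers γ from synthetic data; all values are illustrative only, not measurements from this study:

```python
import numpy as np

# Synthetic susceptibility data following the modified Curie-Weiss law
T = np.linspace(460.0, 560.0, 50)                  # K, hypothetical range
T_MAX, EPS_MAX, C, GAMMA_TRUE = 450.0, 3.0e4, 2.0e7, 1.8
eps = 1.0 / (1.0 / EPS_MAX + (T - T_MAX) ** GAMMA_TRUE / C)

# gamma is the slope of log(1/eps - 1/eps_max) versus log(T - T_max)
y = np.log(1.0 / eps - 1.0 / EPS_MAX)
x = np.log(T - T_MAX)
gamma_fit, _ = np.polyfit(x, y, 1)
print(f"fitted gamma = {gamma_fit:.2f}")           # ~1.8: strong CW deviation
```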