Charge collection by capacitive influence through isolation oxides
Proposed for publication in IEEE Transactions on Nuclear Science.
Vizkelethy, Gyorgy; Schwank, James R.; Shaneyfelt, Marty R.
This paper analyzes the charge collected in heavy-ion-irradiated MOS structures. The charge generated in the substrate induces a displacement effect that depends strongly on the capacitor structure. Networks of capacitors are particularly sensitive to charge-sharing effects. This has important implications for the reliability of SOI devices and DRAMs, which use isolation oxides as a key elementary structure. The buried oxide of present-day and future SOI technologies is thick enough to avoid significant collection from displacement effects. On the other hand, the retention capacitors of trench DRAMs are particularly sensitive to charge released in the substrate. Charge collection on the retention capacitors contributes to the multiple-bit upset (MBU) sensitivity of DRAMs.
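As an illustrative sketch only (not taken from the paper), the charge induced on an electrode by carriers released in the substrate can be pictured with a simple capacitive-divider argument; the symbols C_ox, C_sub, and Q_dep are assumed names for the isolation-oxide capacitance, the substrate (depletion) capacitance, and the deposited charge.

    % Induced (displacement) charge on the electrode above an isolation oxide,
    % treating the oxide and the underlying substrate as two capacitors in series:
    Q_{\mathrm{node}} \;\approx\; Q_{\mathrm{dep}}\,\frac{C_{\mathrm{ox}}}{C_{\mathrm{ox}} + C_{\mathrm{sub}}},
    \qquad C_{\mathrm{ox}} = \frac{\varepsilon_{\mathrm{ox}} A}{t_{\mathrm{ox}}}
    % A thicker oxide (larger t_ox) lowers C_ox and therefore the induced charge,
    % consistent with thick buried oxides suppressing collection by displacement effects.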
Proposed for publication in Applied Physics Letters.
We report operation of a terahertz quantum-cascade laser at 3.8 THz (λ ≈ 79 µm) up to a heat-sink temperature of 137 K. A resonant phonon depopulation design was used with a low-loss metal-metal waveguide, which provided a confinement factor of nearly unity. A threshold current density of 625 A/cm² was obtained in pulsed mode at 5 K. Devices fabricated using a conventional semi-insulating surface-plasmon waveguide lased up to 92 K with a threshold current density of 670 A/cm² at 5 K.
Proposed for publication in IEEE Transactions on Nuclear Science.
Vizkelethy, Gyorgy; Dodd, Paul E.
This paper presents the first 3-D simulation of heavy-ion-induced charge collection in a SiGe HBT, together with microbeam testing data. The charge collected by the terminals is a strong function of the ion strike position. The sensitive area of charge collection for each terminal is identified from analysis of the device structure and simulation results. For a normal strike between the deep trench edges, most of the electrons and holes are collected by the collector and substrate terminals, respectively. For an ion strike between the shallow trench edges surrounding the emitter, the base collects an appreciable amount of charge, while the emitter collects a negligible amount. Good agreement is achieved between the experimental and simulated data. Problems encountered with mesh generation and charge collection simulation are also discussed.
Hipp, James R.; Simons, Randall W.; Jensen, Lee A.
Abstract not provided.
Seismic event location is made challenging by the difficulty of describing event location uncertainty in multiple dimensions, by the non-linearity of the Earth models used as input to the location algorithm, and by the presence of local minima which can prevent a location code from finding the global minimum. Techniques to deal with these issues will be described. Since some of these techniques are computationally expensive or require more analysis by human analysts, users need a flexible location code that allows them to select from a variety of solutions that span a range of computational efficiency and simplicity of interpretation. A new location code, LocOO, has been developed to deal with these issues. A seismic event location comprises a point in 4-dimensional (4D) space-time, surrounded by a 4D uncertainty boundary. The point location is useless without the uncertainty that accompanies it. While it is mathematically straightforward to reduce the dimensionality of the 4D uncertainty limits, the number of dimensions that should be retained depends on the dimensionality of the location to which the calculated event location is to be compared. In nuclear explosion monitoring, when an event is to be compared to a known or suspected test site location, the three spatial components of the test site and event location are to be compared and 3-dimensional uncertainty boundaries should be considered. With LocOO, users can specify a location to which the calculated seismic event location is to be compared, and the dimensionality of the uncertainty is tailored to that of the location specified by the user. The code also calculates the probability that the two locations in fact coincide. The non-linear travel time curves that constrain calculated event locations present two basic difficulties. The first is that the non-linearity can cause least squares inversion techniques to fail to converge. LocOO implements a nonlinear Levenberg-Marquardt least squares inversion technique that is guaranteed to converge in a finite number of iterations for tractable problems. The second difficulty is that a high degree of non-linearity causes the uncertainty boundaries around the event location to deviate significantly from elliptical shapes. LocOO can optionally calculate and display non-elliptical uncertainty boundaries at the cost of a minimal increase in computation time and complexity of interpretation. All location codes are plagued by the possibility of local minima obscuring the single global minimum. No code can guarantee that it will find the global minimum in a finite number of computations. Grid search algorithms have been developed to deal with this problem, but have a high computational cost. To improve the likelihood of finding the global minimum in a timely manner, LocOO implements a hybrid least squares-grid search algorithm. Essentially, many least squares solutions are computed starting from a user-specified number of initial locations, and the solution with the smallest sum of squared weighted residuals is assumed to be the optimal location. For events of particular interest, analysts can display contour plots of gridded residuals in a selected region around the best-fit location, improving the probability that the global minimum will not be missed and also providing much greater insight into the character and quality of the calculated solution.
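The hybrid least squares-grid search strategy described above can be illustrated with a short sketch (this is not LocOO code; the travel-time forward model, arrival data structure, and starting-location grid are placeholders, and SciPy's Levenberg-Marquardt solver stands in for LocOO's own implementation):

    import numpy as np
    from scipy.optimize import least_squares

    def weighted_residuals(loc, arrivals, travel_time, sigmas):
        # loc = (lat, lon, depth, origin_time); travel_time is a placeholder
        # forward model returning the predicted arrival time at one station.
        predicted = np.array([travel_time(loc, sta) for sta in arrivals["stations"]])
        return (arrivals["times"] - predicted) / sigmas

    def locate(arrivals, travel_time, sigmas, starting_locations):
        # Multi-start inversion: run a Levenberg-Marquardt least-squares solve
        # from each starting location and keep the solution with the smallest
        # sum of squared weighted residuals.
        best = None
        for x0 in starting_locations:
            fit = least_squares(weighted_residuals, x0, method="lm",
                                args=(arrivals, travel_time, sigmas))
            if best is None or fit.cost < best.cost:
                best = fit
        return best.x, best.cost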
Young, Christopher J.; Herrington, Preston B.; Harris, James M.
Abstract not provided.
Merchant, Bion J.; Chael, Eric P.; Hart, Darren M.; Young, Christopher J.
To improve the nuclear event monitoring capability of the U.S., the NNSA Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEM R&E) program has been developing a collection of products known as the Knowledge Base (KB). Though much of the focus for the KB has been on the development of calibration data, we have also developed numerous software tools for various purposes. The Matlab-based MatSeis package and the associated suite of regional seismic analysis tools were developed to aid in the testing and evaluation of some Knowledge Base products for which existing applications were either not available or ill-suited. This presentation will provide brief overviews of MatSeis and each of the tools, emphasizing features added in the last year. MatSeis was begun in 1996 and is now a fairly mature product. It is a highly flexible seismic analysis package that provides interfaces to read data from either flatfiles or an Oracle database. All of the standard seismic analysis tasks are supported (e.g., filtering, 3-component rotation, phase picking, event location, magnitude calculation), as well as a variety of array processing algorithms (beaming, FK, coherency analysis, vespagrams). The simplicity of Matlab coding and the tremendous number of available functions make MatSeis/Matlab an ideal environment for developing new monitoring research tools (see the regional seismic analysis tools below). New MatSeis features include: addition of evid information to events in MatSeis, options to screen picks by author, input and output of origerr information, improved performance in reading flatfiles, improved speed in FK calculations, and significant improvements to Measure Tool (filtering, multiple phase display), Free Plot (filtering, phase display and alignment), Mag Tool (maximum likelihood options), and Infra Tool (improved calculation speed, display of an F statistic stream). Work on the regional seismic analysis tools (CodaMag, EventID, PhaseMatch, and Dendro) began in 1999, and the tools vary in their level of maturity. All rely on MatSeis to provide necessary data (waveforms, arrivals, origins, and travel time curves). CodaMag Tool implements magnitude calculation by scaling to fit the envelope shape of the coda for a selected phase type (Mayeda, 1993; Mayeda and Walter, 1996). New tool features include: calculation of a yield estimate based on the source spectrum, display of a filtered version of the seismogram based on the selected band, and the output of codamag data records for processed events. EventID Tool implements event discrimination using phase ratios of regional arrivals (Hartse et al., 1997; Walter et al., 1999). New features include: bandpass filtering of displayed waveforms, screening of reference events based on SNR, multivariate discriminants, use of libcgi to access correction surfaces, and the output of discrim_data records for processed events. PhaseMatch Tool implements match filtering to isolate surface waves (Herrin and Goforth, 1977). New features include: display of the signal's observed dispersion and an option to use a station-based dispersion surface. Dendro Tool implements agglomerative hierarchical clustering using dendrograms to identify similar events based on waveform correlation (Everitt, 1993). New features include: modifications to include arrival information within the tool, and the capability to automatically add/re-pick arrivals based on the picked arrivals for similar events.
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state of the art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take to be the well-known disjunctive graph distance [MBK99].
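A minimal sketch of the generic ILS loop the abstract describes (this is not the authors' I-JAR implementation; local_search, perturb, and cost are placeholder functions supplied by the user):

    def iterated_local_search(initial, local_search, perturb, cost, iterations=1000):
        # Phase 1: greedy descent with the small-step operator to a local optimum.
        current = local_search(initial)
        best = current
        for _ in range(iterations):
            # Phase 2: large-step perturbation (in I-JAR, a short random walk through
            # less-fit neighbors), followed by another descent.
            candidate = local_search(perturb(current))
            if cost(candidate) <= cost(current):   # simple acceptance criterion
                current = candidate
            if cost(current) < cost(best):
                best = current
        return best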
Sumali, Hartono (Anton); Walsh, Timothy W.
Abstract not provided.
Tikare, Veena; Braginsky, Michael V.; Garino, Terry J.; Arguello, Jose G.
Sintering is one of the oldest processes used by man to manufacture materials, dating as far back as 12,000 BC. While it is an ancient process, it is also necessary for many modern technologies such as multilayered ceramic packages, wireless communication devices, and many others. The process consists of thermally treating a powder or compact at a temperature below the melting point of the main constituent, for the purpose of increasing its strength by bonding the particles together. During sintering, the individual particles bond, the pore space between particles is eliminated, the resulting component can shrink by as much as 30 to 50% by volume, and its shape can distort considerably. Being able to control and predict the shrinkage and shape distortions during sintering has been the goal of much research in materials science, and it has been achieved to varying degrees by many researchers. The objective of this project was to develop models that could simulate sintering at the mesoscale and at the macroscale to more accurately predict the overall shrinkage and shape distortions in engineering components. The mesoscale model simulates microstructural evolution during sintering by modeling grain growth, pore migration and coarsening, and vacancy formation, diffusion, and annihilation. In addition to studying microstructure, these simulations can be used to generate the constitutive equations describing shrinkage and deformation during sintering. These constitutive equations are used by continuum finite element simulations to predict the overall shrinkage and shape distortions of a sintering crystalline powder compact. Both models will be presented. Application of these models to study sintering will be demonstrated and discussed. Finally, the limitations of these models will be reviewed.
Scott, Marion W.; Walsh, Steven T.; Sumpter, Carol W.
Abstract not provided.
Bouchard, Ann M.; Osbourn, Gordon C.
We describe stochastic agent-based simulations of protein-emulating agents that perform computation via dynamic self-assembly. The binding and actuation properties of the types of agents required to construct a RAM machine (equivalent to a Turing machine) are described. We present an example computation and describe the molecular biology, non-equilibrium statistical mechanics, and information science properties of this system.
Proposed for publication in Topics in Catalysis.
Abstract not provided.
Materials Research Society Symposium - Proceedings
Wang, Yifeng; Bryan, C.R.; Gao, Huizhen
Acid-base titration and metal sorption experiments were performed on both mesoporous alumina and alumina particles under various ionic strengths. It has been demonstrated that surface chemistry and ion sorption within nanopores can be significantly modified by nano-scale space confinement. As the pore size is reduced to a few nanometers, the difference between surface acidity constants (ΔpK = pK2 - pK1) decreases, giving rise to a higher surface charge density on a nanopore surface than on an unconfined solid-solution interface. The change in surface acidity constants results in a shift of ion sorption edges and enhances ion sorption on nanopore surfaces.
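For reference, the generic two-site protonation (2-pK) description of an oxide surface, which is the context for the ΔpK quoted above (a textbook formulation, not data or notation taken from this work):

    % Surface protonation equilibria for an alumina-like surface site >AlOH:
    \mathrm{>\!AlOH_2^+ \;\rightleftharpoons\; >\!AlOH + H^+} \qquad (pK_1)
    \mathrm{>\!AlOH \;\rightleftharpoons\; >\!AlO^- + H^+} \qquad (pK_2)
    % Point of zero charge and acidity-constant separation:
    \mathrm{pH_{pzc}} = \tfrac{1}{2}(pK_1 + pK_2), \qquad \Delta pK = pK_2 - pK_1
    % A smaller \Delta pK leaves a larger fraction of charged sites at a given pH
    % offset from pH_pzc, i.e., a higher surface charge density, as reported here
    % for nanopore-confined surfaces.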
A Simple PolyUrethane Foam (SPUF) mass loss and response model has been developed to predict the behavior of unconfined, rigid, closed-cell, polyurethane foam-filled systems exposed to fire-like heat fluxes. The model, developed for the B61 and W80-0/1 fireset foam, is based on a simple two-step mass loss mechanism using distributed reaction rates. The initial reaction step assumes that the foam degrades into a primary gas and a reactive solid. The reactive solid subsequently degrades into a secondary gas. The SPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE [1] and CALORE [2], which support chemical kinetics and dynamic enclosure radiation using 'element death.' A discretization bias correction model was parameterized using elements with characteristic lengths ranging from 1 mm to 1 cm. Bias-corrected solutions using the SPUF response model with large elements gave essentially the same results as grid-independent solutions using 100-µm elements. The SPUF discretization bias correction model can be used with 2D regular quadrilateral elements, 2D paved quadrilateral elements, 2D triangular elements, 3D regular hexahedral elements, 3D paved hexahedral elements, and 3D tetrahedral elements. Several factors affecting the efficient recalculation of view factors were studied: the element aspect ratio, the element death criterion, and a 'zombie' criterion. Most of the solutions using irregular, large elements were in agreement with the 100-µm grid-independent solutions. The discretization bias correction model did not perform as well when the element aspect ratio exceeded 5:1 and the heated surface was on the shorter side of the element. For validation, SPUF predictions using various sizes and types of elements were compared to component-scale experiments in which foam cylinders were heated with lamps. The SPUF predictions of the decomposition front locations were compared to the front locations determined from real-time X-rays. SPUF predictions of the 19 radiant heat experiments were also compared to predictions of a more complex chemistry model (CPUF) made with 1-mm elements. The SPUF predictions of the front locations were closer to the measured front locations than the CPUF predictions, reflecting the more accurate SPUF prediction of mass loss. Furthermore, the computational time for the SPUF predictions was an order of magnitude less than for the CPUF predictions.
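The two-step mass-loss mechanism can be pictured with a simple kinetic sketch (illustrative only; the rate constants, temperature, and stoichiometric split are placeholders, and the distributed reaction rates used in SPUF are collapsed here to single Arrhenius rates):

    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314  # J/(mol K)

    def rates(t, y, T, A1, E1, A2, E2):
        # Step 1: foam -> primary gas + reactive solid (k1)
        # Step 2: reactive solid -> secondary gas (k2)
        foam, solid = y
        k1 = A1 * np.exp(-E1 / (R * T))
        k2 = A2 * np.exp(-E2 / (R * T))
        dfoam = -k1 * foam
        dsolid = 0.5 * k1 * foam - k2 * solid   # assumed 50/50 split into gas and solid
        return [dfoam, dsolid]

    # Isothermal example at an assumed temperature with placeholder kinetic parameters
    sol = solve_ivp(rates, (0.0, 600.0), [1.0, 0.0],
                    args=(700.0, 1.0e12, 1.8e5, 1.0e10, 1.6e5), dense_output=True)
    condensed_mass = sol.y[0] + sol.y[1]   # remaining condensed-phase mass fraction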
Ackermann, Mark R.; Drummond, Timothy J.; Wilcoxon, Jess P.
Presented within this report are the results of a brief examination of optical tagging technologies funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. The work was performed during the summer months of 2002 with total funding of $65k. The intent of the project was to briefly examine a broad range of approaches to optical tagging, concentrating on the wavelength range between the ultraviolet (UV) and the short-wavelength infrared (SWIR, λ < 2 µm). Tagging approaches considered include simple combinations of reflective and absorptive materials closely spaced in wavelength to give a high contrast over a short range of wavelengths, rare-earth oxides in transparent binders to produce a narrow absorption line hyperspectral tag, and fluorescing materials such as phosphors, dyes, and chemically precipitated particles. One technical approach examined in slightly greater detail was the use of fluorescing nanoparticles of metals and semiconductor materials. The idea was to embed such nanoparticles in an oily film or transparent paint binder. When pumped with a SWIR laser such as that produced by laser diodes at λ = 1.54 µm, the particles would fluoresce at slightly longer wavelengths, thereby giving a unique signal. While it is believed that optical tags are important for military, intelligence, and even law enforcement applications, as a business area tags do not appear to represent a high return on investment. Other government agencies frequently shop for existing or mature tag technologies but rarely are interested enough to pay for development of an untried technical approach. It was hoped that through a relatively small investment of laboratory R&D funds, enough technologies could be identified that a potential customer's requirements could be met with a minimum of additional development work. Only time will tell if this proves to be correct.
Hobbs, Michael L.; Erickson, Kenneth L.; Chu, Tze Y.; Borek, Theodore T.; Thompson, Kyle; Dowding, Kevin J.
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e., the vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite elements decreased below a set criterion. Element removal, referred to as 'element death,' creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement, and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.
Arris, Howard W.; Trujillo, Manuel O.; Sanchez, Robert O.
Sandia National Laboratories has been encapsulating magnetic components for over 40 years. The reliability of magnetic component assemblies that must withstand a variety of environments and then function correctly depends on the use of appropriate encapsulating formulations. Specially developed formulations are critical and enable us to provide high-reliability magnetic components. This paper discusses epoxy, urethane, and silicone formulations for several of our magnetic components.
Lockwood, Steven J.; Wright, Emily D.; Voigt, James A.; Sipola, Diana L.
Niobium-doped PZT 95/5 (lead zirconate-lead titanate) is the material used in voltage bars for all ferroelectric neutron generator power supplies. In June 1999, the transfer and scale-up of the Sandia Process from Department 1846 to Department 14192 were initiated. The laboratory-scale process of 1.6 kg has been successfully scaled to a production batch quantity of 10 kg. This report documents efforts to characterize and optimize the production-scale process using Design of Experiments methodology. Of the 34 factors identified in the powder preparation sub-process, 11 were initially selected for the screening design. Additional experiments and safety analysis subsequently reduced the screening design to six factors. Three of the six factors (Milling Time, Media Size, and Pyrolysis Air Flow) were identified as statistically significant for one or more responses and were further investigated through a full factorial interaction design. Analysis of the interaction design resulted in the development of models for Powder Bulk Density, Powder Tap Density, and +20 Mesh Fraction. Subsequent batches validated the models. The initial baseline powder preparation conditions were modified, improving powder yield by significantly reducing the +20 mesh waste fraction. Response variation analysis indicated that additional investigation of the powder preparation sub-process steps was necessary to identify and reduce the sources of variation and further optimize the process.
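The full factorial interaction modeling step can be illustrated with a small generic sketch (not the analysis performed in this work; the coded factor levels follow the three significant factors named above, and the response values are placeholders):

    import numpy as np
    from itertools import product

    # Coded (-1/+1) 2^3 full factorial in Milling Time, Media Size, Pyrolysis Air Flow
    X = np.array(list(product([-1, 1], repeat=3)), dtype=float)

    # Placeholder response values (e.g., Powder Bulk Density) for the eight runs
    y = np.array([1.10, 1.18, 1.05, 1.22, 1.12, 1.25, 1.08, 1.30])

    # Design matrix: intercept, main effects, and two-factor interactions
    A = np.column_stack([np.ones(8), X,
                         X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    # coef[1:4] are the main-effect coefficients; coef[4:] are the interaction terms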
Rutherford, Brian; Dowding, Kevin J.
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas, including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics, and statistics literatures. In this report we provide a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high-temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction development efforts on this specific thermal application. We discuss several elements of the 'philosophy' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a 'model supplement term' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes of these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 °C, modeling the decomposition is critical to assessing a weapon's response.
In the validation analysis it is indicated that the model tends to 'exaggerate' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for the apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and corresponding uncertainty assessments demonstrate the flexibility of this approach.
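A minimal sketch of the 'model supplement term' idea described above (illustrative only; the simulator, validation data, and the linear-in-temperature form of the bias term are assumptions, not the report's actual formulation):

    import numpy as np

    def fit_supplement(temps, measured, simulated):
        # Fit a simple bias-correction (model supplement) term in temperature from
        # validation residuals; return its coefficients and the residual spread.
        residuals = measured - simulated
        coeffs = np.polyfit(temps, residuals, 1)   # assumed linear bias in temperature
        spread = np.std(residuals - np.polyval(coeffs, temps), ddof=1)
        return coeffs, spread

    def predict_with_supplement(model, coeffs, spread, temp):
        # Model-based prediction corrected by the supplement term, with a rough
        # +/- 2-sigma band from the validation residual spread.
        corrected = model(temp) + np.polyval(coeffs, temp)
        return corrected, (corrected - 2 * spread, corrected + 2 * spread)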
Weiner, Ruth F.; Kanipe, Frances L.
This User Guide for the RADTRAN 5 computer code for transportation risk analysis describes basic risk concepts and provides the user with step-by-step directions for creating input files by means of either the RADDOG input file generator software or a text editor. It also contains information on how to interpret RADTRAN 5 output, how to obtain and use several types of important input data, and how to select appropriate analysis methods. Appendices include a glossary of terms, a listing of error messages, data-plotting information, images of RADDOG screens, and a table of all data in the internal radionuclide library.
Bickel, Douglas L.; Graham, Robert H.; Hensley, William H.
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to 'demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies.' This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
Proposed for publication in Acta Materialia, 50th Anniversary.
Romig Jr., Alton D.; Dugger, Michael T.
Abstract not provided.
Okandan, Murat; James, Conrad D.; Mani, Seethambal; Draper, Bruce L.
Fast and quantitative analysis of cellular activity, signaling, and responses to external stimuli is a crucial capability, and it has been the goal of several projects focusing on patch clamp measurements. To provide the maximum functionality and measurement options, we have developed a patch clamp array device that incorporates on-chip electronics, mechanical, optical, and microfluidic coupling, as well as cell localization through fluid flow. The preliminary design, which integrated microfluidics, electrodes, and optical access, was fabricated and tested. In addition, new designs that further combine mechanical actuation, on-chip electronics, and various electrode materials with the previous designs are currently being fabricated.
Glaser, Ronald F.; Pritchard, Daniel
Abstract not provided.