This paper analyzes the collected charge in heavy-ion-irradiated MOS structures. The charge generated in the substrate induces a displacement effect that strongly depends on the capacitor structure. Networks of capacitors are particularly sensitive to charge-sharing effects. This has important implications for the reliability of SOI and DRAM technologies, which use isolation oxides as a key elementary structure. The buried oxide of present-day and future SOI technologies is thick enough to avoid significant collection from displacement effects. On the other hand, the retention capacitors of trench DRAMs are particularly sensitive to charge release in the substrate. Charge collection on retention capacitors contributes to the MBU sensitivity of DRAMs.
We report operation of a terahertz quantum-cascade laser at 3.8 THz (λ ≈ 79 µm) up to a heat-sink temperature of 137 K. A resonant phonon depopulation design was used with a low-loss metal-metal waveguide, which provided a confinement factor of nearly unity. A threshold current density of 625 A/cm² was obtained in pulsed mode at 5 K. Devices fabricated using a conventional semi-insulating surface-plasmon waveguide lased up to 92 K with a threshold current density of 670 A/cm² at 5 K.
This paper presents the first 3-D simulation of heavy-ion induced charge collection in a SiGe HBT, together with microbeam testing data. The charge collected by the terminals is a strong function of the ion strike position. The sensitive area of charge collection for each terminal is identified based on analysis of the device structure and simulation results. For a normal strike between the deep trench edges, most of the electrons and holes are collected by the collector and substrate terminals, respectively. For an ion strike between the shallow trench edges surrounding the emitter, the base collects an appreciable amount of charge, while the emitter collects a negligible amount. Good agreement is achieved between the experimental and simulated data. Problems encountered with mesh generation and charge collection simulation are also discussed.
Seismic event location is made challenging by the difficulty of describing event location uncertainty in multiple dimensions, by the nonlinearity of the Earth models used as input to the location algorithm, and by the presence of local minima that can prevent a location code from finding the global minimum. Techniques to deal with these issues will be described. Since some of these techniques are computationally expensive or require more analysis by human analysts, users need a flexible location code that allows them to select from a variety of solutions that span a range of computational efficiency and simplicity of interpretation. A new location code, LocOO, has been developed to deal with these issues. A seismic event location consists of a point in 4-dimensional (4D) space-time, surrounded by a 4D uncertainty boundary. The point location is useless without the uncertainty that accompanies it. While it is mathematically straightforward to reduce the dimensionality of the 4D uncertainty limits, the number of dimensions that should be retained depends on the dimensionality of the location to which the calculated event location is to be compared. In nuclear explosion monitoring, when an event is to be compared to a known or suspected test site location, the three spatial components of the test site and event locations are compared and three-dimensional uncertainty boundaries should be considered. With LocOO, users can specify a location to which the calculated seismic event location is to be compared, and the dimensionality of the uncertainty is tailored to that of the specified location. The code also calculates the probability that the two locations in fact coincide. The nonlinear travel time curves that constrain calculated event locations present two basic difficulties. The first is that the nonlinearity can cause least squares inversion techniques to fail to converge. LocOO implements a nonlinear Levenberg-Marquardt least squares inversion technique that is guaranteed to converge in a finite number of iterations for tractable problems. The second difficulty is that a high degree of nonlinearity causes the uncertainty boundaries around the event location to deviate significantly from elliptical shapes. LocOO can optionally calculate and display non-elliptical uncertainty boundaries at the cost of a minimal increase in computation time and complexity of interpretation. All location codes are plagued by the possibility of having local minima obscuring the single global minimum. No code can guarantee that it will find the global minimum in a finite number of computations. Grid search algorithms have been developed to deal with this problem, but have a high computational cost. In order to improve the likelihood of finding the global minimum in a timely manner, LocOO implements a hybrid least-squares/grid-search algorithm. Essentially, many least squares solutions are computed starting from a user-specified number of initial locations, and the solution with the smallest sum of squared weighted residuals is taken as the optimal location. For events of particular interest, analysts can display contour plots of gridded residuals in a selected region around the best-fit location, improving the probability that the global minimum will not be missed and also providing much greater insight into the character and quality of the calculated solution.
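As an illustration of the hybrid strategy described above (not the LocOO implementation itself), the following Python sketch runs a Levenberg-Marquardt least-squares inversion from several starting locations and keeps the solution with the smallest sum of squared weighted residuals; the station geometry, arrival picks, uniform velocity model, and 2-D-plus-time parameterization are simplifying assumptions made for the example only.

    import numpy as np
    from scipy.optimize import least_squares

    # Assumed station coordinates (km), arrival-time picks (s), and pick uncertainties (s).
    stations = np.array([[0.0, 100.0], [80.0, -20.0], [-60.0, 50.0]])
    obs_times = np.array([13.1, 9.8, 11.4])
    sigma = np.array([0.5, 0.4, 0.6])
    v = 8.0  # uniform velocity (km/s) standing in for a real travel-time model

    def weighted_residuals(x):
        # x = (easting, northing, origin time); residuals are weighted by pick uncertainty.
        dist = np.hypot(stations[:, 0] - x[0], stations[:, 1] - x[1])
        return (obs_times - (x[2] + dist / v)) / sigma

    # Hybrid search: Levenberg-Marquardt inversions from several initial locations,
    # keeping the solution with the smallest sum of squared weighted residuals.
    starts = [np.array([0.0, 0.0, 0.0]), np.array([50.0, 50.0, 5.0]), np.array([-50.0, 30.0, 2.0])]
    fits = [least_squares(weighted_residuals, s0, method="lm") for s0 in starts]
    best = min(fits, key=lambda f: np.sum(f.fun ** 2))
    print("best-fit location (x, y, t0):", best.x)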
To improve the nuclear event monitoring capability of the U.S., the NNSA Ground-based Nuclear Explosion Monitoring Research & Engineering (GNEM R&E) program has been developing a collection of products known as the Knowledge Base (KB). Though much of the focus for the KB has been on the development of calibration data, we have also developed numerous software tools for various purposes. The Matlab-based MatSeis package and the associated suite of regional seismic analysis tools were developed to aid in the testing and evaluation of some Knowledge Base products for which existing applications were either unavailable or ill-suited. This presentation will provide brief overviews of MatSeis and each of the tools, emphasizing features added in the last year. MatSeis development began in 1996, and the package is now fairly mature. It is a highly flexible seismic analysis package that provides interfaces to read data from either flatfiles or an Oracle database. All of the standard seismic analysis tasks are supported (e.g., filtering, 3-component rotation, phase picking, event location, magnitude calculation), as well as a variety of array processing algorithms (beaming, FK, coherency analysis, vespagrams). The simplicity of Matlab coding and the tremendous number of available functions make MatSeis/Matlab an ideal environment for developing new monitoring research tools (see the regional seismic analysis tools below). New MatSeis features include: addition of evid information to events in MatSeis, options to screen picks by author, input and output of origerr information, improved performance in reading flatfiles, improved speed in FK calculations, and significant improvements to Measure Tool (filtering, multiple phase display), Free Plot (filtering, phase display and alignment), Mag Tool (maximum likelihood options), and Infra Tool (improved calculation speed, display of an F-statistic stream). Work on the regional seismic analysis tools (CodaMag, EventID, PhaseMatch, and Dendro) began in 1999, and the tools vary in their level of maturity. All rely on MatSeis to provide necessary data (waveforms, arrivals, origins, and travel time curves). CodaMag Tool implements magnitude calculation by scaling to fit the envelope shape of the coda for a selected phase type (Mayeda, 1993; Mayeda and Walter, 1996). New tool features include: calculation of a yield estimate based on the source spectrum, display of a filtered version of the seismogram based on the selected band, and the output of codamag data records for processed events. EventID Tool implements event discrimination using phase ratios of regional arrivals (Hartse et al., 1997; Walter et al., 1999). New features include: bandpass filtering of displayed waveforms, screening of reference events based on SNR, multivariate discriminants, use of libcgi to access correction surfaces, and the output of discrim_data records for processed events. PhaseMatch Tool implements match filtering to isolate surface waves (Herrin and Goforth, 1977). New features include: display of the signal's observed dispersion and an option to use a station-based dispersion surface. Dendro Tool implements agglomerative hierarchical clustering using dendrograms to identify similar events based on waveform correlation (Everitt, 1993). New features include: modifications to include arrival information within the tool, and the capability to automatically add/re-pick arrivals based on the picked arrivals for similar events.
Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state of the art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak, and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
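A minimal Python sketch of the generic ILS loop described above follows; it is not the authors' I-JAR code, and the cost and neighborhood functions are problem-specific placeholders that a user would supply (for the JSP they would encode schedules and the N1 move operator).

    import random

    def local_search(s, cost, neighbors):
        # Greedy descent: move to a better neighbor until none exists (a local optimum).
        while True:
            better = [n for n in neighbors(s) if cost(n) < cost(s)]
            if not better:
                return s
            s = min(better, key=cost)

    def iterated_local_search(s0, cost, neighbors, perturb_len=4, iters=100, seed=0):
        rng = random.Random(seed)
        best = current = local_search(s0, cost, neighbors)
        for _ in range(iters):
            # Large-step operator: a short random walk through (possibly less-fit)
            # neighbors, which the paper argues suffices to escape weak attractor basins.
            s = current
            for _ in range(perturb_len):
                s = rng.choice(list(neighbors(s)))
            candidate = local_search(s, cost, neighbors)
            if cost(candidate) <= cost(current):
                current = candidate
            if cost(current) < cost(best):
                best = current
        return best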
Sintering is one of the oldest processes used to manufacture materials, dating as far back as 12,000 BC. While it is an ancient process, it is also necessary for many modern technologies such as multilayered ceramic packages, wireless communication devices, and many others. The process consists of thermally treating a powder or compact at a temperature below the melting point of the main constituent, for the purpose of increasing its strength by bonding the particles together. During sintering, the individual particles bond, the pore space between particles is eliminated, the resulting component can shrink by as much as 30 to 50% by volume, and its shape can distort significantly. Being able to control and predict the shrinkage and shape distortions during sintering has been the goal of much research in materials science, and it has been achieved to varying degrees. The objective of this project was to develop models that could simulate sintering at the mesoscale and at the macroscale to more accurately predict the overall shrinkage and shape distortions in engineering components. The mesoscale model simulates microstructural evolution during sintering by modeling grain growth, pore migration and coarsening, and vacancy formation, diffusion, and annihilation. In addition to studying microstructure, these simulations can be used to generate the constitutive equations describing shrinkage and deformation during sintering. These constitutive equations are used by continuum finite element simulations to predict the overall shrinkage and shape distortions of a sintering crystalline powder compact. Both models will be presented. Application of these models to the study of sintering will be demonstrated and discussed. Finally, the limitations of these models will be reviewed.
We describe stochastic agent-based simulations of protein-emulating agents that perform computation via dynamic self-assembly. The binding and actuation properties of the types of agents required to construct a RAM machine (equivalent to a Turing machine) are described. We present an example computation and describe the molecular-biology, non-equilibrium statistical-mechanics, and information-science properties of this system.
Acid-base titration and metal sorption experiments were performed on both mesoporous alumina and alumina particles under various ionic strengths. It has been demonstrated that surface chemistry and ion sorption within nanopores can be significantly modified by nano-scale spatial confinement. As the pore size is reduced to a few nanometers, the difference between surface acidity constants (ΔpK = pK2 - pK1) decreases, giving rise to a higher surface charge density on a nanopore surface than on an unconfined solid-solution interface. The change in surface acidity constants results in a shift of ion sorption edges and enhances ion sorption on nanopore surfaces.
A Simple PolyUrethane Foam (SPUF) mass loss and response model has been developed to predict the behavior of unconfined, rigid, closed-cell, polyurethane foam-filled systems exposed to fire-like heat fluxes. The model, developed for the B61 and W80-0/1 fireset foam, is based on a simple two-step mass loss mechanism using distributed reaction rates. The initial reaction step assumes that the foam degrades into a primary gas and a reactive solid. The reactive solid subsequently degrades into a secondary gas. The SPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE [1] and CALORE [2], which support chemical kinetics and dynamic enclosure radiation using 'element death.' A discretization bias correction model was parameterized using elements with characteristic lengths ranging from 1 mm to 1 cm. Bias-corrected solutions using the SPUF response model with large elements gave essentially the same results as grid-independent solutions using 100-µm elements. The SPUF discretization bias correction model can be used with 2D regular quadrilateral elements, 2D paved quadrilateral elements, 2D triangular elements, 3D regular hexahedral elements, 3D paved hexahedral elements, and 3D tetrahedral elements. Several factors affecting the efficient recalculation of view factors were studied: the element aspect ratio, the element death criterion, and a 'zombie' criterion. Most of the solutions using irregular, large elements were in agreement with the 100-µm grid-independent solutions. The discretization bias correction model did not perform as well when the element aspect ratio exceeded 5:1 and the heated surface was on the shorter side of the element. For validation, SPUF predictions using various sizes and types of elements were compared to component-scale experiments of foam cylinders that were heated with lamps. The SPUF predictions of the decomposition front locations were compared to the front locations determined from real-time X-rays. SPUF predictions of the 19 radiant heat experiments were also compared to predictions from a more complex chemistry model (CPUF) made with 1-mm elements. The SPUF predictions of the front locations were closer to the measured front locations than the CPUF predictions, reflecting the more accurate SPUF prediction of mass loss. Furthermore, the computational time for the SPUF predictions was an order of magnitude less than for the CPUF predictions.
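To make the two-step mechanism concrete, the following Python sketch integrates a greatly simplified, isothermal version of it; single Arrhenius rates replace the distributed reaction rates of the actual SPUF model, and all kinetic parameters, the gas-yield split, and the temperature are illustrative assumptions rather than values from the report.

    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314                # J/mol-K
    A1, E1 = 1.0e13, 1.8e5   # 1/s, J/mol (assumed, step 1: foam -> primary gas + reactive solid)
    A2, E2 = 1.0e12, 2.0e5   # 1/s, J/mol (assumed, step 2: reactive solid -> secondary gas)
    y_solid = 0.6            # assumed fraction of foam converted to reactive solid

    def rates(t, m, T=800.0):        # isothermal temperature in K (assumed)
        m_foam, m_rs = m
        k1 = A1 * np.exp(-E1 / (R * T))
        k2 = A2 * np.exp(-E2 / (R * T))
        return [-k1 * m_foam, y_solid * k1 * m_foam - k2 * m_rs]

    sol = solve_ivp(rates, (0.0, 60.0), [1.0, 0.0], dense_output=True)
    t = np.linspace(0.0, 60.0, 7)
    condensed = sol.sol(t).sum(axis=0)   # remaining condensed-phase mass fraction
    print(np.round(condensed, 3))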
Presented within this report are the results of a brief examination of optical tagging technologies funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. The work was performed during the summer months of 2002 with total funding of $65k. The intent of the project was to briefly examine a broad range of approaches to optical tagging, concentrating on the wavelength range between the ultraviolet (UV) and the short-wavelength infrared (SWIR, λ < 2 µm). Tagging approaches considered include simple combinations of reflective and absorptive materials closely spaced in wavelength to give a high contrast over a short range of wavelengths, rare-earth oxides in transparent binders to produce a narrow absorption line hyperspectral tag, and fluorescing materials such as phosphors, dyes, and chemically precipitated particles. One technical approach examined in slightly greater detail was the use of fluorescing nanoparticles of metals and semiconductor materials. The idea was to embed such nanoparticles in an oily film or transparent paint binder. When pumped with a SWIR laser such as that produced by laser diodes at λ = 1.54 µm, the particles would fluoresce at slightly longer wavelengths, thereby giving a unique signal. While it is believed that optical tags are important for military, intelligence, and even law enforcement applications, as a business area tags do not appear to represent a high return on investment. Other government agencies frequently shop for existing or mature tag technologies but rarely are interested enough to pay for development of an untried technical approach. It was hoped that through a relatively small investment of laboratory R&D funds, enough technologies could be identified that a potential customer's requirements could be met with a minimum of additional development work. Only time will tell if this proves to be correct.
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e., the vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite elements decreased below a set criterion. Element removal, referred to as 'element death,' creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement, and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.
Sandia National Laboratories has been encapsulating magnetic components for over 40 years. The reliability of magnetic component assemblies that must withstand a variety of environments and then function correctly depends on the use of appropriate encapsulating formulations. Specially developed formulations are critical and enable us to provide high-reliability magnetic components. This paper discusses epoxy, urethane, and silicone formulations for several of our magnetic components.
Niobium-doped PZT 95/5 (lead zirconate-lead titanate) is the material used in voltage bars for all ferroelectric neutron generator power supplies. In June of 1999, the transfer and scale-up of the Sandia Process from Department 1846 to Department 14192 was initiated. The laboratory-scale process of 1.6 kg has been successfully scaled to a production batch quantity of 10 kg. This report documents efforts to characterize and optimize the production-scale process using Design of Experiments methodology. Of the 34 factors identified in the powder preparation sub-process, 11 were initially selected for the screening design. Additional experiments and safety analysis subsequently reduced the screening design to six factors. Three of the six factors (Milling Time, Media Size, and Pyrolysis Air Flow) were identified as statistically significant for one or more responses and were further investigated through a full factorial interaction design. Analysis of the interaction design resulted in models for Powder Bulk Density, Powder Tap Density, and +20 Mesh Fraction. Subsequent batches validated the models. The initial baseline powder preparation conditions were modified, improving powder yield by significantly reducing the +20 mesh waste fraction. Response variation analysis indicated that additional investigation of the powder preparation sub-process steps was necessary to identify and reduce the sources of variation and further optimize the process.
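For readers unfamiliar with the approach, the following Python sketch builds a two-level full factorial design in the three significant factors named above and fits main effects plus two-factor interactions by least squares; the coded levels, the simulated bulk-density responses, and the linear model form are illustrative assumptions, not data from the report.

    import itertools
    import numpy as np

    factors = ["MillingTime", "MediaSize", "PyrolysisAirFlow"]
    runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)  # 2^3 coded runs

    # Design matrix with intercept, main effects, and two-factor interactions.
    X = np.column_stack([
        np.ones(len(runs)), runs,
        runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2], runs[:, 1] * runs[:, 2],
    ])
    y = np.array([2.1, 2.4, 2.0, 2.6, 2.2, 2.7, 2.3, 3.0])  # assumed bulk-density responses

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["b0"] + factors + ["MT*MS", "MT*AF", "MS*AF"], np.round(coef, 3))))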
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas, including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics, and statistics literatures. In this report we provide a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the 'philosophy' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a 'model supplement term' when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response.
In the validation analysis it is indicated that the model tends to 'exaggerate' the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning what was needed for this aspect of the analysis. The resulting predictions and corresponding uncertainty assessment demonstrate the flexibility of this approach.
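The 'model supplement term' idea can be illustrated with a small Python sketch (this is not the report's statistical machinery): a bias correction is fit to validation residuals as a function of temperature and then added to subsequent model-based predictions; the data values and the linear form of the correction are invented for illustration.

    import numpy as np

    temps    = np.array([300.0, 400.0, 500.0, 600.0])   # C, assumed validation conditions
    observed = np.array([0.10, 0.30, 0.55, 0.70])       # assumed measured mass-loss fractions
    modelled = np.array([0.08, 0.33, 0.62, 0.82])       # assumed code predictions

    residuals = observed - modelled
    supplement = np.poly1d(np.polyfit(temps, residuals, 1))   # linear supplement term delta(T)

    def corrected_prediction(model_value, T):
        # Model-based prediction plus the bias-correcting supplement term.
        return model_value + supplement(T)

    print(corrected_prediction(0.75, 550.0))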
This User Guide for the RADTRAN 5 computer code for transportation risk analysis describes basic risk concepts and provides the user with step-by-step directions for creating input files by means of either the RADDOG input file generator software or a text editor. It also contains information on how to interpret RADTRAN 5 output, how to obtain and use several types of important input data, and how to select appropriate analysis methods. Appendices include a glossary of terms, a listing of error messages, data-plotting information, images of RADDOG screens, and a table of all data in the internal radionuclide library.
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to 'demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies.' This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real-time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
Fast and quantitative analysis of cellular activity, signaling, and responses to external stimuli is a crucial capability and has been the goal of several projects focusing on patch clamp measurements. To provide the maximum functionality and measurement options, we have developed a patch clamp array device that incorporates on-chip electronics, mechanical, optical, and microfluidic coupling, as well as cell localization through fluid flow. The preliminary design, which integrated microfluidics, electrodes, and optical access, was fabricated and tested. In addition, new designs that further combine mechanical actuation, on-chip electronics, and various electrode materials with the previous designs are currently being fabricated.
Silane adhesion promoters are commonly used to improve the adhesion, durability, and corrosion resistance of polymer-oxide interfaces. The current study investigates a model interface consisting of the natural oxide of (100) Si and an epoxy cured from diglycidyl ether of bisphenol A (DGEBA) and triethylenetetraamine (TETA). The thickness of (3-glycidoxypropyl)trimethoxysilane (GPS) films placed between the two materials provided the structural variable. Five surface treatments were investigated: a bare interface, a rough monolayer film, a smooth monolayer film, a 5 nm thick film, and a 10 nm thick film. Previous neutron reflection experiments revealed large extension ratios (>2) when the 5 and 10 nm thick GPS films were exposed to deuterated nitrobenzene vapor. Despite the larger extension ratio for the 5 nm thick film, the epoxy/Si fracture energy (G_c) was equal to that of the 10 nm thick film under ambient conditions. Even the smooth monolayer exhibited the same G_c. Only when the monolayer included a significant number of agglomerates did G_c drop to levels closer to that of the bare interface. When immersed in water at room temperature for 1 week, the threshold energy release rate (G_th) was nearly equal to G_c for the smooth monolayer, 5 nm thick film, and 10 nm thick film. While G_th for all three films decreased with increasing water temperature, G_th of the smooth monolayer decreased more rapidly. The bare interface was similarly sensitive to temperature; however, G_th of the rough monolayer did not change significantly as the temperature was raised. Despite the influence of pH on hydrolysis, G_th was insensitive to the pH of the water for all surface treatments.
Boron carbide displays a rich response to dynamic compression that is not well understood. To address poorly understood aspects of behavior, including dynamic strength and the possibility of phase transformations, a series of plate impact experiments was performed that also included reshock and release configurations. Hugoniot data were obtained from the elastic limit (15-18 GPa) to 70 GPa and were found to agree reasonably well with the somewhat limited data in the literature. Using the Hugoniot data, as well as the reshock and release data, the possibility of the existence of one or more phase transitions was examined. There is tantalizing evidence, but at this time no phase transition can be conclusively demonstrated. However, the experimental data are consistent with a phase transition at a shock stress of about 40 GPa, though the volume change associated with it would have to be small. The reshock and release experiments also provide estimates of the shear stress and strength in the shocked state as well as a dynamic mean stress curve for the material. The material supports only a small shear stress in the shocked (Hugoniot) state, but it can support a much larger shear stress when loaded or unloaded from the shocked state. This strength in the shocked state is initially lower than the strength at the elastic limit but increases with pressure to about the same level. Also, the dynamic mean-stress curve estimated from reshock and release differs significantly from the hydrostate constructed from low-pressure data. Finally, a spatially resolved interferometer was used to directly measure spatial variations in particle velocity during the shock event. These spatially resolved measurements are consistent with previous work and suggest a nonuniform failure mode occurring in the material.
This paper describes an integrated experimental and computational framework for developing 3-D structural models for humic acids (HAs). This approach combines experimental characterization, computer assisted structure elucidation (CASE), and atomistic simulations to generate all 3-D structural models, or a representative sample of these models, consistent with the analytical data and bulk thermodynamic/structural properties of HAs. To illustrate this methodology, structural data derived from elemental analysis, diffuse reflectance FT-IR spectroscopy, 1-D/2-D ¹H and ¹³C solution NMR spectroscopy, and electrospray ionization quadrupole time-of-flight mass spectrometry (ESI QqTOF MS) are employed as input to the CASE program SIGNATURE to generate all 3-D structural models for Chelsea soil humic acid (HA). These models are subsequently used as starting 3-D structures to carry out constant-temperature, constant-pressure molecular dynamics simulations to estimate their bulk densities and Hildebrand solubility parameters. Surprisingly, only a few model isomers are found to exhibit molecular compositions and bulk thermodynamic properties consistent with the experimental data. The simulated ¹³C NMR spectrum of an equimolar mixture of these model isomers compares favorably with the measured spectrum of Chelsea soil HA.
Inertial confinement fusion capsule implosions absorbing up to 35 kJ of x-rays from a ≈220 eV dynamic hohlraum on the Z accelerator at Sandia National Laboratories have produced thermonuclear D-D neutron yields of (2.6 ± 1.3) × 10¹⁰. Argon spectra confirm a hot fuel with T_e ≈ 1 keV and n_e ≈ (1-2) × 10²³ cm⁻³. Higher performance implosions will require radiation symmetry control improvements. Capsule implosions in a ≈70 eV double-Z-pinch-driven secondary hohlraum have been radiographed by 6.7 keV x-rays produced by the Z-Beamlet laser (ZBL), demonstrating a drive symmetry of about 3% and control of P_2 radiation asymmetries to ±2%. Hemispherical capsule implosions have also been radiographed on Z in preparation for future experiments in fast ignition physics. Z-pinch-driven inertial fusion energy concepts are being developed. The refurbished Z machine (ZR) will begin providing scaling information on capsules and Z-pinches in 2006. The addition of a short pulse capability to ZBL will enable research into fast ignition physics with the combination of ZR and ZBL-petawatt. ZR could provide a test bed to study NIF-relevant double-shell ignition concepts using dynamic hohlraums and advanced symmetry control techniques in the double-pinch hohlraum backlit by ZBL.
Two-dimensional processes of nickel electrodeposition in LIGA microfabrication were modeled using the finite-element method and a fully coupled implicit solution scheme via Newton's method. Species concentrations, electrolyte potential, flow field, and positions of the moving deposition surfaces were computed by solving the species-mass, charge, and momentum conservation equations as well as pseudo-solid mesh-motion equations that employ an arbitrary Lagrangian-Eulerian (ALE) formulation. Coupling this ALE approach with repeated re-meshing and re-mapping makes it possible to track the entire transient deposition process from the start of deposition until the trenches are filled, thus enabling the computation of local current densities that influence the microstructure and functional/mechanical properties of the deposit.
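As a generic illustration of the fully coupled implicit approach (not the actual electrodeposition equations), the Python sketch below applies Newton's method to a small nonlinear system, updating all unknowns simultaneously from the residual and Jacobian at each iteration; the residual functions are arbitrary placeholders.

    import numpy as np

    def residual(u):
        x, y = u
        return np.array([x**2 + y - 2.0, x + y**2 - 2.0])

    def jacobian(u):
        x, y = u
        return np.array([[2.0 * x, 1.0], [1.0, 2.0 * y]])

    u = np.array([0.5, 0.5])                      # initial guess for all coupled unknowns
    for _ in range(20):
        r = residual(u)
        if np.linalg.norm(r) < 1e-12:
            break
        u = u - np.linalg.solve(jacobian(u), r)   # fully coupled Newton update
    print(u)                                      # converges to (1, 1)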
In shock wave reverberation experiments, water samples were quasi-isentropically compressed between silica and sapphire plates to peak pressures of 1-5 GPa on nanosecond time scales. Real-time optical transmission measurements were used to examine changes in the compressed samples. Although the ice VII phase is thermodynamically favored above 2 GPa, the liquid state was initially preserved, and subsequent freezing occurred over hundreds of nanoseconds only for the silica cells. Images detailing the formation and growth of the solid phase were obtained. These results provide unambiguous evidence of bulk water freezing on such short time scales.
Combined XRD/neutron Rietveld refinements were performed on PbZr0.30Ti0.70O3 powder samples doped with nominally 4% Ln (where Ln = Ce, Nd, Tb, Y, or Yb). The refined structural parameters indicated that the lattice parameters and volume changes in the tetragonal perovskite unit cell were consistent with A- and/or B-site doping of the structure. Ce doping appears inconsistent given its rather large atomic radius, but is understood in terms of its oxidation to the Ce4+ state in the structure. Results of the B-site displacement values for the Ti/Zr site indicate that amphoteric doping of Ln cations in the structure results in superior properties for PLnZT materials.
Blastwalls are often assumed to be the answer for facility protection from malevolent explosive assault, particularly from large vehicle bombs (LVBs). The assumption is made that the blastwall, if built strongly enough to survive, will provide substantial protection to facilities and people on the side opposite the LVB. This paper will demonstrate, through computer simulations and experimental data, the behavior of explosively induced air blasts during interaction with blastwalls. It will be shown that air blasts can effectively wrap around and over blastwalls. Significant pressure reduction can be expected on the downstream side of the blastwall, but substantial pressure will continue to propagate. The effectiveness of the blastwall in reducing blast overpressure depends on the geometry of the blastwall and the location of the explosive relative to the blastwall.
Poole-Frenkel emission in Si-rich nitride and silicon oxynitride thin films is studied in conjunction with compositional aspects of their elastic properties. For Si-rich nitrides varying in composition from SiN1.33 to SiN0.54, the Poole-Frenkel trap depth (Φ_B) decreases from 1.08 to 0.52 eV as the intrinsic film strain (ε_i) decreases from 0.0036 to -0.0016. For oxynitrides varying in composition from SiN1.33 to SiO1.49N0.35, Φ_B increases from 1.08 to 1.53 eV as ε_i decreases from 0.0036 to 0.0006. In both material systems, a direct correlation is observed between Φ_B and ε_i. Compositionally induced strain relief as a mechanism for regulating Φ_B is discussed.
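For reference, the standard textbook Poole-Frenkel expression (not reproduced from the paper, which may use a slightly different form) relates the emission current density J to the applied field E and the trap depth Φ_B:

J \propto E \exp\!\left[-\frac{q\left(\Phi_B - \sqrt{qE/(\pi \varepsilon_r \varepsilon_0)}\right)}{k_B T}\right]

where ε_r is the high-frequency relative dielectric constant. Under this form, the smaller Φ_B reported for the Si-richer nitrides implies a larger leakage current at a given field and temperature.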
Annular wire array implosions on the Sandia Z-machine can produce >200 TW and 1-2 MJ of soft x rays in the 0.1-10 keV range. The x-ray flux and debris in this environment present significant challenges for radiographic diagnostics. X-ray backlighting diagnostics at 1865 and 6181 eV using spherically bent crystals have been fielded on the Z-machine, each with a ≈0.6 eV spectral bandpass, 10 µm spatial resolution, and a 4 mm by 20 mm field of view. The Z-Beamlet laser, a 2-TW, 2-kJ Nd:glass laser (λ = 527 nm), is used to produce 0.1-1 J x-ray sources for radiography. The design, calibration, and performance of these diagnostics are presented.
The purpose of this study was to investigate the impact of instructions on aircraft visual inspection performance and strategy. Forty-two inspectors from industry were asked to perform inspections of six areas of a Boeing 737. Six different instruction versions were developed for each inspection task, varying in the number and type of directed inspections. The amount of time spent inspecting, the number of calls made, and the number of feedback calls detected all varied widely across the inspectors. However, inspectors who used instructions with a higher number of directed inspections referred to the instructions more often during and after the task, and found a higher percentage of a selected set of feedback cracks than inspectors using other instruction versions. This suggests that specific instructions can help overall inspection performance, not just performance on the defects specified. Further, instructions were shown to change the way an inspector approaches a task.
The dynamic compression of molten metals including Sn is of current interest. In particular, experiments on the compression of molten Sn by Davis and Hayes will be described at this conference. Supporting calculations of the equation of state and structure of molten Sn as a function of temperature and pressure are in progress. The calculations presented are ab initio molecular dynamics simulations based on electronic density functional theory within the local density approximation. The equation of state and liquid structure factors for zero pressure are compared with existing experimental results. The good agreement in this case provides validation of the calculations.
The pulsed-power Z machine, in an isentropic compression experiment (ICE) mode, will allow the dynamic characterization of porous materials - here various ceramic powders, e.g., Al2O3, WC, ZrO2 - at roughly half their solid densities. A cylindrical configuration can provide megabar-level loads on an annulus of the sample material. Data will be provided by velocity interferometers that measure free-surface (or possibly interface) particle velocities. Differing sample thicknesses using stepped or conical geometries yield experimental efficiency by allowing multiple data records on single shots. With the p/α model for porous materials, the one-dimensional Lagrangian hydrocode WONDY provides the needed analyses. Based on static data, both power-law and quadratic crush curves are employed. Within the model constraints, we suggest that the most important parameter for characterizing the material is the crush strength, p_s. With adequate sample thicknesses, the planned velocity measurements differentiate among the various assumptions for p_s.
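For context (this equation is not quoted from the abstract), one common quadratic crush-curve form used with p-α models relates the distension α = ρ_solid/ρ_porous to pressure between the elastic limit p_e and the crush strength p_s:

\alpha(p) = 1 + (\alpha_e - 1)\left(\frac{p_s - p}{p_s - p_e}\right)^2, \qquad p_e \le p \le p_s

so p_s sets the pressure at which the powder reaches full density (α = 1), consistent with the suggestion above that the crush strength is the dominant characterization parameter.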
The first vacuum-ultraviolet spectrum of a polysilylene (chain-type polysilane) with aromatic substituents is presented. Assignments of the absorption bands of the model compound poly(methylphenylsilylene) are based on previous experimental data and theoretical electronic band structure calculations for poly(alkylsilylenes) and on ultraviolet spectra of phenyl-containing monomers and polymers. Although aryl orbitals mix with the σ-conjugated orbitals located along the catenated silicon backbone, some transitions are largely localized on the phenyl groups. These assignments elucidate the nature of the bonding in polysilylenes and should be useful in understanding photodegradation mechanisms and in the design of related new optical materials.
The effect of cross-linker functionality and interfacial bond density on the fracture behavior of highly cross-linked polymer networks bonded to a solid surface is studied using large-scale molecular dynamics simulations. Three different cross-linker functionalities (f = 3, 4, and 6) are considered. The polymer networks are created between two solid surfaces, with the number of bonds to the surfaces varying from zero to full bonding to the network. Stress-strain curves are determined for each system from tensile pull and shear deformations. At full interfacial bond density the failure mode is cohesive. The cohesive failure stress is almost identical for shear and tensile modes. The simulations directly show that cohesive failure occurs when the number of interfacial bonds is greater than in the bulk. Decreasing the number of interfacial bonds results in a cohesive-to-adhesive transition, consistent with recent experimental results. The correspondence between the stress-strain curves at different f and the sequence of molecular deformations is obtained. The failure stress decreases with smaller f, while the failure strain increases.